Study on Visual Knowledge Structure Reasoning
Huimin Lu1,2, Liang Hu2, and Gang Liu1
1College of Software Engineering, Changchun University of Technology, Changchun, China
2College of Computer Science and Technology, Jilin University, Changchun, China
Email: luhm.cc@gmail.com, hul@jlu.edu.cn, liug8818@mail.ccut.edu.cn
Abstract—The Intelligent Topic Map (ITM) embodies the multi-level, multi-granularity, and inherently relevant characteristics of knowledge. With ITM as the infrastructure, this paper presents a visual knowledge structure reasoning method that integrates logic-based and structure-based knowledge reasoning. The logic-based knowledge reasoning implements knowledge consistency checking and implicit association reasoning between knowledge points, which helps us obtain an optimal description of knowledge. To construct a complete knowledge structure, a Knowledge Unit Circle Search strategy for structure-based knowledge reasoning is proposed, which provides more detailed semantic associations and captures the inherent relevance of knowledge. The reasoning results are visualized by ITM as a visual knowledge map, from which users can acquire knowledge and the associations among knowledge items. A prototype system has been implemented and applied to massive knowledge organization, management, and service for education.
Index Terms—topic map, intelligent topic map, knowledge reasoning, knowledge visualization
I. INTRODUCTION
Knowledge reasoning mainly includes two types: logic-based knowledge reasoning and structure-based knowledge reasoning. Logic-based knowledge reasoning describes knowledge representation and reasoning in terms of logic. It is rigorous, flexible, and has a strict formal definition, but it lacks structural constraints. Structure-based knowledge reasoning organizes knowledge in a data structure such as a vector space, tree, or graph, which suits the representation of knowledge items and the relations between them. Knowledge does not exist in isolation: it always has relations of various kinds with other knowledge. From the perspective of constructivism theory and cognitive load theory, the inner relevance of knowledge helps reasoning stay consistent with a person's own cognitive pattern, thereby increasing cognitive efficiency [1]; structure-based reasoning alone, however, cannot guarantee the effectiveness of a logical representation. A knowledge representation model should therefore integrate these two types of knowledge reasoning in order to obtain satisfactory reasoning results [2]. Moreover, the reasoning results should be displayed as a visual knowledge structure, whose goal is to transfer and create new knowledge through visualization.
Topic Map (TM) is an ISO standard (ISO/IEC 13250) that describes knowledge structures and associates them with information resources [3][4]. A topic map constructs a structured semantic network above the knowledge resources: it describes concepts and the semantic relations between them, locates the resources associated with those concepts, and thereby joins concrete objects with abstract concepts. It provides a visual knowledge map from which users can acquire knowledge and the associations among knowledge items. However, the conventional topic map cannot provide users with efficient knowledge navigation, and because it lacks reasoning abilities we are unable to acquire implicit knowledge from it. We therefore extend the conventional topic map in structure and enhance its reasoning functions; the result is defined as the Intelligent Topic Map (ITM) [5]. EXTM (Extended XTM) extends the syntax and semantics of XTM (XML for Topic Maps) [6] so that it can describe ITM elements (such as clusters, topics, and knowledge elements), and it provides a model and grammar for representing the structure of an ITM and defining reasoning rules. EXTM extends XML into the semantic field: it defines an abstract, graph-based knowledge association model and allows logic-based knowledge reasoning to discover new knowledge.
We propose a novel visual knowledge structure reasoning method with the intelligent topic map as infrastructure, which efficiently implements both structure-based and logic-based knowledge reasoning. The reasoning results are visualized by ITM as a visual knowledge map from which users can acquire knowledge and the associations among knowledge items. Visual navigation of the created knowledge structures is based on hyperbolic geometry and provides users with intuitive access to the required knowledge.
II. RELATED WORKS
Knowledge representation models able to integrate logic reasoning and structure reasoning include XML, RDF, and ontologies. XML provides a flexible, general, and richly structured information representation and is convenient for the cooperative processing of heterogeneous knowledge [7]. RDF is an effective means of describing semantic information [8]. An ontology establishes a classified hierarchy by defining concepts and the relevance between them, thereby building a semantic space of concepts [9]. However, none of these displays knowledge in an intuitive, graphical way, and none captures the relationship between resources and the concepts they contain. The structure of a topic map is composed of Topics, Associations, and Occurrences (TAO) [10]; it describes concepts and the semantic relationships between them and can locate the resources associated with each concept. TM establishes a structured semantic web above the resource level and implements the semantic organization and joining of physical resource entities and abstract concepts. Topic maps have been dubbed "the GPS of the information universe". TM can be applied across systems, since the XTM (XML for Topic Maps) syntax is based on XML and is an exchangeable data standard. The greatest advantage of TM is the discovery and visualization of knowledge architecture [11][12].
A graphic display based on a topic map is more perceivable and provides a visual knowledge navigation mechanism. The topic map inherits the characteristics of knowledge organization methods such as the index, glossary, thesaurus, taxonomy, concept map, and ontology. Consequently, it adapts well to the logical organization of knowledge and has become a state-of-the-art semantic technology; examples include applying topic map technology in e-learning environments, especially analyses based on the relative semantic structure of topics, and using topic maps to represent learning resources and associated semantics such as metadata [13][14][15]. H. Lu et al. proposed the concept of the intelligent topic map for knowledge organization and knowledge services, which embodies the multi-level, multi-granularity, and inherently relevant characteristics of knowledge and realizes knowledge reasoning [16].
III. ITM DESCRIPTION
A. Overview of ITM Structure
The structure of a topic map is shown in Fig. 1. It is composed of Topics, Associations, and Occurrences (TAO). To overcome the drawbacks of the topic map, we add a clustering level and a knowledge element level in ITM, which yields the hierarchical relation of "cluster - topic - knowledge element - occurrence". The structure of ITM is shown in Fig. 2.
Cluster: Each cluster contains several closely related topics, so that the topics in the same cluster are similar in some sense. Clusters provide an effective navigation and browsing mechanism for users.
Definition 1: When given an ITM, a cluster (c) is defined as the following 2-tuple:
\[ c = (N_c, T_c) \]
\( N_c \) — the name of the cluster
\( T_c \) — the set of all topics in c
Topic: It can be any “thing” (such as a person, an entity, a concept, really anything) — regardless of whether it exists or has any other specific characteristics.
Definition 2: When given an ITM, a topic (t) is defined as the following 6-tuple:
\[ t = (N_t, A_t, D_t, E, g, f) \]
\( N_t \) — the name of the topic
\( A_t = \{at_1, at_2, ..., at_n\} \) — a set of associations of topic t
\( D_t = \{dt_1, dt_2, ..., dt_m\} \) — a set of topic association types \( (m \leq n) \)
\( E = \{e_1, e_2, ..., e_n\} \) — a set of elements relevant to t; an element is a cluster, topic, or knowledge element
Function \( g : A_t \rightarrow E \) — maps an association to its related element
Function \( f : A_t \rightarrow D_t \) — maps an association to its type
Definition 3: When given an ITM, a knowledge element (ke) is defined as the following 6-tuple:
\[ ke = (N_{ke}, A_{ke}, D_{ke}, E, g, f) \]
\( N_{ke} \) — the name of the knowledge element
\( A_{ke} = \{ake_1, ake_2, ..., ake_n\} \) — a set of associations of knowledge element ke
Figure 1. The structure of conventional topic map.
Figure 2. The structure of intelligent topic map.
\( D_{ke} = \{dke_1, dke_2, ..., dke_m\} \) — a set of knowledge element association types \( (m \leq n) \)
\( E = \{e_1, e_2, ..., e_n\} \) — a set of elements relevant to ke
Function \( g : A_{ke} \rightarrow E \) — maps an association to its related element
Function \( f : A_{ke} \rightarrow D_{ke} \) — maps an association to its type
Occurrence: an occurrence represents information resources relevant to a particular topic. It can be a document, a picture, or a video depicting the topic, or a simple mention of the topic in the context of something else.
Association: A topic association asserts a relationship between two or more topics.
Definition 4: When given an ITM, an association (a) is defined as the following 3-tuple:
\[ a = (e_1, e_2, d) \]
\( e_1, e_2 \) — the elements of ITM
\( d \) — the association type
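To make Definitions 1–4 concrete, the following Python sketch models clusters, topics, and typed associations, together with the mapping functions g and f. All class and variable names are illustrative assumptions, not part of the paper's EXTM grammar.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Association:           # Definition 4: a = (e1, e2, d)
    e1: str                  # source element
    e2: str                  # target element
    d: str                   # association type

@dataclass
class Topic:                 # Definition 2, simplified: name plus associations
    name: str
    associations: list = field(default_factory=list)

@dataclass
class Cluster:               # Definition 1: c = (Nc, Tc)
    name: str
    topics: set = field(default_factory=set)

# Functions g and f of Definitions 2 and 3 map an association to its
# related element and to its association type, respectively.
def g(a: Association) -> str:
    return a.e2

def f(a: Association) -> str:
    return a.d

a = Association("TCP/IP protocol", "IP protocol", "subClassOf")
c = Cluster("Computer Network", {"TCP/IP protocol", "IP protocol"})
print(g(a), f(a))   # -> IP protocol subClassOf
```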
ITM provides a strong paradigm for the semantic structuring of linked networks. It can establish relations among unstructured information resources, thereby allowing heterogeneous, unmodified information resources to be linked semantically by creating a semantic web, and it joins concrete objects with abstract concepts. This lays a foundation for high-quality structure-based knowledge reasoning.
B. XTM
XTM was proposed by Newcomb and Biezunski. It provides a model and grammar for representing the structure of information resources used to define topics and their associations. We extend XTM to EXTM so that it can describe the ITM elements, and we establish the corresponding logical reasoning rules and grammar to realize the reasoning functions in ITM.
IV. VISUAL KNOWLEDGE STRUCTURE REASONING
Step 1: Defining the top-level composite processes. As shown in Fig. 3, three composite processes, named "LogicKnowledgeReasoning", "StructureKnowledgeReasoning", and "VisualizationDisplay", are defined. "Join" denotes that the former processes must finish before the last one starts. The input of the process "VisualizationDisplay" is the reasoning results, and its output is the visual knowledge structure.
Step 2: Refining the definition of the process "LogicKnowledgeReasoning". As shown in Fig. 4, it includes two processes: knowledge consistency checking and implicit association reasoning.
A. The Knowledge Consistency Checking
In the process of constructing an ITM, conflicts can be caused by many factors, such as differences in people's understanding, in the marking of knowledge resources, and in the construction of the knowledge organization. These conflicts cause information redundancies, contradictions, and mistakes. Knowledge consistency checking eliminates them and helps us obtain an optimal description of the ITM. It includes reflexivity checking, loop transitivity checking, knowledge redundancy checking, and knowledge contradiction checking.
Reflexivity checking: If an element (topic or knowledge element) of ITM is associated with itself, there exists reflexivity conflict. It is defined as follows:
\[ \exists e \in ITM, \ e \, A \, e \quad (1) \]
When the reflexivity conflict is detected, the association between the same elements would be deleted.
Loop transitivity checking: If there is an association loop between the two directly related elements of ITM, there exists a loop transitivity conflict. It is defined as follows:
\[ \exists e_1 \in ITM, \ \exists e_2 \in ITM, \ e_1 \, A \, e_2 \wedge e_2 \, A \, e_1 \quad (2) \]
When the transitivity conflict is detected, one of the associations between the elements would be deleted.
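Reflexivity and loop transitivity checking (formulas (1) and (2)) can be sketched as a single pass over the association pairs. This minimal Python illustration assumes associations are given as (e1, e2) pairs; it keeps the first direction of a loop, since the paper deletes one of the two conflicting associations.

```python
def check_consistency(associations):
    """Return the association pairs with reflexive and looping pairs removed."""
    kept = []
    seen = set()
    for e1, e2 in associations:
        if e1 == e2:              # reflexivity conflict (1): e A e
            continue
        if (e2, e1) in seen:      # loop conflict (2): e1 A e2 and e2 A e1
            continue              # keep only the first direction found
        seen.add((e1, e2))
        kept.append((e1, e2))
    return kept

pairs = [("a", "a"), ("a", "b"), ("b", "a"), ("b", "c")]
print(check_consistency(pairs))   # -> [('a', 'b'), ('b', 'c')]
```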
Knowledge redundancy checking: There exists redundancy if an ITM contains identical elements (topics or knowledge elements).
\[ \exists e_1 \in ITM, \ \exists e_2 \in ITM, \ e_1 = e_2 \]
(3)
Though knowledge redundancy is not a semantic mistake, it is resolved when detected in order to ensure certainty and uniqueness.
Knowledge redundancy checking includes two steps: searching for identical elements and merging them.
First, we adopt a similarity measure algorithm for topics (or knowledge elements) called the Comprehensive Information-based Similarity Measure Algorithm (CISMA) [17]. The algorithm describes how similar two related topics (or knowledge elements) are. It consists of syntactic matching, semantic matching, and pragmatic matching. For an element pair \((e_1, e_2)\), the similarity is calculated as follows:
\[ SIM(e_1, e_2) = w_1 SIM_{Syntax}(e_1, e_2) + w_2 SIM_{Semantics}(e_1, e_2) + w_3 SIM_{Pragmatics}(e_1, e_2) \quad (4) \]
\(SIM_{Syntax}(e_1, e_2)\): denotes syntactic matching. It computes syntactic similarity by analyzing the character composition of the elements.
\(SIM_{Semantics}(e_1, e_2)\): denotes semantic matching. It analyses static semantic similarity with respect to synonyms.
\(SIM_{Pragmatics}(e_1, e_2)\): denotes pragmatic matching. It computes dynamic semantic similarity, which resolves the problem of polysemy.
\(w_1, w_2, w_3\) are the weights of the three matching results.
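As a hedged sketch of formula (4): the weights and the character-overlap syntactic matcher below are illustrative stand-ins, not CISMA's actual matchers or weight values.

```python
def combined_similarity(sim_syntax, sim_semantics, sim_pragmatics,
                        w=(0.4, 0.3, 0.3)):
    """Weighted similarity of an element pair, following formula (4)."""
    w1, w2, w3 = w
    return w1 * sim_syntax + w2 * sim_semantics + w3 * sim_pragmatics

def sim_syntax(e1, e2):
    """Trivial syntactic matcher: Dice coefficient over character sets."""
    s1, s2 = set(e1.lower()), set(e2.lower())
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

# Assume the semantic and pragmatic matchers both return 1.0 for this pair.
score = combined_similarity(sim_syntax("TCP protocol", "TCP/IP protocol"),
                            1.0, 1.0)
print(score > 0.8)   # -> True; such a pair would be a merging candidate
```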
Second, the identical elements are merged according to the following rules.
Rule 1: Attribute Merging (AM). When given a merging element, AM is defined as the following 5-tuple:
\[ AM = \{Ne, Na, D, V_f, \theta\} \]
\(Ne\) —the name of element
\(Na\) —the name of attribute
\(D\) —the values range of \(Na\)
\[ V_f = \{I_1, I_2, \ldots, I_n\} \] —a set of \(Na\) values in range of \(D\)
\(\theta\) —merging operator
Given an attribute merging problem \(AM = \{Ne, Na, D, V_f, \theta\}\), its solution \(K_a\) is defined as follows:
\[ K_a = \{Ne, Na, D, \theta(I_1, I_2, \ldots, I_n)\} \]
(5)
Rule 2: Element Merging (EM). If element \(e_1\) has high similarity with \(e_2\) in the ITM, the two elements are merged into a single element, \(e_1\) or \(e_2\). Element merging is defined as the following 4-tuple:
\[ EM = \{NE, E_A, E_{Al}, E_{\theta}\} \]
\(NE = \{ne_1, ne_2, \ldots, ne_k\}\) —a set of the element name
\(E_A = \{A_1, A_2, \ldots, A_n\}\) —a set of all \(EM\) attributes
\(E_{Al} = \{E_{I1}, E_{I2}, \ldots, E_{In}\}\) —a set of all attribute values
\(E_{\theta} = \{\theta, \theta_1, \theta_2, \ldots, \theta_n\}\) —a set of merging operators for each attribute used
Given an element merging problem \(EM = \{NE, E_A, E_{Al}, E_{\theta}\}\), its solution \(K_{ea}\) is defined as follows:
\[ K_{ea} = \{\theta(ne_1, ne_2, \ldots, ne_k), E_A, E_{Al}, E_{\theta}\} \]
(6)
Rule 3: Association Merging (AssM). When two elements are merged, association merging must also be considered. It is defined as the following 3-tuple:
\[ AssM = \{NE, R, \theta\} \]
\(NE = \{ne_1, ne_2, \ldots, ne_k\}\) —a set of the element name
\(R = \{(R_{S1}, R_{O1}), (R_{S2}, R_{O2}), \ldots, (R_{Sn}, R_{On})\}\) —a set of elements related to \(NE\)
\(R_{Sn}\) —association type
\(R_{On}\) —association object
\(\theta\) —merging operator
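The three merging rules can be sketched together in Python. The dictionary layout and the set-union merging operator \(\theta\) below are assumptions for illustration, not the paper's own definitions.

```python
def merge_elements(e1, e2, theta=lambda values: sorted(set(values))):
    """Merge two redundant elements given as {'name', 'attrs', 'assocs'} dicts.

    theta is the merging operator applied to each attribute's value list.
    """
    merged = {"name": e1["name"],    # Rule 2: keep one element's name
              "attrs": {}, "assocs": []}
    # Rule 1: attribute merging -- combine each attribute's values with theta
    for attr in set(e1["attrs"]) | set(e2["attrs"]):
        values = e1["attrs"].get(attr, []) + e2["attrs"].get(attr, [])
        merged["attrs"][attr] = theta(values)
    # Rule 3: association merging -- deduplicate (type, object) pairs
    merged["assocs"] = sorted(set(e1["assocs"]) | set(e2["assocs"]))
    return merged

e1 = {"name": "TCP", "attrs": {"layer": ["transport"]},
      "assocs": [("subClassOf", "protocol")]}
e2 = {"name": "TCP protocol", "attrs": {"layer": ["transport"], "rfc": ["793"]},
      "assocs": [("subClassOf", "protocol"), ("preorderOf", "HTTP")]}
m = merge_elements(e1, e2)
print(m["attrs"]["layer"])   # -> ['transport']
```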
Through knowledge consistency checking, we can obtain an ideal ITM description. It lays a foundation for the structure-based knowledge reasoning.
B. The Implicit Associations Reasoning
The implicit association reasoning can discover new associations between elements and can help us obtain new knowledge. In this paper, we mainly discuss the associations subClassOf, instanceOf, memberOf, preorderOf, and postorderOf.
subClassOf: When given elements \(t_i\) and \(t_j\), subClassOf\((t_i, t_j)\) indicates that topic \(t_i\) is a subclass of \(t_j\); \(t_i\) is called the sub-topic and \(t_j\) the relevant parent-topic. The knowledge reasoning rules based on subClassOf are as follows:
\[ \text{subClassOf}(t_i, t_j) \land \text{subClassOf}(t_j, t_k) \rightarrow \text{subClassOf}(t_i, t_k) \quad (7) \]
\[\text{subClassOf}(t_i, t_j) \land \text{hasAttribute}(t_j, A) \rightarrow \text{hasAttribute}(t_i, A) \quad (8)\]
\[\text{subClassOf}(t_i, t_j) \land \text{instanceOf}(i, t_i) \rightarrow \text{instanceOf}(i, t_j) \quad (9)\]
instanceOf: For an element \(e\) and its instance set \(I_e\), the association instanceOf\((i, e)\) \((i \in I_e)\) denotes that \(i\) is an instance of \(e\). The knowledge reasoning rule based on instanceOf is as follows:
\[\text{instanceOf}(i, e) \land \text{hasProperty}(e, P) \rightarrow \text{hasProperty}(i, P) \quad (10)\]
memberOf: memberOf\((M, W)\) denotes that \(M\) is a member of \(W\). memberOf and instanceOf are two completely different kinds of associations; memberOf emphasizes the part-whole association between elements.
preorderOf and postorderOf: preorderOf\((B, A)\) represents that element \(B\) comes before element \(A\); postorderOf\((A, B)\) represents that \(A\) comes after \(B\). The knowledge reasoning rules based on the preorderOf and postorderOf associations are as follows:
\[\text{preorderOf}(B, A) \land \text{preorderOf}(A, C) \rightarrow \text{preorderOf}(B, C) \quad (11)\]
\[\text{postorderOf}(A, B) \land \text{postorderOf}(B, C) \rightarrow \text{postorderOf}(A, C) \quad (12)\]
Inverse relation between preorderOf and postorderOf:
\[\text{preorderOf}(B, A) \rightarrow \text{postorderOf}(A, B) \quad (13)\]
\[\text{postorderOf}(A, B) \rightarrow \text{preorderOf}(B, A) \quad (14)\]
In addition to the above association types, there are causalOf, referenceOf, exampleOf, and so on.
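A minimal forward-chaining sketch of rules (7) and (11)-(14), assuming facts are stored as (relation, a, b) triples; it repeatedly applies transitivity and the preorder/postorder inverses until no new association appears. This is an illustration of the reasoning rules, not the prototype's inference engine.

```python
def infer_associations(facts):
    """facts: set of (relation, a, b) triples; returns the enlarged set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for rel, a, b in facts:
            # Transitivity: rules (7), (11), and (12)
            if rel in ("subClassOf", "preorderOf", "postorderOf"):
                for rel2, c, d in facts:
                    if rel2 == rel and c == b:
                        new.add((rel, a, d))
            if rel == "preorderOf":      # inverse rule (13)
                new.add(("postorderOf", b, a))
            if rel == "postorderOf":     # inverse rule (14)
                new.add(("preorderOf", b, a))
        if not new <= facts:             # fixpoint check
            facts |= new
            changed = True
    return facts

facts = {("preorderOf", "IP", "TCP"), ("preorderOf", "TCP", "HTTP")}
inferred = infer_associations(facts)
print(("preorderOf", "IP", "HTTP") in inferred)   # -> True
```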
Step 3: Refining the definition of the process "StructureKnowledgeReasoning". As shown in Fig. 5, it includes two processes: getting the user-interest node and the structure reasoning method.
Structure reasoning method: Since knowledge items are highly correlated with each other, in order to acquire the complete knowledge structure we must implement semantic implication extension, semantic relevance extension, and semantic class-belonging confirmation. According to the characteristics of ITM, we propose an extension algorithm based on the knowledge unit circle, named the Knowledge Unit Circle Search (KUCS) strategy.
Before discussing what can be reasoned from the knowledge structure in ITM, we define two concepts: the knowledge path and the knowledge radius.
Definition 1: Knowledge path. In an ITM, if there is a sequence \(e_p, e_1, e_2, ..., e_n, e_q\) and there is an association between each pair \((e_p, e_1), (e_1, e_2), ..., (e_n, e_q)\), then there exists a knowledge path between elements \(e_p\) and \(e_q\).
Definition 2: Knowledge radius. For a knowledge point \(e_p\), the knowledge radius of an element \(e_q\) is the length of the shortest knowledge path between \(e_p\) and \(e_q\), i.e., the minimum number of associations traversed.
KUCS is described as follows:
r = 1;                                  // r is the knowledge radius
set_T ← ∅;  HashSet ← ∅;
for ∀t ∈ T do                           // T is the set of topics
    if associationOf(t_point, t) = true then
        set_T ← set_T ∪ {t};  HashSet ← HashSet ∪ {t};
end
while r < R do                          // R is the given maximum radius
    NextSet ← ∅;
    for ∀t_h ∈ HashSet do
        for ∀t ∈ T do
            if associationOf(t_h, t) = true and t ∉ set_T then
                set_T ← set_T ∪ {t};  NextSet ← NextSet ∪ {t};
        end
    end
    r = r + 1;  HashSet ← NextSet;
end
for ∀t ∈ set_T do
    if associationOf(t, ke) = true then  // ke: a knowledge element
        set_KE ← set_KE ∪ {ke};
    if associationOf(t, c) = true then   // c: a cluster
        set_C ← set_C ∪ {c};
end
ETM_building();
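Under the assumption that an ITM's associations are available as adjacency dictionaries, KUCS reduces to a breadth-first expansion from the user-interest topic out to the knowledge radius R. The following Python sketch is illustrative (hypothetical data layout), not the prototype's implementation.

```python
from collections import deque

def kucs(t_point, topic_assocs, topic_ke, topic_cluster, R):
    """Return (topics, knowledge elements, clusters) within radius R of t_point."""
    set_T = {t_point}
    frontier = deque([(t_point, 0)])          # (topic, its knowledge radius)
    while frontier:
        t, r = frontier.popleft()
        if r >= R:
            continue                          # do not expand beyond radius R
        for nxt in topic_assocs.get(t, []):
            if nxt not in set_T:
                set_T.add(nxt)
                frontier.append((nxt, r + 1))
    # Collect knowledge elements and clusters associated with the found topics
    set_KE = {ke for t in set_T for ke in topic_ke.get(t, [])}
    set_C = {topic_cluster[t] for t in set_T if t in topic_cluster}
    return set_T, set_KE, set_C

assocs = {"TCP/IP": ["TCP", "IP"], "TCP": ["HTTP"]}
kes = {"TCP": ["TCP definition"], "IP": ["IP definition"]}
clusters = {"TCP/IP": "Computer Network"}
T, KE, C = kucs("TCP/IP", assocs, kes, clusters, R=1)
print(sorted(T))   # -> ['IP', 'TCP', 'TCP/IP']
```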
Through structure-based knowledge reasoning, we can obtain all the knowledge elements, topics, clusters, and resource occurrences that are associated with the knowledge point within a certain knowledge radius.
**Step 4.** Refining the definition of the process "VisualizationDisplay" is as follows:
Based on the ITM logical representation of knowledge, a visual knowledge map construction tool was designed. It is free software implemented as a Java applet that assists users in sharing and navigating domain knowledge. The ITM document is visually displayed as a double-layer network; the schematic diagram is shown in Fig. 6.

Clusters, topics, and topic associations are represented in the upper layer, in which a fillet rectangular node represents a topic and the dark node represents the knowledge point. Each edge represents an association between topics, and clicking an edge displays the association type. Knowledge elements and their associations lie in the lower layer, in which an ellipse node represents a knowledge element and each edge an association between knowledge elements; again, clicking an edge displays the association type. Clicking a node in the knowledge element layer displays the occurrences associated with that knowledge element.
**V. EMPIRICAL EVALUATION**
**A. The Experimental Data**
We built a corpus for the Computer Network domain, which includes 34007 topics, 3307 knowledge elements, 4317 associations between topics, 2214 associations between knowledge elements, 1872 associations between topics and knowledge elements, and 7031 domain-specific terms.
**B. The Logic Knowledge Reasoning Experiment**
We implemented the knowledge consistency checking and the implicit relation reasoning experiments, respectively. The knowledge consistency checking includes reflexivity checking, loop transitivity checking, knowledge redundancy checking, and contradiction checking. The implicit relation reasoning discovers new associations between elements. The results are shown in Table 1.
<table>
<thead>
<tr>
<th>Checking item</th>
<th>Statistics</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reflexivity checking</td>
<td>72</td>
</tr>
<tr>
<td>Transitivity checking</td>
<td>216</td>
</tr>
<tr>
<td>Redundancy checking</td>
<td>161</td>
</tr>
<tr>
<td>Contradiction checking</td>
<td>19</td>
</tr>
<tr>
<td>New associations</td>
<td></td>
</tr>
<tr>
<td>New associations between topics</td>
<td>516</td>
</tr>
<tr>
<td>New associations between knowledge elements</td>
<td>312</td>
</tr>
</tbody>
</table>
The main conflict type is the transitivity conflict, which makes up 52% of all conflicts; the knowledge redundancy conflict makes up 34%; and the reflexivity and contradiction conflicts together make up the remaining 14%. Conflicts can be caused by many factors: the construction of the ITM corpus is a process that requires many people's collaboration and many rounds of revision, and when local ITMs are reused they first need to be merged or aligned with one another to produce a single, integrated, reconciled global ITM covering a larger domain of interest. Consistency checking is therefore a key component of the knowledge reasoning strategy. The implicit relation reasoning can infer new associations between topics (or knowledge elements), providing the knowledge structure with more detailed semantic associations and with the inherent relevance of knowledge needed to construct the complete knowledge structure; however, we find that some of the inferred relations between topics (or knowledge elements) are not tight enough.
**C. The Knowledge Structure Reasoning Experiment**
We selected the topic "TCP/IP protocol" as the knowledge point and carried out the structure-based knowledge reasoning experiment with different knowledge radii. The experiment returns all the knowledge elements and topics associated with the knowledge point within a given knowledge radius. The structure-based knowledge reasoning results are shown in Fig. 7. As the knowledge radius increases, the number of topics, knowledge elements, and relations continuously increases. When the knowledge radius equals 2, the results include ten topics (such as "IP protocol", "TCP/IP protocol", and "TCP protocol") with twelve associations between them, six knowledge elements ("TCP protocol definition", "IP protocol definition", "TCP/IP protocol definition", etc.) with five associations between them, and six relations between topics and knowledge elements. The knowledge structure is depicted in Fig. 8.
VI. CONCLUSIONS
The proposed visual knowledge structure reasoning model provides a means to organize, discover, and display knowledge. Visual knowledge structure reasoning based on ITM not only achieves better structure-based reasoning results but also provides users with intuitive access to the required knowledge. Knowledge is presented as a stereo knowledge map, which overcomes the shortcomings of a linear display. In ongoing work, knowledge organization, knowledge search, and knowledge reasoning will be carried out on a computing cloud, exploiting its huge distributed and parallel computing and storage capacity. We hope that a full visual knowledge structure reasoning system will be widely deployed in the future.
ACKNOWLEDGMENT
This work is supported in part by Northeast Asia Chinese International Promotion Information Platform (Hanban). This work was also supported in part by the National High-Tech Research and Development Plan of China under Grant No. 2008AAA01Z131.
REFERENCES
Huimin Lu received the M.S. and Ph.D. degrees in computer science and technology from Xi'an Jiaotong University, Xi'an, China, in 2005 and 2010, respectively. She is working at Changchun University of Technology, Changchun, China. She has published 15 papers in refereed journals and international conferences. Her current research interests include knowledge science and knowledge engineering, and topic maps.
Figure 7. The structure-based knowledge reasoning results.
Dr. Lu is a member of ACM, IEEE, IEICE, and CCF. She is also working in the computer science and technology post-doctoral research center of Jilin University.
Liang Hu is a professor at the College of Computer Science and Technology, Jilin University, Changchun, China. His current research interests include knowledge science and computer networks. He has published more than fifty research articles in refereed journals and international conferences.
Gang Liu is a professor at the College of Software Engineering, Changchun University of Technology, Changchun, China. His current research interests include software engineering and knowledge science. He has published 5 research articles in refereed journals and international conferences.
|
{"Source-Url": "http://ojs.academypublisher.com/index.php/jsw/article/viewFile/0605783/3032", "len_cl100k_base": 6560, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 27844, "total-output-tokens": 8199, "length": "2e12", "weborganizer": {"__label__adult": 0.000354766845703125, "__label__art_design": 0.0012111663818359375, "__label__crime_law": 0.0006594657897949219, "__label__education_jobs": 0.020050048828125, "__label__entertainment": 0.00019121170043945312, "__label__fashion_beauty": 0.00030803680419921875, "__label__finance_business": 0.0008587837219238281, "__label__food_dining": 0.0004901885986328125, "__label__games": 0.0007085800170898438, "__label__hardware": 0.0010728836059570312, "__label__health": 0.0008568763732910156, "__label__history": 0.0006265640258789062, "__label__home_hobbies": 0.00023937225341796875, "__label__industrial": 0.0007581710815429688, "__label__literature": 0.0010242462158203125, "__label__politics": 0.0005316734313964844, "__label__religion": 0.0007143020629882812, "__label__science_tech": 0.359375, "__label__social_life": 0.0004341602325439453, "__label__software": 0.0579833984375, "__label__software_dev": 0.55029296875, "__label__sports_fitness": 0.0002779960632324219, "__label__transportation": 0.0006356239318847656, "__label__travel": 0.0002751350402832031}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31084, 0.02723]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31084, 0.94891]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31084, 0.85712]], "google_gemma-3-12b-it_contains_pii": [[0, 4785, false], [4785, 8953, null], [8953, 11814, null], [11814, 16708, null], [16708, 20798, null], [20798, 25531, null], [25531, 30203, null], [30203, 31084, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4785, true], [4785, 8953, null], [8953, 
11814, null], [11814, 16708, null], [16708, 20798, null], [20798, 25531, null], [25531, 30203, null], [30203, 31084, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31084, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31084, null]], "pdf_page_numbers": [[0, 4785, 1], [4785, 8953, 2], [8953, 11814, 3], [11814, 16708, 4], [16708, 20798, 5], [20798, 25531, 6], [25531, 30203, 7], [30203, 31084, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31084, 0.04412]]}
|
olmocr_science_pdfs
|
2024-11-27
|
2024-11-27
|
SOA Governance from an Enterprise Architecture Viewpoint
Thaíssa Diírr, Leonardo Guerreiro Azevedo, Flávia Maria Santoro
1Post-Graduation Program in Informatics
Federal University of the State of Rio de Janeiro (UNIRIO), Brazil
2IBM Research - Brazil
{thaissa.medeiros, azevedo, flavia.santoro}@uniriotec.br; lga@br.ibm.com
Abstract. Managing technical features alone is not sufficient to obtain the business benefits expected from adopting an SOA approach. A strategy aligned with the business should be the basis for the activities of service implementation, validation, development and management. A business case, a reference model and an architecture of the organization should be established. This paper proposes SOA governance processes. The processes were evaluated by SOA experts, who state that they would adopt them for SOA governance in their organizations.
Resumo. To obtain the corporate benefits of deploying an SOA approach, it is not sufficient to handle technical features. A strategy aligned with the business should be considered as the basis for the activities of service implementation, validation, development and management. A business case, a reference model and an architecture of the organization should be established. This work proposes processes for SOA governance. The processes were evaluated by SOA experts who state that they would adopt them for SOA governance in their organizations.
1. Introduction
According to Deler and Weinreich (2006), many studies in SOA focus on the creation and use of technologies for Web Services development. However, the adoption of such specifications (e.g., WSDL (http://www.w3.org/TR/wsdl), UDDI (http://www.uddi.org/, 2008), WS-* (Erl, 2005), SOAP (http://www.w3.org/TR/soap/)) and related activities (e.g., creation, validation, implementation and management) by themselves does not suffice to obtain the expected benefits of a SOA initiative from a business perspective. They are not enough to ensure that the corporation's dynamism remains consistent with SOA principles. In other words, this view is mostly technical and has low adherence to business goals. The definition of a SOA strategy helps to focus SOA efforts, to clarify its expected results and to identify appropriate uses for services so as to foster business benefits. It is a good practice to establish a business case, define a reference model and build/update the enterprise architecture models. All these tasks are addressed by applying a SOA governance approach. However, the deployment of an adequate SOA governance approach still faces several challenges, such as how to encapsulate business activities into services, how to manage service changes, how to establish new responsibilities and architecture roles, how to define and use metrics to measure achieved results, and how to establish and implement policies and standards (Schepers et al., 2008; Kajko-Mattsson et al., 2007).
SOA Governance is the definition, implementation and subsequent enforcement of a decision model and a structure of responsibilities which ensure that an organization pursues a SOA strategy and that all SOA initiatives move together to meet the organization's requirements (Marks, 2008). The SOA strategy must be aligned with business objectives; SOA governance has to ensure the implementation of this strategy in accordance with principles and policies, through committees or work groups, governance processes, checkpoints, reviews, and tools and technologies. SOA governance includes the deployment of a SOA initiative according to business processes, technology standards and business priorities. Finally, SOA governance explicitly involves stakeholders from business and IT in the decision-making process related to SOA (Marks, 2008). On the other hand, Enterprise Architecture is defined as the organizing logic for business processes, data, and technology (Ross, 2011). This leads to the necessity of defining a SOA governance approach that is fully aligned with the Enterprise Architecture.
This work proposes SOA governance processes from the perspective of Enterprise Architecture, extending the approaches described in Section 2. The proposal was evaluated by SOA experts from different organizations. The results indicate that the processes are easy to understand and useful to organizations.
The remainder of this paper is structured as follows. Section 2 presents related work. Section 3 details the proposed processes for SOA governance. Section 4 describes the evaluation of the proposal. Section 5 presents the conclusions and future work.
2. Related Work
Niemann et al. (2010) propose a generic SOA governance model including: SOA goals, SOA as enterprise architecture and a governance control cycle. SOA goals are derived from global IT goals and correspond to specialized business goals. These goals are: SOA compliance (adherence to internal, technical and legal regulations); alignment between business and IT (integration and adoption of IT processes in the business environment is crucial to the success of SOA); and long-term reliable operation (resulting from the management of SOA). SOA as enterprise architecture consists of processes such as production, operation and maintenance of services, beyond the technical view including registries and the enterprise service bus. The SOA governance control cycle is the central part that implements and operates effective governance. The cycle represents crucial processes, including and involving organizational entities (roles, responsibilities and governance processes), governance policies, a catalog of best practices, compliance observation and enforcement techniques, and a component for measuring SOA maturity. Niemann et al. (2010) also investigated and compared approaches to SOA governance proposed by academia and industry. The following governance concepts considered in these proposals were identified: impact on the organization, SOA maturity model, new roles and responsibilities, best practices, metrics model, impact on people's behavior, SOA life cycle, SOA roadmap, policies catalog, services life cycle, governance processes, and policy enforcement mechanisms. The concepts presented by Niemann et al. (2010) are generic and difficult to follow in practice; however, they address key aspects of SOA governance. Among these concepts, our proposal does not cover the SOA maturity model and best practices. On the other hand, we discuss the concepts relevant to the application of SOA governance at a level of detail that allows their practical application.
Schepers et al. (2008) define a SOA governance life cycle composed of the following processes: define SOA strategy; align the organization; manage services portfolio; manage services life cycle; manage SOA policies; and manage service levels. The authors also point out some important issues of SOA: tracking of IT systems and services to ensure compliance with standards and legislation; creation of a budget according to the ownership and costs of services; analysis of the impacts of service maintenance on service consumers; quality assurance in service design and implementation; and change of team behavior for the adoption of SOA.
Brown et al. (2006) mention the impact of SOA governance on the SOA life cycle, defined in four stages: modeling, assembly, deployment and control. The authors argue that the SOA governance life cycle is distinct from the life cycle of the services being governed. This SOA governance life cycle is characterized as a process comprising four phases: Planning: understanding the governance structure and the current environment; creating a starting point for IT governance; defining the scope of the governance model; driving change. Definition: defining and refining governance, quality and decision-making processes; defining organizational changes; defining IT changes in the deployment of SOA processes. Enablement: implementing the transition plan; initiating organizational changes; implementing the SOA infrastructure. Measurement: measuring the effectiveness of the governance process; measuring the effectiveness of organizational changes; reviewing and refining development and operational environments.
Compared with these related works, this work presents the processes at a higher level of detail and makes roles and responsibilities explicit. Activities are presented in business process models, which facilitates their execution by the participants of a SOA initiative.
3. Process for SOA Governance
This work proposes SOA governance processes that extend the works of Botto (2004), Spewak and Hill (1992), Kajko-Mattsson et al. (2007), Niemann (2010) and Schepers et al. (2008). The processes were detailed in macro-process diagrams (VAC - Added Value Chain), event-oriented process flows (EPC - Event-Driven Process Chains) and function trees (Scheer, 2000). VACs correspond to abstract descriptions (macro-processes) of the organization's functions that directly influence the added value of the organization's business (Aris, 2006). EPCs show a dynamic view, detailing the process flows and how they are supported by the business infrastructure (Davis and Brabander, 2007); thus, EPCs were used for processes whose activities have defined sequences. Function trees show a static view of functions, illustrating how a function is detailed into sub-functions without regard to process flows (Davis and Brabander, 2007); therefore, we use function trees for activities whose execution sequence is not known or varies in each organization.
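As an aside for readers implementing tool support, the structural rule behind EPCs — on a connector-free path, events and functions strictly alternate, beginning and ending with an event — can be checked mechanically. A minimal sketch (node names are illustrative, not taken from the paper's models):

```python
# Minimal EPC chain checker: on a simple (connector-free) EPC path,
# events and functions strictly alternate, starting and ending with an event.
def is_valid_epc_chain(nodes):
    """nodes is a list of (kind, label) tuples, kind in {"event", "function"}."""
    if len(nodes) < 3 or nodes[0][0] != "event" or nodes[-1][0] != "event":
        return False
    # Adjacent nodes must be of different kinds (event/function alternation).
    return all(a[0] != b[0] for a, b in zip(nodes, nodes[1:]))

chain = [
    ("event", "SOA strategy approved"),
    ("function", "Define scope"),
    ("event", "Scope defined"),
    ("function", "Establish project groups"),
    ("event", "Project groups established"),
]
```

Such a check is a small building block for validating process models exported from an EPC modeling tool.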
The SOA governance processes and the organizational roles responsible for them are presented as follows. A previous work presents the first insights towards SOA governance processes (Azevedo et al., 2010a). Due to space constraints, some details are omitted. The complete specification of the processes is presented by Azevedo et al. (2010b).
3.1. Organization Roles for SOA Governance
We defined the organizational roles responsible for executing the SOA governance processes (Figure 1). We propose a SOA Controller Organizational Unit, responsible for coordinating the entire SOA initiative. It includes the following roles:
- **SOA Applications Analyst**: Responsible for the SOA-based applications that are in direct contact with the customer; works to ensure that all customer needs are fulfilled by the services, e.g., understanding customer requirements, defining service interfaces, defining integration with existing applications, and so on;
- **SOA Analyst**: Responsible for modeling and designing business processes and for mapping business needs to existing or new services;
- **SOA Architect**: Responsible for ensuring that the infrastructure meets the business needs and the defined technical standards;
- **SOA Developer**: Responsible for developing and publishing the services;
- **SOA Manager**: Responsible for maintaining and governing the SOA-based systems, according to a broader SOA strategy, ensuring that business needs are met at the strategic, tactical and operational levels.
3.2. Manage Service-Oriented Architecture
The macro-process "Manage Service-Oriented Architecture" (Figure 2) is responsible for managing the SOA approach in the organization. It is subdivided into sub-processes for building the current and future environments for SOA support, maintaining the support environment, defining policies and standards, prospecting needed technologies, and monitoring and measuring the performed activities. These processes are detailed as follows.
3.2.1. Build an Environment for SOA Support
The process "Build an Environment for SOA Support" documents the current organizational environment, considering existing services, the systems that consume them and the databases they access. The infrastructure used by the services is also documented. The activities comprised by this process are:
- **Survey the standards currently used**: Identify standards currently used for service development, e.g., standards for service implementation, service orchestration, service portfolio etc.;
- **Survey existing services**: Identify and document existing services. During this elicitation it is important to check for redundancy;
- **Survey existing infrastructure**: Identify the existing infrastructure, such as service bus, service registry, tools used for modeling, implementation and orchestration of services, servers for backup and provision of services etc.;
- **Map services to the existing infrastructure**: Map services to the related infrastructure used for provisioning, monitoring, backup, clustering etc.;
- **Map existing services and applications**: Map which applications consume which services, reporting problems, difficulties and facilities in the consumption of services;
- **Map services and databases**: Map which databases are accessed by services and the CRUD (Create, Retrieve, Update and Delete) operations performed by each service.
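The last two mapping activities can be captured in a simple service-to-database CRUD matrix; a minimal sketch with hypothetical service and database names:

```python
# Service-to-database CRUD matrix (hypothetical names, for illustration only).
# Each service maps to the databases it accesses and the operations it performs.
crud_matrix = {
    "CustomerService": {"customer_db": {"C", "R", "U"}},
    "BillingService": {"customer_db": {"R"}, "billing_db": {"C", "R", "U", "D"}},
}

def services_writing_to(database):
    """Return the services that create, update or delete rows in a database."""
    return sorted(
        service
        for service, dbs in crud_matrix.items()
        if database in dbs and dbs[database] & {"C", "U", "D"}
    )
```

A matrix like this makes it easy to answer governance questions such as which services would be affected by a schema change in a given database.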
3.2.2. Build Future Environment for SOA Support
The process "Build Future Environment for SOA Support" is presented in the EPC process of Figure 3. It defines the strategy and scope of the SOA initiative, including the definition of stakeholder responsibilities, the creation of processes to maintain the initiative and the provision of the environment infrastructure. The activities that comprise it are:
- **Define SOA strategy**: Define the strategy to implement the SOA initiative, setting goals, indicators, and the steps to obtain funding and human resources for the initiative;
- **Define scope**: Define the scope of the initiative, including identification of the most important business processes, the products that will be maintained etc.;
- **Establish project groups**: Define the working groups involved;
- **Assign responsibilities to the project groups**: Define which tasks each working group must execute and the relationships among working groups;
- **Define control unit**: Define the organizational unit responsible for monitoring the implementation and maintaining the initiative;
- **Assign roles to stakeholders**: Define the responsibilities of each stakeholder of the initiative;
- **Define SOA processes**: Define the processes of the initiative, taking into account development and management of services, registration of services in the portfolio etc.;
- **Define infrastructure**: Define the infrastructure to be used by services (e.g., structure of the services portfolio, modeling and implementation tools, technology and development standards etc.);
- **Deploy infrastructure**: Put the defined infrastructure into practice (acquisition, installation and testing of tools, and creation of services responsible for infrastructure tasks);
- **Perform training**: Conduct staff training on the aspects of the SOA initiative related to their responsibilities.
3.2.3. Maintain Environment for SOA Support
The process "Maintain Environment for SOA Support" is responsible for developing and maintaining the available services according to business requirements and to changes related to errors found, new requirements or business rules, as well as changes to existing requirements or business rules. Furthermore, the infrastructure used in the SOA environment is also maintained. The process is divided into sub-processes that, due to space constraints, will not be detailed. These sub-processes are responsible for the following tasks.
- **Maintain SOA planning**: Maintain SOA planning regarding changes required in the strategy, scope, project groups and control unit and their responsibilities;
- **Maintain infrastructure**: Maintain the defined infrastructure for SOA environment, including updating tools, creation of user access profile and resolution of problems in the computing environment;
- **Maintain services portfolio**: Evaluate the need for changes in the portfolio structure, in the ways of service discovery and in service documentation, and evaluate the service level agreements established between consumers and service providers;
- **Build services**: Build new services and maintain existing services using a service development life cycle (as proposed by Gu and Lago (2007)). It includes activities such as: identify services (e.g., using the method proposed by Azevedo et al. (2009) or the method proposed by Leopold and Mendling (2012), which identify services from business process models); analyze services (e.g., using the method proposed by Azevedo et al. (2011a), which takes as input the services identified from business process models and uses information from these models to execute a set of heuristics for service analysis); design and implement services (e.g., using the approach proposed by Diirr et al. (2012), which presents the steps to be conducted to design and implement services using UML diagrams and Java technology); test services (e.g., using the proposals described by Canfora and Di Penta (2009), which report results obtained in the service testing area, in addition to approaches for unit, integration, non-functional and regression testing); publish services (e.g., using the approach proposed by Arnold et al. (2007), which gathers models, tools and model-based standards using formal methods that represent deployment topologies); provide services (e.g., using the SPML protocol (Oasis, 2006) proposed by OASIS, on which different data models can be used to define the actual provisioning data); monitor services (e.g., using a module for service monitoring at the Enterprise Service Bus (ESB) proposed by Bluenke and Warda (2008) - ESB is the core technology in an SOA (Hewitt, 2009) - or using the extensible monitoring model proposed by Qi et al. (2010)); and retire services no longer in use (as characterized by Josuttis, 2007);
- **Consume services**: This process corresponds to the steps performed by consumers to invoke services, such as discovering a service in a repository. If no service fulfills the consumer's requirement, the consumer requests service development. On the other hand, if a service can execute the requirement with some adjustment, the consumer requests service maintenance. If a service composition is required, the consumer orchestrates services. The consumer also has to negotiate the service contract, invoke the service, test the application and monitor service execution.
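The consumer-side discovery step can be sketched as a registry lookup that falls back to a development request when no registered service fulfills the requirement; the registry entries, capability names and SLA figures below are hypothetical:

```python
# Consumer-side service discovery sketch (hypothetical registry contents).
registry = {
    "OrderStatusService": {"capabilities": {"order-status"}, "sla_uptime": 99.5},
    "InvoiceService": {"capabilities": {"invoicing", "billing"}, "sla_uptime": 99.9},
}

def discover(required_capability, min_uptime=99.0):
    """Return registered services matching a capability and minimum SLA,
    or a 'request development' outcome when none fulfills the requirement."""
    matches = [
        name
        for name, entry in registry.items()
        if required_capability in entry["capabilities"]
        and entry["sla_uptime"] >= min_uptime
    ]
    return matches if matches else "request service development"
```

In a real initiative the registry would be a UDDI or ESB-hosted repository rather than an in-memory dictionary; the fallback outcome mirrors the "request service development" path described above.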
3.2.4. Define Policies and Standards for SOA
The process "Define Policies and Standards for SOA" sets policies and standards for the SOA environment, including their creation, maintenance, disclosure and audit. The process is divided into sub-processes that, due to space constraints, are not detailed. These sub-processes are responsible for:
- **Create policies and standards for SOA**: Analyze the characteristics to be standardized, and set and validate policies and standards;
- **Maintain policies and standards for SOA**: Handle opportunities for improvement (selection of standards to be analyzed, identification of improvement opportunities and changes to standards);
- **Divulge policies and standards for SOA**: Provide and advertise standards, and train resources to use the standards;
- **Control policies and standards for SOA**: Define standards to be audited, collect a sample of projects, verify the use of standards, publish the update rate, and program the audit disclosure.
3.2.5. Prospect Technologies for SOA
The process "Prospect Technologies for SOA", presented in Figure 4, continuously prospects technologies for SOA. The activities that comprise it are:
- **Perform search for information about tools**: Search for information about tools for the SOA environment in forums, conferences, on the Web, and by contacting tool vendors;
- **Assess tools**: Evaluate tools by executing the following steps: define evaluation criteria, compare candidate tools and select tools. Azevedo et al. (2011b) present details on how to execute tool evaluation;
- **Define guidelines for integration of technologies**: Set guidelines for the integration of the technology into the current environment;
- **Publish results of tools assessment**: Publish the results of the conducted evaluations to the participants of the initiative;
- **Assess technology viability for the SOA environment**: Evaluate the feasibility of implementing the selected technology;
- **Deploy technology**: Deploy the selected technologies in the environment.
Figure 4. Prospect technologies for SOA
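The "Assess tools" step (define evaluation criteria, compare candidate tools, select tools) can be sketched as a weighted scoring matrix; the criteria, weights, tool names and scores below are illustrative, not taken from the paper:

```python
# Weighted scoring matrix for tool assessment (illustrative data only).
# Criteria weights sum to 1.0; scores are on a 1-5 scale.
weights = {"standards_support": 0.4, "cost": 0.3, "integration": 0.3}

candidates = {
    "ToolA": {"standards_support": 4, "cost": 2, "integration": 5},
    "ToolB": {"standards_support": 3, "cost": 5, "integration": 3},
}

def rank_tools(candidates, weights):
    """Rank candidate tools by weighted score, best first."""
    scored = {
        name: sum(weights[criterion] * score for criterion, score in scores.items())
        for name, scores in candidates.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

Making the criteria and weights explicit, as here, is what allows the subsequent "publish results of tools assessment" activity to be transparent to the initiative's participants.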
3.2.6. Monitor SOA Activities
The "Monitor SOA Activities" process performs measurements and evaluations to monitor the activities executed during the SOA initiative. The activities that comprise this process are:
- **Establish indicators**: Establish quality indicators for activities related to SOA;
- **Monitor indicators execution**: Monitor the execution of the initiative to compute indicators that verify whether the internal activities of the area are being carried out properly;
- **Measure indicators execution**: Point out the gaps between planned and performed activities;
- **Assess activities execution**: Evaluate the execution of activities related to SOA, checking that they meet the needs of the organization as expected;
- **Communicate achievements obtained by the area**: Present the results achieved by the SOA initiative.
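The activity of measuring indicator execution reduces to comparing planned targets with measured values; a minimal sketch with illustrative indicator names and numbers:

```python
# Planned-vs-performed gap computation for SOA quality indicators
# (indicator names and values are illustrative, not from the paper).
planned = {"services_published": 10, "avg_response_ms": 200, "reuse_rate_pct": 60}
measured = {"services_published": 7, "avg_response_ms": 250, "reuse_rate_pct": 65}

def indicator_gaps(planned, measured):
    """Return the gap (measured minus planned) for each indicator."""
    return {name: measured[name] - planned[name] for name in planned}

gaps = indicator_gaps(planned, measured)
```

The sign convention (measured minus planned) makes shortfalls negative for "higher is better" indicators such as published services, so they stand out when the results are communicated.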
4. Processes Evaluation
In order to evaluate the proposed processes for SOA governance, the Delphi technique was used. This technique is a systematic and iterative estimation based on the experience of independent experts, who must be carefully selected to answer a questionnaire based on their experience. According to Rowe (2001), "Expert opinion is often necessary in forecasting tasks because of the lack of appropriate available information or using statistical procedures." In the case of this research, we used estimation because, since the processes are not yet implemented in a real environment, their applicability and reliability can only be inferred based on the experience of such professionals. A detailed view of the Delphi technique can be obtained from Rowe (1999) and Green et al. (2007). We selected five professionals with proven expertise in SOA to participate in the research. The objective of the research was to evaluate whether the processes are applicable in a real environment by checking aspects such as ease of understanding the processes, compliance of the processes with current practices, usefulness of the processes, degree of difficulty to deploy the processes, favorability towards the adoption of the processes, and strengths and weaknesses observed in the processes. During the interview, we first presented the proposed governance processes to make it easy for the participants to understand them before answering the questionnaire. The professionals have performed different roles in SOA for at least one year. They have worked as managers (2 respondents), architects (3 respondents), analysts (3 respondents), developers (1 respondent) and researchers (1 respondent) in SOA. All of them work for organizations with more than five thousand employees whose units responsible for SOA are implemented or under implementation.
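In a Delphi round, individual expert ratings are typically condensed into a central tendency and a dispersion measure used to judge consensus; a minimal sketch on an illustrative 1-5 scale (the consensus threshold is an assumption for illustration, not from the paper):

```python
import statistics

def delphi_round_summary(ratings, consensus_range=1):
    """Summarize one Delphi round: the median rating and whether the
    spread of ratings falls within the assumed consensus threshold."""
    median = statistics.median(ratings)
    spread = max(ratings) - min(ratings)
    return {"median": median, "consensus": spread <= consensus_range}

# Five illustrative expert ratings for one process on a 1-5 usefulness scale.
summary = delphi_round_summary([5, 4, 5, 4, 5])
```

When consensus is not reached, the Delphi technique feeds the group summary back to the experts for another rating round.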
Participants pointed out some difficulties observed during the deployment of SOA in the organizations where they work, which are directly related to the activities of SOA governance. The difficulties are: obtaining executive support; changing the development team culture and training staff; managing teams (e.g., defining the responsibility of each team and each person in the initiative); defining the technologies to be used; defining security policies; managing data exchanged between services; managing the services repository; and defining the funding model (e.g., sharing the costs of services among the different projects that use them). Moreover, participants consider the following components as important for SOA governance: management support for the implementation of the initiative; definition of the tools to be used; management of data exchanged by services; adherence to standards; management of repositories; and use of indicators and metrics to monitor activities.
Currently, participants carry out the following activities in the SOA initiatives they take part in (some already implemented or under implementation): modeling, design, development and publication of services; definition of service governance processes; definition of governance standards; and control of the development, quality and publication of services.
Some quality aspects of the proposed processes were put to the participants. Regarding ease of understanding, the following question was asked: "Are the proposed processes easy to understand?" and, for each process, the possible answers were "very easy", "easy", "medium", "difficult" and "very difficult". For all processes, the 5 participants rated their understanding as easy. Participants emphasized that some aspects that ease understanding are the simplicity of the process design and the fact that the processes are presented at a high level of detail, designed as business process models. These observations confirm what was presented in the related work section. Besides, they mentioned their experience in SOA as another factor that contributes to understanding.
Then, participants were asked about the compliance of the processes with the current practices of SOA initiatives in the organizations where they work. The question asked was: "Are the proposed processes in accordance with the processes used by the organization you work for within a SOA initiative?" To answer it, participants had to classify each process as "nonconforming", "little conforming" or "conforming". Some non-conformity and little-conformity responses were given, and the reasons presented by participants for these results were:
- The process "Prospect technologies for SOA" in many cases is not specific to SOA, but rather a standard process established in the organization to prospect any technology. Furthermore, there are cases in which the prospecting of technologies for SOA did not occur; it happens, for example, when the organization already has a prior contract to acquire software from a specific vendor;
- Unlike what is proposed in the process "Build future environment for SOA support", SOA initiatives emerged in the organizations in an ad-hoc manner, and no general plan of the important aspects of their implementation was carried out;
- In relation to the processes "Define policies and standards for SOA" and "Monitor SOA activities", respondents indicated that few (or no) activities are performed to monitor activities and define policies within the organizations where they work. As participants indicated, without the implementation of governance processes, activities can occur without planning and monitoring, resulting in disorganization and causing problems in the implementation and maintenance of the SOA initiative.
Regarding the usefulness of the proposed processes, participants were asked to "Classify the proposed processes according to their degree of usefulness to the organization where you work. Consider the following scale: Useless (1) (2) (3) (4) (5) Useful". Thus, for each process, the participants attributed a degree of usefulness. The 5 participants classified the processes "Build an environment for SOA support", "Build future environment for SOA support", "Maintain environment for SOA support", "Define policies and standards for SOA" and "Prospect technologies for SOA" as processes with usefulness level equal to 4 or 5. Respondents justified that these processes are very important for building and maintaining a successful SOA initiative. Moreover, the processes "Build an environment for SOA support" and "Maintain environment for SOA support" received the most responses equal to 5; these processes are considered essential to implement SOA in organizations. On the other hand, the process "Monitor SOA activities" received only one response equal to 5, and the corresponding respondent argued that all processes are necessary to prevent future problems in the SOA initiative. The other respondents reported that, in the organizations where they work, quality indicators are not defined and monitored, as there is more pressure to provision services than to verify the results obtained from them.
Considering the degree of difficulty to deploy the proposed processes, the following request was made to the participants: "Please indicate how you classify the difficulty to deploy each of the proposed processes in an organization." The possible answers were "very easy", "easy", "medium", "difficult" and "very difficult". The participants believe that, in general, the levels of difficulty are medium and difficult. Some reasons for these classifications were: the difficulty of deploying the processes in alignment with business needs; the potentially large number of people involved in the processes who need to adhere to them (mainly in large organizations); the lack of preparation and experience of the team in SOA initiatives; and the adaptation of the processes' activities according to organization-specific characteristics. Thus, according to participants, the complexity involved in implementing and maintaining the SOA initiative is the factor that complicates the use of the processes.
Then, participants were asked about the adoption of the proposed processes with the following question: "Would you adopt the proposed governance processes to support a SOA initiative in the organization you work for?" Despite the difficulties considered for deployment, the five participants answered "Yes". They justified that they would use all processes, as they recognize the importance the processes have in a SOA initiative and the benefits they can bring.
Finally, participants were asked to indicate the strengths and weaknesses observed in the proposed processes. The weaknesses mentioned were: lack of activities addressing service quality, data quality and the definition of SLAs. The strengths were: the processes are presented in a simple, objective, explanatory and well-structured form (preparation, establishment of policies and standards, construction, maintenance, preparation for the future and measurement); the processes are grounded in a pre-defined set of roles; the processes emphasize the importance of planning and monitoring the SOA initiative rather than a disorganized implementation; and the implementation of the processes would result in greater maturity of the initiative and improved quality of services. As mentioned in the related work section and pointed out by the participants, an important characteristic of our proposal is the definition of the roles that take part in the initiative.
5. Conclusion
This paper proposed a set of processes for SOA governance based on the works of Botto (2004), Spewak and Hill (1992), Kajko-Mattsson et al. (2007), Niemann (2010) and Schepers et al. (2008). The proposed processes are: Build an environment for SOA support; Build future environment for SOA support; Maintain environment for SOA support; Define policies and standards for SOA; Prospect technologies for SOA; and Monitor SOA activities. In addition, the roles responsible for executing these processes were proposed: SOA applications analyst, SOA analyst, SOA architect, SOA developer and SOA manager.
In order to evaluate the processes, five participants of SOA initiatives in different organizations responded to a questionnaire about the quality and usefulness of the processes. The results indicate that the processes are easy to understand and useful to organizations. Some indications of little or no compliance between the proposed processes and what the participants' organizations currently execute were obtained. The participants emphasized that this occurs because the activities their organizations currently perform are executed without well-defined processes, as well as with little or no planning and monitoring. They pointed to the difficulty of implementing the proposed processes due to the complexity of a service-oriented architecture, which must be aligned to the business and requires well-prepared teams. Despite the difficulties of implementation, the five participants emphasized that they would implement all the processes because of their importance in an SOA initiative and the benefits that can be obtained from their implementation. They also pointed out additional strengths related to: the way in which the processes are presented; the definition of the roles that execute the activities; the greater maturity and quality of the SOA initiative arising from the implementation of the processes; and the emphasis on planning and monitoring the SOA initiative rather than an ad-hoc deployment. However, despite the opinions of the participants, there are some weaknesses in the study. The processes have not yet been implemented in a real environment, so we cannot demonstrate their applicability. Furthermore, the evaluation by specialists requires reading the proposed processes, which makes the evaluation time-consuming and may introduce bias if the participants do not understand the processes correctly.
As future work, we propose improving on the highlighted weaknesses by addressing service and data quality more specifically, defining SLAs and handling the deactivation of services. We also suggest using the proposed processes in real scenarios in medium and large organizations in order to assess the proposal in practice.
References
Systems 2012, pp. 197-204, 10-12 March, Berlin, Germany.
Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications
Joseph Loyall, Richard Schantz, John Zinky, Partha Pal, Richard Shapiro, Craig Rodrigues, Michael Atighetchi, David Karr BBN Technologies jloyall@bbn.com
Jeanna M. Gossett The Boeing Company Jeanna.Gossett@mw.boeing.com
Christopher D. Gill Washington University, St. Louis cdgill@cs.wustl.edu
Abstract
The Quality Objects (QuO) middleware is a set of extensions to standard distributed object computing middleware that is used to control and adapt quality of service in a number of distributed application environments, from wide-area to embedded distributed applications. This paper compares and contrasts the characteristics of key use cases and the variations in QuO implementations that have emerged to support them. We present these variations in the context of several actual applications being developed using the QuO middleware.
1. Introduction
Distributed Object Computing (DOC) middleware has emerged and gained acceptance for the development and implementation of a wide variety of applications in a wide variety of environments. As DOC middleware has gained acceptance and has been applied to a broader variety of use cases, there has been a natural growth in extensions, features, and services to support these use cases. For example, the Minimum CORBA specification [15], the Real-time CORBA 1.0 specification [16], and the Real-Time Specification for Java (RTSJ) [2] are examples of extensions and services that have grown out of a need to support embedded and real-time applications.
We have developed a DOC middleware extension called Quality Objects (QuO) [24], which supports adaptive quality-of-service (QoS) specification, measurement, and control, and which we have described in a number of earlier papers. QuO is being used in a number of demonstrations and applications, ranging from wide-area distributed applications to embedded real-time systems. These diverse use-cases have led to a natural set of usage patterns, tailoring, and enhancements to the QuO middleware that has simultaneously broadened its applicability and refined its focus on the specific problems of particular environments.
This paper describes several applications developed using QuO middleware and compares and contrasts the usage patterns exhibited by them. We describe the particular flavors of QuO that have been developed to support the characteristics of these use-cases. Section 2 provides a brief overview of the QuO middleware. More detail about QuO can be found in [12, 13, 17, 18, 21, 24]. Section 3 describes, compares, and contrasts the various usage patterns that have emerged for the QuO middleware. Section 4 describes the different implementations of QuO available for specific use-cases. All of these implementations provide similar QuO functionality and features, but with characteristics tailored for the specific use-cases. Section 5 describes three specific applications being developed using QuO middleware that provide concrete examples of the use-cases described in Section 4. Section 6 discusses some issues arising from the different implementations. Finally, Section 7 provides concluding remarks.
2. Overview of the adaptive QuO middleware
Figure 1 illustrates a client-to-object logical method call. In a traditional CORBA application, a client makes a logical method call to a remote object. A local ORB proxy (i.e., a stub) marshals the argument data, which the local ORB then transmits across the network. The ORB on the server side receives the message call, and a remote proxy (i.e., a skeleton) then unmarshals the data and delivers it to the remote servant. Upon method return, the process is reversed.
Quality Objects (QuO) is a distributed object computing (DOC) framework designed to develop distributed applications that can specify (1) their QoS requirements, (2) the system elements that must be monitored and controlled to measure and provide QoS, and (3) the behavior for adapting to QoS variations that occur at run-time. By providing these features, QuO opens up distributed object implementations [1], allowing control over an application's functional aspects and implementation strategies that are otherwise encapsulated behind its functional interfaces.
A method call in the QuO framework is a superset of a traditional DOC call, and includes the following components, illustrated in Figure 2:
- **Contracts** specify the level of service desired by a client, the level of service an object expects to provide, operating regions indicating possible measured QoS, and actions to take when the level of QoS changes.
- **Delegates** act as local proxies for remote objects. Each delegate provides an interface similar to that of the remote object stub, but adds locally adaptive behavior based upon the current state of QoS in the system, as measured by the contract.
- **System condition objects** provide interfaces to resources, mechanisms, objects, and ORBs in the system that need to be measured and controlled by QuO contracts.
In addition, QuO applications may use property managers and specialized ORBs. Property managers are responsible for managing a given QoS property (such as the availability property via replication management [5] or controlled throughput via RSVP reservation management [11]) for a set of QuO-enabled server objects on behalf of the QuO clients using those server objects. In some cases, the managed property requires mechanisms at lower levels in the protocol stack. To support this, QuO includes a gateway mechanism [18], which enables special purpose transport protocols and adaptation below the ORB.
In addition to traditional application developers (who develop the client and object implementations) and mechanism developers (who develop the ORBs, property managers, and other distributed resource control infrastructure), QuO applications involve another group of developers, namely QoS developers. QoS developers are responsible for defining QuO contracts, system condition objects, callback mechanisms, and object delegate behavior. To support the added role of QoS developer, we are developing a QuO toolkit, described in earlier papers such as [12], [13] and [21], and consisting of the following components:
- **Quality Description Languages (QDL)** for describing the QoS aspects of QuO applications, such as QoS contracts (specified by the Contract Description Language, CDL) and the adaptive behavior of objects and delegates (specified by the Structure Description Language, SDL). CDL and SDL are described in [12, 13].
- **The QuO runtime kernel**, which coordinates evaluation of contracts and monitoring of system condition objects. The QuO kernel and its runtime architecture are described in detail in [21].
- **Code generators** that weave together QDL descriptions, the QuO kernel code, and client code to produce a single application program. Runtime integration of QDL specifications is discussed in [12].
3. Usage patterns of the QuO adaptive middleware
CORBA and other DOC frameworks, such as Java RMI, are being used to implement diverse types of distributed applications in diverse environments, from wide-area networks such as the Internet to embedded systems [6, 8, 22]. These applications and environments exhibit different characteristics and, while they are supported by DOC middleware in general, they also use services tailored to support their specific characteristics.
For example, CORBA IDL is of general use in exposing the functional interfaces of objects while hiding the implementation details. However, this is arguably more important for heterogeneous, distributed applications, whose implementation details might include multiple languages, platforms, operating systems, and mechanisms, than for embedded applications. Meanwhile, QuO has been designed to provide customized support for adaptive distributed object computing with quality-of-service requirements, above and beyond the generic support that DOC middleware provides.
3.1 Wide-area distributed object applications
Wide-area network applications, which have gained prominence due to the emergence of the Internet and new networking technologies, have characteristics that differ significantly from traditional, non-networked or locally networked applications. In this section and the next, we contrast WAN-based distributed applications and embedded distributed applications, to motivate the QuO features that have emerged to support these two different application contexts. This discussion is summarized in Table 1.
WAN applications often utilize components that 1) exist on heterogeneous hosts, 2) are implemented in multiple languages, 3) are not discovered until runtime (e.g., through a Naming Service or DNS lookup), and 4) for which network latency is an issue as much as, or more than, CPU availability. Many WAN applications exhibit some or all of the following distinguishing characteristics:
- **Widely varying data content and size** – Servers often have little control over the amount, quality, or content of data that clients send to them.
- **Widely varying network latency times** – The distance between objects, the capacity of the networks, and the amount and size of competing traffic can all contribute to unpredictable delays in message and data delivery.
- **Application performance can be dominated by network transport times** – Higher and less predictable network latency plays a larger role in the performance of WAN applications.
- **Heterogeneity in platforms and languages** – Server objects can be written in a variety of languages and be hosted on a variety of platforms within an application. The specifics of language and platform are often hidden behind a common interface language, like CORBA IDL or HTML.
- **Dynamic distribution of objects** – References to objects can be obtained dynamically, using a service such as the CORBA Naming Service, the CORBA Trading Service, or DNS. This means that objects can migrate, different objects can service subsequent requests, and so forth.
3.2 Embedded distributed object applications
In contrast, embedded distributed applications, such as avionics sensor-actuator applications, typically operate within more resource constrained, but more predictable, environments. They usually must operate within tight timing deadlines (e.g., sensor data must be processed before the next data element is acquired from the same sensor) and therefore cannot abide varying data size or content. However, since they typically exist within LANs, across a hardware bus, or on a single processor, there is significantly more predictability in resource availability, communication latency, object location, and nature of object implementation.
In this paper, we are concentrating on a class of embedded avionics and shipboard embedded applications, which exhibit some or all of the following distinguishing characteristics:
- **Predictable data content and size** – Data size is generally constrained so that it can be processed within a fixed period. Likewise, a single sensor, or a small set of sensors, generally provides data with predictable content and size.
- **Fewer variances in network latency** – Embedded applications often exist on a single processor or on a LAN, so that network latency is low and fairly predictable. There is some external data input, but most data transport between embedded components is local, with smaller, more controlled network latency times.
- **Application performance is often dominated by CPU allocation** – Scheduling the CPU so that all real-time tasks meet their deadlines is a dominant feature of many embedded applications. Message processing is equally likely to be constrained by processor contention as by network contention.
<table>
<thead>
<tr>
<th></th>
<th>WAN Distributed Applications</th>
<th>Embedded Distributed Applications</th>
</tr>
</thead>
<tbody>
<tr>
<td>Data Content and Size</td>
<td>Can vary widely</td>
<td>Predictable and constrained</td>
</tr>
<tr>
<td>Network Latency</td>
<td>Can vary widely</td>
<td>Often predictable and constrained</td>
</tr>
<tr>
<td>Dominant Resource</td>
<td>Network bandwidth</td>
<td>CPU cycles</td>
</tr>
<tr>
<td>Platforms</td>
<td>Often heterogeneous and remote</td>
<td>Typically homogeneous</td>
</tr>
<tr>
<td>Languages</td>
<td>Sometimes heterogeneous</td>
<td>Typically one</td>
</tr>
<tr>
<td>Object Distribution</td>
<td>Can be dynamic</td>
<td>Typically fixed and local</td>
</tr>
<tr>
<td>Object Location</td>
<td>Can be dynamic</td>
<td>Often preset and fixed</td>
</tr>
</tbody>
</table>
Table 1: Comparison of characteristics of WAN and (avionics) embedded applications
• Homogeneity in platforms and languages – Embedded applications typically run on only one processor or a few identical processors in a LAN, using a single operating system, and are typically written in one language.
• Objects are typically fixed and predefined – The numbers and types of objects are often predefined and object instances are usually local and created up front.
3.3 Event channel, periodic tasking
Sensor-actuator applications, such as those found on avionics platforms, often follow an event-driven, periodic tasking model. In such a model, an avionics application consists of many periodic tasks with real-time deadlines (traditionally all hard real-time deadlines, although hybrid hard/soft real-time scheduling is becoming more prevalent in embedded systems). These tasks are scheduled at a particular rate and allocated the CPU at that rate. The deadline for each task is chosen to allot enough time for the task to perform its function (e.g., process sensor data, compute a navigation heading). The periods of the tasks are chosen to ensure that all tasks can be scheduled.
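The scheduling constraint sketched in this paragraph can be illustrated with the classic rate-monotonic utilization test. This is a generic sketch, not part of QuO, TAO, or the avionics system described here, and the task parameters are invented for the example:

```java
// Sketch of the rate-monotonic schedulability test (Liu & Layland).
// Each task i has a worst-case execution time c[i] and a period t[i];
// the task set is guaranteed schedulable under rate-monotonic priorities
// if total utilization stays below n * (2^(1/n) - 1).
public class RmsCheck {
    public static double utilization(double[] c, double[] t) {
        double u = 0.0;
        for (int i = 0; i < c.length; i++) u += c[i] / t[i];
        return u;
    }

    public static boolean guaranteedSchedulable(double[] c, double[] t) {
        int n = c.length;
        double bound = n * (Math.pow(2.0, 1.0 / n) - 1.0);
        return utilization(c, t) <= bound;
    }

    public static void main(String[] args) {
        // Two periodic tasks: 1 ms every 4 ms and 2 ms every 8 ms -> U = 0.5,
        // below the two-task bound of about 0.828.
        double[] c = {1.0, 2.0};
        double[] t = {4.0, 8.0};
        System.out.println("U = " + utilization(c, t)
                + ", schedulable: " + guaranteedSchedulable(c, t));
    }
}
```

Note this bound is sufficient but not necessary; task sets above it may still be schedulable under an exact analysis.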
The traditional DOC benefits, e.g., the hiding of implementation details behind functional interfaces and a common data transport protocol, may ease the programming of such embedded real-time applications. However, modularization and decomposition are still the primary benefits, because these embedded real-time applications do not utilize a variety of implementations, platforms, and languages. Furthermore, the real-time embedded software industry has not yet widely adopted the DOC computing paradigm [3].
Because of this, and to extend the current state-of-the-practice in real-time embedded computing, DOC services are emerging that support event-driven, periodic tasking models. Two examples of these are the real-time CORBA Event Service [10] and the real-time CORBA Scheduling Service [9] in TAO [20], a real-time CORBA compliant ORB. Another example is the real-time specification for Java (RTSJ) [2]. The use of TAO's real-time CORBA Event Service and real-time CORBA Scheduling Service in an avionics application is described in Section 5.2.
3.4 Adaptation at many levels
QuO's contracts and delegates support adaptation at many levels, from managers mediating adaptation for many applications, to adaptation within an application, to adaptive resource control mechanisms, to adaptation at the transport layer. QuO's contracts and delegates provide the adaptation that can be used within a single application and also within system managers. QuO's system condition objects provide a uniform interface to system resources, mechanisms, and managers to translate between application-level concepts, such as operating modes, to resource and mechanism-level concepts, such as scheduling methods and real-time attributes.
Finally, QuO provides a gateway component, which allows low-level communication mechanisms and special-purpose transport-level adaptation to be plugged into an application [18]. The QuO gateway resides between the client and server ORBs. It is a mediator [7] that intercepts IIOP messages sent from the client-side ORB and delivers IIOP messages to the server-side ORB (on the message return the roles are reversed). On the way, the gateway translates the IIOP messages into a custom transport protocol, such as group multicast in a replicated, dependable system.
The gateway also provides an API that allows adaptive behavior or processing control to be configured below the ORB layer. For example, the gateway can select between alternate transport mechanisms based on low-level message filtering or shaping, as well as the overall system's state and condition objects. Likewise, the gateway can be used to integrate security measures, such as authenticating the sender and verifying access rights to the destination object.
3.5 Synchronous and asynchronous adaptation
QuO contracts and delegates support two means for triggering manager-level, middleware-level, and application-level adaptation. The delegate triggers in-band adaptation by making choices upon method calls and returns. The contract triggers out-of-band adaptation when changes in observed system condition objects cause region transitions.
Figure 3 illustrates QuO's in-band and out-of-band adaptation. The QuO delegate supports in-band adaptation (Figure 3a) whenever a client makes a method call and whenever a called method returns. The delegates (on the client and server side) check the state of the relevant contracts and choose behaviors based upon the state of the system. These behaviors can include shaping or filtering the method data, choosing alternate methods or server objects, performing local functionality, and so on.
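As a rough sketch of this in-band path, a delegate can be modeled as a wrapper that consults the contract's current region on each call and picks a behavior accordingly. All class and method names below are invented for illustration; they are not QuO's actual API:

```java
import java.util.function.Supplier;

// Hypothetical sketch of a QuO-style delegate: on each call it checks the
// contract's current region and chooses a behavior (normal call, degraded
// call, or local fallback). Names are illustrative, not the real QuO API.
public class DelegateSketch {
    enum Region { NORMAL, DEGRADED, DISCONNECTED }

    interface ImageServer {
        String fetchImage(int resolution);
    }

    static class Delegate implements ImageServer {
        private final ImageServer remote;
        private final Supplier<Region> contract; // stand-in for contract evaluation

        Delegate(ImageServer remote, Supplier<Region> contract) {
            this.remote = remote;
            this.contract = contract;
        }

        @Override
        public String fetchImage(int resolution) {
            switch (contract.get()) {        // in-band contract check on each call
                case NORMAL:   return remote.fetchImage(resolution);
                case DEGRADED: return remote.fetchImage(resolution / 2); // trade quality for latency
                default:       return "cached-image";                    // local fallback behavior
            }
        }
    }

    public static String call(Region region) {
        ImageServer server = res -> "image@" + res; // toy remote object
        Delegate d = new Delegate(server, () -> region);
        return d.fetchImage(640);
    }
}
```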
QuO contracts and system condition objects support out-of-band adaptation (Figure 3b) by monitoring conditions in the system, whether they are the states of resources, mechanisms, or managers. Whenever the monitored conditions change (or whenever they change beyond a specified threshold), the system condition object triggers an asynchronous evaluation of the relevant contracts. If this results in a change in contract region (i.e., state), it in turn triggers adaptive behavior that occurs asynchronous to any object interactions.
System condition objects can interface to other, lower-level system condition objects, and can be either observed or non-observed. Changes in the values measured by observed system conditions trigger contract evaluation, possibly resulting in region transitions and triggering out-of-band adaptive behavior. Observed system condition objects are suitable for measuring conditions that either change infrequently or for which a measured change can indicate an event of notice to the application or system. Non-observed system condition objects represent the current value of whatever condition they are measuring, but do not trigger an event whenever the value changes. Instead, they provide the value upon demand, i.e., whenever the contract is evaluated due to a method call or return or due to an event from an observed system condition object.
This combination of observed and non-observed system condition objects, along with the nesting of system condition objects, provides flexibility to support a wide variety of in-band and out-of-band adaptation, while providing needed support to avoid instability problems and hysteresis effects. Observed system condition objects can measure frequently changing system conditions by smoothing out continuous changes (e.g., by measuring statistical average of changes over time) or by reporting only when the system condition crosses a threshold. This can be implemented by a single system condition object or an observed system condition that periodically polls a non-observed system condition object monitoring the frequently changing condition. The threshold can be dynamically supplied by another system condition object.
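The threshold-based smoothing described above can be sketched as an observed condition that polls a non-observed value source and reports only threshold crossings. The names and polling scheme are illustrative assumptions, not QuO's real interfaces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleSupplier;

// Hypothetical sketch of an observed system condition object that polls a
// non-observed one and notifies the contract only when the measured value
// crosses a threshold, smoothing out frequent small changes.
public class ObservedCondition {
    private final DoubleSupplier rawCondition; // non-observed source, read on demand
    private final double threshold;
    private boolean above = false;
    final List<String> notifications = new ArrayList<>();

    ObservedCondition(DoubleSupplier rawCondition, double threshold) {
        this.rawCondition = rawCondition;
        this.threshold = threshold;
    }

    // Called periodically; triggers contract evaluation only on a crossing.
    void poll() {
        boolean nowAbove = rawCondition.getAsDouble() > threshold;
        if (nowAbove != above) {
            above = nowAbove;
            notifications.add(nowAbove ? "crossed-up" : "crossed-down");
        }
    }

    public static List<String> demo(double[] samples, double threshold) {
        int[] i = {0};
        ObservedCondition oc = new ObservedCondition(() -> samples[i[0]], threshold);
        for (; i[0] < samples.length; i[0]++) oc.poll();
        return oc.notifications;
    }
}
```

Only the two crossings reach the contract; the intermediate fluctuations are absorbed, which is the anti-hysteresis behavior the paragraph describes.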
4. Implementation choices for QuO middleware
The initial prototype implementation of QuO middleware, covered briefly in Section 4.1, has been described in earlier papers [13, 17, 18, 21]. In addition, we have developed other implementations and services supporting specific use-cases of QuO. These are described in Sections 4.2 and 4.3 and have led to the following variants of QuO that are more suitable for particular applications. These variants advance QuO in a complementary direction to other DOC middleware, such as TAO, CORBA, and Java.
4.1 Java QuO with threading
The initial prototype of QuO had two goals that led to specific implementation decisions: (1) rapid prototyping for early baseline functionality and (2) maximum flexibility. To achieve these goals, the baseline version of QuO is written in Java, is multi-threaded, and takes maximum advantage of Java's meta-object support.
This baseline version of QuO supports multiple languages (Java and C++ clients and objects) and multiple ORBs (Visibroker and TAO). The QuO kernel, system condition objects, and contracts are implemented in Java and the QuO kernel and many system condition objects run in their own thread. Contracts are scheduled for evaluation by placing them on a queue and a QuO kernel thread runs in a tight loop that pulls one contract at a time from the queue and evaluates it. Contracts, regions, transitions, predicates, elements of predicates, and so on are all represented as Java objects. This facilitates runtime interpretation of contract elements and keeps the QDL languages from having to implement typing features or type inference. System condition objects maintain their values and return them immediately upon demand, thereby ensuring predictable execution times of contract region predicates.
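The kernel's queue-driven evaluation loop described above can be sketched roughly as follows. For determinism the sketch drains the queue in a single call rather than in a dedicated kernel thread, and all names are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the kernel's evaluation loop: contracts are
// scheduled by enqueueing them, and the kernel pulls one contract at a
// time and evaluates it. The real QuO kernel runs this loop in its own
// thread; here a single drain() keeps the example deterministic.
public class KernelSketch {
    interface Contract { String evaluate(); }

    private final Queue<Contract> queue = new ArrayDeque<>();
    final List<String> regions = new ArrayList<>();

    void schedule(Contract c) { queue.add(c); }

    // One pass of the kernel loop: evaluate everything currently queued.
    void drain() {
        Contract c;
        while ((c = queue.poll()) != null) {
            regions.add(c.evaluate());
        }
    }

    public static List<String> demo() {
        KernelSketch kernel = new KernelSketch();
        kernel.schedule(() -> "normal");
        kernel.schedule(() -> "degraded");
        kernel.drain();
        return kernel.regions;
    }
}
```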
All interactions with the QuO kernel and between QuO objects are through CORBA interfaces. Therefore, C++ clients would have a C++ delegate (which resembles a CORBA IDL stub) that checks the state of a contract with a CORBA call to the contract. We have also developed a configuration of this QuO prototype that supports the Java RMI inter-object protocol, in place of CORBA, for the DARPA Advanced Logistics Program (ALP). This provides the same QuO functionality but it is provided by RMI servants instead of CORBA servants.
Table 2: Comparison of the three example applications
<table>
<thead>
<tr>
<th></th>
<th>Bottleneck</th>
<th>Avionics</th>
<th>UAV</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Performance</strong></td>
<td>Dominated by IIOP data delivery; overhead of delegate small when compared to marshalling and delivery of data</td>
<td>Requires low overhead to be a small percentage of task deadline</td>
<td></td>
</tr>
<tr>
<td><strong>Components</strong></td>
<td>Distributed across WAN, heterogeneous hosts, heterogeneous languages</td>
<td>Two heterogeneous nodes, but single processor and language on each node</td>
<td></td>
</tr>
<tr>
<td><strong>Resource Contention</strong></td>
<td>Network and CPU usage can vary dynamically</td>
<td>Network is constrained, but main contention is between tasks for CPU cycles</td>
<td></td>
</tr>
<tr>
<td><strong>Threading, Distribution</strong></td>
<td>Non-deterministic number and distribution of objects. Dynamic, changing number of threads.</td>
<td>Controlled, fixed number of threads and objects (and tasks).</td>
<td></td>
</tr>
</tbody>
</table>
The kernel can be **integrated** or **non-integrated**. An integrated kernel runs in the JVM of the client or servant (with a Java application), while a non-integrated kernel has its own JVM.
**4.2 Non-threaded C++ and Java QuO**
To support the application of QuO to an embedded dynamic mission planning avionics application, described in Section 5.2, we developed a **passive C++ version of QuO**. The existing Java version of QuO was not suitable for the avionics environment for the following reasons:
- **Java** – The embedded avionics environment does not have the spare memory and CPU to host a JVM and the extra Java ORB (Visibroker) needed by the Java QuO implementation.
- **Threading** – The embedded avionics environment has a fixed number of threads and closely controls access to these threads.
- **Overhead** – The overhead introduced by QuO delegates and contract evaluations must be minimized in the avionics environment, because of the lower latencies involved and the need to fit processing within a task's period. We have measured the threaded Java QuO kernel as imposing approximately 3 ms extra processing per method call on a 200 MHz Linux host using JDK 1.1.5, which is likely insignificant for a WAN application, but can be significant in a real-time application.
Accordingly, we implemented a version of the QuO kernel, contracts, and system condition library in C++ on top of the Adaptive Communication Environment (ACE) framework [19]. The QuO kernel, contracts, and system condition objects are all passive. They are implemented simply as function calls that get linked in with the application. Contract evaluation and QuO kernel services (such as system condition monitoring) execute in the thread of the calling process. In the case of in-band contract evaluation, the delegate call, the contract evaluation, and any system condition object processing used to evaluate contract regions execute in the thread of the client (or servant). In the case of out-of-band contract evaluation, the contract evaluation, any system condition object processing, and any triggered adaptation execute in the thread of the observed condition, e.g., a system resource manager or a host load monitor.
This model, where contract evaluations and all the actions spawned by them run in the thread of an existing client or manager, adds slightly different semantics to the QuO infrastructure over the previous prototype. In the previous prototype, system condition objects are defined as returning a value immediately when requested. The work needed to determine the values is done continuously in anticipation of their need. That is, the updating of system condition object values is performed asynchronously with respect to contract evaluation, delegate execution, and client execution. This enables contract evaluation to be bounded because system condition objects can run in other threads.
In the non-threaded model, the system condition object code executes to determine its value only when a contract evaluation accesses it to determine the current operating region. That is, the values of system condition objects are now computed in-band. This is necessary because there are no extra threads for system condition objects to run asynchronously. It still results in predictable contract evaluation time, however, as long as the system condition objects are written to execute within predictable time bounds.
The passive C++ version of QuO reduces the overhead of QuO adaptation through its reduced use of CORBA calls, its use of C++ native types instead of Java objects, and the improved performance of compiled C++ over interpreted Java.
After implementing the passive C++ version of QuO, we used the same approach to rework the original prototype and develop an additional passive Java version of QuO, offering a Java choice to programmers that need strict control over the threads in their applications.
5. Examples of applications using these implementation choices
This section examines three demonstration applications that use QuO to perform adaptive QoS management. The three applications exhibit many of the characteristics of the use-cases described above and motivate the need for the various variants of QuO to support them. Table 2 summarizes the differences and similarities between these applications.
5.1 Case study 1: data dissemination in a wide-area network
This is one of the earliest examples that we developed using the QuO adaptive middleware. Dubbed Bottleneck, it consists of a client requesting still images from a remote data server and adapting to the response time it requires to receive the images. When round-trip image delivery and processing slows, the Bottleneck application examines resource and instrumentation data to determine whether the source of the slowdown is network or CPU degradation.
If the source of the slowdown is the network, the QuO middleware triggers adaptation to attempt to reserve bandwidth, using RSVP [23] or Darwin [4], if either is available. If this is not successful, the QuO middleware triggers application adaptation, in which the application trades off its data quantity or data quality requirements for its timing requirement, by requesting smaller images (lower data quantity) or lower resolution images (lower data quality) to reduce the amount of network traffic.
If the source of the slowdown is the CPU, the application responds by requesting unprocessed images. This reduces the load on the CPU used to process the images and enables them to be received faster, but reduces the quality of the display or analysis of the images, as illustrated in Figure 4.
The threaded Java QuO implementation is the best choice for this application because of its flexibility. The QuO components are separate objects with separate threads, so they can be dynamically configured and modified without relinking the whole application, which can be distributed across many remote hosts. Furthermore, this version of QuO supports more rapid prototyping and easier modification of Bottleneck, since the QuO objects and threads are decoupled from the application objects and threads.
5.2 Case study 2: dynamic mission planning in an avionics platform
As part of a collaborative research effort, we have been using QuO as part of a dynamic mission planning avionics application. The application, illustrated in Figure 5, consists of a command and control (C2) aircraft and a fighter aircraft collaborating during flight to redirect the fighter’s mission parameters. The C2 aircraft sends virtual target folders (VTFs), consisting of image data (as in case study 1), to the fighter aircraft, where they are processed to update the fighter’s mission.
This has aspects of the WAN use-case, in that there is a (wireless) connection between the C2 and the fighter nodes, across which VTF image data is sent. This application uses QuO for in-band and out-of-band adaptation on the fighter side, as illustrated in Figure 6. During VTF image download QuO manages the tradeoffs of timeliness versus image quality. This is accomplished through image compression, image tiling, processor resource management, and network resource management.
When the fighter node requests an image from the C2 node, a QuO delegate breaks the image request into a sequence of smaller tile requests. The number of tiles that the delegate requests is based upon the image size, while the compression level of an individual tile is based upon the deadline for receiving the full image and the expected download time for the tile. During image downloading, a contract monitors the progress of receiving the tiles and influences the compression level of subsequent tiles based upon whether the image is behind schedule, on schedule, or ahead of schedule. The image is tiled from the point of interest first, with the early tiles containing the most important target data, so that decreased content of the later tiles will have minimal impact on the dynamic planning capabilities.
In addition to the in-band adaptation of tiling and compression, QuO provides out-of-band adaptation in conjunction with the processor resource manager and dynamic scheduler components of the system. The processor resource manager selects task event rates from the ranges available for different tasks to optimally utilize the CPU. The contract monitors the progress of the image download through system condition objects interfacing to the network and CPU monitors. If the processing of the image tiles falls behind schedule, the contract prompts the processor resource manager to attempt to adjust the rates to allocate more CPU cycles to the decompression routine. This is in addition to, and orthogonal to, the in-band adaptation to adjust the compression level of the next tile.
If these adaptation attempts are not successful the QuO middleware triggers application adaptation. The application adjusts its timeliness or image quality requirements, by requesting longer deadlines or lower image resolution to reduce the urgency or amount of processing needed. Figure 7 illustrates the regions of the contract and the available adaptation options when the contract indicates that image receipt is early or late.
This application uses the passive C++ version of the QuO middleware for the reasons described in Section 4.2, i.e., the avionics software uses a fixed number of threads, has no JVM, and demands minimal overhead.
5.3 Case study 3: shipboard dissemination of UAV video
As part of an activity for the US Navy, we have been developing a demonstration application utilizing QuO to control the dissemination of Unmanned Air Vehicle (UAV) data throughout a ship. Figure 8 illustrates the initial architecture of the demonstration. It is a three-stage pipeline, with an off-board UAV sending MPEG video to an on-board video distribution process. The off-board UAV is simulated in early prototypes by a process that continually reads an MPEG file and sends it to the distribution process. The video distribution process sends the video frames to video display processes throughout the ship, each with their own mission requirements.
QuO adaptation is used as part of an overall system concept to provide load-invariant performance. The video displays throughout the ship must display the current images observed by the UAV with acceptable fidelity, regardless of the network and host load, in order for the shipboard operators to achieve their missions (e.g., flying the UAV or tracking a target). To accomplish this, system condition objects monitor the frame rate and the host load on the video display hosts. As the frame rate declines and/or the host load exceeds a threshold, they cause region transitions, which trigger the following adaptation:
- The video distribution process is told to drop frames going to the display on the overloaded host.
- The video display on the overloaded host is told to reduce its display frame rate to the rate at which frames are being sent to it.
Simultaneously, system condition objects on the video distribution host are monitoring the host load, the input and output queues, and the frame rate. If the queues fill up or if the host load exceeds a threshold, the contract tells the video distribution process to drop frames to compensate. In this way, the adaptation attempts to maintain the video display processes displaying the current images that the UAV is observing with appropriate fidelity, regardless of the load on the various hosts.
Figure 7: Receipt of VTF images can be either early, on time, or late
The contracts on each host are simultaneously reporting the current contract region and video distribution metrics (e.g., queue lengths, frame rate, and number of dropped frames) to a system resource manager (RM). When QuO recognizes that the load on the video distribution host has become unacceptable, it notifies the RM. The RM then has the option of starting a video distribution process on another, less loaded host, and hooking it up to the UAV video source process and the video display processes. It then kills the processes on the overloaded host.
This application uses the passive C++ version of QuO and exhibits only out-of-band adaptation. The application maintains a data path, across which video frames are sent, and a separate control path, using CORBA IIOP, across which QuO adaptive control is sent. This seemed a better approach than putting a delegate in the path of the video frames, because adaptation decisions do not need to be made between each frame, and doing so could lead to hysteresis rather than the desired load-invariant performance.
6. Issues
Development of the variants of QuO middleware described in this paper raises a number of issues. We discuss a few of these here.
**Choice of QuO implementation.** For ease of prototyping, the original version of QuO is usually still the best choice. It has been used in many example applications, with different languages and different platforms. However, for applications that require a smaller footprint and fewer threads, either the passive Java or C++ version is necessary. Ultimately, our goal is a single design for QuO with an implementation in each language, each of which is configured differently for different use cases.
**Unification of implementations.** The original effort to develop the passive C++ version of QuO began as a porting effort from the original prototype. However, as we improved on the design to take advantage of C++ features, to take advantage of the portable ACE interface, and to improve the performance and footprint, we began to fold some of these improvements back into the original prototype. We are currently working on unifying the designs as much as possible, with the best features of each variant.
**Maintaining and extending QuO.** As long as there are different variants of QuO, the burden of maintaining, testing, and extending QuO will be increased. However, as we unify the variants around a single design, this burden should be reduced. We are also currently developing an extensive regression test suite to help support and ease the maintenance burden.
**Combination and dynamic configuration of QuO.** In theory, applications or components built around one variant of QuO or another should interoperate seamlessly. However, we have yet to develop any use cases or examples to test this. Other ideas to consider are whether the variants described in this paper are separate implementa-
Figure 8: Architecture of the UAV demonstration
as a real-time event service [10] and a hybrid static/dynamic scheduling service [9] for a Real-Time CORBA [16] ORB.
This paper has compared and contrasted the characteristics of the wide-area and embedded applications that have led to the emergence of different variants of the QuO middleware. In addition, we described three specific example applications that use different aspects of the QuO implementations to address their respective requirements and use-cases.
References
Florian Matthes, Christian Neubert, Alexander W. Schneider
Fostering Collaborative and Integrated Enterprise Architecture Modelling
Enterprise Architecture Management (EAM) is a challenging task in modern enterprises. Tools supporting EA management activities are based on extensive models either created by enterprise architects or built-in. However, these models cannot be adapted according to specific data acquisition needs by persons having the architectural knowledge on the instance level. In this article, we describe how Hybrid Wikis empower these information carriers and enterprise architects to collaboratively and incrementally develop and manage a model in a bottom-up fashion by using wiki pages enriched with types and attributes. We visualise these emergent models by using UML-like class diagrams. Our approach is evaluated in a case study in a global operating bank participating in our Wiki4EAM community, a community of German enterprise architects applying Hybrid Wikis in different EA management contexts.
1 Motivation & Problem Statement
The field of Enterprise Architecture (EA) management has evolved constantly since the often-cited publication of Zachman (1987). To date, the application of the subsequently published approaches has not achieved all of the promised benefits in practice. For example, Lucke et al. (2010) report that communication, shared understanding, and insufficient tool support are still critical issues in EA management. Tool support is often provided by means of heavyweight and expensive EA management tools to gather, structure, visualise, and analyse architectural information, such as business processes, applications, and organisational units. In order to obtain a holistic and consistent view of the EA to be managed, it is required to define the concepts existing within an enterprise formally (cf. Bunge 1977). Although foundational ontologies exist for EAM frameworks currently applied in practice, for example ArchiMate (Azevedo et al. 2011) and TOGAF (Gerber et al. 2010), their meta-models need to be tailored (Källgren et al. 2009) to reflect the specific organisational context. In this situation the so-called ivory tower syndrome (Raadt et al. 2008) often emerges. That is, enterprise architects 'invent' an information model intended to be filled with data by the employees of the technical departments (Buckl et al. 2009b). However, these models are often unsuitable for data acquisition as required by the persons having the concrete architectural knowledge, as they are too abstract or too complex. Due to the high amount of data necessary to provide a holistic EA description, many organisations today use specialised EAM tools (Matthes et al. 2008). The actual data input is thereby often performed by many different employees, because the architectural knowledge about single instances is spread over different employees within the company.
Our approach evaluates ways of bringing information carriers having deeper insights into an enterprise's structure within their individual scope and enterprise architects closer together by providing an integrated environment to collaboratively develop and manage both models and instances. It facilitates instance and model co-evolution by using lightweight techniques, such as suggestions. Information carriers are enabled to create and adapt the description of instances according to their specific needs. From these instances a model can be derived in a second step. Additionally, enterprise architects can explicitly define a model. For example, an architect can introduce new elements not yet represented by instances (e.g., introduce a new attribute) or introduce elements frequently used in instances but not yet defined in the enterprise model. Both models (i.e., derived and defined) are presented in a combined view helping enterprise architects to reflect their design decisions regarding the model. This way our approach mitigates the ivory tower syndrome.
The remainder of this article is structured as follows. In Sect. 2 we present a bottom-up approach for the creation of EA management models which uses Hybrid Wikis as concept and tool (Matthes et al. 2011). In order to enable enterprise architects to manage the resulting emergent and collaboratively created model, we introduce a suitable extension to UML class diagrams in Sect. 3. In Sect. 4 we present the findings of a case study with a globally operating bank, in order to demonstrate the feasibility of our approach for bottom-up and collaborative EA model creation. We demarcate our approach from other approaches for EA model creation in Sect. 5. Finally, Sect. 6 summarises this article and Sect. 7 provides a critical reflection and outlook on future research.
2 Approach
To address the challenges introduced in Sect. 1 Matthes et al. (2011) developed the concept of Hybrid Wikis and implemented it. Hybrid Wikis combine traditional wikis with a small set of structuring concepts in order to allow querying like in an object-oriented database. The provided structuring concepts are: attributes, types, attribute definitions, and constraints. These concepts are maintained (created, deleted, changed) by wiki users.
2.1 Structured Wiki Pages
Attributes and type assignments enrich individual wiki pages with structured information. For instance, a page can be typed with business application and provide an attribute status with value planned (cf. Fig. 1). A typed page represents an instance of a type. The data type of an attribute value can either be a literal (Text, Date, Number) or a hyperlink to another (typed) page. An attribute can be multivalued (i.e., an ordered list of values). Attributes can be freely added to and removed from pages independent of the currently assigned types. The same applies to types, that is, a type can be removed from a page without changing the currently assigned attributes. This means that types and attributes are basically independent from each other.
Wiki pages structured with attributes and type assignments represent the instances.
2.2 Underlying Model and its Validation
Types, attribute definitions, and constraints are elements representing the model of all instances. By means of attribute definitions attribute keys (e.g., status in Fig. 1) can be bound to a type. Constraints belong to an attribute definition and allow the specification of value ranges for attributes. A constraint, for example, can require that the values of an attribute (e.g., responsible unit) are links to pages having a specific type (e.g., organisational unit). A page’s attributes (and their values) are validated by means of a constraint if the constraint’s attribute definition is related to the respective attribute. That is, if their keys are matching (e.g., both have key responsible unit) and the page uses the same type the attribute definition belongs to. Constraint violations are indicated to the user on wiki page level by showing a validation message (cf. requester Fig. 1). However, users are never forced to enter valid values when changing an invalid page. Therefore constraints are referred to as ‘soft’ in Hybrid Wikis.
Additionally, types and attribute definitions can be declared as strict. Thus, changes to a page can only be stored if all attributes of this page are valid. However, in Hybrid Wikis everybody allowed to modify wiki pages (with attributes and type assignments) can also change types, attribute definitions, and constraints. Therefore, when using strict types (or attribute definitions) users are rather urged to enter valid values only, but never forced.
In Hybrid Wikis instances and their model can diverge, even when using strict constraints. This is due to the fact that constraints can be defined even if violating pages result from this definition. But due to the loose coupling of both they can be changed independently without being restricted by each other. This way instance and model evolution is facilitated.
In Matthes et al. (2011) and Neubert (2012) all modelling elements of Hybrid Wikis, as shown in Fig. 2, are explained in detail. Additionally, it is explained how these concepts are implemented on top of an existing wiki system in terms of, for example, algorithms, performance, system architecture, and technology. An important fact is that attributes are related directly to wiki pages, without the indirection of a type. Nevertheless, a type can be used to group attributes via attribute definitions. This way of attribute modelling enables users to assign attributes without thinking about types, while modellers can still use familiar modelling concepts.
2.3 Instance and Model Coherence and Evolution
With the possibility to assign attributes to content without using a type in parallel to the creation of models instances can differ from the model. In order to counteract a potential divergence of a model and its instances Hybrid Wikis provide the following mechanisms:
- Suggestions (e.g., of attributes)
- Search for violations of constraints
- Consolidation (e.g., of types)
2.3.1 Suggestions
On the instance level Hybrid Wikis provide attribute and type suggestions. Suggestions are calculated based on a page's content and its currently used attributes and types. For instance, if a page is typed business application, an attribute with key status is suggested if another page exists that is also typed business application and additionally has an attribute status; a key is likewise suggested if the type of a typed page has a corresponding attribute definition.
On the model level, for example, a constraint is suggested if a certain number of pages (i.e., according to a specific threshold) uses similar attribute values. For example, if two pages typed with business application provide an attribute responsible unit and both have link values to pages typed with organisational unit a corresponding constraint is suggested for the attribute definition responsible unit, that is, restricting attribute values to be hyperlinks to pages of type organisational unit.
Furthermore, all input fields provide autocompletion support in order to guide users towards a consistent usage of terms and model elements. For instance, already used keys are suggested when the user enters an attribute key. Likewise, literals and links are suggested when entering an attribute value.
Suggestions are lightweight means to encourage users to provide structured content without forcing them. Moreover, they help to develop a coherent model when a bottom-up approach is used.
2.3.2 Inconsistency Detection
In Hybrid Wikis structured pages potentially become invalid without modifying the page, for example, by specifying a constraint. Therefore, users are supported in finding inconsistencies. Users can search for wiki pages
- containing any invalid attributes and
- having specific invalid attributes (e.g., searching for pages having an attribute requester with invalid values).
Based on these queries users can design their own dashboard in order to be aware of constraint violations.
2.3.3 Consolidation techniques
Once inconsistencies are detected, users can harmonise instances by means of the model.
2.4 Integrated Model Management in Hybrid Wikis
In Hybrid Wikis models emerge from structured wiki pages and hyperlinks between them (i.e., from the actual instances). Model evolution is facilitated by suggesting attribute definitions and constraints based on an analysis of the instances. Evolution on the instance level is facilitated by providing attribute suggestions based on the underlying model. Additionally, the model can be used to harmonise (i.e., consolidate) deviations of the instances, because instances can be constrained by the model. This way, users are urged to follow the schema, but they are never forced. Figure 3 depicts Hybrid Wikis' instance level and model level and sketches their interplay. Additionally, two participation roles are distinguished (Neubert 2012). Authors collaborate on the instance level by managing wiki pages with attributes and types, while tailors collaborate on the model level by managing attribute definitions and constraints.
2.5 Using Hybrid Wikis for EA Management
Hybrid Wikis provide means to collaboratively manage and model information for different purposes in different application domains, such as notes and publications for (personal) information management within a university's chair. In the context of EA management, the idea is to start with existing unstructured information sources captured as wiki pages (e.g., derived from Office documents) and then to incrementally and collaboratively structure these pages with attributes and types as needed for enterprise architecture modelling. In Hybrid Wikis persons providing instance data (i.e., the authors, such as persons from the technical departments having the architectural knowledge) are never prevented from data entry by hard schema constraints, and structures (i.e., types and attributes) can always be flexibly adapted as needed. This way, the 'true' model emerges bottom-up, derived from user-managed, structured wiki pages (i.e., from the instances). Once a core model has emerged, enterprise architects (i.e., the tailors) can make this core explicit (e.g., by means of attribute definitions) and optionally define additional integrity constraints. However, the degree of rigidity can be softened by all users, if needed.
In this article we focus on how to derive EA models from collaboratively created and structured wiki pages and how these models can be visualised. In Buckl et al. (2009a) we discuss further advantages of using wikis for EA management (e.g., versioning, awareness).
3 Visualising Emergent Meta-Models
When authors model collaboratively with Hybrid Wikis, modelling usually occurs implicitly on the instance level. Thus, the model itself is not in the primary focus of the participating authors, and a visualisation of it is not available upfront. In the following, the need for such a visualisation, as well as the techniques used, is described. In addition, an explicit model can be defined by tailors, which also needs to be visualised. Finally, a combination of both is presented.
3.1 The Need for Visualisation
When authors create new instances or adapt the model implicitly during their daily work, they might be interested in its structure. This is particularly the case if a type has many attributes linking to different other types. In such a case, the current way of displaying attributes and their values as a table might not give an adequate overview of a type and its relations to other types. In Hybrid Wikis the authors modelling on the instance level can only see directly associated instances. Hence, they are not able to see the structure or relationships of the associated types. With a UML class diagram visualising the underlying model, this drawback of data-driven modelling can be mitigated, as it provides a navigational aid. In addition, the visualisation of the model can be used to discover the terminology used on the instance level. This includes, among others, attribute names for new types or similar attributes in different types.
The second target group for the visualisation are enterprise architects who transfer control over the enterprise’s model to authors providing data. If doing so, they need a familiar overview about the evolving model to be able to intervene if necessary. For example, if different semantically equivalent terms are used within the model enterprise architects can use the visualisation to discover them. Beside the consolidation of types and attributes enterprise architects have the ability to adapt the model by creating constraints or using the ‘strict’ property (cf. Sect. 2.2). Attributes declared as strict urge authors to provide a specific model element. Thus, enterprise architects can rather control the model evolution towards a specific target state.
By the use of a UML-like visualisation (cf. Fig. 7), which presents the model defined by the enterprise architects beside the actual instances filled in by the authors, the enterprise architect is also able to reflect on her decisions forming the enterprise model. With such a visualisation at hand, it furthermore becomes easy to discover unused types and attributes, model parts which are regularly contravened, as well as ‘strict’ properties which are not obeyed.
Extending Hybrid Wikis with the capability to visualise the data-driven model in parallel to the explicit model eases its usage for both authors and tailors.
3.2 Mapping Hybrid Wikis’ Instances to UML Object Diagrams
Each instance in Hybrid Wikis can be modelled as a single object in a UML object diagram. Thereby, the page’s type is used as the UML object’s type since it classifies a set of instances having a similar structure. Because there is no obligation to assign a type to each instance, untyped instances will be neglected within the resulting UML diagram.
The attributes within Hybrid Wikis can directly be mapped to UML’s attributes. Therefore, the attribute’s name becomes the UML attribute’s name.
Within Hybrid Wikis, hyperlinks are the primary means to establish a relationship between two instances. They can either be used in the content section of a wiki page or as an attribute value. As the content part is completely neglected during model derivation, only links used as attribute values are analysed. If the linked instance has a type assigned, a UML association will be derived and the attribute’s name will be used as role name. Currently, there is no possibility to define bidirectional links in Hybrid Wikis. Therefore, only independent directed links can be established on the instance level. As a consequence, actually bidirectional links are represented as two independent directed associations in the derived UML diagram.
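The mapping just described can be sketched in a few lines of Python. The `WikiPage` and `Link` data structures and the function name are illustrative assumptions, not part of the Hybrid Wikis implementation; the sketch only captures the rules stated above (untyped pages are skipped, links used as attribute values become directed associations, and the attribute name becomes the role name).

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple

@dataclass
class Link:
    """An attribute value linking to another wiki page."""
    target: "WikiPage"

@dataclass
class WikiPage:
    """A Hybrid Wiki page: a name, an optional type, and attributes."""
    name: str
    type: Optional[str] = None
    attributes: Dict[str, Any] = field(default_factory=dict)

def derive_object_diagram(pages: List[WikiPage]):
    """Derive UML objects and directed associations from typed pages.

    Untyped pages are neglected, and only links used as attribute
    values (not links in the free-text content) yield associations.
    """
    objects: List[Tuple[str, str]] = []
    associations: List[Tuple[str, str, str]] = []
    for page in pages:
        if page.type is None:            # no obligation to assign a type
            continue
        objects.append((page.name, page.type))
        for attr, value in page.attributes.items():
            if isinstance(value, Link) and value.target.type is not None:
                # the attribute's name becomes the association's role name
                associations.append((page.name, attr, value.target.name))
    return objects, associations
```

An actually bidirectional relationship would appear here as two independent entries in `associations`, mirroring the limitation described above.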
Figure 4 shows an exemplary object diagram. Therein, wiki pages of type project are shown with their respective attributes and associations.
As also visualised in Fig. 4, attributes are not associated with the type of an instance, with the result that various instances of the same type can have different attributes. Consequently, a respective class diagram has to be abstract enough to account for that kind of diversity.
3.3 Mapping Hybrid Wikis’ Modelling Elements to UML Class Diagrams
Compared to an object diagram, a UML class diagram is better suited to provide a holistic overview of the structure of types used for Hybrid Wikis’ instances. The already described object diagram forms the basis for such a class diagram: classes with their attributes and associations can be derived from it directly. By contrast, an attribute’s data type has to be derived from the various instances. Because an attribute’s values can have different data types, the model derivation needs to account for inconsistencies. As a preliminary solution, the data type with the highest relative frequency will be used if attribute value types differ between instances.
3.4 Explicit Model Visualisation
In parallel to editing instances, Hybrid Wikis also support explicit modelling through the specification of attribute definitions and constraints. Always bound to a specific type, attribute definitions can be used to explicitly describe and refine type-specific attributes. By the use of constraints, the values of an attribute can be defined in more detail. For example, the number of values can be set to 0..1, 1, 1..*, or *. An attribute’s data type can be restricted to Text, Number, Date, or Link, whereas for data type Link one can also specify the concrete type the linked instance has to be assigned to. When defining a constraint for a specific attribute, the constraint can optionally be marked as ‘strict’. In that case, the constraint will be enforced for all new instances while existing instances will not be changed. Because UML does not account for inconsistencies, the ‘strict’ property cannot be visualised in standard UML class diagrams. Using the same types already shown in Fig. 5, Fig. 6 shows an exemplary class diagram visualising explicitly modelled types.
3.5 Instance and Explicit Model Co-Visualisation
Since the explicit way of modelling is not bound to any instances, it can of course lead to types without any instances. In that case, the respective UML classes are still depicted.
The resulting visualisation is presented in an exemplary UML-like class diagram derived from Hybrid Wiki data used within the domain of project management (cf. Fig. 7). In fact, the previously described class diagram visualising the instance level and the class diagram visualising the explicit model level are thereby combined.

3.5.1 Instance versus Model Level Modelling
As previously explained, in Hybrid Wikis modelling takes place on two different levels: instance level and model level (cf. Sect. 2). Because UML class diagrams are not intended to distinguish between these two levels, an appropriate mechanism has to be introduced to express the difference of modelling levels.
First, the number of instances is an important characteristic of emergent models. Thus, it has to be reflected within the derived UML-like class diagrams for all elements including classes, attributes, and relationships. This can be achieved by inserting the concrete number after the respective element’s name aligned to the right. Thereby, it can be easily distinguished from the UML instance number constraints which use brackets. For example, in Fig. 7 there are two instances of type `project` but only for one instance a `budget` is provided.
Second, within emergent models derived from instance data, some information about the underlying model might be missing because users are not forced to provide it. As a UML class diagram might require information of that kind, it has to be derived from the available implicit instance data. For example, if all instances of a specific type have an attribute whose value links to exactly one instance of another type, a multiplicity of 1 will be derived for the association. Because the tailors have not yet decided on an upper bound for the number of instances, this fact has to be expressed in the resulting diagram. A possible way to do so is the use of colours to express the difference between information provided explicitly by tailors (on the model level) and information derived from the instances. In Fig. 7, the attribute `acronym` is mostly of data type Text, but there might also be instances with an attribute `acronym` having another data type. Text is shown in the UML-like class diagram since it has the highest relative frequency.
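The multiplicity derivation just described might be sketched as follows. The helper and its input are assumptions for illustration: it receives, per instance of the source type, the number of target instances its attribute links to, and renders anything above one as `*` because no upper bound has been decided by the tailors.

```python
def derive_multiplicity(link_counts):
    """Infer a UML multiplicity for one association from instance data.

    link_counts holds, per instance of the source type, how many target
    instances its attribute links to. Since tailors have not decided on
    an upper bound, any count above one is rendered as '*'.
    """
    lo, hi = min(link_counts), max(link_counts)
    lower = "0" if lo == 0 else "1"
    upper = "1" if hi <= 1 else "*"
    return lower if lower == upper else f"{lower}..{upper}"
```

For example, if every instance links to exactly one target the derived multiplicity collapses to `1`, matching the example in the text.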
Third, the distinction between the instance level and model level in general has to be expressed. This includes especially attribute definitions and constraints. Therefore, elements explicitly defined on the model level are highlighted in green and positioned above a dashed line (derived from the explicit class diagram, Fig. 6). Using different colours according to the level of modelling directly shows the diagram reader how the respective elements emerged. For example, in Fig. 7 the attributes `start`, `end`, and `status` of type `project` are explicitly defined in the model by the use of attribute definitions and consequently marked green. Especially for associations between two types instance and model data is displayed in parallel if available. For example, the association between the types `project` and `customer` depicted in Fig. 7 shows, that there is a multiplicity constraint of `[0..*]` explicitly defined in the model but on the instance level each instance of type `customer` has exactly two `projects` assigned.
Within the current visualisation, some facts of the instance model are overlaid by the explicit model. If a multiplicity of `*` is defined in the explicit model for an association, the fact that each instance actually refers to exactly one object will be hidden. In addition, for attributes having at least one instance using a hyperlink as value, a UML association is derived even if other instances use other data types like Text. In general, the data type derivation algorithm works as follows: if an attribute definition is present, it will be shown accordingly. Otherwise, the data type used most often among the respective instances will be used. If none of these rules returns a single data type, the following sequence of types will be used: Link, Date, Number, Text. This sequence ranks hyperlinks highest because we consider knowledge about relationships to be more important than literal attributes. The other data types are ranked according to their specificity.
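The derivation rules above can be captured in a short sketch; the function name and the string type labels are illustrative assumptions. An explicit attribute definition wins, otherwise the most frequent instance type, and frequency ties are broken by the precedence sequence Link, Date, Number, Text.

```python
from collections import Counter

# Precedence used when frequencies tie: relationships (Link) rank
# highest, the remaining types are ordered by specificity.
PRECEDENCE = ["Link", "Date", "Number", "Text"]

def derive_data_type(value_types, attribute_definition=None):
    """Pick the data type shown for an attribute in the class diagram.

    1. An explicit attribute definition wins.
    2. Otherwise the type used most often among the instances.
    3. Ties are broken by the precedence sequence above.
    """
    if attribute_definition is not None:
        return attribute_definition
    counts = Counter(value_types)
    best = max(counts.values())
    candidates = [t for t, n in counts.items() if n == best]
    return min(candidates, key=PRECEDENCE.index)
```

With this rule, an attribute whose values are half links and half text would be shown as Link, consistent with ranking relationship knowledge highest.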
3.5.2 Model Inconsistencies
The decoupling of the instances from the explicit model in Hybrid Wikis allows for inconsistent instances which do not comply with the constraints defined by the model. Because UML class diagrams disregard possible inconsistencies, the expression of that characteristic also requires a UML extension. Again, the concrete number of non-compliant instances is displayed aligned to the right. To distinguish this number from the actual number of instances, it is put into parentheses. Furthermore, using two different colours, the following two cases of non-compliance can be distinguished:
- The Instance violates a Constraint
- The Instance violates a ‘strict’ Constraint
As strict constraints force users to comply for all newly added instances, this kind of violation might be worse than violating a standard constraint. By using the colours red and yellow, this distinction is visualised. For example, the attribute `status` of type `project` displayed in Fig. 7 has a ‘strict’ constraint (visualised by an exclamation mark) that its value has to be a Date. As indicated by the red number, one instance is in violation of that constraint, having a `status` attribute of another data type. If the ‘strict’ property is not set for a specific constraint, a yellow number indicates the number of violations.
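The counting and colouring described in this subsection could be sketched as follows. The dictionary-based constraint representation, the predicate, and the function name are assumptions for illustration only; the point is the mapping of ‘strict’ violations to red and ordinary violations to yellow.

```python
def count_violations(instances, constraint):
    """Count instances whose attribute value violates a constraint.

    Returns (count, colour): red numbers flag violations of 'strict'
    constraints, yellow numbers flag violations of ordinary ones.
    constraint is a dict like {'attribute': ..., 'check': <predicate>,
    'strict': bool}.
    """
    attr, check = constraint["attribute"], constraint["check"]
    violations = sum(
        1 for inst in instances
        if attr in inst and not check(inst[attr])
    )
    colour = "red" if constraint["strict"] else "yellow"
    return violations, colour
```

In the diagram, the returned count would be rendered in parentheses next to the element, in the returned colour.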
4 Application and Evaluation
To validate our approach, in December 2010 at Technische Universität München we established a community, namely Wiki4EAM (Matthes and Neubert 2011), of experienced enterprise architects from 25 large German organisations in order to pursue a lightweight, wiki-based approach to EA management (Buckl et al. 2009a). In the following we introduce the experiences gained in applying Hybrid Wikis in enterprises participating in our Wiki4EAM community. These experiences are described in detail in Neubert (2012).
In the period from December 2010 to March 2012 we conducted seven workshops together with the community members. In the first workshop the participants were introduced to the main concepts underlying Hybrid Wikis by presenting some slides. Additionally, we used a projector to demonstrate the core system functions (e.g., structuring of wiki pages, creating visualisations) by using a small EA management example scenario from the banking industry. After the workshops our software was made available to the members of the community. Some used the system hosted at Technische Universität München, some downloaded it and installed it locally. In the subsequent workshops, members of the community presented their developed wiki-based EA management solutions. Based on the experience gained with the use of Hybrid Wikis, new requirements were collected and discussed. This way, we constantly adapted and improved our solution according to the feedback of our industry partners.
4.1 Survey
In the 6th workshop in December 2011, we asked the community members in a paper-based survey to provide the main reasons why they are using Hybrid Wikis in their enterprises. The participating members stated (extract, most frequent answers) that
• the model is flexibly adaptable (i.e., can be created incrementally and does not need to be fixed in advance) (6 of 7) and
• Hybrid Wikis are easy to use (i.e., provide a high level of usability and a clean user interface) (4 of 7).
Since only seven members attended the community meeting in December 2011, these survey results do not represent a strong, well-founded evaluation. However, the answers suggest that the adaptability of structures is the main reason for applying Hybrid Wikis in EA management scenarios. Furthermore, the results indicate that business users understand the concepts underlying Hybrid Wikis, since they adapt structures without additional IT support, facilitated by a web interface that participants consider easy to use.
4.2 Case Study
In the following we briefly introduce one selected case study from a globally operating bank participating in the Wiki4EAM community. This bank is referred to as Bank A subsequently.
4.2.1 Starting Point
In December 2010, Bank A started participating in the community to deepen its knowledge about EA management. In particular, one enterprise architect of a unit concerned with infrastructure architecture management evaluated whether a previously created landscape of the unit’s infrastructure can be represented by the concepts provided by Hybrid Wikis. That is, the architect evaluated whether it is possible to build a model of the infrastructure landscape consisting of types, attributes, attribute definitions, and constraints. In an additional\(^1\) two-hour personal lesson with the architect of the infrastructure unit, the core concepts (e.g., wikis, wiki pages, types, attributes, constraints) were explained in detail and technical questions about some system functions were clarified (e.g., how to define queries, how to embed a query in a wiki page in order to create a visualisation). After this additional lesson, Bank A accessed Hybrid Wikis through a system hosted at TU München. The system included some EA management demo data (e.g., business applications, organisational units) and some exemplary visualisations (e.g., a cluster map showing the relation between organisational units and business applications).
4.2.2 Model
In the period from December 2010 to December 2011, Bank A prototypically modelled the architectural elements of the infrastructure unit in a wiki separated from the wiki with the demo data. The resulting model representing the infrastructure architecture is depicted in Fig. 8.
The concepts of this model represent infrastructure elements on a certain level of detail. The model consists of four levels. Each level represents a different level of granularity in the IT infrastructure stack. For instance, an Infrastructure Component (IC) (level 4) can either be software or hardware. Likewise, an area (level 3) represents the logical unit for a set of ICs (e.g., a set of software components). Bank A incrementally developed this model by using wiki pages structured with attributes and types. The wiki’s home page serves as a dashboard. The dashboard shows the relations between some types (e.g., between areas and infrastructure components) either as custom embedded table views or as graphical visualisations (e.g., as a cluster map). Both kinds of customised views answer specific EA management questions. For example, a cluster map shown on this dashboard indicates which ICs are used in which areas, or obsolete ICs are indicated in a (custom) tabular view by filtering for pages (typed with IC) having a colour attribute with value red.
\(^1\)In addition to the demonstration and visualisation during the initial workshop.
4.2.3 Lessons Learned
In April 2011 in a Wiki4EAM community workshop, Bank A summarised the experiences gained with Hybrid Wikis (extract):
- Absolute beginners are not able to start without any introduction, making a short lesson (about two hours) or a user manual mandatory.
- Creating and adapting a model is possible without any further help from IT experts.
- Starting with instances is simpler than creating a holistic model first.
- Having a previously developed model in mind is helpful since it provides at least a basic frame for structuring.
- Gradually adding new attributes to wiki pages works well and creates beneficial structures.
These experiences show that Hybrid Wikis are applicable in the domain of enterprise architecture management. They empowered enterprise architects to develop a model in a data-driven incremental way. Furthermore, the adaptability of the model helped to shape the model according to changing information needs in order to answer typical EA management questions. In the introduced case, it was helpful to have a vague idea of the target model and some demo data at hand.
5 Related Work
In current literature different approaches for enterprise modelling exist. In order to distinguish the approach presented in this article from existing ones prominent representatives of different streams will be described in this section.
5.1 MoKi Wiki
MoKi (Modelling wiKi) is a wiki for the purpose of enterprise modelling (Casagni et al. 2011; Ghidini et al. 2009) built on the basis of Semantic MediaWiki. Its objective is to foster collaborative modelling of the constituents of an enterprise, in particular domain objects, business processes, and competencies. Therefore, templates and forms are used to spare users from having to learn a special syntax. By contrast, Hybrid Wikis do not focus on the modelling part; models emerge implicitly while users are documenting knowledge. Hybrid Wikis are also not tied to a specific domain such as enterprise modelling (Matthes et al. 2011).
5.2 DBpedia
The goal of DBpedia is to extract structured information from Wikipedia’s content and to provide it publicly on the Web (Kobilarov et al. 2009; Lehmann et al. 2009). Thereby, a syntactic analysis examines the markup of Wikipedia pages using pattern matching techniques in order to extract RDF statements. More details about the extraction algorithm are described in Fensel et al. (2011). In Hybrid Wikis, relationships between different pieces of content (wiki pages) can only be created explicitly in the structured part of each page. So far, the unstructured part is not analysed to derive the underlying model.
5.3 EA Management Tools
As outlined in the EA Management Tool Survey 2008 (Matthes et al. 2008), most EA management tools provide a rigid model. Often, such tools also include their own methodology of EA management and are not built to be adapted easily by the customer. Nevertheless, changes of class and attribute names as well as the introduction of new concepts are possible in some tools. In contrast to Hybrid Wikis, such changes cannot be implemented directly by the user but require the interaction of an administrator with special rights. Therefore, changes to a model can be implemented faster in Hybrid Wikis, albeit at the loss of a central governance instance.
5.4 Model and Meta-Model Co-Creation
The parallel evolution of model and meta-model is subject to research, for example, focusing on model-to-model transformations (Cicchetti et al. 2008; Ruscio et al. 2011). Such approaches analyse possibilities of propagating changes made to a meta-model to the model and vice versa. By contrast, Hybrid Wikis do not apply changes made to one model to the other. Instead, Hybrid Wikis just highlight possible mismatches of model and meta-model. Furthermore, both layers of models can now be visualised together in order to show implicit and explicit information structures in parallel.
5.5 Visualisation of Models
The visualisation of data models has a long tradition. In order to visualise a model, it has to be present in some kind of standardised format. For instance, a widely used meta-model to describe models, named Ecore, has been developed by the Eclipse community (Steinberg et al. 2004). Because Ecore can be used to describe models as well as meta-models (since they are also models), both can be visualised by many different tools. But the goal of Hybrid Wikis is to visualise both models in a single diagram. To the authors’ knowledge, there is neither a standardised format nor a tool available to create a visualisation depicting both instances and their underlying model.
6 Conclusion
In this article we discussed how the concept of Hybrid Wikis fosters the collaborative and integrated (meta-) model development in the context of EA management. In Sect. 2, we briefly introduced the modelling concepts of Hybrid Wikis and explained how their provided mechanisms facilitate (meta-) model evolution. Subsequently, in Sect. 3, we presented a new visualisation of emergent models inspired by UML class diagrams and enriched by additional information from the instance level. To validate our approach, in Sect. 4, we presented the results of a survey conducted among members of our Wiki4EAM community who applied Hybrid Wikis within their organisations. Additionally, we presented a case study in a globally operating bank using Hybrid Wikis for infrastructure modelling. In Sect. 5, we compared Hybrid Wikis with other modelling approaches described in the scientific literature.
7 Discussion and Future Research
Although Hybrid Wikis seem to be useful for enterprise modelling, their usage is subject to some preconditions. If used for EA management, the implementing organisation has to shift towards an open corporate culture. Enterprise architects transfer part of their control over the enterprise’s model to employees responsible for data input. Since the presented collaborative enterprise modelling approach depends heavily on the active contribution of the employees having the architectural knowledge, data input sometimes has to be motivated, for example, by architects leading by good example. Another lesson learned during the evaluation was that starting modelling from scratch is difficult. Therefore, Hybrid Wikis need to provide the ability to import existing modelling solutions and additionally to adapt them. Another threat of using Hybrid Wikis for enterprise modelling are subsequent alterations of versions, also known as edit wars, cf. Viégas et al. (2004). The presented novel approach to visualise instances in parallel to their explicitly defined model still has some limitations. Possibly, mutually directed associations are used instead of bidirectional associations. Moreover, visualisation readers might be misled by an attribute data type derived from heterogeneous instance data and the number of its instantiations. If different data types are used for a single attribute, the actual visualisation suggests that all instances have the same data type.
In addition, future research should a) examine if Hybrid Wikis can serve as model repository to exchange and merge models from different EAM scenarios, b) investigate possibilities to support behaviour and views based on these emergent, adaptive models, and c) analyse and compare models of many other Wiki4EAM community members, for example, to identify modelling patterns.
References
on Enterprise Information Systems (ICEIS), pp. 54–64
Florian Matthes, Christian Neubert, Alexander W. Schneider
Chair for Software Engineering for Business Information Systems (sebis), Institute for Informatics, Technische Universität München
{matthes | christian.neubert | alexander.schneider}@tum.de
mlogit postestimation — Postestimation tools for mlogit
Postestimation commands · predict · margins · Remarks and examples · Reference · Also see
## Postestimation commands
The following postestimation commands are available after `mlogit`:
<table>
<thead>
<tr>
<th>Command</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>contrast</td>
<td>contrasts and ANOVA-style joint tests of estimates</td>
</tr>
<tr>
<td>estat ic</td>
<td>Akaike’s and Schwarz’s Bayesian information criteria (AIC and BIC)</td>
</tr>
<tr>
<td>estat summarize</td>
<td>summary statistics for the estimation sample</td>
</tr>
<tr>
<td>estat vce</td>
<td>variance–covariance matrix of the estimators (VCE)</td>
</tr>
<tr>
<td>estat (svy)</td>
<td>postestimation statistics for survey data</td>
</tr>
<tr>
<td>estimates</td>
<td>cataloging estimation results</td>
</tr>
<tr>
<td>*forecast</td>
<td>dynamic forecasts and simulations</td>
</tr>
<tr>
<td>*hausman</td>
<td>Hausman’s specification test</td>
</tr>
<tr>
<td>lincom</td>
<td>point estimates, standard errors, testing, and inference for linear combinations of coefficients</td>
</tr>
<tr>
<td>*lrtest</td>
<td>likelihood-ratio test</td>
</tr>
<tr>
<td>margins</td>
<td>marginal means, predictive margins, marginal effects, and average marginal effects</td>
</tr>
<tr>
<td>marginsplot</td>
<td>graph the results from margins (profile plots, interaction plots, etc.)</td>
</tr>
<tr>
<td>nlcom</td>
<td>point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients</td>
</tr>
<tr>
<td>predict</td>
<td>predictions, residuals, influence statistics, and other diagnostic measures</td>
</tr>
<tr>
<td>predictnl</td>
<td>point estimates, standard errors, testing, and inference for generalized predictions</td>
</tr>
<tr>
<td>pwcompare</td>
<td>pairwise comparisons of estimates</td>
</tr>
<tr>
<td>suest</td>
<td>seemingly unrelated estimation</td>
</tr>
<tr>
<td>test</td>
<td>Wald tests of simple and composite linear hypotheses</td>
</tr>
<tr>
<td>testnl</td>
<td>Wald tests of nonlinear hypotheses</td>
</tr>
</tbody>
</table>
* forecast, hausman, and lrtest are not appropriate with svy estimation results. forecast is also not appropriate with mi estimation results.
**predict**

**Description for predict**

predict creates a new variable containing predictions such as probabilities, linear predictions, and standard errors.

**Menu for predict**

Statistics > Postestimation

**Syntax for predict**
```
predict [type] {stub*|newvar|newvarlist} [if] [in] [, statistic outcome(outcome)]
```
<table>
<thead>
<tr>
<th>statistic</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Main</td>
<td></td>
</tr>
<tr>
<td>pr</td>
<td>predicted probabilities; the default</td>
</tr>
<tr>
<td>xb</td>
<td>linear prediction</td>
</tr>
<tr>
<td>stdp</td>
<td>standard error of the linear prediction</td>
</tr>
<tr>
<td>stddp</td>
<td>standard error of the difference in two linear predictions</td>
</tr>
<tr>
<td>scores</td>
<td>equation-level scores</td>
</tr>
</tbody>
</table>
You specify one or \( k \) new variables with `pr`, where \( k \) is the number of outcomes. If you specify one new variable and you do not specify `outcome()`, then `outcome(#1)` is assumed.
You specify one new variable with `xb`, `stdp`, and `stddp`. If you do not specify `outcome()`, then `outcome(#1)` is assumed. You must specify `outcome()` with the `stddp` option.
These statistics are available both in and out of sample; type `predict ... if e(sample) ...` if wanted only for the estimation sample.
**Options for predict**
Main
pr, the default, computes the predicted probabilities for all outcomes or for a specific outcome. To compute probabilities for all outcomes, you specify \( k \) new variables, where \( k \) is the number of categories of the dependent variable. Alternatively, you can specify `stub*`, in which case `pr` will store predicted probabilities in variables `stub1, stub2, ..., stubk`. To compute the probability for a specific outcome, you specify one new variable and, optionally, the outcome value in option `outcome()`; if you omit `outcome()`, the first outcome value, `outcome(#1)`, is assumed.
Say that you fit a model by typing `estimation_cmd y x1 x2`, and `y` takes on four values. Then, you could type `predict p1 p2 p3 p4` to obtain all four predicted probabilities; alternatively, you could type `predict p*` to generate the four predicted probabilities. To compute specific probabilities one at a time, you can type `predict p1, outcome(#1)` (or simply `predict p1`), `predict p2, outcome(#2)`, and so on. See option `outcome()` for other ways to refer to outcome values.
xb calculates the linear prediction. You must also specify the `outcome(outcome)` option.
stdp calculates the standard error of the linear prediction. You must also specify the `outcome(outcome)` option.
stddp calculates the standard error of the difference in two linear predictions. You must specify the `outcome(outcome)` option, and here you specify the two particular outcomes of interest inside the parentheses, for example, `predict sed, stddp outcome(1,3)`.

`outcome(outcome)` specifies for which outcome the predicted probabilities are to be calculated. `outcome()` should contain either one value of the dependent variable or one of #1, #2, ..., with #1 meaning the first category of the dependent variable, #2 meaning the second category, etc. `outcome()` is not allowed with `scores`.
`scores` calculates equation-level score variables. The number of score variables created will be one less than the number of outcomes in the model. If the number of outcomes in the model were $k$, then
- the first new variable will contain $\partial \ln L / \partial (x_j \beta_1)$;
- the second new variable will contain $\partial \ln L / \partial (x_j \beta_2)$;
- and so on, through the $(k-1)$th new variable, which will contain $\partial \ln L / \partial (x_j \beta_{k-1})$.
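For the multinomial logit log likelihood, each equation-level score has the familiar residual form $1\{y_j = k\} - p_{jk}$ for the non-base equations. A minimal Python sketch of that formula (an illustration, not Stata's implementation):

```python
import math

def mlogit_scores(xb, y):
    """Equation-level scores for one observation of a multinomial logit.

    xb: linear predictions with the base outcome's 0.0 first.
    y:  observed outcome (0-based index; 0 is the base outcome).
    Returns k-1 scores, one per non-base equation: 1{y == k} - p_k.
    """
    exps = [math.exp(v) for v in xb]
    total = sum(exps)
    probs = [e / total for e in exps]
    return [(1.0 if y == k else 0.0) - probs[k] for k in range(1, len(xb))]

# With all indexes equal, each of 3 outcomes has probability 1/3,
# so the scores for y = 1 are 1 - 1/3 and 0 - 1/3
print(mlogit_scores([0.0, 0.0, 0.0], 1))
```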
**margins**
**Description for margins**
`margins` estimates margins of response for probabilities and linear predictions.
**Menu for margins**
Statistics > Postestimation
**Syntax for margins**
```
margins [marginlist] [ , options ]
margins [marginlist], predict(statistic ...) [ predict(statistic ...) ... ] [ options ]
```
<table>
<thead>
<tr>
<th>statistic</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>default</td>
<td>probabilities for each outcome</td>
</tr>
<tr>
<td>pr</td>
<td>probability for a specified outcome</td>
</tr>
<tr>
<td>xb</td>
<td>linear prediction for a specified outcome</td>
</tr>
<tr>
<td>stdp</td>
<td>not allowed with <code>margins</code></td>
</tr>
<tr>
<td>stddp</td>
<td>not allowed with <code>margins</code></td>
</tr>
</tbody>
</table>
`pr` and `xb` default to the first outcome.
Statistics not allowed with `margins` are functions of stochastic quantities other than $e(b)$.
For the full syntax, see [R] `margins`.
Remarks and examples
Remarks are presented under the following headings:
- Obtaining predicted values
- Calculating marginal effects
- Testing hypotheses about coefficients
Obtaining predicted values
Example 1: Obtaining predicted probabilities
After estimation, we can use `predict` to obtain predicted probabilities, index values, and standard errors of the index, or differences in the index. For instance, in example 4 of [R] `mlogit`, we fit a model of insurance choice on various characteristics. We can obtain the predicted probabilities for outcome 1 by typing
```
. use https://www.stata-press.com/data/r16/sysdsn1
(Health insurance data)
. mlogit insure age i.male i.nonwhite i.site
(output omitted)
. predict p1 if e(sample), outcome(1)
(option pr assumed; predicted probability)
(29 missing values generated)
. summarize p1
(output omitted)
```
We added the `i.` prefix to the `male`, `nonwhite`, and `site` variables to explicitly identify them as factor variables. That makes no difference in the estimated results, but we will take advantage of it in later examples. We also included `if e(sample)` to restrict the calculation to the estimation sample. In example 4 of [R] `mlogit`, the multinomial logit model was fit on 615 observations, so there must be missing values in our dataset.
Although we typed `outcome(1)`, specifying 1 for the indemnity outcome, we could have typed `outcome(Indemnity)`. For instance, to obtain the probabilities for prepaid, we could type
```
. predict p2 if e(sample), outcome(Prepaid)
(option pr assumed; predicted probability)
(29 missing values generated)
. summarize p2
(output omitted)
```
We must specify the label exactly as it appears in the underlying value label (or how it appears in the `mlogit` output), including capitalization.
Here we have used `predict` to obtain probabilities for the same sample on which we estimated. That is not necessary. We could use another dataset that had the independent variables defined (in our example, `age`, `male`, `nonwhite`, and `site`) and use `predict` to obtain predicted probabilities; here, we would not specify `if e(sample)`.
Example 2: Obtaining index values
`predict` can also be used to obtain the index values, the $x_j \hat{\beta}^{(k)}$, as well as the probabilities:
```
. predict idx1, outcome(Indemnity) xb
(1 missing value generated)
. summarize idx1

    Variable |        Obs        Mean    Std. Dev.       Min        Max
    ---------+--------------------------------------------------------
        idx1 |        643           0           0          0          0
```
The indemnity outcome was our base outcome, the outcome for which all the coefficients were set to 0, so the index is always 0. For the prepaid and uninsured outcomes, we type
```
. predict idx2, outcome(Prepaid) xb
(1 missing value generated)
. predict idx3, outcome(Uninsure) xb
(1 missing value generated)
. summarize idx2 idx3

    Variable |        Obs        Mean    Std. Dev.        Min        Max
    ---------+---------------------------------------------------------
        idx2 |        643      -.0566    .4962973  -1.298198   1.700719
        idx3 |        643     -1.9807    .6018139  -3.112741  -.8258458
```
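The indexes produced by `xb` and the probabilities produced by `pr` are linked deterministically by the multinomial logit formula. A quick check in Python (not Stata output) using observation 1's indexes, with the base-outcome index fixed at 0:

```python
import math

# Indexes for observation 1 (indemnity is the base outcome, so idx1 = 0)
idx1, idx2, idx3 = 0.0, -0.4831167, -3.073253

denom = math.exp(idx1) + math.exp(idx2) + math.exp(idx3)
p2 = math.exp(idx2) / denom   # probability of the prepaid outcome
print(round(p2, 4))  # → 0.3709
```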
We can obtain the standard error of the index by specifying the stdp option:
```
. predict se2, outcome(Prepaid) stdp
(1 missing value generated)
```
```
. list p2 idx2 se2 in 1/5

     +-----------------------------------+
     |       p2        idx2         se2  |
     |-----------------------------------|
  1. | .3709022   -.4831167    .2437772  |
  2. | .4977667     .055111    .1694686  |
  3. | .4113073   -.1712106    .1793498  |
  4. | .5424927    .3788345    .2513701  |
  5. | .4623673   -.0925817    .1452616  |
     +-----------------------------------+
```
We obtained the probability, p2, in the previous example.
Finally, `predict` can calculate the standard error of the difference in the index values between two outcomes with the `stddp` option:
```stata
predict se_2_3, outcome(Prepaid,Uninsure) stddp
(1 missing value generated)
list idx2 idx3 se_2_3 in 1/5
```
```
idx2 idx3 se_2_3
1. -0.4831167 -3.073253 0.5469354
2. 0.055111 -2.715986 0.4331918
3. -0.1712106 -1.579621 0.3053815
4. 0.3788345 -1.462007 0.4492552
5. -0.0925817 -2.814022 0.4024784
```
In the first observation, the difference in the indexes is $-0.483 - (-3.073) = 2.59$. The standard error of that difference is 0.547.
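The quantity `stddp` reports follows from the usual variance rule for a difference: $\mathrm{Var}(a-b) = \mathrm{Var}(a) + \mathrm{Var}(b) - 2\,\mathrm{Cov}(a,b)$. A minimal sketch, with made-up numbers that are not taken from the insurance model:

```python
import math

def se_of_difference(se_a, se_b, cov_ab):
    """SE of (index_a - index_b): sqrt(Var_a + Var_b - 2*Cov_ab)."""
    return math.sqrt(se_a**2 + se_b**2 - 2.0 * cov_ab)

# Hypothetical standard errors of two indexes and their covariance
print(round(se_of_difference(0.25, 0.45, 0.02), 4))  # → 0.4743
```

The covariance term is why `stddp` needs the joint estimation results; the two `stdp` values alone are not enough.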
Example 3: Interpreting results using predictive margins
It is more difficult to interpret the results from `mlogit` than those from `clogit` or `logit` because there are multiple equations. For example, suppose that one of the independent variables in our model takes on the values 0 and 1, and we are attempting to understand the effect of this variable. Assume that the coefficient on this variable for the second outcome, $\beta^{(2)}$, is positive. We might then be tempted to reason that the probability of the second outcome is higher if the variable is 1 rather than 0. Most of the time, that will be true, but occasionally we will be surprised. The probability of some other outcome could increase even more (say, $\beta^{(3)} > \beta^{(2)}$), and thus the probability of outcome 2 would actually fall relative to that outcome. We can use `predict` to help interpret such results.
Continuing with our previously fit insurance-choice model, we wish to describe the model’s predictions by race. For this purpose, we can use the method of predictive margins (also known as recycled predictions), in which we vary characteristics of interest across the whole dataset and average the predictions. That is, we have data on both whites and nonwhites, and our individuals have other characteristics as well. We will first pretend that all the people in our data are white but hold their other characteristics constant. We then calculate the probabilities of each outcome. Next we will pretend that all the people in our data are nonwhite, still holding their other characteristics constant. Again, we calculate the probabilities of each outcome. The difference in those two sets of calculated probabilities, then, is the difference due to race, holding other characteristics constant.
```stata
. gen byte nonwhold=nonwhite // save real race
. replace nonwhite=0 // make everyone white
(126 real changes made)
. predict wpind, outcome(Indemnity) // predict probabilities
(option `pr' assumed; predicted probability)
(1 missing value generated)
. predict wpp, outcome(Prepaid)
(option `pr' assumed; predicted probability)
(1 missing value generated)
. predict wpnoi, outcome(Uninsure)
(option `pr' assumed; predicted probability)
(1 missing value generated)
. replace nonwhite=1 // make everyone nonwhite
(644 real changes made)
```
```
. predict nwpind, outcome(Indemnity)
(option pr assumed; predicted probability)
(1 missing value generated)
. predict nwpp, outcome(Prepaid)
(option pr assumed; predicted probability)
(1 missing value generated)
. predict nwpnoi, outcome(Uninsure)
(option pr assumed; predicted probability)
(1 missing value generated)
. replace nonwhite=nonwhold              // restore real race
(518 real changes made)
. summarize wp* nwp*, sep(3)
```
<table>
<thead>
<tr>
<th>Variable</th>
<th>Obs</th>
<th>Mean</th>
<th>Std. Dev.</th>
<th>Min</th>
<th>Max</th>
</tr>
</thead>
<tbody>
<tr>
<td>wpind</td>
<td>643</td>
<td>0.5141673</td>
<td>0.0872679</td>
<td>0.3092903</td>
<td>0.71939</td>
</tr>
<tr>
<td>wpp</td>
<td>643</td>
<td>0.4082052</td>
<td>0.0993286</td>
<td>0.1964103</td>
<td>0.6502247</td>
</tr>
<tr>
<td>wpnoi</td>
<td>643</td>
<td>0.0776275</td>
<td>0.0360283</td>
<td>0.0273596</td>
<td>0.1302816</td>
</tr>
<tr>
<td>nwpind</td>
<td>643</td>
<td>0.3112809</td>
<td>0.0817693</td>
<td>0.1511329</td>
<td>0.535021</td>
</tr>
<tr>
<td>nwpp</td>
<td>643</td>
<td>0.630078</td>
<td>0.0979976</td>
<td>0.3871782</td>
<td>0.8278881</td>
</tr>
<tr>
<td>nwpnoi</td>
<td>643</td>
<td>0.0586411</td>
<td>0.0360283</td>
<td>0.0209648</td>
<td>0.0933874</td>
</tr>
</tbody>
</table>
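The recycled-predictions recipe above (set the covariate of interest to each level for everyone, average the predictions, compare the averages) can be sketched outside Stata. The model, coefficients, and sample below are all hypothetical, chosen only to show the mechanics:

```python
import math

def pr_base(age, nonwhite, b):
    """Toy two-outcome model: probability of the base outcome."""
    xb = b["cons"] + b["age"] * age + b["nonwhite"] * nonwhite
    return 1.0 / (1.0 + math.exp(xb))

b = {"cons": -0.5, "age": 0.01, "nonwhite": 0.9}   # made-up coefficients
sample = [(25, 0), (40, 1), (60, 0), (33, 1)]      # made-up (age, nonwhite) pairs

# Pretend everyone is white, then everyone is nonwhite, averaging each time
white_avg = sum(pr_base(age, 0, b) for age, _ in sample) / len(sample)
nonwhite_avg = sum(pr_base(age, 1, b) for age, _ in sample) / len(sample)
print(white_avg > nonwhite_avg)  # → True
```

The difference between the two averages is the adjusted effect of the covariate, holding everything else at its observed values.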
In example 1 of [R] mlogit, we presented a cross-tabulation of insurance type and race. Those values were unadjusted. The means reported above are the values adjusted for age, sex, and site. Combining the results gives
<table>
<thead>
<tr>
<th></th>
<th colspan="2">Unadjusted</th>
<th colspan="2">Adjusted</th>
</tr>
<tr>
<th></th>
<th>white</th>
<th>nonwhite</th>
<th>white</th>
<th>nonwhite</th>
</tr>
</thead>
<tbody>
<tr>
<td>Indemnity</td>
<td>0.51</td>
<td>0.36</td>
<td>0.51</td>
<td>0.31</td>
</tr>
<tr>
<td>Prepaid</td>
<td>0.42</td>
<td>0.57</td>
<td>0.41</td>
<td>0.63</td>
</tr>
<tr>
<td>Uninsured</td>
<td>0.07</td>
<td>0.07</td>
<td>0.08</td>
<td>0.06</td>
</tr>
</tbody>
</table>
We find, for instance, after adjusting for age, sex, and site, that although 57% of nonwhites in our data had prepaid plans, after adjustment 63% of nonwhites would choose prepaid plans.
Computing predictive margins by hand was instructive, but we can compute these values more easily using the margins command (see [R] margins). The two margins for the indemnity outcome can be estimated by typing
```
. margins nonwhite, predict(outcome(Indemnity)) noesample

Predictive margins                              Number of obs     =        643
Model VCE    : OIM

Expression   : Pr(insure==Indemnity), predict(outcome(Indemnity))

------------------------------------------------------------------------------
             |            Delta-method
             |     Margin   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    nonwhite |
          0  |   .5141673   .0223485    23.01   0.000     .4703650    .5579696
          1  |   .3112809   .0418049     7.45   0.000     .2293448    .3932170
------------------------------------------------------------------------------
```
margins also estimates the standard errors and confidence intervals of the margins. By default, margins uses only the estimation sample. We added the noesample option so that margins would use the entire sample and produce results comparable with our earlier analysis.
We can use `marginsplot` to graph the results from `margins`:
```
. marginsplot
Variables that uniquely identify margins: nonwhite
```
The margins for the other two outcomes can be computed by typing
```
. margins nonwhite, predict(outcome(Prepaid)) noesample
(output omitted)
. margins nonwhite, predict(outcome(Uninsure)) noesample
(output omitted)
```
The margins for each outcome are computed when no outcome is specified. For example,
```
. margins nonwhite, noesample
(output omitted)
```
**Technical note**
You can use `predict` to classify predicted values and compare them with the observed outcomes to interpret a multinomial logit model. This is a variation on the notions of sensitivity and specificity for logistic regression. Here we will classify each observation as definitely predicting indemnity, definitely predicting prepaid, or ambiguous.
```
. predict indem, outcome(Indemnity) xb // obtain indexes
(1 missing value generated)
. predict prepaid, outcome(Prepaid) xb
(1 missing value generated)
. gen diff = prepaid-indem // obtain difference
(1 missing value generated)
. predict sediff, outcome(Indemnity,Prepaid) stddp // & its standard error
(1 missing value generated)
. gen type = 1 if diff/sediff < -1.96 // definitely indemnity
(504 missing values generated)
. replace type = 3 if diff/sediff > 1.96 // definitely prepaid
(100 real changes made)
```
```
. replace type = 2 if type>=. & diff/sediff < .    // ambiguous
(404 real changes made)
. label def type 1 "Def Ind" 2 "Ambiguous" 3 "Def Prep"
. label values type type                           // label results
. tabulate insure type
```
<table>
<thead>
<tr>
<th>insure</th>
<th>Def Ind</th>
<th>Ambiguous</th>
<th>Def Prep</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Indemnity</td>
<td>78</td>
<td>183</td>
<td>33</td>
<td>294</td>
</tr>
<tr>
<td>Prepaid</td>
<td>44</td>
<td>177</td>
<td>56</td>
<td>277</td>
</tr>
<tr>
<td>Uninsure</td>
<td>12</td>
<td>28</td>
<td>5</td>
<td>45</td>
</tr>
<tr>
<td>Total</td>
<td>134</td>
<td>388</td>
<td>94</td>
<td>616</td>
</tr>
</tbody>
</table>
We can see that the predictive power of this model is modest. There are many misclassifications in both directions, though there are more correctly classified observations than misclassified observations.
Also, the uninsured look overwhelmingly as though they might have come from the indemnity system rather than from the prepaid system.
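The classification rule used above reduces to a z-ratio on the difference in indexes. A minimal sketch of that rule in Python (the `diff` and `se` inputs below are made up):

```python
def classify(diff, se, crit=1.96):
    """Classify one observation by the z-ratio of (prepaid index - indemnity index)."""
    z = diff / se
    if z < -crit:
        return "Def Ind"      # prepaid index significantly below indemnity's
    if z > crit:
        return "Def Prep"     # prepaid index significantly above indemnity's
    return "Ambiguous"

print(classify(-1.0, 0.3), classify(0.5, 0.3), classify(1.0, 0.3))
# → Def Ind Ambiguous Def Prep
```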
Calculating marginal effects
Example 4
We have already noted that the coefficients from multinomial logit can be difficult to interpret because they are relative to the base outcome. Another way to evaluate the effect of covariates is to examine the marginal effect of changing their values on the probability of observing an outcome.
The margins command can be used for this too. We can estimate the marginal effect of each covariate on the probability of observing the first outcome—indemnity insurance—by typing
. margins, dydx(*) predict(outcome(Indemnity))
Average marginal effects
| | dy/dx | Std. Err. | z | P>\|z\| | [95% Conf. Interval] |
|-----|-------|-----------|-------|------|---------------------|
| age | 0.0026655 | 0.001399 | 1.91 | 0.057 | -0.0000765 0.0054074 |
| 1.male | -0.1295734 | 0.0450945 | -2.87 | 0.004 | -0.2179571 -0.0411898 |
| 1.nonwhite | -0.2032404 | 0.0482554 | -4.21 | 0.000 | -0.2978192 -0.1086616 |
| 2.site | 0.0070995 | 0.0479999 | 0.15 | 0.882 | -0.0869775 0.1011765 |
| 3.site | 0.1216165 | 0.0506833 | 2.40 | 0.016 | 0.022475 0.220758 |
Note: dy/dx for factor levels is the discrete change from the base level.
By default, margins estimates the average marginal effect over the estimation sample, and that is what we see above. Being male decreases the average probability of having indemnity insurance by 0.130. We also see, from the note at the bottom of the table, that the marginal effect was computed as a discrete change in the probability of being male rather than female. That is why we made male a factor variable when fitting the model.
The `dydx(*)` option requested that `margins` estimate the marginal effect for each regressor; `dydx(age)` would have produced estimates only for the effect of `age`. `margins` has many options for controlling how the marginal effect is computed, including the ability to average over subgroups or to compute estimates for specified values of the regressors; see [R] `margins`.
`margins` will compute the marginal effects on each outcome when no outcome is specified.
```
margins, dydx(*)
(output omitted)
```
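For a factor variable, the average discrete-change effect described above is just the sample average of the predicted probability with the variable set to one level minus the average with it set to the base level. A toy sketch with hypothetical coefficients and a made-up sample:

```python
import math

def pr_base(age, male, b):
    """Toy base-outcome probability; the coefficients b are hypothetical."""
    xb = b["cons"] + b["age"] * age + b["male"] * male  # index of the other outcome
    return 1.0 / (1.0 + math.exp(xb))

b = {"cons": -0.2, "age": 0.005, "male": 0.6}
ages = [25, 40, 60, 33]  # made-up estimation sample

# Average discrete change: everyone male minus everyone female
effect = sum(pr_base(a, 1, b) - pr_base(a, 0, b) for a in ages) / len(ages)
print(effect < 0)  # → True: a positive male coefficient elsewhere lowers Pr(base)
```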
---
### Testing hypotheses about coefficients
#### Example 5
`test` tests hypotheses about the coefficients just as after any estimation command; see [R] `test`. Note, however, `test`’s syntax for dealing with multiple-equation models. Because `test` bases its results on the estimated covariance matrix, we might prefer a likelihood-ratio test; see example 5 in [R] `mlogit` for an example of `lrtest`.
If we simply list variables after the `test` command, we are testing that the corresponding coefficients are zero across all equations:
```
. test 2.site 3.site
( 1) [Indemnity]2o.site = 0
( 2) [Prepaid]2.site = 0
( 3) [Uninsure]2.site = 0
( 4) [Indemnity]3o.site = 0
( 5) [Prepaid]3.site = 0
( 6) [Uninsure]3.site = 0
Constraint 1 dropped
Constraint 4 dropped
chi2( 4) = 19.74
Prob > chi2 = 0.0006
```
We can test that all the coefficients (except the constant) in an equation are zero by simply typing the outcome in square brackets:
```
. test [Uninsure]
( 1) [Uninsure]age = 0
( 2) [Uninsure]0b.male = 0
( 3) [Uninsure]1.male = 0
( 4) [Uninsure]0b.nonwhite = 0
( 5) [Uninsure]1.nonwhite = 0
( 6) [Uninsure]1b.site = 0
( 7) [Uninsure]2.site = 0
( 8) [Uninsure]3.site = 0
Constraint 2 dropped
Constraint 4 dropped
Constraint 6 dropped
chi2( 5) = 9.31
Prob > chi2 = 0.0973
```
We specify the outcome just as we do with `predict`; we can specify the label if the outcome variable is labeled, or we can specify the numeric value of the outcome. We would have obtained the same test as above if we had typed `test [3]` because 3 is the value of `insure` for the outcome uninsured.
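Under the hood, `test` computes a Wald statistic, $W = (Rb - r)'(RVR')^{-1}(Rb - r)$, from the coefficient vector and its estimated covariance matrix. A minimal Python sketch for a single linear constraint, using made-up numbers rather than the insurance model:

```python
def wald_chi2(b, V, R, r):
    """Wald statistic for one linear constraint R b = r.

    With a single constraint, R V R' is a scalar, so no matrix library is needed.
    """
    Rb = sum(R[i] * b[i] for i in range(len(b))) - r
    RVR = sum(R[i] * V[i][j] * R[j]
              for i in range(len(b)) for j in range(len(b)))
    return Rb * Rb / RVR

# Hypothetical coefficients and covariance matrix; test b1 - b2 = 0
b = [0.8, 0.3]
V = [[0.04, 0.01], [0.01, 0.09]]
print(round(wald_chi2(b, V, [1.0, -1.0], 0.0), 3))  # → 2.273
```

The statistic is compared with a chi-squared distribution whose degrees of freedom equal the number of non-dropped constraints, which is why `test` reports the dropped ones.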
We can combine the two syntaxes. To test that the coefficients on the site variables are 0 in the equation corresponding to the outcome prepaid, we can type
```
. test [Prepaid]: 2.site 3.site
(output omitted)
```
We specified the outcome and then followed that with a colon and the variables we wanted to test.
We can also test that coefficients are equal across equations. To test that all coefficients except the constant are equal for the prepaid and uninsured outcomes, we can type
```
. test [Prepaid = Uninsure]
( 1) [Prepaid]age - [Uninsure]age = 0
( 2) [Prepaid]0b.male - [Uninsure]0b.male = 0
( 3) [Prepaid]1.male - [Uninsure]1.male = 0
( 4) [Prepaid]0b.nonwhite - [Uninsure]0b.nonwhite = 0
( 5) [Prepaid]1.nonwhite - [Uninsure]1.nonwhite = 0
( 6) [Prepaid]1b.site - [Uninsure]1b.site = 0
( 7) [Prepaid]2.site - [Uninsure]2.site = 0
( 8) [Prepaid]3.site - [Uninsure]3.site = 0
Constraint 2 dropped
Constraint 4 dropped
Constraint 6 dropped
chi2( 5) = 13.80
Prob > chi2 = 0.0169
```
To test that only the site variables are equal, we can type
```
. test [Prepaid = Uninsure]: 2.site 3.site
( 1) [Prepaid]2.site - [Uninsure]2.site = 0
( 2) [Prepaid]3.site - [Uninsure]3.site = 0
chi2( 2) = 12.68
Prob > chi2 = 0.0018
```
Finally, we can test any arbitrary constraint by simply entering the equation and specifying the coefficients as described in [U] 13.5 Accessing coefficients and standard errors. The following hypothesis is senseless but illustrates the point:
```
. test ([Prepaid]age + [Uninsure]2.site)/2 = 2 - [Uninsure]1.nonwhite
( 1) .5*[Prepaid]age + [Uninsure]1.nonwhite + .5*[Uninsure]2.site = 2
chi2( 1) = 22.45
Prob > chi2 = 0.0000
```
See [R] test for more information about test. The information there about combining hypotheses across test commands (the accumulate option) also applies after mlogit.
Also see
[R] mlogit — Multinomial (polytomous) logistic regression
[U] 20 Estimation and postestimation commands
Introduction to Caché
Version 2018.1
2019-09-20
# Table of Contents

- About This Book
- 1 What Is Caché?
  - 1.1 A Unique Architecture
  - 1.2 A High-performance Object Database — with Relational Access
  - 1.3 A Broad Tool Set
  - 1.4 Caché in Action
  - 1.5 Accessibility — Section 508
  - 1.6 Contacting InterSystems
- 2 The Caché Database Engine
  - 2.1 Transactional Multidimensional Storage
    - 2.1.1 Mapping
  - 2.2 Process Management
  - 2.3 Lock Management
  - 2.4 Distributed Data Management
  - 2.5 Journal Management
  - 2.6 Database Portability
  - 2.7 Deployment Options
    - 2.7.1 Basic Client/Server Configuration
    - 2.7.2 Shadow Server Configuration
    - 2.7.3 Multi-Tier Configuration
- 3 Objects, SQL, and the Unified Data Architecture
  - 3.1 Unified Data Dictionary
    - 3.1.1 Flexible Storage
  - 3.2 Objects
    - 3.2.1 Defining Classes
  - 3.3 SQL
    - 3.3.1 The Object/Relational Connection
    - 3.3.2 Inheritance and SQL
    - 3.3.3 Object Extensions to SQL
List of Figures

- Figure 2–1: Enterprise Cache Protocol
- Figure 2–2: Client/Server Configuration
- Figure 2–3: Shadow Server Configuration
- Figure 2–4: Multi-tier Configuration

List of Tables

- Table 3–1: Relational View of Object Features
- Table 3–2: SQL View of the Person class: SELECT * FROM Person
- Table 3–3: SQL View of the Employee class: SELECT * FROM Employee
- Table 3–4: Revised SQL View of the Person class: SELECT * FROM Person
# About This Book
This book describes Caché, a high-performance database that combines an object database, high-performance SQL, and powerful multidimensional data access.
This book addresses the following topics:
- What Is Caché?
- The Caché Database Engine
- Objects, SQL, and the Unified Data Architecture
For a detailed outline, see the table of contents.
Also see the following documents:
- Caché Programming Orientation Guide
- Using Caché Objects
- Using Caché ObjectScript
- Using Caché SQL
- The Caché System Administration Guide
- The Caché Security Administration Guide
- InterSystems Programming Tools Index
For general information, see Using InterSystems Documentation.
# 1 What Is Caché?
Welcome to Caché, a high-performance object database.
This book provides an overview of the major features of Caché.
For more details on a specific topic, refer to one of the other books available from the online documentation home page. In addition, Caché includes a number of online tutorials on various development and system administration topics.
## 1.1 A Unique Architecture
Caché derives much of its power from its unique architecture. At the core, the Caché database engine provides the complete set of services — including data storage, concurrency management, transactions, and process management — needed to build complex database management systems. You can think of the Caché engine as a powerful database toolkit. Using this toolkit, Caché implements a complete object and relational database management system.
The benefits of this architecture are manifold:
• The object and relational database systems talk directly to the database engine for extremely efficient operation; there is no object-relational middleware or SQL-to-object bridge technology.
• The logical separation of the database from its physical implementation makes it possible to radically reconfigure application deployments with no change to application logic.
• Because the database engine interface is open, you can make direct use of its features where needed. This can range from building your own customized database management system to adding targeted optimizations to performance critical applications.
• A platform for the future: The Caché architecture makes future database engine enhancements possible without impact on existing applications. For example, Caché v4.1 introduced a brand-new physical data structure, with dramatically improved scalability and performance, that not only required no change to existing applications but also required no change to the Caché object or relational systems. As new technologies emerge, Caché can add support for them as native, high-performance components with little impact on existing applications.
## 1.2 A High-performance Object Database — with Relational Access
Caché is designed to transcend the limitations of the relational model while providing an evolutionary upgrade path for the thousands of existing relational database applications as well as support for the many SQL-based reporting tools on the market.
In addition to being a high-performance object database, Caché is also a full-featured relational database. All the data within a Caché database is available as true relational tables and can be queried and modified using standard SQL via ODBC, JDBC, or object methods. Because of the power of the underlying Caché database engine, we believe that Caché is the fastest, most reliable, and most scalable relational database available today.
Additionally, Caché offers features that go beyond the limits of relational databases, while still supporting a standard relational view of data. These features include:
• The ability to model data as objects (each with an automatically created and synchronized native relational representation) while eliminating both the impedance mismatch between databases and object-oriented application environments as well as reducing the complexity of relational modeling
• A simpler, object-based concurrency model
• User-defined data types
• The ability to take advantage of methods and inheritance, including polymorphism, within the database engine
• Object-extensions for SQL to handle object identity and relationships
• The ability to intermix SQL and object-based access within a single application, using each for what they are best suited
• Control over the physical layout and clustering used to store data in order to ensure the maximum performance for applications
While most databases with both object and relational access provide one form of access on top of the other, the SQL and object aspects of Caché both go directly to the data — so that users enjoy the performance benefit of either approach.
For information, see the first several chapters of the *Caché Programming Orientation Guide*.
## 1.3 A Broad Tool Set
Caché offers a broad set of tools, which include:
• **ObjectScript**, the language in which most of Caché is written.
• Native implementations of **SQL**, **MultiValue**, and **Basic**.
• A well-developed, built-in **security model**
• A suite of technologies and tools that provide rapid development for database and web applications
• Native, object-based XML and web services support
• Device support (such as files, TCP/IP, printers)
• Automatic interoperability via Java, JDBC, ActiveX, .NET, C++, ODBC, XML, SOAP, Perl, Python, and more
• Support for common Internet protocols: POP3, SMTP, MIME, FTP, and so on
• A reusable user portal for your end users
• Support for analyzing unstructured data
• Support for Business Intelligence (BI)
• Built-in testing facilities
For a comprehensive list of tools, see the table of contents of the *InterSystems Programming Tools Index*.
## 1.4 Caché in Action
Caché is used around the world for a wide variety of applications ranging from single-user embedded systems, to enterprise-wide multiserver installations with tens of thousands of concurrent users, to statewide and nationwide applications.
A small sample of applications built with Caché includes:
- As the application platform for a large healthcare network running hundreds of patient-critical applications. The network includes a set of Caché systems acting as data and application servers and has over 30,000 client machines.
- As the data server for a Java-based enterprise messaging system for large financial institutions. Caché was chosen both for its performance and its ability to carry out customized tasks not possible within a traditional relational database.
- As an SQL-based OLTP (online transaction processing) system for a large government organization with over 1400 concurrent users. Caché was a drop-in (no application changes) replacement when other relational products failed to perform.
- As an object database and web application framework for an online educational system used by a leading technical university. Caché was chosen for its rapid development (the application had to be built in three months), its object capabilities, as well as its ability to scale without application reworking.
- As an object database used to track real-time position and velocity of professional athletes during a world championship. Caché was chosen for its performance (compared with the leading object and relational databases) and its native C++ interface.
- As a distributed SQL data engine for a major web site with millions of users. This site uses a set of cost-effective Linux-based servers and uses the Caché distributed data management to provide a scalable, personalized site with no middleware or web caching infrastructure. The hardware costs of this system (four off-the-shelf Linux machines) were less than 10% of those quoted by a “leading database for Internet applications.”
## 1.5 Accessibility — Section 508
InterSystems believes that its products and services can be used by individuals regardless of differences in physical ability. We are committed to compliance with section 508 of the Rehabilitation Act of 1973 (29 U.S.C. 794d), as amended by Congress in 1998. We welcome and encourage suggestions that improve the accessibility and usability of our offerings; please contact us if you have contributions or questions.
## 1.6 Contacting InterSystems

### Support
For support questions about any InterSystems products, please contact the InterSystems Worldwide Support Center:
- Telephone: +1 617 621-0700
- Fax: +1 617 374-9391
- Email: support@intersystems.com
- Web: http://www.intersystems.com
### Section 508 — Accessibility
For questions or suggestions regarding the Rehabilitation Act (29 USC 794d) section 508:
- Telephone: +1 617 621-0700
- Fax: +1 617 374-9391
- Email: section508@intersystems.com
- Web: http://www.intersystems.com
# 2 The Caché Database Engine
At the heart of Caché lies the Caché Database Engine. The database engine is highly optimized for performance, concurrency, scalability, and reliability. There is a high degree of platform-specific optimization to attain maximum performance on each supported platform.
Caché is a full-featured database system; it includes all the features needed for running mission-critical applications (including journaling, backup and recovery, and system administration tools). To help reduce operating costs, Caché is designed to require significantly less database administration than other database products. The majority of deployed Caché systems have no database administrators.
The major features of the database engine are described in the following sections.
## 2.1 Transactional Multidimensional Storage
All data within Caché is stored within sparse, multidimensional arrays. Unlike the multidimensional arrays used by typical OLAP (online analytic processing) products, Caché supports transaction processing operations (inserts, updates, locking, transactions) within its multidimensional structures. Also, unlike most OLAP engines, these multidimensional structures are not limited in size to available memory. Instead, Caché includes a sophisticated, efficient data cache.
Because Caché data is of inherently variable length and is stored in sparse arrays, Caché often requires less than half of the space needed by a relational database. In addition to reducing disk requirements, compact data storage enhances performance because more data can be read or written with a single I/O operation, and data can be cached more efficiently.
Multidimensional arrays give applications a great degree of flexibility in how they store their data. For example, a set of closely related objects, say an Invoice object and its corresponding LineItem objects, can easily be configured so that the LineItem objects are physically clustered with an Invoice object for highly efficient access.
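The clustering idea above can be illustrated with a small conceptual sketch. This is not Caché code: the `SparseArray` class and the invoice subscripts are illustrative stand-ins for a sparse multidimensional array in which an Invoice node and its LineItem nodes share a leading subscript.

```python
# Conceptual sketch (not Caché code): a sparse multidimensional array
# modeled as a dict keyed by subscript tuples, showing how an Invoice
# and its LineItems can be clustered under a shared leading subscript.
from typing import Any

class SparseArray:
    def __init__(self) -> None:
        self._data: dict[tuple, Any] = {}

    def set(self, *args: Any) -> None:
        *subscripts, value = args          # last argument is the value
        self._data[tuple(subscripts)] = value

    def get(self, *subscripts: Any) -> Any:
        return self._data.get(tuple(subscripts))

    def children(self, *prefix: Any):
        """Nodes clustered under a subscript prefix, in key order."""
        return sorted(k for k in self._data if k[:len(prefix)] == prefix)

invoice = SparseArray()
invoice.set(1044, "Total", 299.00)          # the Invoice node
invoice.set(1044, "Line", 1, "Widget|2")    # LineItems clustered under it
invoice.set(1044, "Line", 2, "Gadget|5")

# All data for invoice 1044 is reachable from one subscript prefix:
related = invoice.children(1044)
```

Only keys that are actually set consume space, which is the sense in which the storage is sparse.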
The flexibility of transactional multidimensional storage gives Caché a significant advantage over the two-dimensional structure used by traditional relational databases: it is this flexibility that allows Caché to be a high-performance SQL, object, and XML database without compromise. It also means that Caché applications are better prepared for future changes in technology.
### 2.1.1 Mapping
Using a unique feature known as mapping, you can specify how the data within one or more arrays (or parts of arrays) is mapped to a physical database file. Such mapping is a database administration task and requires no change to class/table definitions or application logic. Moreover, mapping can be done within a specific sparse array; you can map one range of values to one physical location while mapping another to another file, disk drive, or even to another database server. This makes it possible to reconfigure Caché applications (such as for scaling) with little effort.
## 2.2 Process Management
A process is an instance of a Caché virtual machine running on a Caché server. A typical Caché server can run thousands of simultaneous processes depending on hardware and operating system. Each process has direct, efficient access to the multidimensional storage system.
The Caché virtual machine executes instructions (called *P-code*) that are highly optimized for the database, I/O, and logic operations typically required by transaction processing and data warehousing applications.
## 2.3 Lock Management
To support concurrent database access, Caché includes a powerful Lock Management System.
In systems with thousands of users, reducing conflicts between competing processes is critical to providing high performance. One of the biggest conflicts is between transactions wishing to access the same data. Caché offers the following features to alleviate such conflicts:
- **Atomic Operations** — To eliminate typical performance hot spots, Caché supports a number of atomic operations, that is, operations that require no application-level locks. An example is the ability to atomically allocate unique values for object/row identity (a common bottleneck in relational applications).
- **Logical Locks** — Caché does not lock entire pages of data while performing updates. Because most transactions require frequent access or changes to small quantities of data, Caché supports granular logical locks that can be taken out on a per-object (row) basis.
- **Distributed Locks** — In distributed database configurations (see the next topic), the system automatically supports distributed locks.
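The first two features above can be sketched conceptually in Python. This is not Caché code; the `LogicalLockTable` class and resource names are illustrative: identity allocation uses an atomic counter rather than an application lock, and locks are keyed by a logical name such as (table, row id) rather than taken on a page of data.

```python
# Conceptual sketch (not Caché code) of two ideas described above:
# (1) atomic identity allocation with no application-level lock, and
# (2) granular logical locks taken per object (row), not per page.
import itertools
import threading
from collections import defaultdict

next_id = itertools.count(1)        # atomic counter for row/object identity

class LogicalLockTable:
    """Locks keyed by a logical name such as (table, row_id)."""
    def __init__(self) -> None:
        self._locks = defaultdict(threading.Lock)
        self._guard = threading.Lock()

    def acquire(self, resource) -> threading.Lock:
        with self._guard:                 # protect the lock table itself
            lock = self._locks[resource]
        lock.acquire()
        return lock

locks = LogicalLockTable()
rid = next(next_id)                       # allocate a new row identity
lock = locks.acquire(("Invoice", rid))    # lock just this one row
try:
    pass                                  # ... update the row ...
finally:
    lock.release()
```

Because each lock covers a single logical resource, two transactions touching different rows never conflict, even if those rows would share a page on disk.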
## 2.4 Distributed Data Management
One of the most powerful features of Caché is its ability to link servers together to form a distributed data network. In such a network, machines that primarily serve data are known as Data Servers while those that mainly host processes, but little to no data, are known as Application Servers.

Servers can share data (as well as locks) using the Caché Enterprise Cache Protocol (ECP). ECP is effective because data is transported in packages. When information is requested across the network, the reply data package includes the desired data as well as additional related data. The natural data relationships inherent to objects and the Caché multidimensional data model make it possible to identify and include information that is related to the originally requested data. This “associated” information is cached locally either at the client or on the application server. Usually, subsequent requests for data can be satisfied from a local cache, thus avoiding additional trips across the network. If the client changes any data, only the updates are propagated back to the database server.

ECP makes it possible for applications to support a wide variety of runtime configurations including multi-tier and peer-to-peer.
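The package-and-cache behavior described above can be sketched as follows. This is not ECP itself; the `Server`/`Client` classes and the key names are illustrative assumptions showing the pattern: one network trip returns related data too, repeat reads hit the local cache, and only changed values flow back.

```python
# Conceptual sketch (not ECP itself): a fetch returns the requested data
# plus related data in one package; later requests are served from the
# local cache, and only changed values are sent back to the server.
class Server:
    def __init__(self, data, related):
        self.data = data                  # key -> value
        self.related = related            # key -> keys likely needed next

    def fetch_package(self, key):
        package = {key: self.data[key]}
        for rk in self.related.get(key, []):
            package[rk] = self.data[rk]   # piggyback associated data
        return package

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}
        self.dirty = set()

    def get(self, key):
        if key not in self.cache:         # network trip only on a miss
            self.cache.update(self.server.fetch_package(key))
        return self.cache[key]

    def set(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)

    def flush(self):
        for key in self.dirty:            # propagate only the updates
            self.server.data[key] = self.cache[key]
        self.dirty.clear()

server = Server({"inv:1": 100, "line:1": 2}, {"inv:1": ["line:1"]})
client = Client(server)
client.get("inv:1")              # one trip; "line:1" arrives in the package
value = client.cache["line:1"]   # satisfied locally, no second trip
```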
## 2.5 Journal Management

To provide database integrity and reliability, Caché includes a number of journaling subsystems that keep track of physical and logical database updates. The journal management technology is also used to provide transaction support (a journal is used to perform transaction rollback operations) as well as database shadowing (a journal is used to synchronize a shadow server with a primary data server). As with the rest of the system, Caché lets you configure its journaling system to meet your needs.
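The rollback use of a journal can be sketched in a few lines. This is a conceptual illustration, not Caché's journal format: each update records its before-image, and rollback replays the journal in reverse.

```python
# Conceptual sketch of journal-based transaction rollback: each update
# is journaled with its before-image, and rollback undoes the journal
# entries in reverse order.
db = {"balance:alice": 100, "balance:bob": 50}
journal = []   # list of (key, before_image) records

def tx_set(key, value):
    journal.append((key, db.get(key)))    # record the before-image
    db[key] = value

def rollback():
    while journal:
        key, before = journal.pop()       # undo in reverse order
        if before is None:
            db.pop(key, None)             # key did not exist before
        else:
            db[key] = before

tx_set("balance:alice", 70)
tx_set("balance:bob", 80)
rollback()                                # database restored
```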
## 2.6 Database Portability

Caché runs on, and is optimized for, a wide variety of hardware platforms and operating systems, as documented in the online *InterSystems Supported Platforms* document for this release.

You can easily port applications developed with Caché, as well as data, from one platform to another. This can be as easy as installing Caché on the new platform and moving the database files to the new system. When moving between some systems, you may need to run an in-place data conversion utility (to convert one endian representation to another).
## 2.7 Deployment Options

Caché supports a variety of different runtime configurations, giving you maximum flexibility when you deploy your applications. You can switch between different deployment options by changing Caché system settings; typically there is no need to change your application logic.
Some basic deployment options are listed below.
### 2.7.1 Basic Client/Server Configuration

In the simplest client/server configuration, a single Caché data server services a number of clients (from one to many thousands, depending on the application and platform).
The client systems can be any of the following:
- Stand-alone desktop systems running a client application that connects via a client/server protocol (such as ODBC, ActiveX, JDBC, Java).
- Web server processes talking to Caché via Zen, CSP (Caché Server Pages), SOAP, or some other connectivity option (such as ODBC, JDBC). Each web server process may then service a number of browser-based or machine-to-machine sessions.
- Middleware processes (such as an Enterprise Java Bean application server) that connect to Caché via ODBC, JDBC, etc.
- Devices, such as terminals or lab equipment, that connect to Caché using one of many supported protocols (including TELNET and TCP/IP).
- Some combination of the above.
### 2.7.2 Shadow Server Configuration
The shadow server configuration builds upon the basic client/server setup by adding one or more shadow servers. Each shadow server synchronizes itself with the data within the main data server by connecting to and monitoring its journal.
*Figure 2–3: Shadow Server Configuration*
![Shadow Server Configuration Diagram]
Shadow servers are typically used to service ad hoc queries, large reports, and batch processes to limit their impact on the main transaction system. They can also be used to provide failover systems.
### 2.7.3 Multi-Tier Configuration
The multi-tier configuration uses the Caché distributed database technology — the Enterprise Cache Protocol (ECP) — to make it possible for a greater number of clients to connect to the system.
*Figure 2–4: Multi-tier Configuration*
![Multi-tier Configuration Diagram]
In the simplest multi-tier setup, one or more Caché systems, acting as application servers, are placed between the central data server and the various client systems. In this case the application servers do not store any data; instead, they host processes that perform work for the benefit of the client, off-loading the CPU of the data server. This type of configuration scales best for applications that exhibit good “locality of reference,” that is, most transactions involve reasonably related data so that locking across application servers is limited. Such applications, as well as those with a fair amount of read access (like most typical web applications), work extremely well in this model.
More complex configurations, with multiple data servers as well as data stored on application server machines, are also possible.
Typically applications use the multi-tier configuration for scaling as well as for providing high availability (with application servers serving as hot standby systems).
A powerful and unique feature of Caché is its Unified Data Architecture, which provides simultaneous, high-performance object and relational access to data stored within Caché.
## 3.1 Unified Data Dictionary
Within Caché, you can model your application components as objects. Objects are organized by classes which define the data (properties) and behavior (methods) of the object.
The meta-information, or definition, of each class is stored within a common repository referred to as the Caché class dictionary. The class dictionary is itself an object database, stored within Caché, whose contents can be accessed using objects. The class dictionary, by means of a class compiler, defines the storage structure needed by persistent objects and converts class definitions into parallel sets of executable code that provide both the object and relational access to this storage structure. By means of this architecture, the object and relational code paths are efficient and automatically synchronized with one another.
Class definitions can be added to the class dictionary in a number of ways:
- Interactively, using the Studio development environment.
- Relationally, using DDL. Caché accepts standard SQL DDL statements and automatically creates corresponding class and table definitions.
- Textually, using XML. Caché supports an external, XML representation of class definitions. Typically this is used for source code management, deployment, automatic code generation, and interoperation with other tools.
- Programmatically, using objects. Using the Caché set of class definition objects, you can create programs that communicate directly with the class dictionary and create new classes at application runtime.
- Using an XML Schema Wizard, included within Studio, that can create class definitions from most XML schema files.
### 3.1.1 Flexible Storage
The Caché object model differs from those of programming languages in that in addition to properties and methods, you can specify storage-related behavior such as indices, constraints, and storage structure.
The storage structure used by persistent objects is independent of the logical definition of a class and is quite flexible: developers can use the default structures provided by the class compiler or they can tune the structures for specific cases.
## 3.2 Objects
Caché includes a full-featured, next-generation object database specifically designed to meet the needs of complex, transaction oriented applications. The Caché object model includes the following features:
- Classes — You can define classes that represent the state (data) and behavior (code) of your application components. Classes are used to create instances of objects as both runtime components and as items stored within the database.
- Properties — Classes can include properties, which specify the data associated with each object instance. Properties can be simple literals (such as strings or integers), user-defined types (defined using data type classes), complex (or embedded) objects, collections, or references to other objects.
- Relationships — Classes can define how instances of objects are related to one another. The system automatically provides navigational methods for relationships as well as referential integrity within the database.
- Methods — Classes can define behavior by means of methods: executable code associated with an object. Object methods run within a Caché server process (though they can be invoked from a remote client). Object methods can be scripted using ObjectScript, SQL, or they can be generated using method generators, which are code that automatically creates customized methods according to user-defined rules.
- Object persistence — Persistent objects have the ability to automatically store and retrieve themselves to a database. The persistence support includes complete database functionality including automatic transaction management, concurrency control, index maintenance, and data validation. Persistent objects are automatically visible through SQL queries.
- Inheritance — By deriving new classes from existing ones, you can reuse previously written code as well as create specialized versions of classes.
- Polymorphism — Caché supports complete object polymorphism. This means that applications can use a well-defined interface (a set of methods and properties provided by a superclass) and the system will automatically invoke the correct interface implementation based on the type of each object. This makes it much easier to develop flexible database applications.
- Swizzling (also known as “lazy loading”) — Caché automatically swizzles (brings into memory from disk) any related persistent objects when they are referenced from other objects. This greatly simplifies working with complex data models.
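The swizzling item above can be sketched conceptually. This is not Caché's mechanism; the `LazyRef` class, `DATABASE` dict, and object ID are illustrative assumptions showing the idea: a reference holds only an object ID until the related object is actually touched, at which point it is loaded transparently.

```python
# Conceptual sketch of swizzling (lazy loading): a reference holds only
# an object ID until a property of the related object is accessed, at
# which point the object is loaded from the store transparently.
DATABASE = {7: {"Name": "Acme Corp."}}   # stand-in for the persistent store

class LazyRef:
    def __init__(self, oid):
        self.oid = oid
        self._obj = None                 # nothing loaded yet

    def __getattr__(self, name):
        if self._obj is None:            # first touch: swizzle from "disk"
            self._obj = DATABASE[self.oid]
        return self._obj[name]

class Invoice:
    def __init__(self, customer_id):
        self.customer = LazyRef(customer_id)   # cheap: no load performed

inv = Invoice(7)
name = inv.customer.Name                 # triggers the load on first access
```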
The Caché object functionality is not a separate part of Caché; it is a central part of Caché programming and is fully integrated with relational access described elsewhere. However, for those who are interested specifically in object-oriented programming, the manual Using Caché Objects discusses Caché programming from this point of view.
### 3.2.1 Defining Classes
The simplest and most common way to define classes within Caché is to use the Studio development environment. The Studio lets you define classes either using a simple text format within a syntax-coloring editor or by using a graphical point-and-click interface. These two views are interchangeable and are automatically synchronized.
Here is the definition of an extremely simple persistent object, Component, as seen within Studio:
```objectscript
Class MyApp.Component Extends %Persistent
{
Property TheName As %String;
Property TheValue As %Integer;
}
```
This class is defined as a persistent class (that is, it can store itself within a database). In this case, the Caché-provided %Persistent class (system class names start with a “%” character to distinguish them from application classes) provides all the needed persistence code via inheritance. The class belongs to the package “MyApp”. Packages group related classes together and greatly simplify development of large applications. The class defines two properties: TheName, which has a string value, and TheValue, which has an integer value.
From within ObjectScript code, such as within a method, you can use this object syntax to manipulate instances of the Component object:
```objectscript
// Create a new component
Set component = ##class(MyApp.Component).%New()
Set component.TheName = "Widget"
Set component.TheValue = 22
// Save the new Component to the database
Do component.%Save()
```
Using Basic, you can define a method to manipulate instances of the Component object:
```basic
' Create a new component
component = New Component()
component.TheName = "Widget"
component.TheValue = 22
' Save the new Component to the database
component.%Save()
```
At this point, a new instance of Component is stored within the database with a system-assigned unique object identifier. You can later retrieve this object by opening it (using its object identifier):
```basic
' Open an instance and double its value:
component = OpenId Component(id)
component.TheValue = component.TheValue * 2
component.%Save()
```
You can perform the exact same operations using native Java, C++, or other Caché client bindings. The class compiler can generate, and synchronize, any additional code required to access objects externally. For example, if you are using Caché with Java, you can specify that the class compiler automatically generate and maintain Java proxy classes that provide remote access to persistent database classes. Within a Java program you can use this object naturally:
```java
// Get an instance of Component from the database
component = (MyApp.Component)MyApp.Component._open(database, new Id(id));
// Inspect some properties of this object
System.out.println("Name: " + component.getTheName());
System.out.println("Value: " + component.getTheValue());
```
## 3.3 SQL
Caché SQL is a full-featured relational database engine that is fully integrated with the Caché object technology. In addition to standard SQL-92 features, Caché SQL offers:
- Support for streams (known in SQL as Binary Large Objects, or BLOBS).
- Support for stored procedures (implemented as object methods).
- A set of object-based extensions.
- User-definable data types.
- Support for Transactional Bitmap Indices.
Bitmap indices, typically used in large data warehousing and OLAP systems, offer the ability to perform high-speed searches based on complex combinations of conditions. Such bitmap indices typically cannot be updated in real time, however, and are instead updated as a batch process. Caché SQL supports bitmap indices that offer high-performance searching combined with no loss in insert/update performance. This gives transaction processing applications the ability to perform data warehouse-style queries and gives data warehouse applications the ability to perform real-time updates. For more information, refer to the “Bitmap Indices” content in the Caché SQL Optimization Guide.
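The bitwise-search idea behind bitmap indices can be sketched briefly. This is a conceptual illustration, not Caché's implementation: each distinct value gets a bit string with bit *i* set when row *i* holds that value, so complex conditions reduce to fast AND/OR operations, and a real-time insert only sets one bit.

```python
# Conceptual sketch of a bitmap index: one bit set per distinct value,
# with bit i set when row i holds that value. Complex conditions combine
# with fast bitwise AND/OR; an insert updates the index by setting a bit.
from collections import defaultdict

class BitmapIndex:
    def __init__(self):
        self.bitmaps = defaultdict(int)   # value -> int used as a bit set

    def insert(self, row_id, value):      # real-time update: set one bit
        self.bitmaps[value] |= 1 << row_id

    def rows(self, bitmap):
        return [i for i in range(bitmap.bit_length()) if bitmap >> i & 1]

state = BitmapIndex()
status = BitmapIndex()
for row_id, (st, ok) in enumerate([("MA", "paid"), ("TX", "open"), ("MA", "open")]):
    state.insert(row_id, st)
    status.insert(row_id, ok)

# rows WHERE state = 'MA' AND status = 'open'
hits = state.rows(state.bitmaps["MA"] & status.bitmaps["open"])
```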
### 3.3.1 The Object/Relational Connection
All components within the Caché dictionary are defined as classes. The class compiler automatically projects persistent classes as relational tables. For every object feature, there is a corresponding relational equivalent, as illustrated in the following table:
**Table 3–1: Relational View of Object Features**
<table>
<thead>
<tr>
<th>Object Feature</th>
<th>Relational Equivalent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Package</td>
<td>Schema</td>
</tr>
<tr>
<td>Class</td>
<td>Table</td>
</tr>
<tr>
<td>Object instance</td>
<td>Row within a table</td>
</tr>
<tr>
<td>Property</td>
<td>Column</td>
</tr>
<tr>
<td>Relationship</td>
<td>Foreign key</td>
</tr>
<tr>
<td>Embedded object</td>
<td>Multiple columns</td>
</tr>
<tr>
<td>Method</td>
<td>Stored procedure</td>
</tr>
<tr>
<td>Index</td>
<td>Index</td>
</tr>
</tbody>
</table>
When Caché loads SQL DDL (Data Definition Language) statements, it uses the inverse of this projection to create classes that correspond to relational tables.
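The projection in Table 3–1 can be sketched as a simple transformation from a class definition to a DDL statement. This is an illustration of the mapping, not the class compiler's actual output; the function name and SQL type names are assumptions.

```python
# Conceptual sketch of the projection in Table 3-1: the package becomes
# the schema, the class becomes a table, and each property becomes a
# column. Type names are illustrative only.
def project_as_ddl(package, cls, properties):
    columns = ", ".join(f"{name} {sqltype}" for name, sqltype in properties)
    return (f"CREATE TABLE {package}.{cls} "
            f"(ID INTEGER PRIMARY KEY, {columns})")

ddl = project_as_ddl("MyApp", "Component",
                     [("TheName", "VARCHAR(50)"), ("TheValue", "INTEGER")])
```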
To demonstrate the object-to-relational projection, consider a simple example. Here is the definition of a simple, persistent Person class (part of a package called “MyApp”) containing two properties, Name and Home:
```cachéscript
Class MyApp.Person Extends %Persistent
{
Property Name As %String(MAXLEN=100);
Property Home As Address;
}
```
The Person class gets its persistent behavior from the %Persistent superclass provided with Caché. The Name property is defined as a simple String of up to 100 characters.
The Home property illustrates the use of complex, user-defined data types, in this case the Address class, which is defined as:
```cachéscript
Class MyApp.Address Extends %SerialObject
{
Property City As %String;
Property State As %String;
}
```
The Address class is derived from the %SerialObject superclass. This class provides the ability to serialize itself (convert itself to a single-string representation) and embed itself within another containing class (as with the Person class).
When viewed via SQL, the Person class has the following structure:
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Home_City</th>
<th>Home_State</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Smith, John</td>
<td>Cambridge</td>
<td>MA</td>
</tr>
<tr>
<td>2</td>
<td>Doe, Jane</td>
<td>Dallas</td>
<td>TX</td>
</tr>
</tbody>
</table>
Note that the object identifier is visible as a column. In addition, the fields of the embedded Address object are projected as separate fields. These fields are given the synthetic names Home_City and Home_State and behave exactly as if they were defined as two individual fields.
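The flattening of an embedded object into synthetic columns can be sketched as follows. This is a conceptual illustration of the projection rule, not Caché code; the `flatten` helper is an assumption.

```python
# Conceptual sketch of how an embedded (serial) object is projected:
# its fields are flattened into columns of the containing table with
# synthetic underscore-joined names such as Home_City and Home_State.
def flatten(row):
    flat = {}
    for key, value in row.items():
        if isinstance(value, dict):            # embedded object
            for sub, subval in value.items():
                flat[f"{key}_{sub}"] = subval  # synthetic column name
        else:
            flat[key] = value
    return flat

person = {"ID": 1, "Name": "Smith, John",
          "Home": {"City": "Cambridge", "State": "MA"}}
row = flatten(person)
# row == {'ID': 1, 'Name': 'Smith, John',
#         'Home_City': 'Cambridge', 'Home_State': 'MA'}
```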
### 3.3.2 Inheritance and SQL
Inheritance is an important feature within object-based systems and is completely lacking within relational databases. Caché SQL makes it possible to use the power of inheritance using standard relational constructs. For example, we can derive a new Employee class from the Person class used in the previous example:
```cachéscript
Class MyApp.Employee Extends Person
{
Property Salary As %Integer(MINVAL=0,MAXVAL=100000);
}
```
This new class extends the Person class by adding an additional property, Salary.
When viewed via SQL, the Employee class has the following structure:
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Home_City</th>
<th>Home_State</th>
<th>Salary</th>
</tr>
</thead>
<tbody>
<tr>
<td>3</td>
<td>Divad, Gino</td>
<td>Irvine</td>
<td>CA</td>
<td>22000</td>
</tr>
</tbody>
</table>
Notice that all of the inherited properties are available as columns. Also note that only rows that are actual instances of Employee are included. If we again ask for all Person instances:
**Table 3–4: Revised SQL View of the Person class: SELECT * FROM Person**
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Home_City</th>
<th>Home_State</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Smith, John</td>
<td>Cambridge</td>
<td>MA</td>
</tr>
<tr>
<td>2</td>
<td>Doe, Jane</td>
<td>Dallas</td>
<td>TX</td>
</tr>
<tr>
<td>3</td>
<td>Divad, Gino</td>
<td>Irvine</td>
<td>CA</td>
</tr>
</tbody>
</table>
In this case, all the rows are returned because every Employee is defined to be an instance of Person; however, only the properties defined by Person are displayed.
### 3.3.3 Object Extensions to SQL
To make it easier to use SQL within object applications, Caché includes a number of object extensions to SQL.
One of the most interesting of these extensions is the ability to follow object references using the reference ("->") operator. For example, suppose you have a Vendor class that refers to two other classes: Contact and Region. You can refer to properties of the related classes using the reference operator:
```sql
SELECT ID, Name, ContactInfo->Name
FROM Vendor
WHERE Vendor->Region->Name = 'Antarctica'
```
Of course, you can also express the same query using SQL JOIN syntax. The advantage of the reference operator syntax is that it is succinct and easy to understand at a glance.
See Pettitt (1983) for details.
An observation is said to be right-censored if we can observe only a lower bound for it; the observations \( Y_j^* \), for \( j = n+1, n+2, \ldots, q \) (\( n \leq q \)), are censored on the right. We define the rank \( r_j \) of \( Y_j \), for \( j = 1, 2, \ldots, n \), in the usual way: \( r_j \) equals \( i \) if and only if \( Y_j \) is the \( i \)th smallest amongst \( Y_1, Y_2, \ldots, Y_n \). The right-censored \( Y_j^* \), for \( j = n+1, n+2, \ldots, q \), has rank \( r_j \) if and only if \( Y_j^* \) lies in the interval \([Y_{(r_j)}, Y_{(r_j+1)})\), with \( Y_{(0)} = -\infty \), \( Y_{(n+1)} = +\infty \) and \( Y_{(1)} < \cdots < Y_{(n)} \) the ordered \( Y_j \), for \( j = 1, 2, \ldots, n \).
The distribution of the \( Y \) is assumed to be of the following form. Let \( F_L(y) = e^y/(1 + e^y) \), the logistic distribution function, and consider the distribution function \( F_\gamma(y) \) defined by \( 1 - F_\gamma = [1 - F_L(y)]^{1/\gamma} \). This distribution function can be thought of either as the distribution function of the minimum of a random sample of size \( \gamma^{-1} \) from the logistic distribution, or through \( F_\gamma(y - \log \gamma) \), which is the distribution function of a random variable having the \( F \)-distribution with 2 and \( 2\gamma^{-1} \) degrees of freedom. This family of generalized logistic distribution functions \( \{F_\gamma(\cdot): 0 \leq \gamma < \infty\} \) naturally links the symmetric logistic distribution \( (\gamma = 1) \) with the skew extreme value distribution \( (\gamma \to 0) \) and with the limiting negative exponential distribution \( (\gamma \to \infty) \). For this family explicit results are available for right-censored data. See Pettitt (1983) for details.
Let \( l_R \) denote the logarithm of the rank marginal likelihood of the observations, define the \( q \times 1 \) vector \( a \) by \( a = l_R'(\theta = 0) \), and let the \( q \times q \) diagonal matrix \( B \) and the \( q \times q \) symmetric matrix \( A \) be given by \( B - A = -l_R''(\theta = 0) \). Then various statistics can be found from the analysis.
(a) The score statistic \( X^T a \). This statistic is used to test the hypothesis \( H_0: \beta = 0 \) (see (e)).
(b) The estimated variance-covariance matrix of the score statistic in (a).
(c) The estimate \( \hat{\beta}_R = M X^T a \).
(d) The estimated variance-covariance matrix \( M = (X^T(B - A)X)^{-1} \) of the estimate \( \hat{\beta}_R \).
(e) The $\chi^2$ statistic $Q = \hat{\beta}_R^T M^{-1} \hat{\beta}_R = a^T X (X^T (B - A)X)^{-1} X^T a$, used to test $H_0 : \beta = 0$. Under $H_0$, $Q$ has an approximate $\chi^2$ distribution with $p$ degrees of freedom.
(f) The standard errors $M_{ii}^{1/2}$ of the estimates given in (c).
(g) Approximate $z$-statistics, i.e., $Z_i = \hat{\beta}_{Ri} / se(\hat{\beta}_{Ri})$ for testing $H_0 : \beta_i = 0$. For $i = 1, 2, \ldots, p$, $Z_i$ has an approximate $N(0, 1)$ distribution.
In many situations, more than one sample of observations will be available. In this case we assume the model,
$$ h_k(Y_k) = X_k \beta + e_k, \quad k = 1, 2, \ldots, ns, $$
where $ns$ is the number of samples. In an obvious manner, $Y_k$ and $X_k$ are the vector of observations and the design matrix for the $k$th sample respectively. Note that the arbitrary transformation $h_k$ can be assumed different for each sample since observations are ranked within the sample.
The earlier analysis can be extended to give a combined estimate of $\beta$ as $\hat{\beta} = Dd$, where
$$ D^{-1} = \sum_{k=1}^{ns} X_k^T (B_k - A_k)X_k $$
and
$$ d = \sum_{k=1}^{ns} X_k^T a_k, $$
with $a_k$, $B_k$ and $A_k$ defined as $a$, $B$ and $A$ above but for the $k$th sample.
The remaining statistics are calculated as for the one sample case.
4 References
5 Parameters
1: order -- Nag_OrderType
On entry: the order parameter specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by order = Nag_RowMajor. See Section 2.2.1.4 of the Essential Introduction for a more detailed explanation of the use of this parameter.
Constraint: order = Nag_RowMajor or Nag_ColMajor.
2: ns -- Integer
On entry: the number of samples.
Constraint: ns $\geq 1$.
3: nv[ns] -- const Integer
On entry: the number of observations in the $i$th sample, for $i = 1, 2, \ldots, ns$.
Constraint: $nv[i] \geq 1$ for $i = 0, 1, \ldots, ns - 1.$
4: \( y[\text{dim}] \) – const double
Note: the dimension, \( \text{dim} \), of the array \( y \) must be at least \( \sum_{i=0}^{\text{ns}-1} \text{nv}[i] \).
On entry: the observations in each sample. Specifically, \( \text{y}\left[\sum_{k=1}^{i-1} \text{nv}[k-1] + j - 1\right] \) must contain the \( j \)th observation in the \( i \)th sample.
5: \( p \) – Integer
Input
On entry: the number of parameters to be fitted.
Constraint: \( p \geq 1 \).
6: \( x[\text{dim}] \) – const double
Note: the dimension, \( \text{dim} \), of the array \( x \) must be at least \( \max(1, \text{pdx} \times p) \) when \( \text{order} = \text{Nag\_ColMajor} \) and at least \( \max(1, \text{pdx} \times \sum_{i=0}^{\text{ns}-1} \text{nv}[i]) \) when \( \text{order} = \text{Nag\_RowMajor} \).
If \( \text{order} = \text{Nag\_ColMajor} \), the \( (i, j) \)th element of the matrix \( X \) is stored in \( \text{x}[(j-1) \times \text{pdx} + i - 1] \) and if \( \text{order} = \text{Nag\_RowMajor} \), the \( (i, j) \)th element of the matrix \( X \) is stored in \( \text{x}[(i-1) \times \text{pdx} + j - 1] \).
On entry: the design matrices for each sample. Specifically, \( x \left( \sum_{k=1}^{i-1} \text{nv}[k-1] + j, l \right) \) must contain the value of the \( l \)th explanatory variable for the \( j \)th observations in the \( i \)th sample.
Constraint: \( x \) must not contain a column with all elements equal.
7: \( \text{pdx} \) – Integer
Input
On entry: the stride separating matrix row or column elements (depending on the value of \( \text{order} \)) in the array \( x \).
Constraints:
- if \( \text{order} = \text{Nag\_ColMajor} \), \( \text{pdx} \geq \sum_{i=0}^{\text{ns}-1} \text{nv}[i] \);
- if \( \text{order} = \text{Nag\_RowMajor} \), \( \text{pdx} \geq p \).
8: \( \text{icen[\text{dim}]} \) – const Integer
Note: the dimension, \( \text{dim} \), of the array \( \text{icen} \) must be at least \( \sum_{i=0}^{\text{ns}-1} \text{nv}[i] \).
On entry: defines the censoring variable for the observations in \( y \) as follows:
- \( \text{icen}[i-1] = 0 \) if \( y[i-1] \) is uncensored;
- \( \text{icen}[i-1] = 1 \) if \( y[i-1] \) is censored.
Constraint: \( \text{icen}[i-1] = 0 \) or \( 1 \), for \( i = 1, 2, \ldots, \sum_{k=0}^{\text{ns}-1} \text{nv}[k] \).
9: \( \text{gamma} \) – double
Input
On entry: the value of the parameter defining the generalized logistic distribution. For \( \text{gamma} \leq 0.0001 \), the limiting extreme value distribution is assumed.
Constraint: \( \text{gamma} \geq 0.0 \).
10: \( \text{nmax} \) – Integer
Input
On entry: the value of the largest sample size.
Constraint: \( \text{nmax} = \max_{1 \leq i \leq \text{ns}} (\text{nv}[i-1]) \) and \( \text{nmax} > p \).
11: tol – double
*Input*
On entry: the tolerance for judging whether two observations are tied. Thus, observations \( Y_i \) and \( Y_j \) are adjudged to be tied if \( |Y_i - Y_j| < \text{tol} \).
Constraint: \( \text{tol} > 0.0 \).
12: parvar[dim] – double
*Output*
Note: the dimension, \( \text{dim} \), of the array \( \text{parvar} \) must be at least \( \max(1, \text{pdparvar} \times \text{p}) \) when \( \text{order} = \text{Nag\_ColMajor} \) and at least \( \max(1, \text{pdparvar} \times (\text{p} + 1)) \) when \( \text{order} = \text{Nag\_RowMajor} \).
Where \( \text{PARVAR}(i, j) \) appears in this document, it refers to the array element
- if \( \text{order} = \text{Nag\_ColMajor} \), \( \text{parvar}[(j - 1) \times \text{pdparvar} + i - 1] \);
- if \( \text{order} = \text{Nag\_RowMajor} \), \( \text{parvar}[(i - 1) \times \text{pdparvar} + j - 1] \).
On exit: the variance-covariance matrices of the score statistics and the parameter estimates, the former being stored in the upper triangle and the latter in the lower triangle. Thus for \( 1 \leq i \leq j \leq \text{p} \), \( \text{PARVAR}(i, j) \) contains an estimate of the covariance between the \( i \)th and \( j \)th score statistics. For \( 1 \leq j \leq i \leq \text{p} \), \( \text{PARVAR}(i + 1, j) \) contains an estimate of the covariance between the \( i \)th and \( j \)th parameter estimates.
13: pdparvar – Integer
*Input*
On entry: the stride separating matrix row or column elements (depending on the value of \( \text{order} \)) in the array \( \text{parvar} \).
Constraints:
- if \( \text{order} = \text{Nag\_ColMajor} \), \( \text{pdparvar} \geq \text{p} + 1 \);
- if \( \text{order} = \text{Nag\_RowMajor} \), \( \text{pdparvar} \geq \text{p} \).
14: irank[nmax] – Integer
*Output*
On exit: for the one sample case, \( \text{irank} \) contains the ranks of the observations.
15: zin[nmax] – double
*Output*
On exit: for the one sample case, \( \text{zin} \) contains the expected values of the function \( g(.) \) of the order statistics.
16: eta[nmax] – double
*Output*
On exit: for the one sample case, \( \text{eta} \) contains the expected values of the function \( g'(.) \) of the order statistics.
17: vapvec[dim] – double
*Output*
Note: the dimension, \( \text{dim} \), of the array \( \text{vapvec} \) must be at least \( \text{nmax} \times (\text{nmax} + 1)/2 \).
On exit: for the one sample case, \( \text{vapvec} \) contains the upper triangle of the variance-covariance matrix of the function \( g(.) \) of the order statistics stored column-wise.
18: parest[dim] – double
*Output*
Note: the dimension, \( \text{dim} \), of the array \( \text{parest} \) must be at least \( 4 \times \text{p} + 1 \).
On exit: the statistics calculated by the routine as follows. The first \( \text{p} \) components of \( \text{parest} \) contain the score statistics. The next \( \text{p} \) elements contain the parameter estimates. \( \text{parest}[2 \times \text{p}] \) contains the value of the \( \chi^2 \) statistic. The next \( \text{p} \) elements of \( \text{parest} \) contain the standard errors of the parameter estimates. Finally, the remaining \( \text{p} \) elements of \( \text{parest} \) contain the \( z \)-statistics.
19: fail – NagError *
*Input/Output*
The NAG error parameter (see the Essential Introduction).
6 Error Indicators and Warnings
**NE_INT**
On entry, **ns** = ⟨value⟩.
Constraint: **ns** ≥ 1.
On entry, **p** = ⟨value⟩.
Constraint: **p** ≥ 1.
On entry, **pdx** = ⟨value⟩.
Constraint: **pdx** > 0.
On entry, **pdparvar** = ⟨value⟩.
Constraint: **pdparvar** > 0.
**NE_INT_2**
On entry, **pdx** = ⟨value⟩, **p** = ⟨value⟩.
Constraint: **pdx** ≥ **p**.
On entry, **pdparvar** = ⟨value⟩, **p** = ⟨value⟩.
Constraint: **pdparvar** ≥ **p** + 1.
On entry, **pdparvar** = ⟨value⟩, **p** = ⟨value⟩.
Constraint: **pdparvar** ≥ **p**.
On entry, **nmax** ≤ **p**: **nmax** = ⟨value⟩, **p** = ⟨value⟩.
On entry, **pdx** < the sum of **nv[i]**: **pdx** = ⟨value⟩, sum **nv[i]** = ⟨value⟩.
**NE_INT_ARRAY**
On entry, **nv[i]** = ⟨value⟩.
Constraint: **nv[i]** ≥ 1 for **i** = 0, 1, …, **ns** − 1.
**NE_INT_ARRAY_ELEMCONS**
\( M \) elements of array **icen** are not equal to 0 or 1: \( M = \) ⟨value⟩.
\( M \) elements of array **nv** are less than or equal to zero: \( M = \) ⟨value⟩.
**NE_MAT_ILL_DEFINED**
The matrix \( X^T(B - A)X \) is either singular or non-positive-definite.
**NE_OBSERVATIONS**
All the observations were adjudged to be tied.
**NE_REAL**
On entry, **tol** = ⟨value⟩.
Constraint: **tol** > 0.0.
On entry, **gamma** = ⟨value⟩.
Constraint: **gamma** ≥ 0.0.
**NE_REAL_ARRAY_ELEMCONS**
On entry, all elements in column ⟨value⟩ of **x** are equal to ⟨value⟩.
**NE_SAMPLE**
The largest sample size is ⟨value⟩, which is not equal to **nmax**: **nmax** = ⟨value⟩.
7 Accuracy
The computations are believed to be stable.
8 Further Comments
The time taken by the routine depends on the number of samples, the total number of observations and the number of parameters fitted.
In extreme cases the parameter estimates for certain models can be infinite, although this is unlikely to occur in practice. See Pettitt (1982) for further details.
9 Example
A program to fit a regression model to a single sample of 40 observations using just one explanatory variable.
9.1 Program Text
```c
/* nag_rank_regsn_censored (g08rbc) Example Program.
 * Copyright 2001 Numerical Algorithms Group.
 * Mark 7, 2001. */
#include <stdio.h>
#include <nag.h>
#include <nag_stdlib.h>
#include <nagg08.h>

int main(void)
{
  /* Scalars */
  double gamma, tol;
  Integer exit_status, i, j, nmax, ns, nsum, p;
  Integer pdx, pdparvar;
  NagError fail;
  Nag_OrderType order;
  /* Arrays */
  double *eta = 0, *parest = 0, *parvar = 0, *vapvec = 0, *x = 0, *y = 0,
         *zin = 0;
  Integer *icen = 0, *irank = 0, *nv = 0;

#ifdef NAG_COLUMN_MAJOR
#define X(I, J)      x[(J - 1) * pdx + I - 1]
#define PARVAR(I, J) parvar[(J - 1) * pdparvar + I - 1]
  order = Nag_ColMajor;
#else
#define X(I, J)      x[(I - 1) * pdx + J - 1]
#define PARVAR(I, J) parvar[(I - 1) * pdparvar + J - 1]
  order = Nag_RowMajor;
#endif

  INIT_FAIL(fail);
  exit_status = 0;
  Vprintf("g08rbc Example Program Results\n");
  /* Skip heading in data file */
  Vscanf("%*[^\n] ");
  /* Read number of samples, number of parameters to be fitted, */
  /* distribution power parameter and tolerance criterion for ties. */
  Vscanf("%ld%ld%lf%lf%*[^\n] ", &ns, &p, &gamma, &tol);
  Vprintf("\n");
  /* Allocate memory to nv only */
  if (!(nv = NAG_ALLOC(ns, Integer)))
    {
      Vprintf("Allocation failure\n");
      exit_status = -1;
      goto END;
    }
  Vprintf("Number of samples =%2ld\n", ns);
  Vprintf("Number of parameters fitted =%2ld\n", p);
  Vprintf("Distribution power parameter =%10.5f\n", gamma);
  Vprintf("Tolerance for ties =%10.5f\n", tol);
  Vprintf("\n");
  /* Read the number of observations in each sample */
  for (i = 1; i <= ns; ++i)
    Vscanf("%ld", &nv[i - 1]);
  Vscanf("%*[^\n] ");
  nmax = 0;
  nsum = 0;
  for (i = 1; i <= ns; ++i)
    {
      nsum += nv[i - 1];
      nmax = MAX(nmax, nv[i - 1]);
    }
  /* Allocate memory */
  if (!(eta = NAG_ALLOC(nmax, double)) ||
      !(parest = NAG_ALLOC(4 * p + 1, double)) ||
      !(parvar = NAG_ALLOC((p + 1) * p, double)) ||
      !(vapvec = NAG_ALLOC(nmax * (nmax + 1) / 2, double)) ||
      !(x = NAG_ALLOC(nsum * p, double)) ||
      !(y = NAG_ALLOC(nsum, double)) ||
      !(zin = NAG_ALLOC(nmax, double)) ||
      !(icen = NAG_ALLOC(nsum, Integer)) ||
      !(irank = NAG_ALLOC(nmax, Integer)))
    {
      Vprintf("Allocation failure\n");
      exit_status = -1;
      goto END;
    }
#ifdef NAG_COLUMN_MAJOR
  pdx = nsum;
  pdparvar = p + 1;
#else
  pdx = p;
  pdparvar = p;
#endif
  /* Read in observations, design matrix and censoring variable */
  for (i = 1; i <= nsum; ++i)
    {
      Vscanf("%lf", &y[i - 1]);
      for (j = 1; j <= p; ++j)
        Vscanf("%lf", &X(i, j));
      Vscanf("%ld", &icen[i - 1]);
    }
  Vscanf("%*[^\n] ");
  g08rbc(order, ns, nv, y, p, x, pdx, icen, gamma,
         nmax, tol, parvar, pdparvar, irank, zin, eta, vapvec,
         parest, &fail);
  if (fail.code != NE_NOERROR)
    {
      Vprintf("Error from g08rbc.\n%s\n", fail.message);
      exit_status = 1;
      goto END;
    }
  Vprintf("Score statistic\n");
  for (i = 1; i <= p; ++i)
    Vprintf("%9.3f", parest[i - 1]);
  Vprintf("\n\n");
  Vprintf("Covariance matrix of score statistic\n");
  for (j = 1; j <= p; ++j)
    {
      for (i = 1; i <= j; ++i)
        Vprintf("%9.3f", PARVAR(i, j));
      Vprintf("\n");
    }
  Vprintf("\n");
  Vprintf("Parameter estimates\n");
  for (i = 1; i <= p; ++i)
    Vprintf("%9.3f", parest[p + i - 1]);
  Vprintf("\n\n");
  Vprintf("Covariance matrix of parameter estimates\n");
  for (j = 1; j <= p; ++j)
    {
      for (i = j; i <= p; ++i)
        Vprintf("%9.3f", PARVAR(i + 1, j));
      Vprintf("\n");
    }
  Vprintf("\n");
  Vprintf("Chi-squared statistic =%9.3f with%2ld d.f.\n", parest[p * 2], p);
  Vprintf("\n");
  Vprintf("Standard errors of estimates and\n");
  Vprintf("approximate z-statistics\n");
  for (i = 1; i <= p; ++i)
    Vprintf("%9.3f%14.3f\n", parest[2 * p + i], parest[3 * p + i]);
END:
  if (eta) NAG_FREE(eta);
  if (parest) NAG_FREE(parest);
  if (parvar) NAG_FREE(parvar);
  if (vapvec) NAG_FREE(vapvec);
  if (x) NAG_FREE(x);
  if (y) NAG_FREE(y);
  if (zin) NAG_FREE(zin);
  if (icen) NAG_FREE(icen);
  if (irank) NAG_FREE(irank);
  if (nv) NAG_FREE(nv);
  return exit_status;
}
```
9.2 Program Data
g08rbc Example Program Data
1 1 0.00001 0.00001
40
143.0 0.0 0 164.0 0.0 0 188.0 0.0 0 188.0 0.0 0 190.0 0.0 0
192.0 0.0 0 206.0 0.0 0 209.0 0.0 0 213.0 0.0 0 216.0 0.0 0
220.0 0.0 0 227.0 0.0 0 230.0 0.0 0 234.0 0.0 0 246.0 0.0 0
265.0 0.0 0 304.0 0.0 0 216.0 0.0 1 244.0 0.0 1 142.0 1.0 0
156.0 1.0 0 163.0 1.0 0 198.0 1.0 0 205.0 1.0 0 232.0 1.0 0
232.0 1.0 0 233.0 1.0 0 233.0 1.0 0 233.0 1.0 0 233.0 1.0 0
239.0 1.0 0 240.0 1.0 0 261.0 1.0 0 280.0 1.0 0 280.0 1.0 0
296.0 1.0 0 296.0 1.0 0 323.0 1.0 0 204.0 1.0 1 344.0 1.0 1
9.3 Program Results
g08rbc Example Program Results
Number of samples = 1
Number of parameters fitted = 1
Distribution power parameter = 0.00001
Tolerance for ties = 0.00001
Score statistic
4.584
Covariance matrix of score statistic
7.653
Parameter estimates
0.599
Covariance matrix of parameter estimates
0.131
Chi-squared statistic = 2.746 with 1 d.f.
Standard errors of estimates and approximate z-statistics
0.361 1.657
Awesome-META+: Meta-Learning Research and Learning Platform
Jingyao Wang†‡, Chuyuan Zhang†‡, Ye Ding†‡, Yuxuan Yang§
*Institute of Software, Chinese Academy of Sciences, Beijing, China.
†University of Chinese Academy of Sciences, Beijing, China.
‡Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai, China.
§Suzhou University of Science and Technology, Suzhou, China.
October 30, 2023
Abstract—Artificial intelligence technology has already had a profound impact on fields such as the economy, industry, and education, but it is still limited. Meta-learning, also known as "learning to learn", provides an opportunity for general artificial intelligence and can break through the current AI bottleneck. However, meta-learning started late, and there are fewer projects compared with fields such as CV and NLP. Each deployment requires considerable experience to configure the environment and to debug or even rewrite code, and the frameworks are isolated. Moreover, few platforms currently focus exclusively on meta-learning or provide learning materials for novices, so the entry threshold is relatively high. Based on this, Awesome-META+, a meta-learning framework integration and learning platform, is proposed to solve the above problems and to provide a complete and reliable meta-learning framework application and learning platform. The project aims to promote the development of meta-learning and the expansion of its community, including but not limited to the following functions: 1) a complete and reliable meta-learning framework that can adapt to multi-domain tasks such as object detection, image classification, and reinforcement learning; 2) a convenient and simple model deployment scheme that provides accessible meta-learning transfer and usage methods to lower the threshold of meta-learning and improve efficiency; 3) comprehensive research materials for learning; 4) objective and credible performance analysis and reflection.
Index Terms—System Design, Meta Learning, Software Development, Framework Integration
I. INTRODUCTION
With the vigorous development of the new global round of technological revolution and information technology, artificial intelligence (AI) technology has taken off and has had a profound impact on various fields such as the economy, society, industry, and education [1], [2], [3], [4]. AI technology has become the hottest topic in technology, and the development of reliable, general AI has become a global consensus. However, current AI is still limited to specialized intelligence. Traditional machine learning paradigms, such as supervised learning based on labeled data or unsupervised learning represented by clustering, are specific to certain tasks and cannot provide effective feedback for unseen tasks or analyze unseen data [5]. Meanwhile, specialized AI systems, with their single tasks, clear demands, distinct application boundaries, rich domain knowledge, and relatively simple modeling, can surpass human intelligence in single-item testing at the local intelligence level, forming a breakthrough in the field of AI. However, there is still a long way to go to meet the expectations of academic researchers for general AI, and it is difficult to overcome this limitation.
The emergence of meta-learning and related model research provides an opportunity for general AI. It can endow machine-learning systems with a human-like ability to adapt to new situations and to complete multiple tasks without relying on human experience. Meta-learning [6], [7] can realize autonomous driving on roads with unknown conditions, or allow a robotic arm to handle objects of different specifications and weights, achieving outstanding performance in multiple scenarios. This field is also one of the most promising areas at present [8], [9].
Although current academic meta-learning frameworks perform well in multiple fields, differences arise in how models are deployed and in how the same model is applied to tasks in different domains, owing to the number of systems, the breadth of application areas, the narrow community, and the high entry barriers [10], [11]. Moreover, relevant basic teaching resources are lacking. Replicating and deploying multiple frameworks takes considerable time, as meta-learning frameworks include optimization-based, model-based, and metric-based methods [12], [13], targeting areas such as reinforcement learning and few-shot learning [14], [15], [16]. Under these conditions, deploying and applying multiple models is even more complex. In addition, because the meta-learning community is narrow compared with computer vision and other fields, and the entry barriers are relatively high, there are few summary platforms targeting newcomers and few comprehensive frameworks. Therefore, we decided to build a platform called Awesome-META+, which provides optimized code for various meta-learning frameworks, deployment solutions, performance data, and academic materials, can be further extended to other fields, and offers a standardized solution.
The goals and contributions of our system include:
- Providing a comprehensive and reliable meta-learning framework code that can adapt to multiple domains and improve academic research efficiency.
- Providing a convenient and simple model deployment solution to lower the threshold and promote the development of meta-learning and its transfer fields.
- Providing a comprehensive and complete information summary and learning platform for the meta-learning field to stimulate the vitality of the meta-learning community.
- Conducting objective and credible performance analysis and reflection to support framework selection and technology implementation.
Our work is shown in https://wangjingyao07.github.io/Awesome-Meta-Learning-Platform/. The Homepage is shown as Figure 1.
II. REQUIREMENTS ANALYSIS AND SYSTEM DESIGN
A. Audience groups
Awesome-META+ is a research and learning platform for meta-learning that is aimed at a wide range of Internet users. The target audience includes individuals who are interested in or working in the field of meta-learning. The platform is specifically divided into the following three groups based on the potential needs of users.
<table>
<thead>
<tr>
<th>No.</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Group1</td>
<td>Scholars or practitioners in the field of meta-learning.</td>
</tr>
<tr>
<td>Group2</td>
<td>Beginners interested in the field of meta-learning.</td>
</tr>
<tr>
<td>Group3</td>
<td>Scholars and industry practitioners in various fields who hope to use the meta-learning paradigm to improve framework performance or apply it to landing products.</td>
</tr>
</tbody>
</table>
TABLE I: Audience groups
B. Application Scenarios
Awesome-META+ is set up with four different scenarios, each of which targets different user groups and needs. The following table describes the users and their needs for each scenario.
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Target Audience</th>
<th>Requirements</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scenario 1</td>
<td>Group 1/2/3</td>
<td>Users need to configure specific frameworks on their local machines and understand the core technologies and ideas behind the framework’s code.</td>
</tr>
<tr>
<td>Scenario 2</td>
<td>Group 1</td>
<td>Academic researchers need to conduct comparative experiments on multiple meta-learning frameworks to obtain baseline data or improve the performance of specific tasks.</td>
</tr>
<tr>
<td>Scenario 3</td>
<td>Group 2</td>
<td>Individuals who want to understand the current development status of meta-learning, engage in systematic learning, and obtain relevant materials.</td>
</tr>
<tr>
<td>Scenario 4</td>
<td>Group 3</td>
<td>Users hope to use meta-learning to complete multiple specific tasks in fields such as reinforcement learning and achieve industrial application.</td>
</tr>
</tbody>
</table>
TABLE II: Application Scenarios
C. Intended function
Awesome-META+ has several key features that make it highly useful for users interested in meta-learning. Users can search for information, deploy frameworks, access learning resources, and transfer tasks across different domains. The platform is designed to be user-friendly and easy to navigate. It provides a range of resources and tutorials to help users learn more about meta-learning and related topics. The specific features of the platform are listed in the table below.
<table>
<thead>
<tr>
<th>Functional point ID</th>
<th>Function Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Function 1</td>
<td>Search Functionality</td>
<td>Users can locate the desired meta-learning framework, paper, and other related information by using the search bar or navigating through the menu bar, including "Home", "Tutorials", "Documentation", "Examples", "Papers", "Datasets", "Community", "Changelog", and "GitHub". Each module contains multiple sub-search options (e.g., "Changelog" shows version history).</td>
</tr>
<tr>
<td>Function 2</td>
<td>Framework Deployment</td>
<td>Users can browse the frameworks, models, and datasets provided by the platform on the "Home" page and then locate them according to their needs using two methods: directly entering keywords in the search bar or clicking to the corresponding framework deployment method on the "Home" page and obtaining specific details from modules such as "Tutorials". Users can also pull the source code of meta-learning frameworks and deployment details with one click for quick and easy use.</td>
</tr>
<tr>
<td>Function 3</td>
<td>Learning Platform</td>
<td>Users can locate the "Papers", "Datasets", "Community" modules according to their needs, and obtain resources such as the learning curve of the platform, as well as links to download relevant blogs, monographs, and papers.</td>
</tr>
<tr>
<td>Function 4</td>
<td>Multi-Domain Task Transfer</td>
<td>Users can use the "Tutorials", "Documentation", "Examples" modules to learn about the platform’s usage instructions and framework information, different domain tasks corresponding to framework details and optimization ideas, and actual cases (such as performance comparison of various frameworks in small-sample image classification using metrics such as ACC, AP, etc.) to locate their desired goals and complete configuration.</td>
</tr>
<tr>
<td>Function 5</td>
<td>Feedback</td>
<td>Users can write feedback or suggestions in the feedback section on the platform’s main page "Home" (which actually redirects to GitHub’s issues) for future maintenance or adding new learning materials.</td>
</tr>
</tbody>
</table>
TABLE III: Functional Points
D. Acceptance criteria
In order to better meet the needs of users, Awesome-META+ needs to achieve four goals, namely reliability, ease of use, maintainability, and iterative updates. The following table provides brief descriptions for these goals.
<table>
<thead>
<tr>
<th>Goal</th>
<th>Specifics</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reliability</td>
<td>The platform’s framework has been extensively tested and validated through experiments with ample data. All modules have been tested to ensure their reliability.</td>
</tr>
<tr>
<td>Ease of Use</td>
<td>The user interface is simplified to enable users to quickly and accurately find the required information and deploy the framework quickly.</td>
</tr>
<tr>
<td>Maintainability</td>
<td>User feedback is promptly addressed, and the platform is maintained within one week.</td>
</tr>
<tr>
<td>Updates and Iterations</td>
<td>The platform is designed to allow developers to quickly iterate and add necessary modules based on the current software ecosystem. A "Changelog" is specifically set up within the platform to view product update and iteration information.</td>
</tr>
</tbody>
</table>
TABLE IV: Acceptance criteria
III. SYSTEM DEVELOPMENT
Awesome-META+ is a meta-learning research and learning platform that will be made available to a wide range of internet users in the form of a website. The platform comprises nine modules, including "Home", "Tutorials", "Documentation", "Examples", "Papers", "Datasets", "Community", "Changelog", and "GitHub", covering everything that is required for the application of typical meta-learning frameworks. Figure 1 shows the homepage of our Awesome-META+.
These modules include deployment, usage tutorials, source code, practical examples, as well as academic materials related to meta-learning, such as papers, datasets, blogs, and video tutorials. Additionally, the platform features multiple modules for updating logs and community building.
The platform’s website and code are hosted on GitHub+Vercel, and although the relevant repositories are currently private, they will be made public once the domain formalities are completed.
Users can click on different modules in the menu bar based on their needs, navigate to the corresponding webpages, and complete the relevant operations. Users can also search for models, methods, and practical examples of their interest to enrich the meta-learning research community.
The development and functions of this system consist of four modules:
- **Front-end system** (Sec.III-A): built with Python + Django + Material for MkDocs to create a responsive website that is bound to the meta-learning framework, deployment, and academic information retrieval functions. The main interface, "Home," provides content and usage instructions for each function page, with the design aimed at giving users a clear understanding of meta-learning and making it easy to use the Awesome-META+ platform.
- **Algorithms and deployment** (Sec.III-B): code is written and reproduced using Python + Anaconda to build twelve typical frameworks, including those based on machine learning frameworks such as Pytorch/TensorFlow [17], [18], seven benchmarks including CIFAR, and two training methods, distributed and single card. A modular design is used for easy developer rewriting, and online running examples are provided.
- **Information integration** (Sec.III-C): includes information crawling and push based on Python. High citation and click-through papers, "meta-learning" keyword papers from large conferences such as ICLR and ICML, and blogs from platforms such as Zhihu are crawled, scored and sorted, and then uploaded to the platform.
- **Testing and deployment** (Sec.IV): includes both automated and manual testing. Automated testing includes website function testing, access and API response testing, and automated testing of frameworks and algorithms. Deployment is done via GitHub + Vercel for web platform demo deployment.
A. Front-end system
The front-end of Awesome-META+ mainly consists of a user interaction interface, which is developed using Python and Django frameworks [19], as well as Material for MkDocs. We have also introduced responsive web design and multiple ports required for model deployment to support local deployment and learning of the meta-learning framework. In addition, we provide practical examples for users to perform online training on platforms such as Colab [20]. Figure 2 shows the process of web page interaction.
Django is an open-source web application framework written in Python. With Django, developers can implement a complete website with simple code and further develop a full-featured web service. Django is based on the MVC (Model-View-Controller) design pattern, consisting of three components: Model, View, and Controller. In addition, a URL
dispatcher is required to distribute page requests to different Views, which then call the corresponding Model and Template. Python+Django web development has advantages such as low coupling, fast development, high reusability, and low maintenance costs, making it possible to quickly deploy a website.
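The URL-dispatch flow described above can be illustrated with a stdlib-only toy sketch; the patterns and view functions below are hypothetical stand-ins for illustration, not actual Django or Awesome-META+ code:

```python
import re

# Toy illustration of Django's request flow: a URLconf maps URL patterns to
# view functions, and the dispatcher picks the first matching view.

def home(request):                      # "View": builds a response
    return "Awesome-META+ home page"

def tutorial_detail(request, name):
    return f"Tutorial for {name}"

# "URLconf": ordered list of (pattern, view) pairs
urlpatterns = [
    (r"^/$", home),
    (r"^/tutorials/(?P<name>\w+)/$", tutorial_detail),
]

def dispatch(path):
    """Route a request path to its view, like Django's URL dispatcher."""
    for pattern, view in urlpatterns:
        m = re.match(pattern, path)
        if m:
            return view(request=None, **m.groupdict())
    return "404 Not Found"
```

In real Django the same mapping lives in `urls.py` via `path()`/`re_path()`, and views return `HttpResponse` objects rather than strings.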
To maintain code compatibility and maintainability, while considering the characteristics of the Awesome-META+ platform and the time required for the deployment of the meta-learning framework, we chose to use Python for responsive interface development. This enables users to use the platform on any mobile or PC interface, improving the platform’s applicability and ease of use.
B. Algorithms and deployment
In order to meet the needs of users as much as possible and at the same time achieve the purpose of meta-learning research, we conducted research from the perspectives of framework expansion, standardization, universality, and rapid deployment.
**Framework expansion.** Before development, we need to understand the meta-learning process from the perspective of software development and coding. Meta-learning is the process of learning how to learn, which refers to training machine learning algorithms that can automatically adapt to new tasks and environments. The general meta-learning process is as follows:
1) Select meta-learning algorithm and model: Choose the appropriate meta-learning algorithm and model, such as gradient-based meta-learning, Bayesian meta-learning, meta reinforcement learning, etc.
2) Select task distribution: Select several tasks from the given task distribution. For example, in image classification tasks, different classification tasks can be selected from different datasets.
3) Divide the dataset: For each task, divide its dataset into training, validation, and test sets.
4) Train the model: For each task’s training set, use the meta-learning algorithm and model to train and obtain the model parameters for that task.
5) Evaluate the model: Use the validation set to evaluate the trained model and obtain the performance metrics for that task.
6) Update the meta-learning model: Use the performance metrics of the task as the input to update the parameters of the meta-learning model, so that it can better adapt to the learning process of different tasks.
7) Select the optimal model: For each task, select the optimal model parameters based on its performance metrics on the validation set.
8) Test the model: Use the test set to test the optimal model for each task and obtain the final performance metrics.
9) Apply the meta-learning model: For new tasks, use the trained meta-learning model to select the most suitable model and parameters based on the nature and characteristics of the task, and perform learning and prediction accordingly.
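The steps above can be sketched as a toy training loop; `sample_tasks`, `inner_update`, and `task_loss` are illustrative placeholder names on a synthetic 1-D regression family, not APIs from the platform:

```python
import random

# MAML-style toy loop over 1-D tasks y = a * x: the meta-parameter theta is an
# initialization that each task adapts from with one inner gradient step.

def sample_tasks(n):                      # step 2: draw tasks from a distribution
    return [random.uniform(-2.0, 2.0) for _ in range(n)]   # each task = slope a

def task_loss(w, a, xs):                  # squared error of the model y = w * x
    return sum((w * x - a * x) ** 2 for x in xs) / len(xs)

def inner_update(theta, a, xs, lr=0.1):   # step 4: adapt to one task
    grad = sum(2 * (theta * x - a * x) * x for x in xs) / len(xs)
    return theta - lr * grad

def meta_train(steps=200, meta_lr=0.05):
    random.seed(0)
    theta = 0.0
    xs = [0.5, 1.0, 1.5]
    for _ in range(steps):                # outer loop: steps 2-6 repeated
        grads = []
        for a in sample_tasks(4):
            w = inner_update(theta, a, xs)             # inner adaptation
            # finite-difference meta-gradient of the post-adaptation loss
            eps = 1e-4
            w2 = inner_update(theta + eps, a, xs)
            grads.append((task_loss(w2, a, xs) - task_loss(w, a, xs)) / eps)
        theta -= meta_lr * sum(grads) / len(grads)     # step 6: meta-update
    return theta

theta = meta_train()
```

Real implementations replace the finite-difference meta-gradient with automatic differentiation (second-order, or first-order in FOMAML).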
The above is the general meta-learning process, and the specific implementation can be adjusted and optimized based on specific algorithms and models. We have replicated and extended 12 meta-learning frameworks. Figure 3 lists the framework resources provided by the Awesome-META+ platform. Taking MAML as an example, this model is a classic work in the field of meta-learning, which uses optimization-based model training rules. After defining the initial parameters, the inner loop trains based on the initial parameters, and the loss on the task is accumulated and back-propagated to the outer loop. The outer loop uses SGD to compute the second-order derivative, which updates the initialization parameters. This framework is compatible with any model trained by gradient descent and is suitable for various learning problems. In the development of our platform, we use MAML as a typical example for multi-dataset scenarios and provide standardized instructions. Additionally, we configure multiple datasets, provide both PyTorch and TensorFlow code formats, support distributed and single-card training, and apply it to multiple scenarios such as reinforcement learning [21], [22], [23], gesture recognition [24], [25], and animal detection [26].
**Standardization.** We have standardized the reproduced meta-learning frameworks, including task settings and SGD second-order derivatives, in order to complete deployment with unified settings. We have also added these explanations and the specific standardization content to the "Documentation" module on the Awesome-META+ platform.
1) Dataset standardization: Given any input dataset, custom tasks can be easily generated based on user scenarios. A set of tasks is created from the given dataset, which accepts a list of task transforms that define the types of tasks to be sampled from the dataset. These tasks are lazily sampled when indexed (or when the .sample() method is called), and their descriptions are cached for later use. If num_tasks is set to -1, TaskDataset will not cache task descriptions and will continuously sample new descriptions. In this case, the length of TaskDataset is set to 1.
2) Model standardization (using MAML as an example): The MAML class wraps any nn.Module and extends it with the clone() and adapt() methods. For the first-order version of MAML (i.e., FOMAML), the first_order flag is set to True during initialization. In addition to reproducing the models based on the standardization rules, we have also reproduced the performance-testing experiments from the papers to ensure the correctness of the reproduction of the twelve frameworks.
**Code for Dataset standardization**
```python
# dataset (Dataset) - dataset from which tasks are computed
# task_transforms (list, optional, default=None) - list of task transformations
# num_tasks (int, optional, default=-1) - number of tasks to generate
dataset = l2l.data.MetaDataset(MyDataset())
transforms = [
    l2l.data.transforms.NWays(dataset, n=5),
    l2l.data.transforms.KShots(dataset, k=1),
    l2l.data.transforms.LoadData(dataset),
]
taskset = l2l.data.TaskDataset(dataset, transforms, num_tasks=200000)
for task in taskset:
    X, y = task
```
**Code for Model standardization**
```python
# model (Module) - module to be wrapped
# lr (float) - fast-adaptation learning rate
# first_order (bool, optional, default=False) - whether to use the
#     first-order approximation of MAML (FOMAML)
# allow_unused (bool, optional, default=None) - whether to allow
#     differentiation of unused parameters; defaults to allow_nograd
# allow_nograd (bool, optional, default=False) - whether to allow adaptation
#     with parameters that have requires_grad = False
linear = l2l.algorithms.MAML(nn.Linear(20, 10), lr=0.01)
clone = linear.clone()
error = loss(clone(X), y)
clone.adapt(error)
error = loss(clone(X), y)
error.backward()
```
**Universality.** To better meet the needs of users in deploying frameworks, we have rewritten the frameworks to support multiple datasets, training methods, and multi-tasking capabilities. Additionally, we have made modifications to the training methods and environment versions with consideration for the hardware configurations of future servers and the local hardware resources available to users.
1) Regarding training methods, the frameworks include distributed options (supporting multi-GPU training for hardware configurations such as servers and host machines with multiple graphics cards) as well as single-GPU training (supporting GPU-based hardware platforms).
2) Regarding environment versions, some frameworks (such as MAML and Prototypical Networks) are offered in both PyTorch and TensorFlow versions, supporting multiple training formats.
**Code for multi-GPU training**
```python
args.gpus = gpu
torch.cuda.set_device(gpu)
args.rank = args.node_rank * ngpus + gpu
device = torch.device('cuda:%d' % args.gpus)
if args.dist == 'ddp':
    dist.init_process_group(
        backend='nccl',
        init_method='tcp://%s' % args.dist_address,
        world_size=args.world_size,
        rank=args.rank,
    )
    n_gpus_total = dist.get_world_size()
    assert args.batch_size % n_gpus_total == 0
    args.batch_size //= n_gpus_total  # split the global batch across GPUs
    if args.rank == 0:
        print(f'{n_gpus_total} GPUs total; '
              f'batch_size={args.batch_size} per GPU')
print(f'--> Proc {dist.get_rank()}/{dist.get_world_size()}'
      f'@{socket.gethostname()}', flush=True)
```
**Rapid deployment.** In order to allow users to quickly deploy locally (or online) and meet their needs as much as possible, we have taken the following actions:
1) Code encapsulation for on-demand running with only two lines of code: Configuration parameters such as the dataset and training method are written directly into the framework, while functional modules are encapsulated as classes. Users can select directly on the "Tutorials" module of the Awesome-META+ platform, or follow the deployment instructions downloaded locally for training and running. The whole process involves only two statements: the environment configuration and the command for using the meta-learning framework.
2) Multi-scenario transfer: Experiment examples for multiple scenarios are provided on the "Examples" module of the Awesome-META+ platform, demonstrating the effectiveness and practicality of the main frameworks.
3) Online demo: To facilitate online training, we have attempted to set up a port linked to Colab on the web page. A MAML notebook is provided in Colab, which is a free cloud-based notebook environment.
C. Information integration
Another core feature of the Awesome-META+ platform is to collect academic information, allowing users to access cutting-edge work in the field of meta-learning, including resources such as journals, conferences, and major reports. The information integration process is shown in Figure 4. This includes:
1) High-quality papers published in recent years: This metric is evaluated based on the impact factor of the journal, the rating of the conference, and the citation count of the paper itself, for example SCI Q1 journals and CCF top conferences such as ICML and ICLR.
2) Works in the field of meta-learning: including introductory books for beginners and advanced books for practitioners, works are ranked based on their influence and recommendations from multiple blogs.
3) Discussion videos and conference links related to meta-learning: workshops at important conferences, etc.
4) Meta-learning-related blogs and videos: Blog resources include Chinese websites such as Zhihu and CSDN, and international websites such as Stack Overflow and DZone. Video resources are mainly from websites such as Bilibili and YouTube.
To achieve more comprehensive information collection and summary, and push it on the platform, we mainly did the following work:
1) Information was crawled based on keywords for resource web pages and addresses, looking for information related to "meta-learning" and "learn-to-learn"; for works and reports, manual search was conducted (based on discussions in forums).
2) The crawled information was sorted and filtered. The filtering criteria included: selecting the top 10 papers for each website based on citation counts (specifically, the citation count at conferences such as ICLR, oral presentations, and the number of bookmarks on websites such as ResearchGate, based on the influence evaluation indicators provided by these websites), finally retaining 30 papers; and selecting 20 blog posts and videos based on the number of likes and views, respectively.
3) The filtered information was summarized and manually uploaded to the platform.
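The sorting and filtering in step 2 can be sketched as a score-and-rank pass; the field names and weights below are illustrative assumptions, not the platform's actual crawler schema:

```python
# Rank crawled entries by a weighted popularity score and keep the top-k,
# mirroring the citation/likes/views filtering described above.
def top_k(entries, k, weights):
    def score(entry):
        return sum(weights.get(field, 0) * entry.get(field, 0)
                   for field in weights)
    return sorted(entries, key=score, reverse=True)[:k]

# Hypothetical crawled records
papers = [
    {"title": "MAML", "citations": 9000},
    {"title": "ProtoNet", "citations": 7000},
    {"title": "Obscure", "citations": 12},
]
best = top_k(papers, 2, weights={"citations": 1.0})  # two most-cited entries
```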
Finally, users can access paper information and related materials on the "Home" and "Papers" pages, and download or jump to the content they want to learn.
IV. VERIFICATION AND APPLICATION
A. Testing and verification
The platform testing includes automation testing and manual testing. The automation testing includes function testing based on Playwright + Python for webpages, access and API response testing based on OctaGate SiteTimer, and automation testing based on Python for frameworks and algorithms. The deployment is based on GitHub + Vercel to deploy the webpage platform demo (V1.0).
Automation testing. The testing of the Awesome-META+ platform mainly involves two parts: web testing and integrated framework testing. The web testing includes two components: testing the navigation and interaction of each interface and function, as well as performance testing for the corresponding speed. The platform provides nine interfaces, including "Home", "Tutorials", "Documentation", "Examples", "Papers", "Datasets", "Community", "Changelog", and "GitHub". Among them, "Tutorials", "Papers", "Datasets", and "Examples" can be linked to the "download" operation. "Tutorials" also involves online deployment, and the most important aspect is the navigation relationship between different interfaces. Therefore, we conducted testing from three core functional perspectives: "System Front-end Web Interaction", "System Framework Integration Testing and Deployment", and "Meta-Learning Information Acquisition". The test results are shown in Table V.
For system front-end web interaction, we chose to use the Python community’s Playwright library for web automation testing. Playwright is a pure automation tool designed specifically for the Python language by Microsoft. It can automatically execute Chromium, Firefox, and WebKit browsers through a single API and can achieve automation functions while supporting headless and headed modes. Moreover, Playwright supports Linux, Mac, and Windows operating systems, making it a very suitable web testing tool for the Awesome-META+ platform we want to build.
For integrated framework testing, we designed each framework modularly, so modules can be activated or deactivated as needed. Table X shows the corresponding results of each framework from unit testing to functional testing. The unit testing covers each framework's main function and standardized functions, and the functional testing covers all algorithms provided by the platform. Each framework includes multiple datasets and tasks, some frameworks contain multiple training modes (distributed and single-card), and the core algorithms are encapsulated as classes and functions.
For meta-learning information acquisition, we randomly extracted 600 data items related to meta-learning to construct the test set. The related entries include paper keywords, title entity words, author, and conference information. 60%...
<table>
<thead>
<tr>
<th>Test (%)</th>
<th>Front-end web interaction</th>
<th>Framework integration testing and deployment</th>
<th>Information acquisition</th>
</tr>
</thead>
<tbody>
<tr>
<td>Test 1</td>
<td>99.237</td>
<td>Unit Tests (99.372) Functional Tests (97.832)</td>
<td>100.000</td>
</tr>
<tr>
<td>Test 2</td>
<td>99.372</td>
<td>Unit Tests (100.00) Functional Tests (99.382)</td>
<td>99.827</td>
</tr>
<tr>
<td>Test 3</td>
<td>99.178</td>
<td>Unit Tests (98.893) Functional Tests (99.478)</td>
<td>98.728</td>
</tr>
<tr>
<td>Test 4</td>
<td>99.732</td>
<td>Unit Tests (100.00) Functional Tests (100.00)</td>
<td>99.387</td>
</tr>
<tr>
<td>Test 5</td>
<td>100.000</td>
<td>Unit Tests (100.00) Functional Tests (99.974)</td>
<td>100.000</td>
</tr>
<tr>
<td>Test 6</td>
<td>98.947</td>
<td>Unit Tests (99.287) Functional Tests (98.237)</td>
<td>98.728</td>
</tr>
<tr>
<td>Test 7</td>
<td>99.238</td>
<td>Unit Tests (100.00) Functional Tests (97.473)</td>
<td>100.000</td>
</tr>
<tr>
<td>Test 8</td>
<td>100.000</td>
<td>Unit Tests (100.00) Functional Tests (100.00)</td>
<td>100.000</td>
</tr>
<tr>
<td>Test 9</td>
<td>100.000</td>
<td>Unit Tests (99.783) Functional Tests (98.478)</td>
<td>100.000</td>
</tr>
<tr>
<td>Test 10</td>
<td>99.389</td>
<td>Unit Tests (100.00) Functional Tests (97.873)</td>
<td>99.738</td>
</tr>
</tbody>
</table>
TABLE V: The testing and verification of Awesome-META+. It shows the results of both automated testing and manual testing, with automated testing further divided into three aspects: front-end web interaction, framework integration testing, and deployment, and information acquisition.
Fig. 5: The performance of Awesome-META+.
**Manual testing.** Error handling and guided prompts were set up for potential issues such as empty form submissions, search failures, and ineffective model download controls to facilitate user use. Table VI shows the results of manual testing. Multiple tests have shown that for front-end interaction, users can obtain resource download responses within 50 ms without being affected by internet speed. Empty form submissions and empty package downloads do not occur, and the search function can quickly locate the corresponding module without any functional errors. For model deployment operations, we had a friend from the Astronomy Department (Astronomical Techniques and Methods) follow the deployment instructions, and the models were able to run smoothly on both the server and local machines for training.
B. Deployment and maintenance
Awesome-META+ is presented as a web page, and is deployed using two methods: GitHub Pages for the display platform and server+nginx for the system usage platform. The specific deployment schemes and provided computing resources are presented in Table VII.
To ensure the long-term performance of Awesome-META+ and support the research functionalities, we have designed the system sustainably and reserved interfaces for future updates and iterations. The main work includes:
Modularization of functionalities: All frameworks and specific functionalities are designed with modularity, and ports for activation or deactivation are integrated into deployment instructions, making it easy to locate the functional blocks for future modifications and supplements.
Developer-oriented comments are included in the code, and standardized documents and system design schemes are provided for other standardized engineering in different fields. The integrated frameworks, datasets, optimization packages, and academic materials are all packaged for upload, like building blocks that can be continuously supplemented on the basis of existing resources. With sufficient computing resources, there is no upper limit. The "Changelog" page of Awesome-META+ will provide explanations for version iterations and updates.
C. Performance optimization
The Awesome-META+ platform consists of nine major modules, including "Home", "Tutorials", "Documentation", "Examples", "Papers", "Datasets", "Community", "Changelog", and "GitHub". These modules cover all the necessary aspects for applying typical meta-learning frameworks, such as deployment, usage tutorials, source code, and practical cases, as well as providing information on meta-learning academic resources, including papers, datasets, blogs, and video tutorials. Additionally, the platform includes multiple modules related to platform updates and community building. Figure 5 illustrates the effects of the Awesome-META+ platform.
V. CONCLUSION
The Awesome-META+ platform is a learning and research platform that focuses on the rapid deployment and integration of meta-learning frameworks, based on the background of the general artificial intelligence demand and the research of meta-learning frameworks. The platform takes into account the framework deployment mechanism and platform software ecosystem to build a meta-learning research and learning platform. In the process of system design and development, we also hope to provide a set of examples to provide ideas for the integration and standardization of frameworks in other fields.
The platform provides complete and reliable meta-learning framework code, convenient and simple model deployment solutions, comprehensive information integration and learning functions, and objective and trustworthy performance analysis. These features enable users to easily learn meta-learning in a convenient way, even if they do not have much related academic foundation, and apply it to various fields such as reinforcement learning, gesture recognition, and few-shot image classification.
The two core functions of the Awesome-META+ platform, rapid deployment of meta-learning frameworks and information aggregation and retrieval, provide a learning channel for novices and reduce the entry barrier to meta-learning. At the same time, the platform provides rich academic resources such as baselines and benchmarks for meta-learning scholars, saving their time and improving their research efficiency. In addition, the platform also provides a method for industrial personnel to use meta-learning for product development.
ACKNOWLEDGMENT
With many thanks to our professor, Tiejian Luo, for his guidance during this research.
REFERENCES
4.3 Minimum Spanning Trees
- edge-weighted graph API
- greedy algorithm
- Kruskal’s algorithm
- Prim’s algorithm
- advanced topics
Minimum spanning tree
Given. Undirected graph $G$ with positive edge weights (connected).
Def. A spanning tree of $G$ is a subgraph $T$ that is connected and acyclic.
Goal. Find a min weight spanning tree.

spanning tree $T$: cost = 50 = 4 + 6 + 8 + 5 + 11 + 9 + 7
**Brute force.** Try all spanning trees?
Applications
MST is fundamental problem with diverse applications.
- Cluster analysis.
- Max bottleneck paths.
- Real-time face verification.
- LDPC codes for error correction.
- Image registration with Renyi entropy.
- Find road networks in satellite and aerial imagery.
- Reducing data storage in sequencing amino acids in a protein.
- Model locality of particle interactions in turbulent fluid flows.
- Autoconfig protocol for Ethernet bridging to avoid cycles in a network.
- Approximation algorithms for NP-hard problems (e.g., TSP, Steiner tree).
- Network design (communication, electrical, hydraulic, cable, computer, road).
Network design
MST of bicycle routes in North Seattle
http://www.flickr.com/photos/ewedistrict/21980840
MST describes arrangement of nuclei in the epithelium for cancer research
http://www.bccrc.ca/ci/ta01_archlevel.html
Genetic research
MST of tissue relationships measured by gene expression correlation coefficient
http://riodb.ibase.aist.go.jp/CELLPEDIA
edge-weighted graph API
- greedy algorithm
- Kruskal’s algorithm
- Prim’s algorithm
- advanced topics
# Weighted edge API
Edge abstraction needed for weighted edges.
```java
public class Edge implements Comparable<Edge> {
public Edge(int v, int w, double weight) {
// create a weighted edge v-w
}
public int either() {
// either endpoint
}
public int other(int v) {
// the endpoint that's not v
}
public int compareTo(Edge that) {
// compare this edge to that edge
}
public double weight() {
// the weight
}
public String toString() {
// string representation
}
}
```
Idiom for processing an edge `e`: `int v = e.either(), w = e.other(v);`
```java
public class Edge implements Comparable<Edge> {
    private final int v, w;
    private final double weight;

    public Edge(int v, int w, double weight) {
        this.v = v;
        this.w = w;
        this.weight = weight;
    }

    public int either() { return v; }

    public int other(int vertex) {
        if (vertex == v) return w;
        else return v;
    }

    public int compareTo(Edge that) {
        if      (this.weight < that.weight) return -1;
        else if (this.weight > that.weight) return +1;
        else return 0;
    }

    public double weight() { return weight; }

    public String toString() { return v + "-" + w + " " + weight; }
}
```
**Edge-weighted graph API**
```java
public class EdgeWeightedGraph {
EdgeWeightedGraph(int V) { /* create an empty graph with V vertices */ }
EdgeWeightedGraph(In in) { /* create a graph from input stream */ }
void addEdge(Edge e) { /* add weighted edge e */ }
Iterable<Edge> adj(int v) { /* edges incident to v */ }
Iterable<Edge> edges() { /* all of this graph's edges */ }
int V() { /* return number of vertices */ }
int E() { /* return number of edges */ }
String toString() { /* string representation */ }
}
```
**Conventions.** Allow self-loops and parallel edges.
Edge-weighted graph: adjacency-list representation
Maintain vertex-indexed array of Edge lists (use Bag abstraction).
```
adj[]
0 → 6 0 .58 → 0 2 .26 → 0 4 .38 → 0 7 .16
1 → 1 3 .29 → 1 2 .36 → 1 7 .19 → 1 5 .32
2 → 6 2 .40 → 2 7 .34 → 1 2 .36 → 0 2 .26 → 2 3 .17
3 → 3 6 .52 → 1 3 .29 → 2 3 .17
4 → 6 4 .93 → 0 4 .38 → 4 7 .37 → 4 5 .35
5 → 1 5 .32 → 5 7 .28 → 4 5 .35
6 → 6 4 .93 → 6 0 .58 → 3 6 .52 → 6 2 .40
7 → 2 7 .34 → 1 7 .19 → 0 7 .16 → 5 7 .28 → 4 7 .37
```
(adjacency lists for tinyEWG.txt; the two Bag entries for an edge are references to the same Edge object)
```java
public class EdgeWeightedGraph {
    private final int V;
    private final Bag<Edge>[] adj;

    public EdgeWeightedGraph(int V) {
        this.V = V;
        adj = (Bag<Edge>[]) new Bag[V];
        for (int v = 0; v < V; v++)
            adj[v] = new Bag<Edge>();
    }

    public void addEdge(Edge e) {
        int v = e.either(), w = e.other(v);
        adj[v].add(e);
        adj[w].add(e);
    }

    public Iterable<Edge> adj(int v) { return adj[v]; }
}
```
Minimum spanning tree API
Q. How to represent the MST?
```java
public class MST {
    MST(EdgeWeightedGraph G)  // constructor
    Iterable<Edge> edges()    // edges in MST
    double weight()           // weight of MST
}
```
```java
public static void main(String[] args) {
    In in = new In(args[0]);
    EdgeWeightedGraph G = new EdgeWeightedGraph(in);
    MST mst = new MST(G);
    for (Edge e : mst.edges())
        StdOut.println(e);
    StdOut.println(mst.weight());
}
```
edge-weighted graph API
• greedy algorithm
• Kruskal’s algorithm
• Prim’s algorithm
• advanced topics
Simplifying assumptions. Edge weights are distinct; graph is connected.
**Def.** A cut in a graph is a partition of its vertices into two (nonempty) sets. A crossing edge connects a vertex in one set with a vertex in the other.
**Cut property.** Given any cut, the crossing edge of min weight is in the MST.
Cut property: correctness proof
Pf. Let $e$ be the min-weight crossing edge in cut.
• Suppose $e$ is not in the MST.
• Adding $e$ to the MST creates a cycle.
• Some other edge $f$ in cycle must be a crossing edge.
• Removing $f$ and adding $e$ is also a spanning tree.
• Since weight of $e$ is less than the weight of $f$, that spanning tree is lower weight.
• Contradiction. $\blacksquare$
**Greedy MST algorithm**
**Proposition.** The following algorithm computes the MST:
- Start with all edges colored gray.
- Find a cut with no black crossing edges, and color its min-weight edge black.
- Continue until $V - 1$ edges are colored black.
**Pf.**
- Any edge colored black is in the MST (via cut property).
- If fewer than $V - 1$ black edges, there exists a cut with no black crossing edges. (consider cut whose vertices are one connected component)
**Efficient implementations.** How to choose cut? How to find min-weight edge?
- **Ex 1.** Kruskal's algorithm. [stay tuned]
- **Ex 2.** Prim's algorithm. [stay tuned]
- **Ex 3.** Borůvka's algorithm.
Removing two simplifying assumptions
Q. What if edge weights are not all distinct?
A. Greedy MST algorithm still correct if equal weights are present! (our correctness proof fails, but that can be fixed)
Q. What if graph is not connected?
A. Compute minimum spanning forest = MST of each component.
Various MST anomalies
- weights can be 0 or negative
- MST may not be unique when weights have equal values
- weights need not be proportional to distance
- no MST if graph is not connected
Greed is good
Gordon Gekko (Michael Douglas) addressing Teldar Paper stockholders in Wall Street (1987)
edge-weighted graph API
greedy algorithm
Kruskal’s algorithm
Prim’s algorithm
advanced topics
Kruskal’s algorithm. [Kruskal 1956] Consider edges in ascending order of weight. Add the next edge to the tree $T$ unless doing so would create a cycle.
Kruskal's algorithm visualization (snapshots at 25%, 50%, 75%, and 100% of the edges processed)
Kruskal's algorithm: proof of correctness
**Proposition.** Kruskal's algorithm computes the MST.
**Pf.** Kruskal's algorithm is a special case of the greedy MST algorithm.
- Suppose Kruskal's algorithm colors edge $e = v - w$ black.
- **Cut** = set of vertices connected to $v$ (or to $w$) in tree $T$.
- No crossing edge is black.
- No crossing edge has lower weight. Why?
**Kruskal's algorithm: implementation challenge**
Challenge. Would adding edge $v \rightarrow w$ to tree $T$ create a cycle? If not, add it.
How difficult?
- $O(E + V)$ time. [run DFS from $v$, check if $w$ is reachable]
- $O(V)$ time. [DFS in $T$ alone: $T$ has at most $V - 1$ edges]
- $O(\log V)$ time.
- $O(\log^* V)$ time. [use the union-find data structure!]
- Constant time.
Kruskal’s algorithm: implementation challenge
Challenge. Would adding edge $v \rightarrow w$ to tree $T$ create a cycle? If not, add it.
Efficient solution. Use the union-find data structure.
• Maintain a set for each connected component in $T$.
• If $v$ and $w$ are in same set, then adding $v \rightarrow w$ would create a cycle.
• To add $v \rightarrow w$ to $T$, merge sets containing $v$ and $w$.
Case 1: adding $v \rightarrow w$ creates a cycle
Case 2: add $v \rightarrow w$ to $T$ and merge sets containing $v$ and $w$
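The union-find structure this solution relies on can be sketched as a minimal weighted quick-union with path halving. Method names mirror how the Kruskal implementation below uses its UnionFind, where find(v, w) asks whether v and w are connected; the class itself is an illustrative sketch, not the course library.

```java
// Minimal weighted quick-union with path halving; a sketch of the
// union-find structure Kruskal's algorithm relies on.
public class UnionFind {
    private final int[] parent, size;

    public UnionFind(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    private int root(int i) {
        while (parent[i] != i) {
            parent[i] = parent[parent[i]];  // path halving
            i = parent[i];
        }
        return i;
    }

    /** Are v and w in the same component? */
    public boolean find(int v, int w) { return root(v) == root(w); }

    /** Merge the components containing v and w (smaller tree under larger). */
    public void union(int v, int w) {
        int rv = root(v), rw = root(w);
        if (rv == rw) return;
        if (size[rv] < size[rw]) { parent[rv] = rw; size[rw] += size[rv]; }
        else                     { parent[rw] = rv; size[rv] += size[rw]; }
    }
}
```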
Kruskal's algorithm: Java implementation
```java
public class KruskalMST
{
private Queue<Edge> mst;
private MinPQ<Edge> pq;
public KruskalMST(EdgeWeightedGraph G)
{
mst = new Queue<Edge>();
pq = new MinPQ<Edge>(G.edges());
UnionFind uf = new UnionFind(G.V());
while (!pq.isEmpty() && mst.size() < G.V()-1)
{
Edge e = pq.delMin();
int v = e.either(), w = e.other(v);
if (!uf.find(v, w))
{
uf.union(v, w);
mst.enqueue(e);
}
}
}
public Iterable<Edge> edges()
{ return mst; }
}
```
**Proposition.** Kruskal's algorithm computes MST in $O(E \log E)$ time.
**Pf.**
<table>
<thead>
<tr>
<th>operation</th>
<th>frequency</th>
<th>time per op</th>
</tr>
</thead>
<tbody>
<tr>
<td>build pq</td>
<td>1</td>
<td>E</td>
</tr>
<tr>
<td>del min</td>
<td>E</td>
<td>log E</td>
</tr>
<tr>
<td>union</td>
<td>V</td>
<td>log* V †</td>
</tr>
<tr>
<td>find</td>
<td>E</td>
<td>log* V †</td>
</tr>
</tbody>
</table>
† amortized bound using weighted quick union with path compression
**Remark.** If edges are already sorted, order of growth is $E \log^* V$.
recall: $\log^* V \leq 5$ in this universe
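For readers without the course libraries, the whole algorithm fits in a self-contained sketch: plain arrays plus an inline quick-union. The {v, w, weight} edge-triple encoding is an assumption of this sketch, not the slides' Edge type.

```java
import java.util.*;

// Self-contained Kruskal sketch: sort edges by weight, take an edge
// unless it would create a cycle (checked with an inline quick-union).
public class KruskalSketch {
    /** Total MST weight of a connected graph; edges given as {v, w, weight} triples. */
    public static double mstWeight(int V, double[][] edges) {
        double[][] sorted = edges.clone();               // consider edges in ascending order of weight
        Arrays.sort(sorted, Comparator.comparingDouble((double[] e) -> e[2]));
        int[] parent = new int[V];                       // inline quick-union forest
        for (int v = 0; v < V; v++) parent[v] = v;
        double total = 0.0;
        int taken = 0;
        for (double[] e : sorted) {
            if (taken == V - 1) break;                   // tree is complete
            int rv = root(parent, (int) e[0]), rw = root(parent, (int) e[1]);
            if (rv != rw) {                              // no cycle: take the edge
                parent[rv] = rw;
                total += e[2];
                taken++;
            }
        }
        return total;
    }

    private static int root(int[] parent, int i) {
        while (parent[i] != i) i = parent[i];
        return i;
    }
}
```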
- edge-weighted graph API
- greedy algorithm
- Kruskal’s algorithm
- **Prim’s algorithm**
- advanced topics
**Prim's algorithm**. [Jarník 1930, Dijkstra 1957, Prim 1959]
Start with vertex 0 and greedily grow tree $T$. At each step, add to $T$ the min weight edge with exactly one endpoint in $T$.
**Prim's algorithm example.** Edges with exactly one endpoint in $T$, sorted by weight, at the first step:
- 0-7 0.16
- 0-2 0.26
- 0-4 0.38
- 6-0 0.58
The min weight crossing edge (here 0-7) is added to $T$, and the sorted list of crossing edges is updated at each step.
Prim’s algorithm: visualization
Prim's algorithm: implementation challenge
**Challenge.** Find the min weight edge with exactly one endpoint in $T$.
**How difficult?**
- $O(E)$ time. [try all edges]
- $O(V)$ time.
- $O(\log E)$ time. [use a priority queue!]
- $O(\log^* E)$ time.
- Constant time.

(in the example, 1-7 is the min weight edge with exactly one endpoint in $T$)
Proposition. Prim's algorithm computes the MST.
Pf. Prim's algorithm is a special case of the greedy MST algorithm.
- Suppose edge $e = \min$ weight edge connecting a vertex on the tree to a vertex not on the tree.
- Cut = set of vertices connected on tree.
- No crossing edge is black.
- No crossing edge has lower weight.
**Prim's algorithm: lazy implementation**
**Challenge.** Find the min weight edge with exactly one endpoint in T.
**Lazy solution.** Maintain a PQ of *edges* with (at least) one endpoint in T.
- Delete min to determine next edge $e = v–w$ to add to $T$.
- Disregard if both endpoints $v$ and $w$ are in $T$.
- Otherwise, let $v$ be vertex not in $T$:
- add to PQ any edge incident to $v$ (assuming other endpoint not in $T$)
- add $v$ to $T$
1-7 is min weight edge with exactly one endpoint in T
priority queue of crossing edges
1-7 0.19
0-2 0.26
5-7 0.28
2-7 0.34
4-7 0.37
0-4 0.38
6-0 0.58
Prim's algorithm example: lazy implementation
Use $\text{MinPQ}$: key = edge, prioritized by weight.
(lazy version leaves some obsolete edges on the PQ)
* marks new priority queue entry
obsolete edges (gray)
Prim's algorithm: lazy implementation
```java
public class LazyPrimMST {
private boolean[] marked; // MST vertices
private Queue<Edge> mst; // MST edges
private MinPQ<Edge> pq; // PQ of edges
public LazyPrimMST(WeightedGraph G) {
pq = new MinPQ<Edge>();
mst = new Queue<Edge>();
marked = new boolean[G.V()];
visit(G, 0);
while (!pq.isEmpty()) {
Edge e = pq.delMin();
int v = e.either(), w = e.other(v);
if (marked[v] && marked[w]) continue;
mst.enqueue(e);
if (!marked[v]) visit(G, v);
if (!marked[w]) visit(G, w);
}
}
}
```
Annotations for the code above: assume $G$ is connected; repeatedly delete the min weight edge $e = v$–$w$ from the PQ; ignore it if both endpoints are in $T$; otherwise add edge $e$ to the tree, then add its new endpoint ($v$ or $w$) to the tree.
Prim's algorithm: lazy implementation
```java
private void visit(WeightedGraph G, int v) {
marked[v] = true;
for (Edge e : G.adj(v))
if (!marked[e.other(v)])
pq.insert(e);
}
public Iterable<Edge> mst() {
return mst;
}
```
- add v to T
- for each edge e = v–w, add to PQ if w not already in T
Proposition. Lazy Prim's algorithm computes the MST in time proportional to $E \log E$ in the worst case.
Pf.
<table>
<thead>
<tr>
<th>operation</th>
<th>frequency</th>
<th>binary heap</th>
</tr>
</thead>
<tbody>
<tr>
<td>delete min</td>
<td>E</td>
<td>$\log E$</td>
</tr>
<tr>
<td>insert</td>
<td>E</td>
<td>$\log E$</td>
</tr>
</tbody>
</table>
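A self-contained version of lazy Prim, using java.util.PriorityQueue in place of the course's MinPQ and {v, w, weight} triples in place of the Edge type (both are assumptions of this sketch):

```java
import java.util.*;

// Lazy Prim sketch: grow T from vertex 0; the PQ holds edges with at
// least one endpoint in T, and obsolete edges are skipped on deletion.
public class LazyPrimSketch {
    /** Total MST weight of a connected graph; edges given as {v, w, weight} triples. */
    public static double mstWeight(int V, double[][] edges) {
        List<List<double[]>> adj = new ArrayList<>();
        for (int v = 0; v < V; v++) adj.add(new ArrayList<>());
        for (double[] e : edges) {
            adj.get((int) e[0]).add(e);
            adj.get((int) e[1]).add(e);
        }
        boolean[] marked = new boolean[V];      // marked[v] = is v on the tree?
        PriorityQueue<double[]> pq =            // lazy PQ of edges, min weight first
            new PriorityQueue<>(Comparator.comparingDouble((double[] x) -> x[2]));
        double total = 0.0;
        marked[0] = true;
        pq.addAll(adj.get(0));
        while (!pq.isEmpty()) {
            double[] e = pq.poll();
            int v = (int) e[0], w = (int) e[1];
            if (marked[v] && marked[w]) continue;   // obsolete edge: both endpoints in T
            total += e[2];
            int u = marked[v] ? w : v;              // the endpoint not yet on the tree
            marked[u] = true;
            for (double[] f : adj.get(u))           // new crossing-edge candidates
                if (!marked[(int) f[0]] || !marked[(int) f[1]])
                    pq.add(f);
        }
        return total;
    }
}
```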
### Indexed priority queue
Associate an index between 0 and $N-1$ with each key in a priority queue.
- Client can insert and delete-the-minimum.
- Client can change the key by specifying the index.
```java
public class IndexMinPQ<Key extends Comparable<Key>> {
    IndexMinPQ(int N)                 // create indexed priority queue with indices 0, 1, ..., N-1
    void insert(int k, Key key)       // associate key with index k
    void decreaseKey(int k, Key key)  // decrease the key associated with index k
    boolean contains(int k)           // is k an index on the priority queue?
    int delMin()                      // remove a minimal key and return its associated index
    boolean isEmpty()                 // is the priority queue empty?
    int size()                        // number of entries in the priority queue
}
```
**Challenge.** Find min weight edge with exactly one endpoint in \( T \).
**Eager solution.** Maintain a PQ of vertices connected by an edge to \( T \), where priority of vertex \( v \) = weight of shortest edge connecting \( v \) to \( T \).
- Delete min vertex \( v \) and add its associated edge \( e = v - w \) to \( T \).
- Update PQ by considering all edges \( e = v - x \) incident to \( v \)
- ignore if \( x \) is already in \( T \)
- add \( x \) to PQ if not already on it
- decrease priority of \( x \) if \( v - x \) becomes shortest edge connecting \( x \) to \( T \)
(figure: edgeTo[]/distTo[] table; for each vertex $v$ not yet on the tree, the shortest edge connecting $v$ to $T$ and its weight)
Indexed priority queue implementation
Implementation.
- Start with same code as MinPQ.
- Maintain parallel arrays keys[], pq[], and qp[] so that:
- keys[i] is the priority of i
- pq[i] is the index of the key in heap position i
- qp[i] is the heap position of the key with index i
- Use swim(qp[k]) to implement decreaseKey(k, key).
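The parallel-array scheme above can be sketched as a minimal indexed min-priority queue with double keys; this is a sketch, not the course's full generic IndexMinPQ (no delete-by-index, no error checking).

```java
// Minimal indexed min-PQ following the parallel-array scheme:
// keys[i] = priority of index i, pq[h] = index at heap position h,
// qp[i] = heap position of index i (-1 if absent).
public class IndexMinPQSketch {
    private int n;                 // number of entries
    private final double[] keys;
    private final int[] pq;        // 1-indexed binary heap
    private final int[] qp;

    public IndexMinPQSketch(int N) {
        keys = new double[N];
        pq = new int[N + 1];
        qp = new int[N];
        java.util.Arrays.fill(qp, -1);
    }

    public boolean contains(int k) { return qp[k] != -1; }
    public boolean isEmpty()       { return n == 0; }

    public void insert(int k, double key) {
        n++;
        qp[k] = n; pq[n] = k; keys[k] = key;
        swim(n);
    }

    public void decreaseKey(int k, double key) {
        keys[k] = key;
        swim(qp[k]);               // restore heap order by swimming up
    }

    public int delMin() {
        int min = pq[1];
        exch(1, n--);
        sink(1);
        qp[min] = -1;
        return min;
    }

    private boolean greater(int h1, int h2) { return keys[pq[h1]] > keys[pq[h2]]; }
    private void exch(int h1, int h2) {
        int t = pq[h1]; pq[h1] = pq[h2]; pq[h2] = t;
        qp[pq[h1]] = h1; qp[pq[h2]] = h2;     // keep qp[] in sync with pq[]
    }
    private void swim(int h) {
        while (h > 1 && greater(h / 2, h)) { exch(h / 2, h); h = h / 2; }
    }
    private void sink(int h) {
        while (2 * h <= n) {
            int j = 2 * h;
            if (j < n && greater(j, j + 1)) j++;
            if (!greater(h, j)) break;
            exch(h, j);
            h = j;
        }
    }
}
```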
Prim's algorithm example: eager implementation
Use **IndexMinPQ**: key = edge weight, index = vertex.
(eager version has at most one PQ entry per vertex)
Prim's algorithm: running time
**Depends on PQ implementation:** $V$ insert, $V$ delete-min, $E$ decrease-key.
<table>
<thead>
<tr>
<th>PQ implementation</th>
<th>insert</th>
<th>delete-min</th>
<th>decrease-key</th>
<th>total</th>
</tr>
</thead>
<tbody>
<tr>
<td>array</td>
<td>$1$</td>
<td>$V$</td>
<td>$1$</td>
<td>$V^2$</td>
</tr>
<tr>
<td>binary heap</td>
<td>$\log V$</td>
<td>$\log V$</td>
<td>$\log V$</td>
<td>$E \log V$</td>
</tr>
<tr>
<td>d-way heap (Johnson 1975)</td>
<td>$d \log_d V$</td>
<td>$d \log_d V$</td>
<td>$\log_d V$</td>
<td>$E \log_{E/V} V$</td>
</tr>
<tr>
<td>Fibonacci heap (Fredman-Tarjan 1984)</td>
<td>$1$ †</td>
<td>$\log V$ †</td>
<td>$1$ †</td>
<td>$E + V \log V$</td>
</tr>
</tbody>
</table>
† amortized
**Bottom line.**
- Array implementation optimal for dense graphs.
- Binary heap much faster for sparse graphs.
- 4-way heap worth the trouble in performance-critical situations.
- Fibonacci heap best in theory, but not worth implementing.
edge-weighted graph API
greedy algorithm
Kruskal’s algorithm
Prim’s algorithm
advanced topics
Does a linear-time MST algorithm exist?
**Remark.** Linear-time randomized MST algorithm (Karger-Klein-Tarjan 1995).
<table>
<thead>
<tr>
<th>Year</th>
<th>Worst Case</th>
<th>Discovered By</th>
</tr>
</thead>
<tbody>
<tr>
<td>1975</td>
<td>$E \log \log V$</td>
<td>Yao</td>
</tr>
<tr>
<td>1976</td>
<td>$E \log \log V$</td>
<td>Cheriton-Tarjan</td>
</tr>
<tr>
<td>1984</td>
<td>$E \log^* V, E + V \log V$</td>
<td>Fredman-Tarjan</td>
</tr>
<tr>
<td>1986</td>
<td>$E \log (\log^* V)$</td>
<td>Gabow-Galil-Spencer-Tarjan</td>
</tr>
<tr>
<td>1997</td>
<td>$E \alpha(V) \log \alpha(V)$</td>
<td>Chazelle</td>
</tr>
<tr>
<td>2000</td>
<td>$E \alpha(V)$</td>
<td>Chazelle</td>
</tr>
<tr>
<td>2002</td>
<td>Optimal</td>
<td>Pettie-Ramachandran</td>
</tr>
<tr>
<td>20xx</td>
<td>$E$</td>
<td>???</td>
</tr>
</tbody>
</table>
Euclidean MST
Given $N$ points in the plane, find an MST connecting them, where the distances between point pairs are their Euclidean distances.
**Brute force.** Compute $\sim N^2/2$ distances and run Prim's algorithm.
**Ingenuity.** Exploit geometry and do it in $\sim c N \log N$.
- edge-weighted graph API
- greedy algorithm
- Kruskal’s algorithm
- Prim’s algorithm
- advanced topics
Scientific application: clustering
**k-clustering.** Divide a set of objects into k coherent groups.
**Distance function.** Numeric value specifying "closeness" of two objects.
**Goal.** Divide into clusters so that objects in different clusters are far apart.
Applications.
- Routing in mobile ad hoc networks.
- Document categorization for web search.
- Similarity searching in medical image databases.
- Skycat: cluster $10^9$ sky objects into stars, quasars, galaxies.
outbreak of cholera deaths in London in 1850s (Nina Mishra)
Single-link clustering
**k-clustering.** Divide a set of objects into k coherent groups.
**Distance function.** Numeric value specifying "closeness" of two objects.
**Single link.** Distance between two clusters equals the distance between the two closest objects (one in each cluster).
**Single-link clustering.** Given an integer k, find a k-clustering that maximizes the distance between two closest clusters.
Single-link clustering algorithm
“Well-known” algorithm for single-link clustering:
• Form V clusters of one object each.
• Find the closest pair of objects such that each object is in a different cluster, and merge the two clusters.
• Repeat until there are exactly k clusters.
Observation. This is Kruskal’s algorithm (stop when k connected components).
Alternate solution. Run Prim’s algorithm and delete k-1 max weight edges.
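The "well-known" algorithm can be sketched directly as Kruskal's algorithm stopped at k connected components. The {i, j, dist} distance-triple encoding and the inline quick-union are assumptions of this sketch.

```java
import java.util.*;

// Single-link k-clustering as Kruskal's algorithm stopped at k components:
// repeatedly merge the two closest clusters until k remain.
public class SingleLink {
    /** Cluster labels for n objects; pairwise distances given as {i, j, dist} triples. */
    public static int[] cluster(int n, double[][] dists, int k) {
        double[][] sorted = dists.clone();               // closest pairs first
        Arrays.sort(sorted, Comparator.comparingDouble((double[] d) -> d[2]));
        int[] parent = new int[n];                       // inline quick-union forest
        for (int i = 0; i < n; i++) parent[i] = i;
        int components = n;
        for (double[] d : sorted) {
            if (components == k) break;                  // stop at k connected components
            int ri = root(parent, (int) d[0]), rj = root(parent, (int) d[1]);
            if (ri != rj) { parent[ri] = rj; components--; }
        }
        int[] label = new int[n];                        // label = root of each object's cluster
        for (int i = 0; i < n; i++) label[i] = root(parent, i);
        return label;
    }

    private static int root(int[] parent, int i) {
        while (parent[i] != i) i = parent[i];
        return i;
    }
}
```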
Dendrogram. Tree diagram that illustrates arrangement of clusters.
http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/hierarchical.html
Tumors in similar tissues cluster together.
Reference: Botstein & Brown group
Chapter 8
Basic Synchronization Principles
Need for Synchronization
- Multiprogramming
- Multiple concurrent, independent processes
- Those processes might want to coordinate activities
```
shared x, y;

Proc A {
    while (true) {
        <compute A1>
        write(x)
        <compute A2>
        read(y)
    }
}

Proc B {
    while (true) {
        read(x)
        <compute B1>
        write(y)
        <compute B2>
    }
}
```
- Clearly, synchronization is needed if
  - A wants B to read x *after* A writes it and *before* A re-writes it
Barriers to providing synchronization
- What are the barriers to providing good synchronization capabilities?
- No widely accepted parallel programming languages
- CSP
- Linda
- No widely used paradigm
- How do you decompose a problem?
- OS only provides minimal support
- Test and Set
- Semaphore
- Monitor
```
shared float balance;

/* Code schema for p1 */
balance = balance + amount;

/* Register-level schema for p1 */
load  R1, balance
load  R2, amount
add   R1, R2
store R1, balance

/* Code schema for p2 */
balance = balance - amount;

/* Register-level schema for p2 */
load  R1, balance
load  R2, amount
sub   R1, R2
store R1, balance
```
Critical Section Problem...
```
/* Schema for p1 */
1. load  R1, balance
2. load  R2, amount
3. add   R1, R2
4. store R1, balance

/* Schema for p2 */
1. load  R1, balance
2. load  R2, amount
3. sub   R1, R2
4. store R1, balance
```
- Suppose the numbered instructions of p1 and p2 are interleaved:
  - in one interleaving, p2 loads balance before p1 stores it => p1's update is lost
  - in another, p1 loads balance before p2 stores it => p2's update is lost
- Together => non-determinacy
- Race condition exists
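The lost update can be replayed deterministically by simulating one bad interleaving of the register-level schemas; class, field, and method names here are illustrative.

```java
// Deterministic replay of one bad interleaving: p2 loads balance
// before p1 stores it, so p1's deposit is lost.
public class LostUpdate {
    static double balance = 100.0;

    public static double interleaved(double amount) {
        double p1r1 = balance;      // p1: load R1, balance
        double p2r1 = balance;      // p2: load R1, balance  (before p1 stores!)
        balance = p1r1 + amount;    // p1: store R1, balance
        balance = p2r1 - amount;    // p2: store R1, balance -- p1's update is lost
        return balance;             // 90.0, not the correct 100.0
    }
}
```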
Race Condition Example 2
Two processes want to access shared memory at same time
Taken from Modern Operating Systems, 2\textsuperscript{nd} Ed, Tanenbaum, 2001
Using Shared Global Variables – Ver 1
```
shared int processnumber = 1;

void processone() {
    while (true) {
        while (processnumber == 2)
            ;  /* busy wait */
        criticalsectionone;
        processnumber = 2;
        otherstuffone;
    }
}

void processtwo() {
    while (true) {
        while (processnumber == 1)
            ;  /* busy wait */
        criticalsectiontwo;
        processnumber = 1;
        otherstufftwo;
    }
}
```
Single global variable forces **lockstep synchronization**
Using Shared Global Variables – Ver 2
```
shared boolean p1inside = false, p2inside = false;

void processone() {
    while (true) {
        while (p2inside)
            ;  /* busy wait */
        p1inside = true;
        criticalsectionone;
        p1inside = false;
        otherstuffone;
    }
}

void processtwo() {
    while (true) {
        while (p1inside)
            ;  /* busy wait */
        p2inside = true;
        criticalsectiontwo;
        p2inside = false;
        otherstufftwo;
    }
}
```
- Process 1 & 2 can both be in the critical sections at the same time
Because Test & Set operations are not atomic
==> Move setting of p1inside/p2inside before test
```
shared boolean p1wantsin = false, p2wantsin = false;

void processone() {
    while (true) {
        p1wantsin = true;
        while (p2wantsin)
            ;  /* busy wait */
        criticalsectionone;
        p1wantsin = false;
        otherstuffone;
    }
}

void processtwo() {
    while (true) {
        p2wantsin = true;
        while (p1wantsin)
            ;  /* busy wait */
        criticalsectiontwo;
        p2wantsin = false;
        otherstufftwo;
    }
}
```
- **Deadlock** can occur if both processes set their flags at the same time
==> Need a way to break out of loops.....
Using Shared Global Variables – Peterson
```
shared boolean p1wantsin = false, p2wantsin = false;
shared int will_wait;

void processone() {
    while (true) {
        p1wantsin = true;
        will_wait = 1;
        while (p2wantsin && (will_wait == 1))
            ;  /* busy wait */
        criticalsectionone;
        p1wantsin = false;
        otherstuffone;
    }
}

void processtwo() {
    while (true) {
        p2wantsin = true;
        will_wait = 2;
        while (p1wantsin && (will_wait == 2))
            ;  /* busy wait */
        criticalsectiontwo;
        p2wantsin = false;
        otherstufftwo;
    }
}
```
• Guarantees mutual exclusion, with no deadlock and no indefinite blocking
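Peterson's protocol can be exercised in Java: declaring the three shared variables volatile supplies the sequentially consistent reads and writes the algorithm assumes. The shared counter and thread setup are illustrative, not from the slides.

```java
// Peterson's protocol with two threads incrementing a shared counter
// inside the critical section; volatile gives the ordering Peterson needs.
public class Peterson {
    static volatile boolean p1wantsin = false, p2wantsin = false;
    static volatile int will_wait;
    static int counter = 0;

    static void enter1() { p1wantsin = true; will_wait = 1; while (p2wantsin && will_wait == 1) ; }
    static void exit1()  { p1wantsin = false; }
    static void enter2() { p2wantsin = true; will_wait = 2; while (p1wantsin && will_wait == 2) ; }
    static void exit2()  { p2wantsin = false; }

    /** Each process increments the shared counter n times inside its CS. */
    public static int run(int n) {
        counter = 0;
        Thread a = new Thread(() -> { for (int i = 0; i < n; i++) { enter1(); counter++; exit1(); } });
        Thread b = new Thread(() -> { for (int i = 0; i < n; i++) { enter2(); counter++; exit2(); } });
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counter;   // with mutual exclusion, no increment is lost
    }
}
```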
Wherein Lies the Problem?
- Problem stems from interruption of software-based process while executing critical code (low-level)
- Solution
- Identify critical section
- *Disable interrupts* while in Critical Section
```c
shared double balance;

/* Program for P1 */
DisableInterrupts();
balance = balance + amount;  /* CS */
EnableInterrupts();

/* Program for P2 */
DisableInterrupts();
balance = balance - amount;  /* CS */
EnableInterrupts();
```
Using Interrupts...
- This works *BUT*...
- Allows process to disable interrupts for arbitrarily long time
- What if I/O interrupt needed?
- What if one of the processes is in infinite loop inside the Critical Section
- Let’s examine the use of Shared Variables again....
```c
/* Program for P1 */
..
/* Acquire lock */
while (lock) { NULL; }
lock = TRUE;
/* Execute critical section */
balance = balance + amount;
/* Release lock */
lock = FALSE;
..

/* Program for P2 */
..
/* Acquire lock */
while (lock) { NULL; }
lock = TRUE;
/* Execute critical section */
balance = balance - amount;
/* Release lock */
lock = FALSE;
..
```
lock == FALSE => no process in CS => any process can enter CS
lock == TRUE => one process in CS => no other process admitted to CS
Synchronizing Variable...
- What if P1 is interrupted after lock is set to TRUE?
=> P2 cannot execute past the while loop; it busy-waits ("hard wait")
=> Wasted CPU time
- What if P1 is interrupted after the Test, but before the Set?
=> P1 & P2 can be in the CS at the same time !!!
- Wasted CPU time is bad, but tolerable.....
Critical Section Violation cannot be tolerated
==> Need Un-interruptable “Test & Set” operation
Un-interruptible Test & Set
```c
enter(lock) {
    disableInterrupts();
    /* Loop while lock is TRUE */
    while (lock) {
        /* Let interrupts occur */
        enableInterrupts();
        disableInterrupts();
    }
    lock = TRUE;
    enableInterrupts();
}

exit(lock) {
    disableInterrupts();
    lock = FALSE;
    enableInterrupts();
}
```
Enable interrupts so that the OS, I/O can use them
Re-disable interrupts when ready to test again
Un-interruptible Test & Set...
- **Solution**
P1
```
enter(lock);
CS { balance = balance + amount; }
exit(lock);
```
P2
```
enter(lock);
CS { balance = balance - amount; }
exit(lock);
```
- **Note**
- CS is totally bounded by enter/exit
- P2 can still wait (wasted CPU cycles) if P1 is interrupted after setting lock (i.e., entering critical section), but
- **Mutual exclusion is achieved!!!!!**
- Does not generalize to multi-processing
Protecting Multiple Components
Shared: list L,
boolean ListLK = False;
boolean LngthLK = False;
/* Program for P1 */
enter(listLK);
<delete element>;
exit(listLK);
<intermediate comp.>;
enter(lngthLK);
<update length>;
exit(lngthLK);
/* Program for P2 */
enter(lngthLK);
<update length>;
exit(lngthLK);
<intermediate comp.>;
enter(listLK);
<delete element>;
exit(listLK);
- Use enter/exit to update a structure with 2 pieces of information
- \textit{But try to minimize time component locked out}
Protecting Multiple Components: 1st try
Shared: list L,
boolean ListLK = False;
boolean LngthLK = False;
/* Program for P1 */
enter(listLK);
<delete element>;
exit(listLK);
<intermediate comp.>;
enter(lngthLK);
<update length>;
exit(lngthLK);
/* Program for P2 */
enter(lngthLK);
<update length>;
exit(lngthLK);
<intermediate comp.>;
enter(listLK);
<delete element>;
exit(listLK);
Suppose: P1 runs to its intermediate computation; P2 runs & finishes; P1 resumes and finishes.
Any access to the Length variable during the "intermediate comp." will be incorrect !!!
=> Programming Error: List and Length variable need to be updated together
Protecting Multiple Components: 2nd try
Shared: list L,
boolean ListLK = False;
boolean LngthLK = False;
/* Program for P1 */
enter(listLK);
<delete element>;
<intermediate comp.>;
enter(lngthLK);
<update length>;
exit(listLK);
exit(lngthLK);
/* Program for P2 */
enter(lngthLK);
<update length>;
<intermediate comp.>;
enter(listLK);
<delete element>;
exit(lngthLK);
exit(listLK);
Suppose: P1 runs past enter(listLK) and is interrupted;
P2 runs past enter(lngthLK), then blocks on enter(listLK);
P1 resumes & blocks on enter(lngthLK)
=> DEADLOCK
CS 3204 - Arthur
Deadlock
- When 2 or more processes get into a state whereby each is holding a resource requested by the other
P1
.
Request Resource$_1$
.
Request Resource$_2$
P2
.
Request Resource$_2$
.
Request Resource$_1$
P1 requests and gets R$_1$
interrupt
P2 requests and gets R$_2$
interrupt
P1 requests R$_2$ and blocks
P2 requests R$_1$ and blocks
Solution to Synchronization
- The previous examples have illustrated 2 methods for synchronizing / coordinating processes
- Interrupt
- Shared variable
- Each has its own set of problems
- Interrupt
- May be disabled for too long
- Shared variable
- Test, then set – interruptible
- Non-interruptible – gets complex
- Dijkstra introduces a 3\textsuperscript{rd} and much more preferable method
- Semaphore
Semaphore
- Dijkstra, 1965
- Synchronization primitive with no busy waiting
- It is an integer variable changed or tested by one of the two indivisible operations
- Actually implemented as a protected variable type
\[ \text{var } x : \text{ semaphore} \]
Semaphore operations
- **P** operation ("wait")
  - Requests permission to use a critical resource
  - $S = S - 1;$
  - if $S < 0$ then put calling process on queue
- **V** operation ("signal")
  - Releases the critical resource
  - $S = S + 1;$
  - if $S \le 0$ then remove one process from queue
- Queues are associated with each semaphore variable
Semaphore : Example
Critical resource \( T \)
Semaphore \( S \leftarrow \text{initial\_value} \)
Processes \( A, B \)
| Process A | Process B |
|-----------|-----------|
| . | . |
| P(S); | P(S); |
| \<CS\> /* access T */ | \<CS\> /* access T */ |
| V(S); | V(S); |
| . | . |
Semaphore: Example...
\[ \text{var } S : \text{ semaphore } \leftarrow 1 \]
Queue associated with \( S \): (empty)
Value of \( S \): 1
| Process A | Process B | Process C |
|-----------|-----------|-----------|
| P(S); \<cs\> V(S); | P(S); \<cs\> V(S); | P(S); \<cs\> V(S); |
Types of Semaphores
- Binary Semaphores
- Maximum value is 1
- Counting Semaphores
- Maximum value is greater than 1
- Both use same P and V definitions
- Synchronizing code and initialization determines what values are needed, and therefore, what kind of semaphore will be used
Using Semaphores
Shared semaphore mutex = 1;

```c
proc_1() {
    while(true) {
        <compute section>
        P(mutex);
        <critical section>
        V(mutex);
    }
}

proc_2() {
    while(true) {
        <compute section>
        P(mutex);
        <critical section>
        V(mutex);
    }
}
```
(1) P1 => P(mutex):
    decrements; S < 0 ? NO (S = 0);
    P1 enters CS;
    P1 interrupted
(2) P2 => P(mutex):
    decrements; S < 0 ? YES (S = -1);
    P2 **blocks** on mutex
(3) P1 finishes CS work;
    P1 => V(mutex):
    increments; S <= 0 ? YES (S = 0);
    P2 woken & proceeds
Non-Interruptable "Test & Sets"
Using Semaphores - Example 1
Shared semaphore mutex = 1;
proc_0() {
...
P(mutex);
balance = balance + amount;
V(mutex);
...
}
proc_1() {
...
P(mutex);
balance = balance - amount;
V(mutex);
...
}
Suppose P1 issues P(mutex) first ......
No Problem
Suppose P2 issues P(mutex) first ......
Note: Could use Interrupts to implement solution,
But (1) with interrupts masked off, what happens if
a prior I/O request is satisfied
(2) Interrupt approach would not work on Multiprocessor
Using Semaphores – Example 2
Shared semaphore: s1 = 0, s2 = 0;
Note: values started at 0... ok?
```
proc_A() {
while(true) {
<compute A1>
write(x);
V(s1);
<compute A2>;
P(s2);
read(y);
}
}
```
```
proc_B() {
while(true) {
P(s1);
read(x);
<compute B1>;
write(y);
V(s2);
<compute B2>;
}
}
```
- Cannot use Interrupt disable/enable here because we have *multiple distinct synchronization points*
- Interrupt disable/enable can only distinguish 1 synchronization event
- **Therefore, 2 Semaphores**
Using Hardware Test & Set [TS(s)] to Implement Binary Semaphore “Semantics”
```c
boolean s = FALSE;
...
while( TS(s) );
<critical section>
s = FALSE;
...
```

- TS(s)
  - Tests s
  - Sets s to TRUE
  - Returns the original value

Is this equivalent (≡) to:

```c
semaphore s = 1;
...
P(s);
<critical section>
V(s);
...
```
Note: No actual queueing, each process just “hard waits”
Counting Semaphores
- Most of our examples have only required Binary Semaphore
- Only 0 or 1 values
- But synchronization problems arise that require a more general form of semaphores
- Use counting semaphores
- Values: non-negative integers
Classical Problems
- Producer / Consumer Problem
- Readers – Writers Problem
Producer / Consumer Problem (Classic)
- Critical resource
- Set of message buffers
- 2 Processes
- Producer: Creates a message and places it in the buffer
- Consumer: Reads a message and deletes it from the buffer
- Objective
- Allow the producer and consumer to run concurrently
P/C...
- **Constraints**
- Producer must have a non-full buffer to put its message into
- Consumer must have a non-empty buffer to read
- Mutually exclusive access to Buffer pool
- **Unbounded Buffer problem**
- Infinite buffers
- Producer never has to wait
- Not interesting nor practical
- **Bounded Buffer Problem**
- Limited set of buffers
P/C - Solution
Shared semaphore Full ← 0;
Shared semaphore Empty ← MaxBuffers;
Shared semaphore MEPC ← 1;
- Producer

```
Begin
...
P(Empty);
P(MEPC);
<add item to buffer>
V(MEPC);
V(Full);
...
End;
```

- Consumer

```
Begin
...
P(Full);
P(MEPC);
<remove item from buffer>
V(MEPC);
V(Empty);
...
End;
```
P/C – Another Look
Pool full of Baskets
Consumer
Pool of empty Baskets
Producer
P/C – Another Look
- 9 Baskets – Bounded
- Consumer – Empties basket
- Can only remove basket from Full Pool, if one is there
- => Need “full” count
- Empties basket and places it in Empty pool
- Producer – Fills basket
- Can only remove basket from Empty pool, if one is there
- => Need “empty” count
- Fills basket and places it in Full pool
P/C - Another Look
Shared semaphore: Emutex = 1, Fmutex = 1; full = 0, empty = 9;
Shared buf_type: buffer[9];
producer() {
buf_type *next, *here;
while(True) {
produce_item(next);
P(empty); /*Claim empty buffer*/
P(Emutex); /*Manipulate the pool*/
here = obtain(emptypool);
V(Emutex);
copy_buffer(next, here);
P(Fmutex); /*Manipulate the pool*/
release(here, fullpool);
V(Fmutex);
V(full); /*Signal full buffer*/
}
}
consumer() {
buf_type *next, *here;
while(True) {
P(full); /*Claim full buffer*/
P(Fmutex); /*Manipulate the pool*/
here = obtain(fullpool);
V(Fmutex);
copy_buffer(here, next);
P(Emutex); /*Manipulate the pool*/
release(here, emptypool);
V(Emutex);
V(empty); /*Signal empty buffer*/
consume_item(next);
}
}
P/C - Example
- How realistic is the P/C scenario?
- Consider a circular buffer
- 12 slots
- Producer points at next one it will fill
- Consumer points at next one it will empty
- Don’t want:
Producer = Consumer
=> (1) Consumer “consumed” faster than producer “produced”, or
(2) Producer “produced” faster than consumer “consumed”.
Do we need to synchronize access to buffer?
P/C – Real World Scenario
- CPU can produce data faster than terminal can accept or viewer can read
- Communication buffers in both
- Xon/Xoff Flow Control
Readers / Writers Problem (Classic)
- Multiple readers of the same file?
- No problem
- Multiple writers to the same file?
- Might be a problem writing same record
=> Potentially a “lost update”
- Writing while reading
- Might be a problem – read might occur while being written
=> Inconsistent data
Readers – Writers Problem
- Critical resource
- File
- Consider multiple processes which can read or write to the file
- What constraints must be placed on these processes?
- Many readers may read at one time
- Mutual exclusion between readers and writers
- Mutual exclusion between writers
Strong Reader Solution
Shared int: readCount = 0;
semaphore: mutexRC = 1, writeBlock = 1;
reader(){
while(TRUE) {
P(mutexRC);
readCount = readCount + 1;
if (readCount == 1)
P(writeBlock);
V(mutexRC);
access_file;
P(mutexRC);
readCount = readCount - 1;
if (readCount == 0)
V(writeBlock);
V(mutexRC);
}
}
writer(){
while(TRUE) {
P(writeBlock);
access_file;
V(writeBlock);
}
}
This solution gives preference to Readers
If a reader has access to file and other readers want access, they get it... all writers must wait until all readers are done
Reader / Writers – Ver 2
- Create a Strong Writer
- Give priority to a waiting writer
- If a writer wishes to access the file, then it must be the next process to enter its critical section
Strong Writers Solution
Shared int: readCount = 0, writeCount = 0;
semaphore: mutex1 = 1, mutex2 = 1, readBlock = 1, writePending = 1, writeBlock = 1;
reader()
{
while(TRUE) {
P(writePending);
P(readBlock);
P(mutex1);
readCount = readCount + 1;
if (readCount == 1)
P(writeBlock);
V(mutex1);
V(readBlock);
V(writePending);
access file;
P(mutex1);
readCount = readCount - 1;
if (readCount == 0)
V(writeBlock);
V(mutex1);
}
}
writer()
{
while(TRUE) {
P(mutex2);
writeCount = writeCount + 1;
if (writeCount == 1)
P(readBlock);
V(mutex2);
P(writeBlock);
access file;
V(writeBlock);
P(mutex2);
writeCount = writeCount - 1;
if (writeCount == 0)
V(readBlock);
V(mutex2);
}
}
Implementing Counting Semaphores
```c
struct semaphore {
int value = <initial value>;
boolean mutex = FALSE;
boolean hold = TRUE;
};
Shared struct semaphore s;
P(struct semaphore s) {
while( TS(s.mutex) );
s.value = s.value - 1;
if (s.value < 0) {
s.mutex = FALSE;
while( TS(s.hold) );
}
else {
s.mutex = FALSE;
}
}
V(struct semaphore s) {
while( TS(s.mutex) );
s.value = s.value + 1;
if (s.value <= 0) {
while( !s.hold );
s.hold = FALSE;
}
s.mutex = FALSE;
}
```
Consider the following queries.
- Who was the president of China in 1998?
- How old is the universe?
The answer to the first query is definite.
The answer to the second query is provisional at any given time.
Search engines can assist in answering queries of the first kind.
However, queries of the second kind are mostly unexplored.
The mechanisms for answering the two kinds of queries are likely to be distinct.
Information queries / Discovery queries
Information queries ask for information that can be deduced.
Discovery queries ask for knowledge that can only be induced; at any given time, the answer is only a conjecture.
Data and background knowledge are two ingredients of the discovery process.
- Additional data may cause the current conjecture to be abandoned and replaced by a new one.
- If the discovery process is to be successful, then the correct answer will be found and never abandoned.
We will see that data and background knowledge can also assist in answering complex information queries.
Learning theory is uniquely positioned to provide a perspective on both information retrieval and discovery.
RichProlog offers a canonical implementation of the fundamental concepts of Learning theory in a logical setting.
We want to examine the following issues.
- If we use the web for discovery,
- what constitutes data?
- what is background knowledge?
- what are the queries?
- What could be the role of search engines in the discovery process?
We will see how Learning theory and RichProlog provide a framework for examining these issues.
Introduction
Queries
- Syntactic classification of queries
- Query answering and optimization
- Web, data, and background knowledge
Learning theory and RichProlog
- Identification in the limit
- RichProlog programs and queries
- RichProlog’s execution mechanism
Discovery from the web
- Answering the query on poetry
- Main issues
Conclusion
Σ_1 queries
In the terminology of Logic programming, current search engines answer Σ_1 queries of the form
\[ \exists \text{page } \varphi(\text{page}) \]
where \( \varphi(\text{page}) \) is a boolean combination of atoms of the form
\[ \text{contains}(\text{page, keyword}). \]
An example is:
\[ \exists \text{page}[\text{contains}(\text{page, indian}) \land \text{contains}(\text{page, flora}) \land \neg \text{contains}(\text{page, american})] \]
in a first attempt to find information on the flora of India.
Σ_2 queries
Discovery requires solving more complex Σ_2 queries. Examples:
\[ \exists x [\text{vaccine}(x) \land \forall y (\text{virus}_\text{instance}(y) \rightarrow \text{disables}(x, y))] \]
\[ \exists x [\text{law}(x) \land \forall y (\text{observed}_\text{datum}(y) \rightarrow \text{predicts}(x, y))] \]
Does there exist a book B such that B is a book on Indian flora and for all reviews R, if R is a review on B then R is positive?
\[ \exists \text{page}[\text{contains}(\text{page, indian}) \land \exists x [\text{contains}(\text{page, x}) \land \text{book}(x) \land \text{contains}(\text{page, flora}) \land \neg \text{contains}(\text{page, american})] \land \forall y (\text{review}_\text{on}(y, x) \rightarrow \text{positive}(y))] \]
Intended queries
With search engines, it is important to make the distinction between input queries like $\exists \text{page}(\text{contains}(\text{page}, \text{keyword1}) \land \neg \text{contains}(\text{page}, \text{keyword2}))$ and intended queries.
Users will sometimes ask:
Does there exist a web page $P$ such that $P$ contains $\text{keyword1}$ and $P$ does not contain $\text{keyword2}$
when they expect the answer to:
$\exists \text{page}(\text{contains}(\text{page}, \text{keyword1}) \land \neg \text{contains}(\text{page}, \text{keyword2}) \land \forall \text{page}'(\text{contains}(\text{page}', \text{keyword1}) \land \neg \text{contains}(\text{page}', \text{keyword2}) \rightarrow \neg \text{better}(\text{page}', \text{page})))$
More formally
Users will sometimes ask the $\Sigma_1$ query:
$\exists \text{page}(\text{contains}(\text{page}, \text{keyword1}) \land \neg \text{contains}(\text{page}, \text{keyword2}))$
when they expect the answer to the $\Sigma_2$ query:
$\exists \text{page}(\text{contains}(\text{page}, \text{keyword1}) \land \neg \text{contains}(\text{page}, \text{keyword2}) \land \forall \text{page}'(\text{contains}(\text{page}', \text{keyword1}) \land \neg \text{contains}(\text{page}', \text{keyword2}) \rightarrow \neg \text{better}(\text{page}', \text{page})))$
Input queries are $\Sigma_1$ whereas intended queries are sometimes $\Sigma_2$.
Computing solutions to intended queries
Existing search engines:
- define 'better' based on numbers of hits, numbers of links, numbers of references, etc.;
- compute all solutions to the input ($\Sigma_1$) query but rank them according to their definition of 'better';
- the user is expected to consider one of the first few solutions output as the solution to the intended query.

When 10,000 hits are returned, the user ignores all except the first few; the other links are not considered to be a valuable source of information.
General $\Sigma_2$ queries / optimization (1)
The user has little control over the definition of 'better' employed by the search engine.
The definition of 'better' should not be independent of the keywords input by a particular user.
For instance, 'better' might mean 'cheaper,' for a user who wants to buy a bargain manual car and would like to get an answer to:
$\exists \text{page}(\text{contains}(\text{page}, \text{car}) \land \exists x(\text{contains}(\text{page}, x) \land \text{is\_a\_car}(x) \land \text{manual}(x) \land \forall \text{page}' \forall y(\text{contains}(\text{page}', y) \land \text{is\_a\_car}(y) \land \text{manual}(y) \rightarrow \neg \text{cheaper}(y, x))))$
Some search engine applications do address the $\Sigma_2$ queries that represent an optimization problem.
Solving a $\Sigma_2$ query is not always reducible to computing an optimal solution.
We are interested in $\Sigma_2$ queries that go beyond mere optimization.
A query on poetry (1)
Suppose you have heard of a poet who has only written sonnets, but do not remember his name.
A sonnet is a 14 line poem divided into 4 + 4 + 4 + 2 with the rhyme pattern abab cdcd efef gg (plus other syntactic constraints)
Typing the keyword "sonnet only" provides irrelevant information (103 hits on Google).
A query on poetry (2)
What you want is an answer to the $\Sigma_2$ query:
$$\exists \text{page}[\text{contains}(\text{page}, \text{poem}) \land \exists x(\text{contains}(\text{page}, x) \land \text{poet}(x) \land \forall \text{page}' \forall y(\text{contains}(\text{page}', y) \land \text{is}_a\text{-}\text{poem}(y) \land \text{written}_by(y, x) \rightarrow \text{sonnet}(y)))$$
A more careful user might rather opt for the query:
$$\exists \text{page}[\text{contains}(\text{page}, \text{poem}) \land \exists x(\text{contains}(\text{page}, x) \land \text{poet}(x) \land \forall \text{page}' \forall y(\forall t(\text{contains}(\text{page}', y) \land \text{contains}(\text{page'}, t) \land \text{has}_\text{title}(t, y) \land \text{is}_a\text{-}\text{poem}(y) \land \text{written}_by(t, x) \rightarrow \text{sonnet}(y))))$$
A sonnet: To Science by Edgar Allan Poe
Science! True daughter of Old Time thou art!
Who alterest all things with thy peering eyes.
Why preyest thou thus upon the poet’s heart,
Vulture, whose wings are dull realities?
How should he love thee? or how deem thee wise,
Who wouldst not leave him in his wandering
To seek for treasure in the jewelled skies,
Albeit he soared with an unwonted wing?
Hast thou not dragged Diana from her car?
And driven the Hamadryad from the wood
To seek a shelter in some happier star?
Hast thou not torn the Naiad from her flood,
The Elfin from the green grass, and from me
The summer dream beneath the tamarind tree?
Data and background knowledge
When a $\Sigma_1$ query is solved by the search engine, the (usually) large number of links returned can be conceived of as a stream of data, useful for answering information or discovery queries.
- For information queries, the answer will be deduced and known to be correct.
- For discovery queries, the answer will be induced, believed, and subject to changes.
In either case, background knowledge is helpful or even necessary.
- The background knowledge will be useful to extract information from the web page.
- Users should be able to input not only queries, but also knowledge.
Decomposing the query
The (first version of the) query suggests a three step approach.
- Solve $\exists \text{page} \text{contains}(\text{page}, \text{poem})$.
A standard search will do it and return a large number of links that will play the role of data.
- Solve $\exists \text{page}[\text{contains}(\text{page}, \text{poem}) \land \exists x(\text{contains}(\text{page}, x) \land \text{poet}(x))]$
Poets have to be extracted from the data using some background knowledge.
- Solve the whole query. Sonnets have to be extracted from the data using (another) background knowledge.
General pattern
The general pattern of a discovery query for the web is:
$$\exists \text{page}[\text{contains}(\text{page}, \text{keywords}) \land$$
$$\exists (\varphi(\bar{x}) \land \forall \bar{y} \chi (\bar{x}, \bar{y}))]$$
Learning theory and RichProlog provide a framework for answering such queries.
Introductory example
A guessing game [Gold, 1967]:
Suppose I have a set of sets of numbers:
$$\{\{1, 2, 3, \ldots\}, \{2, 3, 4, \ldots\}, \{3, 4, 5, \ldots\}, \{4, 5, 6, \ldots\}, \ldots\}.$$
Suppose I pick 3, 4, 5, 6, 7, ... .
I’ll tell you a number in the set, and you have to guess the set.
- I tell you 10 - you guess {10, 11, 12, ...}
- I tell you 5 - you guess {5, 6, 7, ...}
- I tell you 7 - you still guess {5, 6, 7, ...}
...
- I tell you 3 - you still guess {3, 4, 5, ...}
...
Correctness in the limit
A good strategy:
*Guess the set whose least element is the lowest number thus far presented*
No finite amount of data allows you to draw this conclusion with certainty, unless the smallest number presented is 1.
On a more technical note:
- In classical logic, compactness allows to know with certainty that a conclusion is correct from finite data.
- In Learning theory, we can only hypothesize that a conclusion is correct from finite data, without knowing with certainty. Correctness comes in the limit.
Logical connection
The scenario of learning in the limit can be cast in a logical setting for answering $\Sigma_2$ queries.
Learning in the limit corresponds to:
- guessing a witness to existentially quantified variables in a $\Sigma_2$ query
- before going through a limiting (noncompact) refutation stage involving the universally quantified variables.
RichProlog is to this logical setting what Prolog is to classical logic.
Solving queries: the Prolog approach
Axiomatize the problem: Write down a set of clauses $A$ and a sentence $\psi$, that represent knowledge about the reality being modeled.
Solve the problem: Prove $\psi$ from $A$.
This approach has many limitations. In particular:
- $A$ represents axiomatic knowledge only, not data.
- $\psi$ has to be a $\Sigma_1$ sentence; but we also want to be able to answer $\Sigma_2$ queries.
RichProlog programs
Predicates are partitioned into theoretical and observational.
For instance, \textit{father}(x,y) and \textit{price}(x,y) can be observational, whereas \textit{ancestor}(x,y) and \textit{cheap}(x) can be theoretical.
A RichProlog program consists of:
- a (possibly infinite) stream of data, built from the observational predicates—data can be positive only, negative only, or of either kind;
- some finite background knowledge, consisting of rules whose body is built from either observational or theoretical predicates, and whose head is built from a theoretical predicate.
RichProlog queries
The most general RichProlog queries are sentences of the form
\[
\exists \bar{x} (\varphi(\bar{x}) \land \neg \exists \bar{y} \chi(\bar{x}, \bar{y}))
\]
where \( \varphi(\bar{x}) \) and \( \chi(\bar{x}, \bar{y}) \) are restricted boolean combinations of atomic formulas built from either theoretical or observational predicates.
The syntax of RichProlog queries is not \textit{ad hoc} but is imposed by results from Learning theory: any solvable discovery query can be put in this form.
Strategy
To solve $\exists \bar{x} (\varphi(\bar{x}) \land \neg \exists \bar{y} \chi(\bar{x}, \bar{y}))$, RichProlog applies a generate and test strategy:
- generate a possible witness $\bar{t}$ such that $\varphi(\bar{t})$ can be deduced from the background knowledge plus the (finite) set of available data;
- test whether $\bar{t}$ is correct, by trying to prove $\exists \bar{y} \chi(\bar{t}, \bar{y})$ from the background knowledge plus the (finite) set of available data.
Testing whether $\bar{t}$ is correct is a refutation procedure, that will discard any incorrect witness in the limit, when enough data are available.
Generating possible poets
When Google answers the first part of the query, which is $\exists \text{page} \ contains(\text{page}, \text{poem})$, it returns 7,950,000 hits.
Using some background knowledge:
- author names start with capital letters;
- author names are preceded by the word \textit{by};

- ...
possible witnesses to $x$ for the query
$$\exists \text{page} [\ contains(\text{page}, \text{poem}) \land \ \exists x (\ contains(\text{page}, x) \land \ poet(x))]$$
will be generated.
Example
When trying to find an ideal wife, data and background are used to find a woman who has all the attributes of an ideal woman, solving the $\Sigma_1$ query $Q_D = \exists x \ ideal\_wife(x)$.
If $Q_D$ succeeds, the result is a woman whose name is $\text{name1}$.
Using background knowledge, RichProlog then tries to solve the $\Sigma_1$ query $Q_I = \exists y (\ ancestor(y, \text{name1}) \land \ ancestor(y, \text{frank}))$.
The name $\text{name1}$ is retained as long as $Q_I$ does not succeed.
If $Q_I$ succeeds, the computation backtracks to find alternative solutions to $Q_D$.
Refuting with poems
More background knowledge can be used to solve the remaining part of the query, by refuting the current witness.
Indeed, sonnets can easily be characterized using syntactic rules.
William Shakespeare wrote 154 sonnets and will be one of the first witnesses generated. The query will find numerous Shakespearian sonnets, initially confirming this hypothesis.
Though we might think that he is a solution to the query, the poem A Lover’s Complaint eventually refutes that guess.
Sir Philip Sidney is one of the answers that can be discovered in the limit.
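Identification in the limit can be made concrete with a small simulation: the current guess is kept until a datum refutes it. The poet names follow the slides' example; the stream of "poems" is an invented placeholder:

```python
# Hedged sketch: after each datum, guess the first candidate poet none of
# whose observed poems fails to be a sonnet. A refuted guess is dropped
# and never returns, so the sequence of guesses converges in the limit.

def best_poet_in_limit(stream, candidates):
    seen = []
    for datum in stream:
        seen.append(datum)
        yield next((p for p in candidates
                    if not any(a == p and kind != "sonnet"
                               for a, kind in seen)), None)

stream = [("shakespeare", "sonnet"),
          ("shakespeare", "sonnet"),
          ("shakespeare", "complaint"),   # "A Lover's Complaint" refutes him
          ("sidney", "sonnet")]
print(list(best_poet_in_limit(stream, ["shakespeare", "sidney"])))
# -> ['shakespeare', 'shakespeare', 'sidney', 'sidney']
```

The guess changes exactly when a refuting datum arrives, mirroring the refutation procedure described above.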
Building queries
Users are not experts in logic. They input keywords from which a $\Sigma_1$ query is built; they do not input $\Sigma_1$ queries.
How will users express their discovery queries?
$\Sigma_2$ queries for optimization problems can be built automatically. A user who wants to buy a bargain manual car can input car and manual, ask to optimize according to the first keyword and one of a few predefined keywords like cheap, and get an answer to:
$$
\exists \text{page} \Big( \text{contains}(\text{page}, \text{car}) \land \exists x \big( \text{is\_a\_car}(x) \land \text{manual}(x) \land \forall y \, \big( (\text{contains}(\text{page}, y) \land \text{is\_a\_car}(y) \land \text{manual}(y)) \rightarrow \neg \text{cheaper}(y, x) \big) \big) \Big)
$$
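As a toy illustration, such a $\Sigma_2$ query can be answered by exhaustive search over a finite extraction of page contents (the pages, car names and prices below are invented placeholders; only the query shape comes from the text):

```python
# Hedged sketch: evaluate "a page containing 'car' with a manual car x
# such that no manual car y on that page is cheaper than x".

pages = {
    "p1": {"keywords": {"car", "manual"},
           "cars": [("fiat", 9000, True), ("bmw", 30000, True)]},
    "p2": {"keywords": {"car"},
           "cars": [("tesla", 40000, False)]},
}

def answer(pages):
    for page, content in pages.items():
        if "car" not in content["keywords"]:          # contains(page, car)
            continue
        for name, price, manual in content["cars"]:   # exists x
            if not manual:
                continue
            # forall y: a manual car y on the page is not cheaper than x
            if all(not (m and p < price)
                   for _, p, m in content["cars"]):
                return page, name
    return None

print(answer(pages))  # -> ('p1', 'fiat')
```

The universal part is the expensive one: it ranges over extracted concepts, not just pages, which is exactly the issue raised in the next sections.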
What about more general $\Sigma_2$ queries?
Variety of domains
$\Sigma_2$ queries contain formulas of the form $\chi(\bar{x}, \bar{y})$ for existentially quantified $\bar{x}$ and universally quantified $\bar{y}$.
This raises two main issues:
- The first issue is that variables in $\bar{x}$ and variables in $\bar{y}$ vary over different domains.
- With standard queries the domain is the set of web pages. To exploit information and get answers to more complex queries, arbitrary domains are needed.
In full generality, variables will have to vary over the set of web pages as well as over sets of concepts that make up the content of these pages.
Information extraction
The second issue is that $\chi(\bar{x}, \bar{y})$ is a relation between at least two pieces of information, not a property. Working with predicates of arity 2 at least is much more difficult than working with predicates of arity 1.
For example, the main difficulty to answer the query
**does there exist a web page P and a book on Indian flora B such that:**
- B has occurrences in P, and
- all reviews R on B are positive.
is to extract the information that $R$ is a review of $B$.
Background knowledge
Data are the result of a basic search. But the background knowledge will usually be given by the user.
Searching is an activity that can update the background knowledge.
- How will users input their background knowledge?
- How will search engines interact with the system that exploits the background knowledge?
- Searching is an activity that can update knowledge. How can we convert the result of searching into useful background knowledge?
- How can we improve the quality of the background knowledge?
Conclusion (1)
- We exploit very little of what is returned as the result of a search: we consider the first 10 results and ignore the remaining 10,000 links that have also been returned.
- We would like to use these links as data to help us answer discovery queries.
- We would like to ask more complex queries, whose solutions could only be computed in the limit.
- Learning theory and RichProlog provide a framework to reflect on these issues.
Conclusion (2)
- Users should be able to input not only queries, but also knowledge.
- Knowledge is as essential as data to answer discovery queries, but even information queries can benefit from knowledge.
- Knowledge can be shared, traded.
- Information extraction is one of the main challenges.
Smart Grid Serialization Comparison
Petersen, Bo Søborg; Bindner, Henrik W.; You, Shi; Poulsen, Bjarne
Published in:
Computing Conference 2017
Link to article, DOI:
10.1109/SAI.2017.8252264
Publication date:
2017
Document Version
Peer reviewed version
Link back to DTU Orbit
Smart Grid Serialization Comparison
Comparison of serialization for distributed control in the context of the Internet of Things
Bo Petersen, Henrik Bindner, Shi You
DTU Electrical Engineering
Technical University of Denmark
Lyngby, Denmark
bspet@elektro.dtu.dk, hwbi@elektro.dtu.dk, sy@elektro.dtu.dk
Bjarne Poulsen
DTU Compute
Technical University of Denmark
Lyngby, Denmark
bjpo@dtu.dk
Abstract—Communication between DERs and System Operators is required to provide Demand Response and solve some of the problems caused by the intermittency of much Renewable Energy. An important part of efficient communication is serialization, which is important to ensure a high probability of delivery within a given timeframe, especially in the context of the Internet of Things, using low-bandwidth data connections and constrained devices. The paper shows that there are better alternatives than XML & JAXB and gives guidance in choosing the most appropriate serialization format and library depending on the context.
Keywords—Smart Grid; Internet of Things; Serialization; XML; JSON; YAML; FST; Kryo; JAXB; Jackson; XStream; ProtoStuff; Gson; Genson; SnakeYAML; MsgPack; Smile; ProtoBuf; BSON; Hessian; CBOR; Avro
I. INTRODUCTION
In a future Smart Grid with a large share of Renewable Energy (EU 2020 & 2030 energy strategy), there will be problems caused by the intermittent nature of most Renewable Energy, especially solar and wind [1].
These problems primarily consist of times with either excess or lack of energy from renewable power sources.
Excess power will be wasted, transported to other regions or countries, stored or converted, all of which will cause a loss in energy.
Lack of energy will cause the use of more economically or environmentally expensive energy, in the form of non-renewable energy, bio-fuels or stored energy.
The most efficient solution to these problems, if done right, is Demand Response, which entails controlling consumption units, especially heating, cooling and production units.
In addition, control of production units, which have the capability to move their production can also help to solve these problems.
For the control of these Internet of Things Distributed Energy Resources (DER), both production and consumption units, communication between the units and the System Operators (Transmission System Operator, Distribution System Operator and Balance Responsible Party) is crucial.
The choice of technology for this communication (e.g. Web Services), called communication middleware is very important to ensure that the control messages are received within a given timeframe, depending on the needs of the power grid.
This need could be to avoid a fault, by initiating load shedding, with a timeframe of seconds to minutes, or moving the consumption of energy from peak hours, by initiating load shifting, with a timeframe from hours to days.
Communication in the scope of power system services lies between the physical hardware that is needed and basic communication protocols like TCP/IP, and the business logic in the form of control algorithms (fig. 1).
Another important part of ensuring that the control messages are received within the given timeframe is the choice of serialization format and library, which affects the size of the message and the serialization time.
Even though there is no guarantee of delivery within a given timeframe for messages sent over the internet, as opposed to dedicated lines, the probability of delivery within the given timeframe is improved by reducing the size of the transmitted message.
Furthermore the serialization time becomes especially important to consider when the processing device of the DER is a System on Chip, for instance Beagle Bone [2] or Odroid [3], with limited processing capabilities, as this will also improve the probability of delivery within a given timeframe, because the sending and receiving devices will be able to process the message quicker.
Fig. 1 – CENELEC SGAM Model [28].
Sponsored by the PROActive INtegration of sustainable energy resources enabling active distribution networks (PROAIN) project.
Moreover, in the case of a System on Chip with limited memory, the memory consumption has to be considered, to ensure that the control system can be executed without fault.
In cases where the DER is communicating over a low bandwidth data connection like EDGE (cell phone network) or Power Line Communication, the size of the message after serialization, and potentially also compression, strongly affects the probability of delivery within the timeframe.
The choice of serialization format and library is often affected by the fact that most communication middleware uses a certain format and library, which is more of a convenience than a hindrance, as almost all communication middleware is capable of transmitting binary or text serialized messages.
In the area of power system communications, the choice made by prevalent communication standards should also be taken into account.
These standards are IEC 61850 [4], OpenADR [5] and CIM [6], which uses SCL (extension to XML), XML and RDF (extension to XML) respectively.
The current state of the art is online benchmarks for serialization formats and libraries, which do not take into account the requirements of the Smart Grid, the use of Smart Grid communication standards, or the possibility of using compression after serialization, and which do not give recommendations for choosing a serialization format and library for use in Smart Grid communications.
The hypothesis is that there are many better alternatives to using the XML format and the JAXB library for serialization in the context of Smart Grids, especially for applications with low bandwidth data connections and constrained processing & memory devices.
The aim of the paper is to give guidance in choosing the most appropriate serialization format and library for Smart Grid communications depending on the context, and to compare prominent serialization formats and libraries to the XML format used by the prevalent communication standards.
II. METHODS
The scope of serializers for this paper has been limited to Java serializers, because most serializers are available in Java, and because Java can run cross platform.
The included serializers were chosen by searching online for all Java serializers and excluding those with few users, those that have not been updated for years, and those in early beta versions (based on MvnRepository.com).
In addition, serializers that require manual serialization, or schemas that cannot be generated from source code, were excluded, as they would require too much implementation work for most real world cases (this primarily excludes Thrift and the Protocol Buffers library).
Of the 26 serializers picked, two of them failed to work (YamlBeans & ProtoBuf (Jackson)).
The quantitative comparison of the serializers measures the following:
- Serialization time.
- Deserialization time.
- Compression time.
- Decompression time.
- Memory use for serialization.
- Memory use for compression.
- Serialized message size.
- Compressed message size.
With compression being performed after serialization, and using the GZip compression library.
Faster or more compact compression could be used, but because GZip is the default compression used in communication and because a comparison of compression formats and libraries is outside the scope of the paper, GZip is used to give an idea of the impact of using compression.
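The serialize-then-compress pipeline can be illustrated as follows. The paper's harness is in Java; this sketch uses Python's standard library, with `json` and `pickle` standing in for the benchmarked serializers and an invented stand-in for an IEC 61850-style measurement message:

```python
# Hedged illustration: compression (gzip) is applied after serialization,
# and both raw and compressed sizes are recorded, as in the paper's setup.
import gzip, json, pickle

# Invented placeholder message with repetitive structure, as measurement
# messages tend to have.
msg = {"node": "MMXU1", "phases": [{"v": 230.1, "i": 5.2}] * 20}

for name, blob in [("json", json.dumps(msg).encode()),
                   ("pickle", pickle.dumps(msg))]:
    packed = gzip.compress(blob)
    print(f"{name}: {len(blob)} bytes raw, {len(packed)} bytes gzipped")
```

On repetitive messages gzip shrinks the payload substantially, but the extra processing time is exactly the trade-off discussed in the results.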
The times have been measured by first performing a warm-up that serializes, compresses, decompresses and deserializes all test messages 1000 times, then measuring the time it takes to serialize 1000 times and taking the average, and then doing the same for compression, decompression and deserialization.
The memory consumption is measured by requesting garbage collection and recording the memory consumption after setting up the test objects but before the 1000 runs, then requesting garbage collection again after 999 serialization runs and recording the memory consumption after all 1000 runs. This yields the memory held by the serializer across all runs plus the memory held during one run, which is the peak memory consumption during 1000 runs if garbage collection were as active as possible.
The times do not include initialization, because it only has to be performed on startup and therefore does not affect the average serialization time of a message.
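The timing loop described above can be sketched as follows (again in Python rather than the Java actually used, with `json` as a stand-in serializer and an invented placeholder message):

```python
# Hedged sketch of the measurement loop: warm up first, then time 1000
# serializations and 1000 deserializations and take the average.
import json, time

msg = {"node": "ZBAT1", "soc": 0.73, "cells": list(range(100))}
RUNS = 1000

for _ in range(RUNS):                 # warm-up (JIT/caches in the Java case)
    json.loads(json.dumps(msg))

start = time.perf_counter()
for _ in range(RUNS):
    blob = json.dumps(msg)
serialize_avg = (time.perf_counter() - start) / RUNS

start = time.perf_counter()
for _ in range(RUNS):
    json.loads(blob)
deserialize_avg = (time.perf_counter() - start) / RUNS

print(f"serialize: {serialize_avg * 1e6:.1f} us, "
      f"deserialize: {deserialize_avg * 1e6:.1f} us")
```

Excluding initialization and averaging over many runs is what makes results for different libraries comparable despite scheduler noise.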
The test messages consists of IEC 61850 data model classes because it gives a good idea of the messages being transmitted for Smart Grid use cases, and because CIM does not specify fixed classes for energy systems as it can be used in many domains and OpenADR is a relatively new standard, and also does not exactly specify data model classes.
The IEC 61850 data model classes used are logical node classes, for which a unit uses one or more of them to describe its components, for instance the battery of an EV or a time schedule for production, used for measurement data and control commands respectively.
A logical node consists of many fixed classes, divided into 3 levels below the logical node in the hierarchy, so they can be relatively large.
For the tests all logical node classes specified in 61850-7-4 (2010) and 61850-7-420 (2009) are used.
The qualitative comparison includes serialization format and library characteristics for language neutrality, the required use of schemas or annotations and whether the serialized output is binary or text, but does not take into account whether version control is supported, as IEC 61850 specifies the version of all logical node classes.
The tests were run on Windows 10 (build 14393), using Java (Oracle 1.8.0_102 64bit), with an Intel Dual Core 2.1 GHz processor (i7-4600U) and 8 GB of memory.
The results for one serializer relative to the other serializers should be the same on any system, as long as the system does not run out of memory.
III. RESULTS
The included serialization formats consist of Java-specific binary formats (Java Serialization API (JSA) [7], Fast-serialization (FST) [8] and Kryo [9]), human readable text formats (XML [10] [11] [12] [13], JSON [14] [15] [16] [13] [17], YAML [18] [19]), and language neutral binary formats (MsgPack [20] [21], Smile [22] [13], ProtoBuf [13], BSON [23], Hessian [24], CBOR [25], ProtoStuff [13] [26], Avro [27]).
These formats include multiple human readable text formats and multiple language neutral binary formats, which gives many options for choosing alternatives to XML, and even include two Java-specific binary formats (Fast-serialization and Kryo) as alternatives to the built-in Java Serialization API.
They also include formats that require the use of schemas and/or annotations and formats that do not, many language neutral formats, the format used by the prevalent communication standards (XML), and many popular serialization formats.
The libraries included are the ones needed for most of the formats, as they are single format libraries, and three multi format libraries (ProtoStuff, Jackson, XStream).
The quantitative results of the comparison are the calculated average serialization, deserialization, compression and decompression times (seen in fig. 2), the serialized byte size and compressed serialized byte size (seen in fig. 3), and the memory consumption for serialization and compression (seen in fig. 4).
The JAXB serializer performs particularly badly when the context is not cached, which is why its performance has been measured both with and without a cached context; this caching is an optimization, and no optimizations have been performed for the other libraries.

Fig. 2 – Comparison of average processing time spent per message for serialization, deserialization, compression and decompression.
A comparison of the XML format using the default Java serializer JAXB with a cached context, and the most competitive serializers, based on size (Avro), speed (ProtoBuf-ProtoStuff, ProtoStuff), being human readable (Json-Jackson), and being Java specific (Fast serialization), can be seen in fig. 5.
The qualitative comparison (table 1) lists the name, version and library (if the library is not a single format library), whether the format is a human readable text format, whether the format enables and/or requires a schema, annotations or inheritance, and whether the format is language specific or language neutral.
IV. DISCUSSION
The first thing to consider when choosing a serialization format is whether the serialized output needs to be human readable text, and for instance with configuration files, the data often needs to be human readable so it can be changed in a text editor.
Fig. 3 – Comparison of serialized size and compressed serialized size.
However, with Smart Grid communications, it mostly only needs to be human readable for debugging, which means that for most use cases it might as well be binary.
Another important thing to consider is whether the message will be compressed either by the communication middleware or before that, because depending on the chosen serialization it might affect the size of the message and the time it takes to serialize and deserialize, differently.
Moreover, it is important to use a communication middleware that does not serialize the message if it has already been serialized.
Note that even though the compressed serialized byte size is shown in fig. 3 for the human readable text formats (except YAML, which is problematic with compression because of the semantic use of whitespace), it mostly does not make sense to compress these formats, because it removes their primary characteristic, that they are human readable.
Memory consumption is important to consider when using a System on Chip for the Internet of Things, which in the case of a Beagle Bone Black only has 512 MB of memory, which is quickly exhausted by the operating system, and the control system.
Looking at the quantitative result however, it can be seen that the memory used by the serializers range from 1 to 22 MB, with many using less than 5 MB. This should make it possible to choose a serialization format and library that can run on a System on Chip.
Even if the serialization format has already been chosen, it is important to note that the speed of the serialization libraries might differ a lot; for JSON, the slowest library can take more than 40 times as long as the fastest.
Fig. 4 – Comparison of memory use.
The uncompressed serialized message sizes of the language neutral binary formats differ by more than a factor of 3, and their speeds by more than a factor of 24.
Between human readable serializers, the difference in speed is more than 70 times, and the difference in size could save more than 25 percent, which does not include the ProtoStuff library for JSON, because the way it saves a lot of space is by replacing property names with property indexes, which makes it incompatible with other JSON libraries.
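The space saving from property indexes can be illustrated with a toy message (field names and the index numbering below are invented for illustration, not taken from ProtoStuff's actual output):

```python
# Hedged illustration of why index-keyed JSON is smaller: the long
# property names are replaced by short numeric keys.
import json

by_name  = {"nodeName": "MMXU1", "voltageMagnitude": 230.1,
            "currentMagnitude": 5.2}
by_index = {"1": "MMXU1", "2": 230.1, "3": 5.2}   # same values, index keys

print(len(json.dumps(by_name)), len(json.dumps(by_index)))
```

The saving grows with the length of the property names and the number of repeated objects, at the cost of interoperability with ordinary JSON consumers.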
For java specific serializers, Kryo is an impressive alternative to the Java Serialization API (JSA), with message sizes that are less than half as big for uncompressed messages, and 2.5 times as fast.
When the size of the messages is the most important thing, primarily with low-bandwidth data connections, Message Pack (MsgPack) and Avro produce uniquely small messages, but pay the price of being slower than most other language neutral binary serializers.
When it comes to speed, especially for constrained devices, Protocol Buffers (ProtoStuff), ProtoStuff, Kryo and FST perform particularly well and produce quite compact output.
Concerning memory, most serializers use little memory and it should therefore not be a problem, but some of them use much less memory than others, which in certain situations makes them a better choice.
Compression does make the message smaller, which for some use cases makes it worth using, but for the most efficient serializers the price paid in processing time is not worth it in most cases.
The comparison of JAXB with the best serializers in different areas (fig. 5) shows that in every area there is a better choice, especially if a different format than XML is used.
When power system control messages are sent, measurement values must have been received first by the controlling entity, which makes the message sizes used in the tests relevant: even though they are bigger than most control messages, they correspond to the average size of measurement value messages.
The use of a schema for a serialization format, only helps to generate programming language code, which can be helpful, but not necessary, as the code can be created from documentation instead.
Schemas can also be generated from programming language code, if the serialization library has that feature, which makes it possible to move implementations of data classes from one programming language to a schema and then to another programming language.
A serialization format is language neutral if it is not tied to a particular programming language and supports cross platform applications if implementations exist in multiple languages.
The choice to use a language neutral or cross platform serialization format depends on whether other programming languages has to be supported for the distributed control application, and if so, it is important to check whether a format is language neutral and/or supports cross platform applications.
Some serialization libraries require or allow the use of annotations, which might add additional work in implementing
<table>
<thead>
<tr>
<th colspan="5">Serialization format/library characteristics</th>
</tr>
<tr>
<th>Name (version) [library]</th>
<th>Binary / Text</th>
<th>Schema / Annotations / Inheritance</th>
<th>Language neutral</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>JSA (JDK 1.8.0.102)</td>
<td>Binary</td>
<td>Required</td>
<td>No</td>
<td></td>
</tr>
<tr>
<td>FST (2.47)</td>
<td>Binary</td>
<td>Optional</td>
<td>No</td>
<td></td>
</tr>
<tr>
<td>Kryo (4.0.0)</td>
<td>Binary</td>
<td>Optional</td>
<td>No</td>
<td></td>
</tr>
<tr>
<td>XML (JDK 1.8.0.102) (JAXB)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>XML (2.8.1) (Jackson)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>XML (1.4.9) (XStream)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>XML (1.4.4) (ProtoStuff)</td>
<td>Text</td>
<td>Required</td>
<td>Yes*</td>
<td></td>
</tr>
<tr>
<td>JSON (2.8.1) (Jackson)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>JSON (1.4.9) (XStream)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>JSON (2.7) (Gson)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>JSON (1.4.4) (ProtoStuff)</td>
<td>Text</td>
<td>Required</td>
<td>Yes*</td>
<td></td>
</tr>
<tr>
<td>JSON (1.4) (Genson)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>YAML (1.17) (SnakeYAML)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>YAML (2.8.1) (Jackson)</td>
<td>Text</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>MsgPack (0.6.12) (Jackson)</td>
<td>Binary</td>
<td>Required</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>MsgPack (0.8.8) (Jackson)</td>
<td>Binary</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Smile (2.8.1) (Jackson)</td>
<td>Binary</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Smile (1.4.4) (ProtoStuff)</td>
<td>Binary</td>
<td>Required</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>ProtoBuf (1.4.4) (ProtoStuff)</td>
<td>Binary</td>
<td>Required</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>BSON (2.7.0) (Jackson)</td>
<td>Binary</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Hessian (4.0.38)</td>
<td>Binary</td>
<td>No</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>CBOR (2.8.1) (Jackson)</td>
<td>Binary</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>ProtoStuff (1.4.4)</td>
<td>Binary</td>
<td>Required</td>
<td>Yes</td>
<td></td>
</tr>
<tr>
<td>Avro (2.8.1) (Jackson)</td>
<td>Binary</td>
<td>Optional</td>
<td>Yes</td>
<td></td>
</tr>
</tbody>
</table>
* The JSON-like serialization format produced by ProtoStuff is language neutral but not compatible with other JSON serializers, because it uses property indexes instead of property names as keys.
the data model used, which in the case of IEC 61850 includes hundreds of classes, but they might allow certain implementations of data model classes that would otherwise not be possible.
In the case of IEC 61850, versioning can be handled by the application using the serialization as the version is specified by the logical nodes, but in other cases versioning could be an important characteristic of a serialization format and library, to allow the data model classes to change over time, while allowing an application to use multiple versions.
V. CONCLUSION
There are better alternatives to using XML, as JSON is also human readable and more compact, and binary formats, especially ProtoStuff, ProtoBuf, Kryo and FST, are faster and much more compact.
One thing that is special about XML and format extending XML, is the ability to specify new message parts, as part of the message.
But because this requires the system either to know the new message parts in advance, which could have been achieved through documentation, or to handle previously unknown message parts at runtime, it is only useful for rare complex cases.
When choosing a serialization format and library, it should also be considered how active the development is, how big the community using it is, and how many resources are available; since this changes over time, is hard to quantify, and is very subjective, it is outside the scope of the paper.
Further general information on the pros and cons of particular serializers, not specific to power systems, can be found in online benchmarks.
Future work includes a comparison of compression formats and libraries, which could make the use of compression more useful, and a comparison of communication middleware, which together with this paper, could give a better overview over the possible Internet of Things Smart Grid power system services and applications, depending on the timeframe.
REFERENCES
Chapter 2
Data and Expressions
Chapter Scope
• Character strings and concatenation
• Escape sequences
• Declaring and using variables
• Java primitive types
• Expressions
• Data conversions
• The Scanner class for interactive programs
Character Strings
• A string of characters can be represented as a *string literal* by putting double quotes around it.
• Examples:
"This is a string literal."
"123 Main Street"
"X"
• Every character string is an object in Java, defined by the **String** class.
• Every string literal represents a **String** object.
The println Method
• In the *Lincoln* program, we invoked the `println` method to print a character string
• The `System.out` object represents a destination (the monitor) to which we can send output
The print Method
• The `System.out` object provides another service as well
• The `print` method is similar to the `println` method, except that it does not advance to the next line
• Therefore anything printed after a `print` statement will appear on the same line
public class Countdown {
// Prints two lines of output representing a rocket countdown.
public static void main(String[] args) {
System.out.print("Three... ");
System.out.print("Two... ");
System.out.print("One... ");
System.out.print("Zero... ");
System.out.println("Liftoff!"); // appears on first output line
System.out.println("Houston, we have a problem.");
}
}
String Concatenation
• The string concatenation operator (+) is used to append one string to the end of another
"Peanut butter " + "and jelly"
• It can also be used to append a number to a string
• A string literal cannot be broken across two lines in a program
public class Facts {
// Prints various facts.
public static void main(String[] args) {
// Strings can be concatenated into one long string
System.out.println("We present the following facts for your " + "extracurricular edification: ");
System.out.println();
// A string can contain numeric digits
System.out.println("Letters in the Hawaiian alphabet: 12");
// A numeric value can be concatenated to a string
System.out.println("Dialing code for Antarctica: " + 672);
System.out.println("Year in which Leonardo da Vinci invented " + "the parachute: " + 1515);
System.out.println("Speed of ketchup: " + 40 + " km per year");
}
}
String Concatenation
• The + operator is also used for arithmetic addition
• The function that it performs depends on the type of the information on which it operates
• If both operands are strings, or if one is a string and one is a number, it performs string concatenation
• If both operands are numeric, it adds them
• The + operator is evaluated left to right, but parentheses can be used to force the order
public class Addition
{
// Concatenates and adds two numbers and prints the results.
public static void main(String[] args)
{
System.out.println("24 and 45 concatenated: " + 24 + 45);
System.out.println("24 and 45 added: " + (24 + 45));
}
}
Escape Sequences
• What if we wanted to print the quote character?
• The following line would confuse the compiler because it would interpret the second quote as the end of the string
System.out.println("I said "Hello" to you.");
• An *escape sequence* is a series of characters that represents a special character
• An escape sequence begins with a backslash character (\)
System.out.println("I said \"Hello\" to you.");
Escape Sequences
- Some Java escape sequences:
<table>
<thead>
<tr>
<th>Escape Sequence</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>\b</td>
<td>backspace</td>
</tr>
<tr>
<td>\t</td>
<td>tab</td>
</tr>
<tr>
<td>\n</td>
<td>newline</td>
</tr>
<tr>
<td>\r</td>
<td>carriage return</td>
</tr>
<tr>
<td>\"</td>
<td>double quote</td>
</tr>
<tr>
<td>\'</td>
<td>single quote</td>
</tr>
<tr>
<td>\\</td>
<td>backslash</td>
</tr>
</tbody>
</table>
public class Roses {
// Prints a poem (of sorts) on multiple lines using newline escape sequences.
// (A string literal cannot span multiple lines, so \n is used instead.)
public static void main(String[] args) {
System.out.println("Roses are red,\n\tViolets are blue,\n" +
"Sugar is sweet,\nBut I have commitment issues,\n" +
"So I'd rather just be friends\n" +
"At this point in our relationship.");
}
}
Variables
• A variable is a name for a location in memory
• A variable must be declared by specifying its name and the type of information that it will hold
```java
int total;
int count, temp, result;
```
Multiple variables can be created in one declaration
Variables
- A variable can be given an initial value in the declaration
- When a variable is used in a program, its current value is used
public class PianoKeys
{
// Prints the number of keys on a piano.
public static void main(String[] args)
{
int keys = 88;
System.out.println("A piano has "+ keys + " keys.");
}
}
Assignment
• An assignment statement changes the value of a variable
• The assignment operator is the = sign
```
total = 55;
```
• The expression on the right is evaluated and the result is stored in the variable on the left
• The value that was in `total` is overwritten
• You can only assign a value to a variable that is consistent with the variable's declared type
public class Geometry {
public static void main(String[] args) {
int sides = 7; // declaration with initialization
System.out.println("A heptagon has "+ sides + " sides.");
sides = 10; // assignment statement
System.out.println("A decagon has "+ sides + " sides.");
sides = 12;
System.out.println("A dodecagon has "+ sides + " sides.");
}
}
Assignment
• The right-hand side could be an expression
• The expression is completely evaluated and the result is stored in the variable
```
height = height + gap;
```
Constants
- A constant is an identifier that is similar to a variable except that it holds the same value during its entire existence.
- As the name implies, it is constant, not variable.
- The compiler will issue an error if you try to change the value of a constant.
- In Java, we use the `final` modifier to declare a constant.
```java
final int MIN_HEIGHT = 69;
```
Constants
• Constants are useful for three important reasons
– First, they give meaning to otherwise unclear literal values
• For example, `MAX_LOAD` means more than the literal 250
– Second, they facilitate program maintenance
• If a constant is used in multiple places, its value need only be updated in one place
– Third, they formally establish that a value should not change, avoiding inadvertent errors by other programmers
Primitive Data Types
- There are eight primitive data types in Java
- Four of them represent integers
- byte, short, int, long
- Two of them represent floating point numbers
- float, double
- One of them represents characters
- char
- And one of them represents boolean values
- boolean
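As a quick sketch, all eight primitive types can be declared and initialized in one program (the values below are arbitrary, chosen only to illustrate each type):

```java
public class Primitives {
    public static void main(String[] args) {
        byte smallInt = 100;                        // 8-bit integer
        short mediumInt = 30000;                    // 16-bit integer
        int plainInt = 2000000000;                  // 32-bit integer
        long bigInt = 9000000000L;                  // 64-bit integer (note the L suffix)
        float singlePrecision = 3.14f;              // 32-bit floating point (note the f suffix)
        double doublePrecision = 3.141592653589793; // 64-bit floating point
        char letter = 'J';                          // a single Unicode character
        boolean flag = true;                        // true or false
        System.out.println(smallInt + " " + mediumInt + " " + plainInt + " "
            + bigInt + " " + singlePrecision + " " + doublePrecision + " "
            + letter + " " + flag);
    }
}
```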
Numeric Types
- The difference between the various numeric primitive types is their size, and therefore the values they can store:
<table>
<thead>
<tr>
<th>Type</th>
<th>Storage</th>
<th>Min Value</th>
<th>Max Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>byte</td>
<td>8 bits</td>
<td>-128</td>
<td>127</td>
</tr>
<tr>
<td>short</td>
<td>16 bits</td>
<td>-32,768</td>
<td>32,767</td>
</tr>
<tr>
<td>int</td>
<td>32 bits</td>
<td>-2,147,483,648</td>
<td>2,147,483,647</td>
</tr>
<tr>
<td>long</td>
<td>64 bits</td>
<td>-9,223,372,036,854,775,808</td>
<td>9,223,372,036,854,775,807</td>
</tr>
<tr>
<td>float</td>
<td>32 bits</td>
<td>Approximately -3.4E+38 with 7 significant digits</td>
<td>Approximately 3.4E+38 with 7 significant digits</td>
</tr>
<tr>
<td>double</td>
<td>64 bits</td>
<td>Approximately -1.7E+308 with 15 significant digits</td>
<td>Approximately 1.7E+308 with 15 significant digits</td>
</tr>
</tbody>
</table>
Characters
- A `char` variable stores a single character.
- Character literals are delimited by single quotes:
```java
'a' 'X' '7' '$' ',' '\n'
```
- Example declarations
```java
char topGrade = 'A';
char terminator = ';', separator = ' ';
```
- Note the distinction between a primitive character variable, which holds only one character, and a `String` object, which can hold multiple characters.
Character Sets
• A character set is an ordered list of characters, with each character corresponding to a unique number
• A char variable in Java can store any character from the Unicode character set
• The Unicode character set uses sixteen bits per character
• It is an international character set, containing symbols and characters from many world languages
The ASCII character set is older and smaller than Unicode.
The ASCII characters are a subset of the Unicode character set, including:
- Uppercase letters: A, B, C, ...
- Lowercase letters: a, b, c, ...
- Punctuation: period, semi-colon, ...
- Digits: 0, 1, 2, ...
- Special symbols: &, |, \, ...
- Control characters: carriage return, tab, ...
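Because each character corresponds to a unique number, a char value can be assigned directly to an int variable (a widening conversion) to reveal its character code. A small sketch:

```java
public class CharCodes {
    public static void main(String[] args) {
        char upper = 'A';
        int code = upper;           // widening conversion: char to int
        System.out.println(code);   // prints 65, the Unicode (and ASCII) value of 'A'
        int digit = '0';
        System.out.println(digit);  // prints 48, the code for the digit character '0'
    }
}
```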
Booleans
• A boolean value represents a true or false condition
• The reserved words `true` and `false` are the only valid values for a boolean type
```java
boolean done = false;
```
• A boolean variable can also be used to represent any two states, such as a light bulb being on or off
Expressions
• An *expression* is a combination of one or more operators and operands
• *Arithmetic expressions* compute numeric results and make use of the arithmetic operators
— Addition +
— Subtraction -
— Multiplication *
— Division /
— Remainder %
• If either or both operands used by an arithmetic operator are floating point, then the result is a floating point
Division and Remainder
• If both operands to the division operator (/) are integers, the result is an integer (the fractional part is discarded)
14 / 3 equals 4
8 / 12 equals 0
• The remainder operator (%) returns the remainder after dividing the second operand into the first
14 % 3 equals 2
8 % 12 equals 8
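The division and remainder examples above can be verified directly:

```java
public class DivMod {
    public static void main(String[] args) {
        System.out.println(14 / 3);   // prints 4: integer division discards the fraction
        System.out.println(8 / 12);   // prints 0
        System.out.println(14 % 3);   // prints 2: the remainder after dividing 14 by 3
        System.out.println(8 % 12);   // prints 8
        System.out.println(14.0 / 3); // a floating point operand, so the fraction is kept
    }
}
```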
Operator Precedence
- Operators can be combined into complex expressions
result = total + count / max - offset;
- Operators have a well-defined precedence which determines the order in which they are evaluated
- Multiplication, division, and remainder are evaluated prior to addition, subtraction, and string concatenation
- Arithmetic operators with the same precedence are evaluated from left to right, but parentheses can be used to force the evaluation order
Operator Precedence
• What is the order of evaluation in the following expressions?
a + b + c + d + e
a + b * c - d / e
a / (b + c) - d % e
a / (b * (c + (d - e)))
Operator Precedence
• What is the order of evaluation in the following expressions?
1. a + b + c + d + e
- Order: 1, 2, 3, 4
2. a + b * c - d / e
- Order: 3, 1, 4, 2
3. a / (b + c) - d % e
- Order: 2, 1, 4, 3
4. a / (b * (c + (d - e)))
- Order: 4, 3, 2, 1
Expression Trees
• The evaluation of a particular expression can be shown using an expression tree
• The operators lower in the tree have higher precedence for that expression
Operator Precedence
- Precedence among some Java operators:
<table>
<thead>
<tr>
<th>Precedence Level</th>
<th>Operator</th>
<th>Operation</th>
<th>Associates</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>+</td>
<td>unary plus</td>
<td>R to L</td>
</tr>
<tr>
<td></td>
<td>-</td>
<td>unary minus</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>*</td>
<td>multiplication</td>
<td>L to R</td>
</tr>
<tr>
<td></td>
<td>/</td>
<td>division</td>
<td></td>
</tr>
<tr>
<td></td>
<td>%</td>
<td>remainder</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>+</td>
<td>addition</td>
<td>L to R</td>
</tr>
<tr>
<td></td>
<td>-</td>
<td>subtraction</td>
<td></td>
</tr>
<tr>
<td></td>
<td>+</td>
<td>string concatenation</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td>=</td>
<td>assignment</td>
<td>R to L</td>
</tr>
</tbody>
</table>
TempConverter.java
Demonstrates the use of primitive data types and arithmetic expressions.
public class TempConverter
{
// Computes the Fahrenheit equivalent of a specific Celsius value using the formula F = (9/5)C + 32.
public static void main (String[] args)
{
final int BASE = 32;
final double CONVERSION_FACTOR = 9.0 / 5.0;
double fahrenheitTemp;
int celsiusTemp = 24; // value to convert
fahrenheitTemp = celsiusTemp * CONVERSION_FACTOR + BASE;
System.out.println ("Celsius Temperature: " + celsiusTemp);
System.out.println ("Fahrenheit Equivalent: " + fahrenheitTemp);
}
}
Assignment Revisited
• The assignment operator has a lower precedence than the arithmetic operators
First the expression on the right hand side of the = operator is evaluated
answer = sum / 4 + MAX * lowest;
Then the result is stored in the variable on the left hand side
Assignment Revisited
• The right and left hand sides of an assignment statement can contain the same variable
First, one is added to the original value of count
count = count + 1;
Then the result is stored back into count (overwriting the original value)
Increment and Decrement Operators
- The increment and decrement operators use only one operand
- The *increment operator* (++) adds one to its operand
- The *decrement operator* (--) subtracts one from its operand
- The statement `count++;` is functionally equivalent to `count = count + 1;`
Increment and Decrement Operators
• The increment and decrement operators can be applied in *postfix form*
count++
• or *prefix form*
++count
• When used as part of a larger expression, the two forms can have different effects
• Because of their subtleties, the increment and decrement operators should be used with care
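The difference between the two forms shows up when the operator is embedded in a larger expression. A short sketch:

```java
public class IncDec {
    public static void main(String[] args) {
        int count = 5;
        int a = count++;   // postfix: a gets the old value (5), then count becomes 6
        int b = ++count;   // prefix: count becomes 7 first, then b gets 7
        System.out.println(a + " " + b + " " + count);  // prints "5 7 7"
    }
}
```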
Assignment Operators
• Often we perform an operation on a variable, and then store the result back into that variable
• Java provides assignment operators to simplify that process
• For example, the statement
num += count;
is equivalent to
num = num + count;
Assignment Operators
- There are many assignment operators in Java, including the following:
<table>
<thead>
<tr>
<th>Operator</th>
<th>Example</th>
<th>Equivalent To</th>
</tr>
</thead>
<tbody>
<tr>
<td>+=</td>
<td>x += y</td>
<td>x = x + y</td>
</tr>
<tr>
<td>-=</td>
<td>x -= y</td>
<td>x = x - y</td>
</tr>
<tr>
<td>*=</td>
<td>x *= y</td>
<td>x = x * y</td>
</tr>
<tr>
<td>/=</td>
<td>x /= y</td>
<td>x = x / y</td>
</tr>
<tr>
<td>%=</td>
<td>x %= y</td>
<td>x = x % y</td>
</tr>
</tbody>
</table>
Assignment Operators
• The right hand side of an assignment operator can be a complex expression
• The entire right-hand expression is evaluated first, then the result is combined with the original variable
• Therefore
result /= (total - MIN) % num;
is equivalent to
result = result / ((total - MIN) % num);
Assignment Operators
• The behavior of some assignment operators depends on the types of the operands
• If the operands to the `+=` operator are strings, the assignment operator performs string concatenation
• The behavior of an assignment operator (`+=`) is always consistent with the behavior of the corresponding operator (`+`)
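This consistency can be seen by applying += to both a numeric and a String operand:

```java
public class PlusEquals {
    public static void main(String[] args) {
        int total = 10;
        total += 5;                  // arithmetic addition: total is now 15
        String message = "Count: ";
        message += total;            // string concatenation: "Count: 15"
        System.out.println(message); // prints "Count: 15"
    }
}
```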
Data Conversions
• Sometimes it is convenient to convert data from one type to another.
• For example, in a particular situation we may want to treat an integer as a floating point value.
• These conversions do not change the type of a variable or the value that's stored in it – they only convert a value as part of a computation.
Data Conversions
• Conversions must be handled carefully to avoid losing information
• *Widening conversions* are safest because they tend to go from a small data type to a larger one (such as a `short` to an `int`)
• *Narrowing conversions* can lose information because they tend to go from a large data type to a smaller one.
• In Java, data conversions can occur in three ways
— assignment conversion
— promotion
— casting
Data Conversions
### Widening Conversions
<table>
<thead>
<tr>
<th>From</th>
<th>To</th>
</tr>
</thead>
<tbody>
<tr>
<td>byte</td>
<td>short, int, long, float, or double</td>
</tr>
<tr>
<td>short</td>
<td>int, long, float, or double</td>
</tr>
<tr>
<td>char</td>
<td>int, long, float, or double</td>
</tr>
<tr>
<td>int</td>
<td>long, float, or double</td>
</tr>
<tr>
<td>long</td>
<td>float or double</td>
</tr>
<tr>
<td>float</td>
<td>double</td>
</tr>
</tbody>
</table>
### Narrowing Conversions
<table>
<thead>
<tr>
<th>From</th>
<th>To</th>
</tr>
</thead>
<tbody>
<tr>
<td>byte</td>
<td>char</td>
</tr>
<tr>
<td>short</td>
<td>byte or char</td>
</tr>
<tr>
<td>char</td>
<td>byte or short</td>
</tr>
<tr>
<td>int</td>
<td>byte, short, or char</td>
</tr>
<tr>
<td>long</td>
<td>byte, short, char, or int</td>
</tr>
<tr>
<td>float</td>
<td>byte, short, char, int, or long</td>
</tr>
<tr>
<td>double</td>
<td>byte, short, char, int, long, or float</td>
</tr>
</tbody>
</table>
Assignment Conversion
- **Assignment conversion** occurs when a value of one type is assigned to a variable of another.
- If `money` is a `float` variable and `dollars` is an `int` variable, the following assignment converts the value in `dollars` to a `float`.
```java
money = dollars;
```
- Only widening conversions can happen via assignment.
- Note that the value or type of `dollars` did not change.
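A minimal sketch of the example above:

```java
public class Widening {
    public static void main(String[] args) {
        int dollars = 12;
        float money = dollars;        // assignment conversion: the int value is widened to float
        System.out.println(money);    // prints 12.0
        System.out.println(dollars);  // prints 12: dollars itself is unchanged
    }
}
```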
Promotion
• *Promotion* happens automatically when operators in expressions convert their operands
• For example, if `sum` is a *float* and `count` is an *int*, the value of `count` is converted to a floating point value to perform the following calculation:
result = sum / count;
Casting
• *Casting* is the most powerful, and dangerous, technique for conversion
• Both widening and narrowing conversions can be accomplished by explicitly casting a value
• To cast, the type is put in parentheses in front of the value being converted
• For example, if `total` and `count` are integers, but we want a floating point result when dividing them, we can cast `total`
result = (float) total / count;
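A small sketch comparing the cast and no-cast results, plus a narrowing cast:

```java
public class CastDemo {
    public static void main(String[] args) {
        int total = 10, count = 4;
        double noCast = total / count;           // 2.0: integer division happens first
        double withCast = (float) total / count; // 2.5: total is widened before dividing
        int narrowed = (int) 9.99;               // 9: a narrowing cast truncates the fraction
        System.out.println(noCast + " " + withCast + " " + narrowed);
    }
}
```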
The Scanner Class
- The `Scanner` class provides convenient methods for reading input values of various types.
- A `Scanner` object can be set up to read input from various sources, including the user typing values on the keyboard.
- Keyboard input is represented by the `System.in` object.
Reading Input
• The following line creates a Scanner object that reads from the keyboard
Scanner scan = new Scanner(System.in);
• The new operator creates the Scanner object
• Once created, the Scanner object can be used to invoke various input methods, such as
answer = scan.nextLine();
Reading Input
• The `Scanner` class is part of the `java.util` class library, and must be imported into a program to be used
• The `nextLine` method reads all of the input until the end of the line is found
• We'll discuss the details of object creation and class libraries later
• Some methods of the Scanner class:
```java
Scanner (InputStream source)
Scanner (File source)
Scanner (String source)
Constructors: sets up the new scanner to scan values from the specified source.
String next()
Returns the next input token as a character string.
String nextLine()
Returns all input remaining on the current line as a character string.
boolean nextBoolean()
byte nextByte()
double nextDouble()
float nextFloat()
int nextInt()
long nextLong()
short nextShort()
Returns the next input token as the indicated type. Throws
InputMismatchException if the next token is inconsistent with the type.
boolean hasNext()
Returns true if the scanner has another token in its input.
Scanner useDelimiter (String pattern)
Scanner useDelimiter (Pattern pattern)
Sets the scanner's delimiting pattern.
Pattern delimiter()
Returns the pattern the scanner is currently using to match delimiters.
String findInLine (String pattern)
String findInLine (Pattern pattern)
Attempts to find the next occurrence of the specified pattern, ignoring delimiters.
```
import java.util.Scanner;
public class Echo
{
//////////////////////////////////////////////////////////////////////////////
// Reads a character string from the user and prints it.
//////////////////////////////////////////////////////////////////////////////
public static void main(String[] args)
{
String message;
Scanner scan = new Scanner(System.in);
System.out.println("Enter a line of text:");
message = scan.nextLine();
System.out.println("You entered: \"" + message + "\"");
}
}
Input Tokens
• Unless specified otherwise, *white space* is used to separate the elements (called *tokens*) of the input
• White space includes space characters, tabs, new line characters
• The `next` method of the `Scanner` class reads the next input token and returns it as a string
• Methods such as `nextInt` and `nextDouble` read data of particular types
import java.util.Scanner;
public class GasMileage
{
// Calculates fuel efficiency based on values entered by the user.
public static void main(String[] args)
{
int miles;
double gallons, mpg;
Scanner scan = new Scanner(System.in);
System.out.print("Enter the number of miles: ");
miles = scan.nextInt();
System.out.print("Enter the gallons of fuel used: ");
gallons = scan.nextDouble();
mpg = miles / gallons;
System.out.println("Miles Per Gallon: "+ mpg);
}
}
Chapter 5
DYNAMIC LEARNING CLASSIFIER FRAMEWORK (DLCF) FOR IDS
In this chapter we investigate the classification task performed by an intrusion detection expert and present a novel concept for building a Dynamic Learning Classifier Framework (DLCF) using machine learning techniques. We analyze the requirements of such a system, select a suitable machine learning technique and validate the system on the real KDD Cup 99 dataset.
5.1 DLCF Framework
The architecture of DLCF for IDS is shown in figure 5.1. The DLCF can be implemented either in expert mode or automation mode. In a conventional setup, alarms generated by IDSs are passed onto an IDS expert analyst. The analyst uses his or her knowledge to distinguish between false and true positives and to understand the severity of the alarms.
Conventional systems may use manual knowledge engineering to build an alarm classifier or may use no alarm classifier at all. In either case, the conventional setup does not take advantage of the fact that the analyst is analyzing the alarms in real time: the manual knowledge engineering is separated from the alarm analysis.
As shown in Figure 5.1, our system classifies alarms and passes them to the IDS expert. It also assigns a classification confidence (or confidence for short), to alarms, which shows the likelihood of alarms belonging to their assigned classes.
The IDS expert reviews this classification and reclassifies alarms, if necessary. This process is recorded and used as training by the machine learning component to build an improved alarm classifier.
Currently we use a simple human-computer interaction model, where the IDS expert explicitly classifies alarms into true and false positives. In addition to the training examples, we use background knowledge to learn improved classification rules. These rules are then used by DLCF to classify alarms. The expert can inspect the rules to make sure they are correct.
The architecture presented describes the operation of the system in IDS expert mode. The second mode, automation mode, introduces autonomous processing to reduce the operator’s workload.
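The review-and-record loop described above can be sketched as follows; all class and method names here are illustrative placeholders, not part of the actual DLCF implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the DLCF feedback loop: the system's suggested class
// is shown to the expert, and the expert's final label is recorded as a
// training example for the machine learning component.
public class FeedbackLoopSketch {
    record Alarm(String signature) {}
    record TrainingExample(Alarm alarm, String expertLabel) {}

    static final List<TrainingExample> trainingLog = new ArrayList<>();

    // Placeholder classifier; in DLCF this would apply the learned rules.
    static String suggestClass(Alarm a) {
        return a.signature().contains("scan") ? "true-positive" : "false-positive";
    }

    // The expert verifies (and possibly overrides) the suggestion; every
    // reviewed alarm becomes a training example carrying the expert's label.
    static void review(Alarm a, String expertLabel) {
        String suggested = suggestClass(a);
        System.out.println(a.signature() + ": suggested " + suggested
            + ", expert says " + expertLabel);
        trainingLog.add(new TrainingExample(a, expertLabel));
    }

    public static void main(String[] args) {
        review(new Alarm("port-scan"), "true-positive");
        review(new Alarm("dns-query"), "false-positive");
        System.out.println("training examples recorded: " + trainingLog.size());
    }
}
```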
5.1.1 DLCF in Expert Mode
In expert mode Figure 5.1(a), DLCF classifies alarms and passes all of them to the console to be verified by the IDS expert. In other words, the system assists the IDS expert suggesting the correct classification. The advantage for the IDS expert is that each alarm is already pre-classified and that the IDS expert has only to verify its correctness. The IDS expert can prioritize his or her work, e.g., by dealing with alarms classified as true positives first or sorting the alarms by classification confidence. It is important to emphasize that at the end, the analyst will review all classifications made by the system.
We use a simple human-computer interaction model, in which the expert sequentially classifies alarms into true and false positives; these classifications are converted into training examples. More sophisticated interaction techniques are also possible.
Figure 5.1 DLCF Framework in expert and automation modes
More formally, there is a human IDS expert $O$ reviewing a sequence of intrusion detection alarms $(A_1, A_2, \ldots, A_i, \ldots)$ in the alarm log $L$. The review is done by assigning one of a predefined set of classes $\{C_1, C_2, \ldots, C_n\}$ (which can in particular be the two classes true positives and false positives, {“+”, “-”}) to each alarm.
The review is typically done sequentially and in real-time, which means that alarm $A_{i+1}$ is reviewed only after alarms $(A_1, A_2, \ldots, A_i)$ have been reviewed and, at this time, alarms $(A_{i+2}, \ldots)$ are not yet known. This procedure is shown in Figure 5.2.
**Given** – A sequence of alarms: $(A_1,A_2, \ldots,A_i, \ldots)$ in the alarm log $L$,
a set of classes $C = \{C_1,C_2, \ldots,C_n\}$,
an IDS expert $O$ sequentially and in real-time assigning classes to alarms,
a utility function $U$ minimizing the misclassification cost,
**Find** A classifier classifying alarms, maximizing the utility function $U$.
**Figure 5.2 Assigning Class Labels to Alarms in Expert Mode**
This is a conventional incremental learning setup. Figure 5.3 shows the operation of DLCF in the expert mode. The function goodClassificationPerformance() estimates the performance of the classifier on a confusion matrix using a weighted accuracy (WA) with a threshold $WA_{th}$.
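The weighted-accuracy check can be sketched in Python. The exact definition of WA used by DLCF is not spelled out here, so the weighting scheme below (true alarms weighted by a factor `w`) is an assumption:

```python
def weighted_accuracy(tp, fp, tn, fn, w=1.0):
    """Weighted accuracy from a binary confusion matrix.

    The weight w makes the positive class (true alarms) count w times
    as much as the negative class; w = 1 reduces to plain accuracy.
    """
    return (w * tp + tn) / (w * (tp + fn) + (tn + fp))


def good_classification_performance(tp, fp, tn, fn, wa_th, w=1.0):
    # True while the running weighted accuracy stays above the threshold.
    return weighted_accuracy(tp, fp, tn, fn, w) >= wa_th
```

For example, with tp = 50, fn = 5, tn = 40, fp = 5 and w = 1 the weighted accuracy is 0.9.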
Input: a sequence of alarms $(A_1, A_2, \ldots, A_n)$
Result: a sequence of classified alarms $((A_1, C_{A_1}), (A_2, C_{A_2}), \ldots, (A_n, C_{A_n}))$
```plaintext
1 initialize;
/* alarms used for the initial training */
2 $x \leftarrow x_0$;
3 while $x < n$ do
4 $S_i \leftarrow$ subsequence($A_1, \ldots, A_x$);
5 $C_i \leftarrow$ learnUpdateClassifier($C_{i-1}, S_i$);
6 while goodClassificationPerformance($WA_{th}$) do
7 $C_x \leftarrow$ classify($C_i, A_x$);
8 $C_{A_x} \leftarrow$ askExpertVerifyClassification($A_x, C_x$);
9 updateClassificationPerformance($C_x, C_{A_x}$);
10 $x \leftarrow x + 1$;
11 end
12 $i \leftarrow i + 1$
13 end
```
**Figure 5.3 DLCF Classification in Expert Mode**
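A minimal executable sketch of this expert-mode loop is given below. Here `classify`, `train` and `expert_label` are hypothetical stand-ins for the RIPPER-style learner and the IDS expert, and a simple batch-retraining trigger replaces the weighted-accuracy check:

```python
def expert_mode(alarms, expert_label, train, classify, batch=50):
    """Sketch of the Figure 5.3 loop: pre-classify each alarm, have the
    expert verify it, collect training examples, periodically retrain."""
    examples = []   # (alarm, expert-confirmed class) pairs
    model = None    # no classifier before the initial training
    out = []
    for i, alarm in enumerate(alarms):
        predicted = classify(model, alarm)          # suggestion (may be None early on)
        confirmed = expert_label(alarm, predicted)  # expert verifies or corrects
        examples.append((alarm, confirmed))
        out.append((alarm, confirmed))
        if (i + 1) % batch == 0:                    # stand-in for the WA check
            model = train(examples)
    return out
```

Because the expert verifies every suggestion, the output labels are always the expert-confirmed ones, while the growing example list improves future suggestions.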
### 5.1.2 DLCF in Automation Mode
In the automation mode, the high-level goal uses a modified utility function $U$, so that it also limits the IDS expert’s workload; therefore a system that autonomously processes some alarms is preferred over one that does not. This is stated more formally in Figure 5.4.
In Automation Mode the DLCF autonomously processes some of the alerts based on criteria defined by the expert (i.e., classification assigned by DLCF and classification confidence).
By processing alerts we mean that DLCF executes user-defined actions associated with the class labels and classification confidence values. For example, attacks classified as false positives can be automatically removed, thus reducing the analyst’s workload. In contrast, alerts classified as true positives and successful attacks can initiate an automated response, such as reconfiguring a router or firewall.
It is important to emphasize that such actions should be executed only for alerts classified with high confidence, whereas the other alerts should still be reviewed by the analyst. The operation of DLCF in automation mode is shown in figure 5.5.
Note that autonomous alarm processing may change the behavior of the system and negatively impact its classification accuracy. To illustrate this with an example, suppose the system classifies alarms into true and false positives and it is configured to autonomously discard the latter if the classification confidence is higher than a given threshold value.
**Given** – A sequence of alarms: $(A_1, A_2, \ldots, A_i, \ldots)$ in the alarm log $L$,
a set of classes $C = \{C_1, C_2, \ldots, C_n\}$,
an IDS expert $O$ sequentially and in real-time assigning classes to alarms,
a utility function $U$ minimizing the misclassification cost and the IDS expert’s workload,
**Find** A classifier classifying alarms, maximizing the utility function $U$.
**Figure 5.4 Classifying Alarms in Automation Mode**
Suppose the system learned a good classifier and classifies alarms with high confidence. In this case, if the system starts classifying all alarms as false positives then these alarms would be autonomously discarded and would never be seen by the IDS expert.
These alarms would not become training examples and would never be used to improve the classifier. Another problem is that alarms classified and processed autonomously cannot be added to the list of training examples as the IDS expert has not reviewed them. If alarms of a certain class are processed autonomously more frequently than alarms belonging to other classes (as in the above example), as a consequence we change the class distribution in the training examples.
This has important implications as machine-learning techniques are sensitive to class distribution in training examples. In the optimal case, the distribution of classes in training and testing examples should be identical.
To alleviate these problems, we use a technique called random sampling. In this technique we randomly select a fraction $s$ of alarms which would normally be processed autonomously and instead forward them to the IDS expert. This ensures the stability of the system. The value of $s$ is a tradeoff between how many alarms will be processed autonomously and how much risk of misclassification is acceptable.
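The confidence gating and random sampling described above can be sketched as a single routing decision per alarm. The class labels, return values and default thresholds below are illustrative, not taken from the source:

```python
import random

def route_alarm(predicted, confidence, c_th=0.9, s=0.25, rng=random):
    """Automation-mode routing for one alarm (a sketch).

    False alarms classified with confidence >= c_th are normally
    discarded autonomously, but a random fraction s of them is still
    forwarded to the IDS expert so the training-example distribution
    does not collapse.
    """
    if predicted == "false_alarm" and confidence >= c_th:
        if rng.random() < s:
            return "forward"   # sampled: the expert still reviews it
        return "discard"       # processed autonomously
    return "forward"           # true alarms / low confidence: always reviewed
```

Raising `s` forwards more alarms to the expert (safer, more work); lowering it discards more autonomously (less work, more misclassification risk).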
5.2 **DLCF Model learning based on RIPPER**
Among the machine learning techniques that best fulfill our requirements, we chose RIPPER [102], a fast and effective rule learner. It has been successfully used in intrusion detection (e.g., on system call sequences and network connection data [47]) as well as in related domains, and it has proved to produce concise and intuitive rules.
As reported by Lee [42], RIPPER rules have two very desirable properties for intrusion detection: good generalization accuracy and concise conditions. Another advantage of RIPPER is its effectiveness with noisy datasets.
RIPPER has been well documented in the literature, however, for the sake of a better understanding of the system we will briefly explain how RIPPER works. As shown in Algorithm 4, RIPPER learns a sequence RS of rules Ri in the form:
\[ \text{if (condition1 and condition2 and ... conditionN) then class.} \]
A single condition is of the form $A_i = v$ (in the case of categorical attributes) or $A_i \ge v$ or $A_i \le v$ (in the case of numerical attributes). The rule evaluates to true if and only if all its conditions hold, in which case the prediction is made and no further rules are evaluated.
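This first-match evaluation can be sketched as follows. The rule representation (a list of condition predicates per rule) is our own illustration, not RIPPER's internal format:

```python
def evaluate_rules(rules, default_class, example):
    """First-match evaluation of a RIPPER-style rule set (sketch).

    Each rule is (conditions, cls); the first rule whose conditions
    all hold fires, otherwise the default class is returned.
    """
    for conditions, cls in rules:
        if all(cond(example) for cond in conditions):
            return cls
    return default_class


# A categorical test (A1 = "tcp") combined with a numerical one (A2 >= 10):
rules = [([lambda e: e["A1"] == "tcp", lambda e: e["A2"] >= 10], "true_alarm")]
```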
In a multi-class setting, RIPPER sorts the classes $C_1, C_2, \ldots, C_n$ in increasing order of frequency and induces the rules sequentially from the least prevalent class ($C_1$) to the second most prevalent class ($C_{n-1}$). The most prevalent class $C_n$ is called the default class, for which no rules are induced. Hence, in the binary case, RIPPER induces rules only for the minority class.
The process of inducing rules for a single class proceeds in two stages: the building stage and the optimization stage. In the building stage, RIPPER builds the rules in two steps: growing and pruning. In the growing step, rules are greedily “grown” by adding conditions that maximize the information gain [56]. In the pruning step, rules are pruned using a criterion equivalent to precision. The goal of pruning is to improve both the generalization and the simplicity of the rule. In the optimization stage, growing and pruning are executed twice for each rule: once starting from the existing rule and once from an empty rule, with the evaluation done on the entire rule set. Finally, the better of the two variants is selected for the final rule set.
Unfortunately, the standard RIPPER algorithm is not cost-sensitive and does not support incremental learning. We used the following methods to circumvent these limitations.
5.2.1 Cost-Sensitive Modeling
In a cost-insensitive world, both types of misclassification (false negatives and false positives) carry equal weight and hence the performance of a classifier can be evaluated by means of accuracy. However, in the real world the costs of misclassification are most often not equal, e.g., missing an intrusion is intuitively more expensive than investigating one false positive. This, together with the fact that cost-sensitive problems are typically skewed, increases the importance of cost-sensitive modeling.
In general, cost-sensitive modeling is a difficult issue [20] as there can be many costs that need to be taken into account. For example, Fan [21] defines two types of costs in the domain of intrusion detection: the damage cost $D_{Cost}$, which characterizes the maximum amount of damage inflicted by an attack and a response cost $R_{Cost}$, which is the cost to take action when a potential intrusion is detected.
In this case, false negatives incur the $D_{Cost}$ of the given attack, false positives and true positives incur the $R_{Cost}$ of the action taken, and wrongly identified attacks incur both the response cost for the action taken and the damage cost of the missed attack. Moreover, the damage and the response costs are typically not constant and depend on both the attack class and, in some cases, the particular instance of an attack (e.g., the damage incurred as a result of an attack against an important server is typically much higher than for the same attack against a workstation, which can simply be switched off).
In addition, Fan showed that certain features used for testing have different costs than others, e.g., analyzing a flag in a Transmission Control Protocol (TCP) header is much “cheaper” in terms of resources than calculating statistics over an entire TCP flow. The approach proposed by Fan allows taking this fact into account when building ensemble-based learning systems.
However, while this approach is correct in the formal sense, it has two main problems. First, Fan used boosting methods, in which misclassified instances are “penalized” according to the misclassifications the weak learner made.
While taking both $R_{Cost}$ and $D_{Cost}$ into account can easily be achieved in this iterative learning method, as a side effect it produces a number of weak classifiers, which makes their rules less interpretable. For example, with 200 boosting rounds, each building a classifier producing 50 rules, there would be 10,000 rules to investigate. In contrast, our approach focuses on a single classifier. Second, the multi-cost-sensitive approach introduces a high number of parameters that would need to be set.
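The two-cost accounting discussed in this subsection can be illustrated with a small sketch for the binary case; the exact accounting in Fan's model may differ, and the class labels here are illustrative:

```python
def misclassification_cost(true_class, predicted_class, d_cost, r_cost):
    """Cost of one classification outcome under a two-cost model:
    d_cost is the damage cost of the actual attack, r_cost the
    response cost of the action taken.  A sketch, binary case only.
    """
    if true_class == "attack" and predicted_class == "benign":
        return d_cost      # false negative: the damage is incurred
    if true_class == "benign" and predicted_class == "attack":
        return r_cost      # false positive: a response is wasted
    if true_class == "attack" and predicted_class == "attack":
        return r_cost      # true positive: only the response cost
    return 0.0             # true negative: nothing happens
```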
5.2.2 Binary vs. Multi-Class Classification
We have seen the job of an intrusion detection analyst and the possible classifications of intrusions. Here, we argue that our setup, in which the human analyst analyzes alerts generated by an IDS, can without loss of functionality be considered a binary classification problem. First, if multiple classes are used, they are not very systematic and, in most cases, describe the nature of a problem, which either is uniquely determined by the type of IDS alert in question (e.g., a PORTSCAN alert, if it is a true positive, is a “scanning incident”), or cannot be determined with certainty.
This means that in many cases, such a classifier, knowing that an alert is a true positive, can be built as a second-stage classifier, or should not be built at all. Second, the costs of misclassifying a certain type of an intrusion as another one are extremely hard to determine.
However, the actual cost of misclassifying different types of alerts as non-attacks is not identical. To illustrate this with an example, missing a scanning incident is much less costly than missing a single stealthy attack that installs a root-kit on a machine.
However, the problem is that those “cheap” attacks are fairly easy to identify and, moreover, they constitute a large number of alerts. Conversely, stealthy attacks are much more difficult to detect (that is why they are called “stealthy”).
This problem of redundancy in the data stream can be solved in two ways: First, alert correlation systems aim at reducing the redundancy in the alert stream and the number of alerts passed to the analyst.
Second, we propose to assign a weight to alerts normalizing them so that the costs of missing different attacks would be identical. This weight should be a function of an alert type so that with n categories of alerts, only n parameters would have to be estimated.
In our evaluation, as we wanted to minimize the number of parameters that need to be set, we decided not to take this approach and assumed that the cost of missing any attack is identical.
5.2.3 Cost-Sensitive RIPPER
As the base version of RIPPER is cost-insensitive, we had to adapt it to support misclassification costs. Among the various methods of making a classification technique cost-sensitive, we focused on those that are not specific to a particular machine-learning technique. By changing costs appropriately, these methods can also be used to address the problem of skewed class distributions. These methods produce comparable results, although this can be data dependent [6]. Experiments not documented here showed that in our context Weighting gives better run-time performance than MetaCost, most likely because of the multiple models learned by MetaCost. Therefore we chose Weighting for our system.
Weighting re-samples the training set so that a standard cost-insensitive learning algorithm builds a classifier that optimizes the misclassification cost. The input parameter for Weighting is a cost matrix, which defines the costs of misclassifications for individual class pairs.
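The idea behind Weighting can be sketched as follows. Real implementations assign instance weights rather than duplicating examples, so the resampling below (with an arbitrary scale factor) is only an illustration:

```python
import random

def reweight_training_set(examples, class_cost, scale=10, rng=random):
    """Resample a training set so that a cost-insensitive learner sees
    each class in proportion to its misclassification cost (a sketch).

    examples:   list of (features, cls)
    class_cost: class_cost[cls] = cost of misclassifying class cls
    """
    max_cost = max(class_cost.values())
    out = []
    for x, cls in examples:
        copies = class_cost[cls] / max_cost * scale
        whole = int(copies)
        out.extend([(x, cls)] * whole)
        if rng.random() < copies - whole:  # stochastic rounding of the fraction
            out.append((x, cls))
    return out
```

A class whose misclassification is ten times as costly ends up represented ten times as often, so a standard learner implicitly optimizes the misclassification cost.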
5.3 DLCF Evaluation
In this section we evaluate DLCF, our Dynamic Learning Classification System presented in the preceding sections. Recall that DLCF can operate in two modes: (i) the expert mode, in which alarms are classified and forwarded to the IDS expert, and (ii) the automation mode, which in addition to the classification allows a fraction of alarms to be processed automatically (e.g., false positives can be discarded) without the IDS expert’s intervention. In this section we verify the operation of DLCF in both modes. In particular, we would like to test the following two hypotheses:
Hypothesis 5.3.1: The proposed background knowledge improves the accuracy of alarm classification.
Hypothesis 5.3.2: DLCF has acceptable false-positive and false-negative rates in both recommender and automation modes and is useful for intrusion detection.
For the evaluation, the following remark is in place: while evaluating the performance of any binary classifier (or an alarm-classification system in particular), we characterize its performance by its confusion matrix and the terms true positives, false positives, false negatives and true negatives. This conflicts with the terms false positives and true positives commonly used in the domain of intrusion detection to refer to the classification of alarms. In fact, an IDS is a special type of binary classifier and these names are justified.
To avoid confusion, in the remainder of this dissertation we use terms false negatives, true positives and false positives only in the context of the evaluation of alarm-classification systems.
From now on we refer to the original classification of alarms as true alarms and false alarms.
5.4 Evaluation Methodology
The evaluation of the supervised components of our system is performed in a streaming fashion, classifying alarms sequentially as they would be seen by the human IDS expert. We purposely did not use standard machine learning evaluation techniques such as stratified cross-validation, because the streaming method better reflects the way the system would be used in practice.
In fact, the system leverages the dependency between alarms by its incremental nature: Misclassified alarms are used to learn an improved alarm classifier and classify future similar alarms correctly.
In the evaluation we use ROC analysis to determine the influence of background knowledge and to set system parameters. Subsequently, we evaluate false-negative (FN) and false-positive (FP) rates. We also plot evaluation charts showing how these rates vary during the system’s runtime (as a function of classified alarms) and evaluate the overall cumulative numbers.
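The cumulative FN/FP-rate curves plotted in these charts can be computed incrementally over the alarm stream; the class labels below are illustrative:

```python
def streaming_rates(pairs):
    """Yield cumulative (n, fn_rate, fp_rate) after each alarm.

    pairs: iterable of (true_class, predicted_class) with the classes
           "true_alarm" / "false_alarm".
    """
    tp = fp = tn = fn = 0
    for n, (truth, pred) in enumerate(pairs, start=1):
        if truth == "true_alarm":
            if pred == "true_alarm":
                tp += 1
            else:
                fn += 1
        elif pred == "true_alarm":
            fp += 1
        else:
            tn += 1
        fn_rate = fn / (tp + fn) if tp + fn else 0.0
        fp_rate = fp / (fp + tn) if fp + tn else 0.0
        yield n, fn_rate, fp_rate
```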
5.5 Results Obtained with DARPA 1999 Data Set
We evaluated the performance of DLCF in expert and automation modes.
5.5.1 Results of DLCF in Expert mode.
In expert mode the IDS expert reviews each alarm and corrects DLCF misclassifications. We plotted the number of misclassifications: the false-positive rate (Figure 5.6) and the false-negative rate (Figure 5.7) as a function of processed alarms. Note that we cropped the high error rates at the beginning of the run; these are transient effects and we are interested in the asymptotic values.
The resulting overall false-negative rate (fn = 0.024) is much higher than the false-negative rate for the batch classification on the entire dataset (fn = 0.0076). At the same time, the overall false-positive rate (fp = 0.025) is less than half of the false-positive rate for batch classification (fp = 0.06).
These differences are expected due to the different learning and evaluation methods used, i.e., incremental learning vs. batch 10-fold cross-validation. Note that both DLCF and a batch classifier have very good classification accuracy and yield comparable results.
5.5.2 Results of DLCF in Automation Mode

In automation mode DLCF processes alarms autonomously based on criteria defined by the IDS expert, as described in Section 5.1. We configured the system to forward to the IDS expert all alarms classified as true alarms and those false alarms that were classified with low confidence (confidence < $c_{th}$). The system discarded all other alarms, i.e., false alarms classified with high confidence, except for a fraction $s$ of randomly chosen alarms, which were also forwarded to the IDS expert.
Similarly to the expert mode, we calculated the number of misclassifications made by the system. We experimented with different values of $c_{th}$ and sampling rates $s$. We then chose $c_{th} = 90\%$ and three sampling rates $s$: 0.1, 0.25 and 0.5.
Our experiments (Figure 5.7) show that sampling rates below 0.1 make the agent misclassify too many alarms and significantly change the class distribution in the training examples. On the other hand, with sampling rates much higher than 0.5, the system works similarly to expert mode and is less useful to the IDS expert.
Notice that there are two types of false negatives in automation mode, the ones corrected by the IDS expert and the ones the IDS expert is not aware of because the alarms have been discarded.
Figure 5.7 False positives for DLCF in Automation Mode
We plotted the second type of misclassification as mirrored series with no markers in Figure 5.4a. Intuitively, with lower sampling rates the agent has fewer false negatives of the first type but in fact misses more alarms. As expected, the total number of false negatives is lower with higher sampling rates.
Surprisingly, the recommender and the agent have similar false-positive rates and similar false-negative rates, even with low sampling rates.
This seemingly counterintuitive result can be explained if we note that automatic processing of alarms classified as false positives effectively changes the class distribution in training examples in favor of true alarms. As a result the agent performs comparably to the recommender.
As shown in Figure 5.8, with a sampling rate of 0.25, more than 60% of false alarms were processed and discarded by DLCF. At the same time, the number of unnoticed false negatives is half the number of mistakes in expert mode. Our experiments show that the system is useful for IDS experts as it significantly reduces the number of false positives with fairly good accuracy.
The results were particularly clear for the DARPA 1999 data set. We showed that the system is useful in recommender mode, where it dynamically learns the classification from the expert. For this dataset we obtained false-negative and false-positive rates comparable to batch classification. Note that in recommender mode all system misclassifications are corrected by the expert. In addition, we found that our system is useful in automation mode, where some alerts are autonomously processed (e.g., false positives classified with high confidence are discarded).
More importantly, for the kddcup’99 dataset the false-negative rate of our system is comparable to that in recommender mode. With this real dataset the system reduced the number of false positives by 60% with a false-negative rate below 0.026 (half of these alerts would have been shown to the analyst) and a false-positive rate of 0.025.
5.6 Chapter Summary
We evaluated DLCF on the DARPA 1999 dataset and validated its operation in both modes. The system is useful in recommender mode, where it dynamically learns the classification from the expert; for this dataset we obtained false-negative and false-positive rates comparable to batch classification, and all system misclassifications are corrected by the expert. The system is also useful in automation mode, where some alerts are autonomously processed (e.g., false positives classified with high confidence are discarded). For the kddcup’99 dataset the false-negative rate is comparable to that in recommender mode: the system reduced the number of false positives by 60% with a false-negative rate below 0.026 (half of these alerts would have been shown to the analyst) and a false-positive rate of 0.025.
2.1.8 Edge Nodes ....................................................................................................... 15
3 Installing Slurm To An Edge Site .................................................................................... 17
3.1 Preparation .................................................................................................................. 17
3.2 Installation ..................................................................................................................... 17
Preface
Welcome to the *Edge Manual* for Bright Cluster Manager 9.0.
0.1 About This Manual
This manual is aimed at helping cluster administrators install, understand, configure, and manage the edge computing capabilities of Bright Cluster Manager. The administrator is expected to be reasonably familiar with the *Administrator Manual*.
0.2 About The Manuals In General
Regularly updated versions of the Bright Cluster Manager 9.0 manuals are available on updated clusters by default at /cm/shared/docs/cm. The latest updates are always online at http://support.brightcomputing.com/manuals.
- The *Installation Manual* describes installation procedures for the basic cluster.
- The *Administrator Manual* describes the general management of the cluster.
- The *User Manual* describes the user environment and how to submit jobs for the end user.
- The *Cloudbursting Manual* describes how to deploy the cloud capabilities of the cluster.
- The *Developer Manual* has useful information for developers who would like to program with Bright Cluster Manager.
- The *OpenStack Deployment Manual* describes how to deploy OpenStack with Bright Cluster Manager.
- The *Machine Learning Manual* describes how to install and configure machine learning capabilities with Bright Cluster Manager.
If the manuals are downloaded and kept in one local directory, then in most pdf viewers, clicking on a cross-reference in one manual that refers to a section in another manual opens and displays that section in the second manual. Navigating back and forth between documents is usually possible with keystrokes or mouse clicks.
For example: <Alt>-<Backarrow> in Acrobat Reader, or clicking on the bottom leftmost navigation button of xpdf, both navigate back to the previous document.
The manuals constantly evolve to keep up with the development of the Bright Cluster Manager environment and the addition of new hardware and/or applications. The manuals also regularly incorporate customer feedback. Administrator and user input is greatly valued at Bright Computing. So any comments, suggestions or corrections will be very gratefully accepted at manuals@brightcomputing.com.
There is also a feedback form available via Bright View, via the Account icon, following the clickpath:
Account→Help→Feedback
0.3 Getting Administrator-Level Support
If the reseller from whom Bright Cluster Manager was bought offers direct support, then the reseller should be contacted.
Otherwise the primary means of support is via the website https://support.brightcomputing.com. This allows the administrator to submit a support request via a web form, and opens up a trouble ticket. It is a good idea to try to use a clear subject header, since that is used as part of a reference tag as the ticket progresses. Also helpful is a good description of the issue. The followup communication for this ticket goes via standard e-mail. Section 13.2 of the Administrator Manual has more details on working with support.
0.4 Getting Professional Services
Bright Computing normally differentiates between professional services (customer asks Bright Computing to do something or asks Bright Computing to provide some service) and support (customer has a question or problem that requires an answer or resolution). Professional services can be provided after consulting with the reseller, or the Bright account manager.
1. Introduction
1.1 Cloud Computing Vs Edge Computing
Cloud computing is traditionally about the concept of end users using resources that are located in a cloud elsewhere. The cloud is the central coordinator, and end users use the resources that are in the cloud rather than using their local resources.
As computing power has become cheaper over time, and resource use has grown, it has in many cases become more financially attractive to shift the emphasis of resource coordination, from the center of the cloud (core of the cloud), over to the local resources which are at the edge. These local devices are then called edge nodes.
A strong case for edge computing is when the following resource requirements are easier to provide locally via local devices, than via central processing in the cloud:
- low latency
- high bandwidth consumption
- high CPU cycle consumption
For example, a self-driving car requires a low latency, high bandwidth, and high CPU cycle consumption in order to ensure a speedy and safe response to traffic requirements. Attempting to run a self-driving car via central processing in the cloud would be impractically slow or prohibitively dangerous.
Generally, edge computing is regarded as a way to have a geographically spread-out cluster make more local use of its computing resources. A geographically spread-out cluster typically already has plenty of CPU cycles, bandwidth, and low latency at its regular nodes. So, for such a geographically spread-out cluster, making more local use of its computing resources tends to mean granting extra autonomy to the edge computing devices, and making them more independent of the head node.
To achieve this greater autonomy, Bright Cluster Manager uses an edge director. This is somewhat similar to the cloud director, but is required to be geographically close to the edge nodes, and is also optimized for edge requirements.
1.2 High Speed Monitoring And Local Processing
The importance of being local and autonomous is often due to the environment that the regular nodes are in. The environment is typically under high speed monitoring by many sensors linked to the regular nodes. The data values obtained by the sensors are processed very quickly by the nodes. Such high speed processing of the monitoring data values can typically only reasonably be achieved by the nodes managing the processing locally as much as possible, rather than having the nodes managed by a head node a large distance away.
2.1 Bright Edge
The Bright Edge feature of Bright Cluster Manager allows a single cluster to span many geographical locations ("one cluster, multiple locations"). Typical use cases are:
- HPC: organizations that have compute resources located in different cities or countries
- IoT: companies that have "edge" locations with the required compute resources at each location
Bright Cluster Manager can be used to deploy and manage resources at edge locations from the central head node.
Bright Edge sites comprise an edge director and edge nodes.
- The edge director must be reachable from the central head node. The edge director forwards requests from the edge nodes to the central head node when required.
- The edge nodes are similar to regular nodes, and are provisioned by PXE booting off the edge director. Unlike with regular nodes, no direct connection is required between the central head node and the edge nodes.
Figure 2.1: Bright Edge: The Big Picture
© Bright Computing, Inc.
Items to check before creating edge sites:
- The Bright Cluster Manager license must allow edge site creation
- The to-be-provisioned edge director must have an IP address that can be reached from the central head node
- Conversely, the central head node must have an IP address that can be reached by the edge director
Creating and deploying edge sites involve the following steps:
- Create the edge site using `cm-edge-setup`
- Create an edge ISO for provisioning the edge director
- Provision the edge director using the edge ISO
- Provision the edge nodes off the edge director
The following sections explain each of the preceding steps in further detail:
2.1.1 Defining The Edge Site
Edge sites are defined in Bright Cluster Manager using the Ncurses-based `cm-edge-setup`. This section goes through a `cm-edge-setup` session on the central head node that creates an edge site definition.
Running `cm-edge-setup` In Interactive Mode
Running `cm-edge-setup` without any options brings up the main edge setup screen (figure 2.2):

A new edge site can be created by entering a series of parameters (figure 2.3):
Please enter site details below. Not specifying a site secret will require certificate requests to be processed manually from the head node when the edge nodes are booted for the first time. Entering a site secret will require the site secret to be entered on the console of the edge director when it is booted for the first time. Alternatively, the site secret can also be added to the edge director's installation media to allow for non-interactive installation.
Figure 2.3: Entry of edge site parameters
A secret for the site should be entered (figure 2.4):
Figure 2.4: Entry of site secret
The site secret entry is reconfirmed by the administrator in a subsequent entry screen. The next screen after that asks how the external network for the edge director should be set (figure 2.5):
Figure 2.5: Selection of edge external network
- If networks defined as type EdgeExternal are found, then these networks are presented for selection (figure 2.6).
- If no networks of type EdgeExternal are found, then the only option is to create a new network (figure 2.7).
Similarly to the external network configuration for the edge director, a screen comes up next that asks how the internal network for the edge director should be set (figure 2.8):
- If networks defined as type `EdgeInternal` are found, then these networks are presented for selection.
- If no networks of type `EdgeInternal` are found, then the only option is to create a new network (figure 2.9).
The next screen allows edge director parameters to be entered:
The edge nodes can now be configured. Individual nodes (figure 2.11), or a range of nodes (figure 2.12), can be configured:
Running `cm-edge-setup` In Batch Mode
In the preceding section `cm-edge-setup` was used interactively to define edge sites. It can also be used non-interactively for the same purpose. This is done by saving a site configuration file at the end of the interactive setup. This is a YAML file, and it can be used to re-create the edge sites, or it can be used as a template to create new sites.
Example
```
[root@headnode ~]# cat /root/cm/edge/ams-west.yaml
edge_sites:
- address: Kings
  admin_email: admin@bright
  city: Amsterdam
  contact: admin
  country: Amsterdam
  edge_director:
    category: edge-director
    hostname: ams-west-director
    interface_name_external: eth0
    interface_name_internal: eth1
    ip_address_external: 10.2.125.125
    ip_address_internal: 10.161.255.254
    mac_address: '
  edge_nodes:
  - category: edge-director
    hostname: ams-west-node001
    interface_name_internal: eth1
    ip_address_internal: 10.161.0.1
    mac_address: '
  external_network:
    base_address: 10.2.0.0
    domainname: brightcomputing.com
    name: externalnet
    netmask_bits: 16
  internal_network:
    base_address: 10.161.0.0
    domainname: ams-west-internal.cluster
    name: ams-west-internal
    netmask_bits: 16
  notes: '
  secret: xxxxxx
  site_name: ams-west
meta:
  command_line: /cm/local/apps/cm-setup/bin/cm-edge-setup
  date: Thu Dec 6 11:05:29 2018
  generated_with: Edge
  hostname: smcluster
```
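Before re-using such a file in batch mode, its structure can be sanity-checked programmatically. The sketch below is illustrative only: the validator and its required-key lists are not part of Bright tooling, and the hard-coded `site` dictionary simply mirrors a few fields of the example file.

```python
# Illustrative sketch: sanity-check one edge-site entry from a batch-mode
# site definition. The required keys below are an assumption for this
# example; they are not an official Bright Cluster Manager schema.
REQUIRED_SITE_KEYS = {"site_name", "edge_director", "internal_network"}
REQUIRED_DIRECTOR_KEYS = {"hostname", "interface_name_internal",
                          "ip_address_internal"}

def missing_keys(site: dict) -> list:
    """Return the required keys that are absent from one edge-site entry."""
    missing = [k for k in sorted(REQUIRED_SITE_KEYS) if k not in site]
    director = site.get("edge_director", {})
    missing += ["edge_director." + k
                for k in sorted(REQUIRED_DIRECTOR_KEYS) if k not in director]
    return missing

# A few fields mirroring the ams-west example above
site = {
    "site_name": "ams-west",
    "edge_director": {
        "hostname": "ams-west-director",
        "interface_name_internal": "eth1",
        "ip_address_internal": "10.161.255.254",
    },
    "internal_network": {"base_address": "10.161.0.0", "netmask_bits": 16},
}
print(missing_keys(site))  # → []
```

In a real workflow the dictionary would come from loading the YAML file, rather than being written out by hand as above.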
2.1.2 Adding Nodes To Pre-existing Edge Sites With cmsh
Edge nodes can also be added to an existing edge site. This is typically required when no edge nodes were added during `cm-edge-setup`, or if the site is being expanded by adding more nodes. The addition can be done in the usual way, which is to first add the required node object with `cmsh` (section 2.5.3 of the Administrator Manual). The nodes are then added to the relevant edge site(s).
Adding nodes to an edge site can be done as follows:
Example
```
[root@smcluster ~]# cmsh
[smcluster]# edgesite
[smcluster->edgesite]# use ams-west
[smcluster->edgesite->ams-west]# append nodes edge-node005 edge-node006
[smcluster->edgesite->ams-west]# commit
[smcluster->edgesite->ams-west]# list
Name (key) Director Nodes
------------- --------------------- ------------------------------------------
ams-west ams-west-director ams-west-node001,ams-west-director,edge-node005
edge-node006
```
2.1.3 Viewing Edge Sites Using cmsh
Edge sites can be viewed from the `edgesite` mode of `cmsh`.
Example
```
[root@smcluster ~]# cmsh
[smcluster]# edgesite
[smcluster->edgesite]# list
Name (key) Director Nodes
------------- --------------------- ------------------------------------------
ams-west ams-west-director ams-west-node001,ams-west-director
edge-node006
```
```
[smcluster->edgesite]# show ams-west
Parameter Value
---------------- ---------------------
Address Kings
Administrator e-mail admin@bright
City Amsterdam
Contact admin
Country Amsterdam
Director ams-west-director
Name ams-west
Nodes ams-west-node001,ams-west-director
Notes
Revision
Secret ***********
```
2.1.4 Viewing Edge Sites Using Bright View
Edge sites can also be viewed via the clickpath Datacenter → Infrastructure → Edge Sites (figure 2.13). Properties of an edge site can be managed via editing a particular edge site.
2.1.5 Create Edge ISO
The next step in the deployment is to create the edge ISO on the head node. Typically, the edge ISO is configured so that the edge director boots from it the first time, and carries out a FULL install using the ISO as the source of the files that are installed on the edge director. The edge director is also configured to allow a boot from the hard drive.
If booting from the ISO after the first time, and the partitions on the edge director have not changed, then a SYNC install is carried out against the central head node. If booting after the first time without an ISO, then the edge director simply boots from its local hard drive, and no files are synced with the central head node.
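The boot-time behavior just described can be summarized as a small decision function. This is an illustrative sketch of the documented behavior only; the function name and parameters are invented, and this is not actual Bright Cluster Manager code.

```python
def edge_director_install_type(booting_from_iso: bool,
                               first_install: bool,
                               partitions_changed: bool) -> str:
    """Illustrative summary of the documented edge director boot behavior.

    Returns which kind of install the director performs:
      - "FULL":  first boot from the ISO, or the disk partitions changed
      - "SYNC":  later boots from the ISO with unchanged partitions
                 (local files are synced against the central head node)
      - "LOCAL": no ISO present; boot from the local drive, nothing synced
    """
    if not booting_from_iso:
        return "LOCAL"
    if first_install or partitions_changed:
        return "FULL"
    return "SYNC"

print(edge_director_install_type(True, False, False))  # → SYNC
```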
There are two ways to create the edge ISO:
1. The edge ISO can be created with a site-specific auto-generated wrapper script. This is the recommended approach. When an edge site is created, CMDaemon on the head node creates a wrapper script at /var/spool/cmd/edge/create-<site-name>-iso.sh. The wrapper script then provides all the site-specific information that needs to be provided for the edge node installer.
Example
```
[root@headnode ~]# cat /var/spool/cmd/edge/create-dell-edge-iso.sh
#!/bin/bash
#
# Written by CMDaemon, do not edit.
# Copy or freeze this file to make modifications.
#
export CMD_EDGE_SITE_SECRET="edge site secret"
/cm/local/apps/cluster-tools/bin/create-edge-iso
```
2. Alternatively, the edge ISO can be created manually, by setting the options to create-edge-iso:
Example
```
[root@headnode ~]# create-edge-iso --help
                       [-e EDGEDIRECTORIP] [-m HEADNODEIP] [-g DEFAULTGATEWAY]
                       [-k KERNELIMAGE] [-i IMAGENAME] [-s] [-p PATHTOISOFILE]
                       [-n]

Create edge iso

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Turn on verbose logging
  -d, --debug           Turn on debug mode, iso work directory will not be
                        cleaned up
  -c, --includecmshared
                        Include /cm/shared on iso
  -f EDGEINTERFACE, --edgeinterface EDGEINTERFACE
                        Name of interface on edge node
  -e EDGEDIRECTORIP, --edgedirectorip EDGEDIRECTORIP
                        IP[/Netmask bits] of edge director
                        If Netmask bits is not specified, defaults to /16
  -m HEADNODEIP, --headnodeip HEADNODEIP
                        IP[:port] of head node
                        If port is not specified, defaults to :8081
  -g DEFAULTGATEWAY, --defaultgateway DEFAULTGATEWAY
                        Gateway for edge director to reach central head node
  -k KERNELIMAGE, --kernelimage KERNELIMAGE
                        Name of image whose kernel will be used for booting iso
  -i IMAGENAME, --imagename IMAGENAME
                        Name of software image to include on iso
  -s, --sitesecret      Prompt user to enter Edge site secret
  -p PATHTOISOFILE, --pathtoisofile PATHTOISOFILE
                        Path to iso file name
  -n, --donotstoresecret
                        Inform node-installer not to store the secret on the
                        edge director
```
2.1.6 Edge ISO Node Installer
The edge ISO is used to provision the edge director. The node installer displays the following screens when booting from the edge ISO:
Figure 2.14: Edge node-installer ISO boot menu
Figure 2.15: Edge node-installer select interface
Figure 2.16: Edge director IP Static/DHCP selection
Figure 2.17: Edge director IP address and netmask
Figure 2.18: Central head node IP address and port
2.1.7 Edge Directors
Edge directors can be provisioned from the head node, but are normally provisioned using the software image on the edge ISO/USB. This means:
- The ISO/USB should have a software image included in it
- The ISO/USB should have /cm/shared included in it
If the edge director is booting from the ISO/USB, it means that:
- There is a minimal overhead when only updates, rather than an entire filesystem, are synced from the head node to the edge director
- A FULL install of the edge director only takes place during the first installation of that director, or if the director disk partitions have changed.
- If the edge director has already been installed previously, and its disk partitions are unchanged, then a SYNC install is carried out, so that local files on the edge director can get updated against the head node
If there is no ISO/USB available to the edge director, then the director simply boots off its local drive, and no SYNC install is carried out. An explicit imageupdate can however be carried out afterwards when needed, if connectivity is available, to update the software image.
Once the edge director is in the UP state, it is responsible for the following local operations:
- Ramdisk creation for the edge nodes
- Power control for itself and the edge nodes
- Device state (UP, DOWN) check via ICMP ping to the edge nodes
- Monitoring for the edge nodes
2.1.8 Edge Nodes
Edge nodes must PXE boot off the edge internal network. The edge director provisions edge nodes in the same way that the head node provisions regular nodes.
3 Installing Slurm To An Edge Site
On some edge sites there may be a need to run a workload manager.
The Slurm workload manager can be run on an edge site if the cluster is prepared and software installed as in sections 3.1 and 3.2.
3.1 Preparation
1. The edge director must be UP according to cmsh or Bright View.
2. There must be no existing MySQL or MariaDB installation on the edge director, as it would conflict with the automatic installation of cm-mariadb.
3.2 Installation
- The Bright Cluster Manager script cm-wlm-setup (section 7.3 of the Administrator Manual) is then run on the head node.
- At the Select installation type screen, the edge name should be selected, and the edge director should then be specified as the only server role node.
- By default there are no user home directories on the edge director or edge nodes. These must therefore be mounted or created, otherwise jobs cannot run on the edge nodes. This is true for all WLMs running on edge nodes.
A Java implementation
of the RS1 algorithm using SQL
Robert H. Warren Julia A. Johnson
warren@cs.uregina.ca julia@cs.laurentian.ca
Department of Computer Science
University of Regina
Regina, Saskatchewan
Canada S4S 0A2
Technical Report TR-2000-03
ISBN 0-7731-0399-6
Abstract
This paper describes a Java implementation of the RS1 Rough Sets algorithm that leverages the use of a Data Base Management System (DBMS) with Structured Query Language (SQL). DBMS use ensures that large information tables can be processed by the algorithm, while keeping the computational resource needs of the Java class low. The algorithm is implemented within a single Java class, making it ideal not only for Rough Set research, but as an add-on to non-Rough-Set projects.
Keywords Java, Rough Set Implementation, Database Management System
1 Introduction
RS1 is a Rough Sets induction algorithm developed by Wong, Ziarko and Ye in 1986 for generating decision rules based on a table of inconsistent information [1]. A Java implementation of the RS1 algorithm is described. The algorithm logic was written in the Java language, while the actual data manipulation was performed through the Java DataBase Connectivity (JDBC) package. The relational database used to manage the actual data set was the open source Postgresql database package which supports the SQL query language. This Java implementation is being developed in the context of a diversity of applications of Rough Sets techniques for dealing with inconsistent and incomplete knowledge bases [8, 6, 5, 4, 3, 2, 7].
2 Objective
The objective of this research is the implementation of an inductive Rough Sets algorithm to serve as a kernel from which additional Rough Sets research can be performed. While much work has been done, commercially available implementations of RS algorithms are few. Most implementations are experimental and monolithic (e.g., in the case of Rosetta [9], the implementation is a closed package whose capabilities cannot be extended). In this implementation, the processing is done within a single class file, thus making it relatively simple to integrate into other applications.
Rough Sets is based on set theory and requires vast amounts of set operations which DBMS are ideally suited to perform. Traditionally, these set operations were performed locally within the algorithm development environment. This ensured a localized treatment of the set information, sometimes at the expense of resource efficiency, as the mechanics of data manipulation took second place to the primary objective of implementing Rough Sets.
The novelty of our work lies in the use of a DBMS to perform the data processing functions of the Rough Sets algorithm. In most implementations[9], DBMS use is limited to the import or export of data to and from the Rough Set implementation. This is unfortunate, because DBMSs are designed to handle large volumes of data and efficiently manipulate them based on both queries and constraints. Because the DBMS access is done through the JDBC package, the overhead of accessing information tables is kept at a minimum while ensuring that a maximum number of data sources can be used. This research is meant as a proof of concept as to the use of Data Base Management Systems (DBMS) with Rough Set algorithms.
3 RS1 algorithm description
The RS1 algorithm functions by incrementally selecting a series of attributes around which to “pivot”, generating rule sets of increasing complexity until all examples in the universe are covered.
At first, each attribute \((A_k)\) is individually processed, and for each of its possible values \((V_{ij})\), a subset \((S_{ij})\) of the universe \((E)\) is generated (1). Each of these subsets can be part of the Upper Bound \((\Upsilon)\), the Lower Bound \((\Sigma)\), or part of neither.
\[
S_{ij} = \text{subset}(E, A_i = V_{ij}), \quad i = 1, \ldots, m, \; j = 1, \ldots, n_i \tag{1}
\]
The set of all positive class examples is generated as a subset \((S_+)\). An attribute subset \((S_{ij})\) is part of the Upper Bound if it intersects this class subset (2), and part of the Lower Bound if it is wholly included within this class subset (3).
\[
S_{ij} \subseteq \Upsilon \iff S_{ij} \cap S_+ \neq \emptyset \tag{2}
\]
\[
S_{ij} \subseteq \Sigma \iff S_{ij} \subseteq S_+ \tag{3}
\]
Then a quality value represented by \(\alpha\) is generated for each attribute (4). The attribute with the largest value of \(\alpha\) then becomes the pivot attribute for the next iteration. The universe of possible elements is cleared of rows that are already covered by the rule set, using equation (5).
\[
\alpha = 1 - \frac{|\Upsilon - \Sigma|}{|E|} \tag{4}
\]
\[
E = E - [(E - \Upsilon) \cup \Sigma] \tag{5}
\]
Using the pivot attribute, the list of attributes is traversed again and new subsets are generated for each of the value combinations of pivot and attribute. The Lower and Upper bounds are again generated, and the attribute with the best \(\alpha\) is joined to the pivot, so that we now have a two-attribute pivot.
The process is repeated again, adding attributes to the pivot, until we either run out of attributes or the universe becomes empty.
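To make the pivot-selection step concrete, here is a small in-memory sketch of a single RS1 iteration over a toy table. It is illustrative only: the report's actual implementation is a Java class working through SQL, and the function name, row format, and toy data below are invented for this example.

```python
# Sketch of one RS1 pivot-selection step (equations (1)-(5)), done
# in memory rather than via SQL. Each row is a dict; "class" holds
# the decision attribute, and row indices stand in for elements of E.
def best_pivot(rows, attributes, positive_class):
    E = set(range(len(rows)))                           # universe as row ids
    S_pos = {i for i in E if rows[i]["class"] == positive_class}
    best = None
    for attr in attributes:
        upper, lower = set(), set()
        for value in {rows[i][attr] for i in E}:
            S_ij = {i for i in E if rows[i][attr] == value}   # eq. (1)
            if S_ij & S_pos:                                  # eq. (2)
                upper |= S_ij
            if S_ij <= S_pos:                                 # eq. (3)
                lower |= S_ij
        alpha = 1 - len(upper - lower) / len(E)               # eq. (4)
        if best is None or alpha > best[1]:
            best = (attr, alpha, upper, lower)
    attr, alpha, upper, lower = best
    E_next = E - ((E - upper) | lower)                        # eq. (5)
    return attr, alpha, E_next

rows = [
    {"size": "big",   "color": "red",  "class": "yes"},
    {"size": "big",   "color": "blue", "class": "yes"},
    {"size": "small", "color": "red",  "class": "no"},
    {"size": "small", "color": "blue", "class": "no"},
]
attr, alpha, remaining = best_pivot(rows, ["size", "color"], "yes")
print(attr, alpha)  # → size 1.0
```

Here `size` perfectly discriminates the positive class, so it wins the pivot selection with \(\alpha = 1\) and the remaining universe becomes empty after one pass.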
4 Implementation description
The implementation in Java is based on a slightly modified RS1 algorithm. The implementation is a restriction of the RS1 algorithm, as only one positive decision class is currently supported and a unique identifying attribute is needed. In order to optimize for a large, real-world application, a DBMS was used to store and retrieve the set information using SQL queries. The source code is not reviewed here for space considerations; instead, the implementation of set operations using SQL is examined, as this forms the cornerstone of the implementation.
4.1 A rationale for the use of SQL
Data Base Management Systems (DBMS) are fairly mature, robust and standardized systems. They are able to store and process large amounts of tabular data through the use of Structured Query Language (SQL).
The use of a DBMS greatly reduces the amount of code required to implement the RS1 algorithm because it allows the actual mechanics of information manipulation to be dispatched to the DBMS. Instead of fine-tuning the algorithm to the particulars of file I/O, we can rely on the engineering embedded within the DBMS to self-optimize the operations required to implement the RS1 algorithm.
Furthermore, offloading the table operations to the DBMS ensures that the memory consumption by the Rough Set algorithm will be low. Only the table and view names need to be kept in local memory by the Java class, the heavy I/O operations being handled by the DBMS. Most of these have built-in memory space management and internal query caching and optimization. This shelters the Java class from design decisions that are out of the scope of the Rough Sets abstraction, such as selecting an internal set storage method.
Finally, SQL is a sufficiently powerful language to support most set operations needed by a Rough Set algorithm, including the subset-of, intersection and union operations. It is relatively trivial to code these operations because SQL frees us from array and object-space considerations. Subsets are generated as temporary tables or views, which can be discarded to reduce storage space utilization.
4.2 Implementing set operations using SQL
Within the DBMS, an element of a set is represented as a row within a table and SQL queries are used to manipulate the set elements as desired. The universe is represented by a master table that contains the data that is to be processed by RS1.
In the algorithm described in Section 3, two basic types of operations need to be performed: the generation of sub-sets based on attribute-value constraints, and the set operators $\cap$ and $\subseteq$.
4.2.1 Generating sub-sets:
In order to generate the subsets needed in (1), the possible values of all attributes must be known. To do this, we use the SQL query listed in Example 1, from which we can obtain the possible values for an attribute. This is repeated for each attribute, enabling the RoughSet class to generate all possible value combinations that need to be verified.
**Example 1**
SELECT DISTINCT ATTRIB FROM TABLE
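As an illustration (not the paper's Java code), the same query can be exercised against an in-memory SQLite table standing in for the master table; the table and column names below are stand-ins.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE maintable (item INTEGER, hair TEXT, eyes TEXT)")
con.executemany("INSERT INTO maintable VALUES (?, ?, ?)",
                [(1, 'Dark', 'Blue'), (4, 'Red', 'Blue'), (5, 'Blond', 'Blue')])

# Example 1 applied to the hair attribute: one row per distinct value.
hair_values = sorted(v for (v,) in
                     con.execute("SELECT DISTINCT hair FROM maintable"))
# hair_values == ['Blond', 'Dark', 'Red']
```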
To generate the sets we could use nested sub-queries. However, some SQL implementations have only limited support for nested sub-queries, which would make portability an issue.
Instead, sub-sets can be generated from the data using either views or tables. Generating a subset as a table means creating a separate table into which rows are copied (Examples 2 and 3); this costs disk space and the time needed to copy the records. Because a view is a table that is actually a query over another table, no additional disk space is needed (Example 4). However, a performance penalty occurs because a query is run internally by the DBMS each time the view is accessed.
**Example 2**
CREATE TABLE TMP3976 () INHERITS (MAINTABLE)
**Example 3**
INSERT INTO TMP3976 SELECT * from MAINTABLE WHERE eyes='Blue' AND hair='Red'
**Example 4**
CREATE VIEW TMP3976 AS SELECT * FROM MAINTABLE WHERE eyes='Blue' AND hair='Red'
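The trade-off can be sketched with SQLite. Note that `INHERITS`, used in Example 2, is PostgreSQL-specific, so the stand-in below uses `CREATE TABLE ... AS` to obtain an equivalent copied subset; both variants yield the same rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE maintable (item INTEGER, hair TEXT, eyes TEXT)")
con.executemany("INSERT INTO maintable VALUES (?, ?, ?)",
                [(1, 'Dark', 'Blue'), (4, 'Red', 'Blue'), (8, 'Blond', 'Brown')])

# Table variant (cf. Examples 2-3): rows are physically copied.
con.execute("CREATE TABLE tmp3976 AS SELECT * FROM maintable "
            "WHERE eyes='Blue' AND hair='Red'")
# View variant (cf. Example 4): no copy; the query re-runs on each access.
con.execute("CREATE VIEW vw3976 AS SELECT * FROM maintable "
            "WHERE eyes='Blue' AND hair='Red'")

table_rows = con.execute("SELECT item FROM tmp3976").fetchall()
view_rows = con.execute("SELECT item FROM vw3976").fetchall()
# both contain only the single matching row, item 4
```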
4.2.2 Coding set functions:
After the sub-sets have been generated, both the intersection function and the inclusion functions need to be implemented in order to determine if (3) or (2) occur.
In the case of (2), the result needed is the presence of data in the intersection between two sets. This is implemented in Example 5, where the number of elements within the intersection is counted and returned to the Java class.
**Example 5**
SELECT COUNT(item) FROM TABLE1 WHERE item IN (SELECT item FROM TABLE2)
A variation of this query is used in Example 6 to implement (3). In order for TABLE1 to be included in TABLE2, all of its elements must be part of TABLE2; therefore, the count returned by the SQL query must be 0.
**Example 6**
SELECT COUNT(item) FROM TABLE1 WHERE item NOT IN (SELECT item FROM TABLE2)
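Both queries can be checked against small stand-in tables. With TABLE1 = {4, 5} and TABLE2 = {4, 5, 7}, Example 5 returns a non-zero count (non-empty intersection) and Example 6 returns zero (inclusion holds).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (item INTEGER)")
con.execute("CREATE TABLE table2 (item INTEGER)")
con.executemany("INSERT INTO table1 VALUES (?)", [(4,), (5,)])
con.executemany("INSERT INTO table2 VALUES (?)", [(4,), (5,), (7,)])

# Example 5: non-zero count -> the intersection is non-empty.
(in_both,) = con.execute(
    "SELECT COUNT(item) FROM table1 "
    "WHERE item IN (SELECT item FROM table2)").fetchone()

# Example 6: zero count -> every table1 element is in table2 (inclusion).
(outside,) = con.execute(
    "SELECT COUNT(item) FROM table1 "
    "WHERE item NOT IN (SELECT item FROM table2)").fetchone()
# in_both == 2, outside == 0
```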
5 Testing of algorithm
The implementation was tested on two sample data sets, and the output was compared with hand-derived expected results. The two data sets were those presented in [1], which have been reproduced in tables 1 and 2. The output results are provided in the remainder of this section. A partial trace of the algorithm for the data presented in table 1 is provided in Appendix A.
<table>
<thead>
<tr>
<th>item</th>
<th>height</th>
<th>hair</th>
<th>eyes</th>
<th>class</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Short</td>
<td>Dark</td>
<td>Blue</td>
<td>-</td>
</tr>
<tr>
<td>2</td>
<td>Tall</td>
<td>Dark</td>
<td>Blue</td>
<td>-</td>
</tr>
<tr>
<td>3</td>
<td>Tall</td>
<td>Dark</td>
<td>Brown</td>
<td>-</td>
</tr>
<tr>
<td>4</td>
<td>Tall</td>
<td>Red</td>
<td>Blue</td>
<td>+</td>
</tr>
<tr>
<td>5</td>
<td>Short</td>
<td>Blond</td>
<td>Blue</td>
<td>+</td>
</tr>
<tr>
<td>6</td>
<td>Tall</td>
<td>Blond</td>
<td>Brown</td>
<td>-</td>
</tr>
<tr>
<td>7</td>
<td>Tall</td>
<td>Blond</td>
<td>Blue</td>
<td>+</td>
</tr>
<tr>
<td>8</td>
<td>Short</td>
<td>Blond</td>
<td>Brown</td>
<td>-</td>
</tr>
</tbody>
</table>
Table 1: Test Data Set 1
<table>
<thead>
<tr>
<th>id</th>
<th>weight</th>
<th>sex</th>
<th>class</th>
</tr>
</thead>
<tbody>
<tr>
<td>E1</td>
<td>Heavy</td>
<td>F</td>
<td>+</td>
</tr>
<tr>
<td>E2</td>
<td>Heavy</td>
<td>M</td>
<td>+</td>
</tr>
<tr>
<td>E3</td>
<td>Medium</td>
<td>M</td>
<td>-</td>
</tr>
<tr>
<td>E4</td>
<td>Medium</td>
<td>F</td>
<td>-</td>
</tr>
<tr>
<td>E5</td>
<td>Light</td>
<td>M</td>
<td>+</td>
</tr>
<tr>
<td>E6</td>
<td>Light</td>
<td>M</td>
<td>+</td>
</tr>
<tr>
<td>E7</td>
<td>Light</td>
<td>F</td>
<td>+</td>
</tr>
<tr>
<td>E8</td>
<td>Light</td>
<td>F</td>
<td>-</td>
</tr>
<tr>
<td>E9</td>
<td>Light</td>
<td>M</td>
<td>-</td>
</tr>
<tr>
<td>E10</td>
<td>Light</td>
<td>F</td>
<td>-</td>
</tr>
</tbody>
</table>
Table 2: Test Data Set 2
5.1 Results after processing table 1
The following results were consistent with the hand-derived results for the information contained in table 1.
height='Short' → class='+' covers 3 row(s) (1 positive row(s)).
height='Tall' → class='+' covers 5 row(s) (2 positive row(s)).
hair='Blond' → class='+' covers 4 row(s) (2 positive row(s)).
eyes='Blue' → class='+' covers 5 row(s) (3 positive row(s)).
Table 3: Upper bound rules for Table 1.
hair='Blond' ∧ eyes='Blue' → class='+' covers 2 positive row(s).
hair='Red' → class='+' covers 1 positive row(s).
Table 4: Lower bound rules for Table 1, without the Upper bound.
5.2 Results after processing table 2
The following results were consistent with the hand-derived results for the information contained in table 2.
weight = 'Light' → class = '+' covers 6 row(s) (3 positive row(s)).
sex = 'F' → class = '+' covers 5 row(s) (2 positive row(s)).
sex = 'M' → class = '+' covers 5 row(s) (3 positive row(s)).
Table 5: Upper bound rules for Table 2.
weight = 'Heavy' ∧ sex = 'F' → class = '+' covers 1 positive row(s).
weight = 'Heavy' ∧ sex = 'M' → class = '+' covers 1 positive row(s).
Table 6: Lower bound rules for Table 2, without the Upper bound.
6 Conclusion
The implementation of the RS1 Rough Sets Inductive Algorithm with a Database Management System was successful. The expected results presented in [1] corresponded to those returned by our RS1 Java implementation, and are listed above in Sections 5.1 and 5.2.
A Sample program trace
/*** Initialise ***/
RoughSet: Connecting to database.
RoughSet: Loaded column: [item, height, hair, eyes, class, ]
RoughSet: Checking for Class Column [class].
RoughSet: Checking for ID Column [item].
/*** Generate a sub-set of all positive class rows. ***/
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP7574 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP7574 SELECT * FROM roughtest WHERE class='+'].
/*** Find first pivot. ***/
generateRules: Working on Attribute [height].
/*** For each possible value of the attribute, generate a subset of rows. ***/
/*** For each subset decide if it belongs to the upper or lower bound. ***/
generateRules: Working on Attribute [height] with value [Short].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP1311 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP1311 SELECT * FROM roughtest WHERE height='Short'].
/*** Generate alpha based on upper and lower bound. ***/
generateRules: Attribute [height] has an alpha value of [0.0].
generateRules: This Alpha of [0.0] is better than best Alpha of [-1.0].
/*** Try with next attribute. ***/
generateRules: Working on Attribute [hair].
/*** For each possible value of the attribute, generate a subset of rows. ***/
/*** For each subset decide if it belongs to the upper or lower bound. ***/
generateRules: Working on Attribute [hair] with value [Blond].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP1991 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP1991 SELECT * FROM roughtest WHERE hair='Blond'].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP1866 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP1866 SELECT * FROM roughtest WHERE hair='Dark'].
generateRules: Working on Attribute [hair] with value [Red].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP7007 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP7007 SELECT * FROM roughtest WHERE hair='Red'].
/*** Generate alpha based on upper and lower bound. ***/
generateRules: Attribute [hair] has an alpha value of [0.5].
generateRules: This Alpha of [0.5] is better than best Alpha of [0.0].
/*** Try with next attribute. ***/
generateRules: Working on Attribute [eyes].
/*** For each possible value of the attribute, generate a subset of rows. ***/
/*** For each subset decide if it belongs to the upper or lower bound. ***/
generateRules: Working on Attribute [eyes] with value [Blue].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP9680 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP9680 SELECT * FROM roughtest WHERE eyes='Blue'].
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP1701 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP1701 SELECT * FROM roughtest WHERE eyes='Brown'].
/*** Generate alpha based on upper and lower bound. ***/
generateRules: Attribute [eyes] has an alpha value of [0.375].
/*** End of first iteration, dump the pivot attribute’s rules ***/
/*** to the results table and prune the universe data table. ***/
generateRules: Best Attribute was [hair] with an Alpha of [0.5], storing lower bound and recursing.
storeRules: Storing lower bound rule(s) over [1] columns from table [TMP6874] to table [RULEL].
storeRules: Will send [SELECT DISTINCT hair, class FROM TMP6874] as SQL Soup.
storeRules: Will send [INSERT INTO RULEL ( hair, class) VALUES ('Red', '+')] as SQL Soup.
storeRules: Storing upper bound rule(s) over [1] columns from table [TMP6874] to table [RULEH].
storeRules: Will send [SELECT DISTINCT hair, class FROM TMP6874] as SQL Soup.
storeRules: Will send [INSERT INTO RULEH ( hair, class) VALUES ('Red', '+')] as SQL Soup.
storeRules: Storing upper bound rule(s) over [1] columns from table [TMP2093] to table [RULEH].
storeRules: Will send [SELECT DISTINCT hair, class FROM TMP2093] as SQL Soup.
storeRules: Will send [INSERT INTO RULEH ( hair, class) VALUES ('Blond', '+')] as SQL Soup.
storeRules: Will send [INSERT INTO RULEH ( hair, class) VALUES ('Blond', '-')] as SQL Soup.
(...)
storeRules: Will send [INSERT INTO RULEH ( hair, class) VALUES ('Red', '+')] as SQL Soup.
/*** Generate a sub-set of all positive class rows. ***/
createSubSet: Creating table with SQL SOUP [CREATE TABLE TMP4285 () INHERITS (roughtest)].
createSubSet: Will insert rows using SQL SOUP [INSERT INTO TMP4285 SELECT * FROM roughtest WHERE class='+'].
generateRules: Created table [TMP4285] as positive reference class.
generateRules: Pivoting on at least [hair].
/*** Using hair as a pivot, repeat process for ***/
/*** both left-over attributes and pick best alpha. ***/
generateRules: Working on Attribute [height].
(...)
generateRules: Attribute [height] has an alpha value of [0.5].
generateRules: This Alpha of [0.5] is better than best Alpha of [-1.0].
(...)
generateRules: Attribute [eyes] has an alpha value of [1.0].
generateRules: This Alpha of [1.0] is better than best Alpha of [0.5].
generateRules: Best Attribute was [eyes] with an Alpha of [1.0], storing lower bound and recursing.
/*** Second best attribute is eyes, dump the pivot ***/
/*** attribute’s rules to the results table and prune the Universe data table. ***/
generateRules: No rows left in universe, exit.
*** Finished processing ***
References
Mitigating the obsolescence of quality-specification models in service-based systems
Romina Torres
Universidad Tecnica Federico Santa Maria
Chile
romina@inf.utfsm.cl
Nelly Bencomo
INRIA Paris-Rocquencourt
France
nelly@acm.org
Hernan Astudillo
Universidad Tecnica Federico Santa Maria
Chile
hernan@acm.org
Abstract—Requirements-aware systems have addressed the need to reason about uncertainty at runtime to support adaptation decisions. Unfortunately, the RE research community has not yet addressed the uncertainty about the QoS of services generated by the market. Currently, requirements of service-based systems (SBS) are transformed into specification models using the domain knowledge about the market known at design time. During runtime, the market, and therefore the domain knowledge, can change, resulting in the obsolescence of the specification models. Obsolete specification models may make the system miss opportunities for self-adaptation to improve its performance. In this paper, we argue that QoS requirements should be specified in a way that avoids their future obsolescence. We propose an approach to address the uncertainty associated with QoS due to the unforeseen behavior of the market during execution. We propose the use of abstract specification models of QoS. During runtime, these abstract specification models are transformed into concrete specification models to determine whether the requirements are still satisfied by the current service configuration, and consequently to execute an adaptation if needed. We have applied our approach in different case studies. Our results so far show that in 100% of the cases, SBS using our approach are able to detect unsatisfied requirements during runtime and therefore trigger suitable adaptations.
Keywords—Requirements-awareness, Quality of Service, service-based systems, dynamically adaptive systems, requirements model, model@runtime.
I. INTRODUCTION
The runtime representations of requirements [1] presented by requirements-aware systems [2] act as a baseline to drive and reason about dynamically adaptive systems (DAS). Those systems are capable of dealing with different kinds of uncertainty [3]: they reason over their requirements at runtime, monitor their satisfaction, and trigger corrective adaptations when deviations are detected between the system’s runtime behaviour and the requirements model.
During the specification of a system, the requirements \( R \) are transformed into a specification \( S \) supported by domain knowledge \( K \) [4]. According to Zave and Jackson [4], the specification \( S \) and the relevant domain knowledge \( K \) must be sufficient to guarantee that the requirements \( R \) are satisfied:
\[
S, K \vdash R \tag{1}
\]
During execution, it is possible to determine whether the requirements are satisfied by monitoring the deviations between the system’s behavior and the specification models. The latter is valid only if \( K \) has not changed considerably during execution since the specification \( S \) was defined. For the specific case of service-based systems (SBS), this assumption cannot always be guaranteed, given the unprecedented degree of change in the service market [5]. Even if the required functionalities of a SBS do not change, the quality specifications which constrain those functionalities are likely to change over time, because they are highly dependent on the characteristics of the market represented by \( K \). In such systems, the quantifiable quality specifications \( S \) are obtained by observing what the service market \( K \) is offering.
Unfortunately, during execution the ever-changing market may render \( S \) obsolete, making it impossible for systems to determine whether their requirements are being satisfied by the current configuration of services. This is a problem when the satisfaction of the specifications \( S \) is used by systems as the basis for driving their adaptations: the system can miss adaptation opportunities because it is not aware when requirements become unsatisfied.
In this work, we propose an approach that supports systems in addressing the uncertainty of the QoS of the service market by mitigating the obsolescence of the specifications at runtime, in order to avoid degrading the adaptation capability of the specification model.
The rest of the article is organized as follows: Section II introduces a motivational example and the background needed to understand the following sections; Section III presents our approach; Section IV explains the architecture which supports this approach; Section V describes the current dataset and the experiments, and discusses their results; Section VI contrasts related proposals; and Section VII concludes the paper and outlines future work.
II. MOTIVATIONAL SCENARIO
Consider the following motivational scenario: our client requires a service-based application that emails, as fast as possible, the city and state for the user’s current location. To build such an application, the architect needs 1) to transform, at design time, the requirements \( R \) into a specification model \( S \) using the current offering of the service market \( K \) and then, 2) at binding time, to select from the service market a proper architecture configuration \( C \) (service composition) which satisfies the model, as Figure 1 shows. The process to build the specification model \( S \) is as follows:
1) **From requirements to software requirements.** From the statement, the architect identifies three software requirements (\( SRs \)): \( SR_1 \), a service capable of determining the location of an IP address; \( SR_2 \), a service capable of returning the state and the city given either a zip code or a location; and \( SR_3 \), a service capable of sending emails given an email address and a text. At runtime, each software requirement will be implemented by one service or a composition of services.
2) **Prioritizing software requirements.** Suppose that, in this example, the \( SRs \) are prioritized as equally important.
3) **From quality requirements to quality specifications.** From the statement, the architect identifies the quality requirements and then how they constrain each \( SR \).
In this particular example, the architect identifies from the statement the quality requirement “as fast as possible”. The architect transforms the quality constraint of \( SR_3 \) by observing the current and relevant offering of the service market, which is in this case the measurements of the response time of the services capable of performing the functionality \( i \) required for \( SR_3 \). Suppose the architect decides that, for services providing functionality \( i \),
- “fast” are those services with \( \text{response time} \leq 100 \) milliseconds,
- “not fast” are those services with \( \text{response time} \geq 200 \) milliseconds, and
- for those services with \( 100 \leq \text{response time} \leq 200 \) ms there is, as Figure 2 shows, a function which measures their proportional “fast” degree.
Analogously, the architect defines for \( SR_1 \) a range between \([10, 50]\) and for \( SR_2 \) a range between \([40, 100]\).
As we can see, the numerical range which represents “fast” differs depending on which part of the service market (which kind of functionality) is relevant.
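The “fast” degree for \( SR_3 \) described above can be written as a small function. This is an illustrative sketch: the linear middle segment is an assumption consistent with the proportional degree shown in Figure 2.

```python
def fast_degree(response_time_ms):
    """Membership degree of a service in the "fast" class for SR_3 (sketch)."""
    if response_time_ms <= 100:        # "fast"
        return 1.0
    if response_time_ms >= 200:        # "not fast"
        return 0.0
    # proportional degree between the two thresholds (cf. Figure 2)
    return (200 - response_time_ms) / 100
```

For instance, the 80 ms service of the first scenario is fully “fast” (degree 1.0), while the degraded 230 ms service is not (degree 0.0).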
4) **Prioritizing quality specifications.** For each software requirement, quality specifications are prioritized.
**Figure 1. Building the model at design time to drive adaptations at runtime**
**Figure 2. Satisfaction function of the response time quality specification by observing the current offering \( K \)**
**Figure 3. Partial view of the specification model at design time**
Figure 3 partially shows the specification model \( S \) built at design time which is used to obtain an initial configuration \( C \) as well as to trigger the needed adaptations at runtime, when \( C \) ceases to satisfy the model \( S \).
Suppose that, for this particular scenario, the architect selects at runtime an initial service composition \( C = \{ s_u, s_v, s_w \} \) which maximizes the satisfaction of the particular model \( S \) at time \( t \). Suppose now that, at time \( t + x \) there
is enough evidence in $K$ that the service $s_u$ has repeatedly dropped its QoS, increasing its average response time from 80 to 230 milliseconds. Then the system must trigger an adaptation (see Figure 4) in order to replace the offending service or, in some cases, the complete composition. Suppose in this case that the configuration $C$ is replaced by $C' = \{s_u, s_v, s_o\}$, where the response time of service $s_o$ is 91 milliseconds.
Suppose now a second scenario. The response-time measurements of the functionally-equivalent services offering functionality $i$ required to implement $SR_3$ have, in general, decreased: more than 75% of the services now have a response time $\leq 107$ milliseconds. The market $K$ relevant to this requirement has drastically changed, which may change the architect’s perception of what “fast” means for this kind of service (e.g. the architect could change the specification from [100,200] to [20,50] milliseconds). Therefore, if the specification model $S$ is not updated in this case, then each time the assumptions under which $S$ was built become falsified ($K$ has drastically changed), the model $S$ itself becomes obsolete and unable to support the SBS in driving its adaptation. Unfortunately, $K$ is continuously and drastically changing, because service providers compete by offering services with similar functionality but different quality and cost attributes [5] [6].
This new kind of dynamism of the service market [5] makes it unfeasible 1) for humans to manually keep their models aware of the market, and 2) for SBS to automatically drive their adaptations under these conditions.
III. PROPOSAL: MITIGATING THE OBSOLESCENCE OF THE SPECIFICATION MODEL
In this section, we present our approach to mitigate the obsolescence of the specification model $S$ at runtime, which drives the adaptation of SBS under an ever-changing market.
We propose to relieve architects from the arduous task of transforming requirements $R$ into measurable specifications $S$, as well as of keeping $R$ synchronized with $S$ while the service market $K$ evolves. Instead, we encourage architects to transform $R$ into an abstract specification model $S^*$ by using “linguistic” variables [7] rather than numerical ones, because the latter are more prone to obsolescence.
Figure 5 schematically shows an overview of our approach, which consists of several subprocesses. Whenever the market has significantly changed, the first subprocess (area 1) is in charge of generating a new view of the knowledge domain $K_T$; this process also provides online the measurements of the services $K_t$. The second subprocess (area 2) allows each client to define an abstract specification model $S^*$ from the requirements $R$. Given this abstract specification $S^*$, our approach is capable of automatically generating a concrete specification model $S$ using the current knowledge domain $K_T$ (subprocess marked as area 3) and, secondly, of driving the adaptation whenever there is enough evidence that the current configuration $C$ no longer satisfies the specification model $S$ (subprocess marked as area 4). In the following sections each subprocess is explained in detail.
A. Subprocess 1: Obtaining the relevant knowledge domain at runtime
Let \( CS_i = \{s_1^i, s_2^i, ..., s_n^i\} \) be a functionally-equivalent service set, comprising \( n_i \geq 1 \) concrete services that provide the same functionality \( i \) as an abstract service \( sa_i \), with \( 1 \leq i \leq I \) [8]. Let \( Q = \{q_1, ..., q_M\} \) be the set of quality attributes which allow functionally-equivalent services to be distinguished. Let \( K \) be the service market, composed of all the functionally-equivalent service sets.
We assume that for each functionally-equivalent service set \( CS_i \), it is possible to periodically obtain the measurement of each quality attribute of each member service. Moreover, the services can be ordered according to each of these quality attributes, and they can be categorized into five overlapping groups which classify them comparatively. These groups are the linguistic variables \( LVs \), which in this work we assume to be \( LV^{[i]} = \{\text{“poor”, “fair”, “good”, “very good”, “excellent”}\} \). Each linguistic variable is a fuzzy set denoted by \( \mu \) with a triangular shape, whose support endpoints \( \alpha_1, \alpha_2 \) and peak \( \alpha_M \) are calculated using \( K_T \). \( \mu \) is defined as follows:
\[
\mu(x) = \begin{cases}
\dfrac{x - \alpha_1}{\alpha_M - \alpha_1} & \text{if } \alpha_1 \leq x \leq \alpha_M \\[4pt]
\dfrac{\alpha_2 - x}{\alpha_2 - \alpha_M} & \text{if } \alpha_M \leq x \leq \alpha_2 \\[4pt]
0 & \text{otherwise,}
\end{cases}
\tag{2}
\]
where \( x \) is the measurement given by \( K_t \).
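A minimal sketch of one such triangular membership function follows. It assumes, as a triangular fuzzy set requires, that the right branch falls from 1 at the peak \( \alpha_M \) back to 0 at the end of the support \( \alpha_2 \).

```python
def triangular(x, a1, aM, a2):
    """Membership of x in a linguistic variable with support [a1, a2], peak aM."""
    if a1 <= x <= aM:
        return (x - a1) / (aM - a1)    # rising branch
    if aM < x <= a2:
        return (a2 - x) / (a2 - aM)    # falling branch
    return 0.0                         # outside the support
```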
Let \( K_\tau \) be the measurements of the quality attributes of all services of the market in the snapshot obtained at time \( \tau \) (\( \tau \) can be either \( t \) or \( T \), whichever applies).
\( K_T \) allows clients to specify their quality specifications using the \( LVs \), while \( K_t \) allows the systems to monitor whether the current architecture configuration \( C \) satisfies the concrete specifications \( S \). Notice that the frequency at which \( K_T \) is regenerated is significantly lower than the frequency of \( K_t \) (see Figure 6).
B. Subprocess 2: Defining the abstract specification model at design time
In this subprocess the architect specifies at design time the abstract specification model \( S^* \) by transforming quality requirements into abstract quality specifications by using the linguistic variables \( LV \).
The abstract specification model \( S^* \) is constructed from a set of fuzzy conditional statements. These statements are expressions of the form \( IF \text{ } A \text{ } and \text{ } B \text{ } and \ldots \text{ THEN } Z \) where \( A, \text{ } B \text{ } and \text{ } Z \) have fuzzy meaning.
For instance, the concrete specification model generated in Figure 1 can be specified as an abstract model by using \( LVs \) instead of precise numerical values. The left branch of the model can be specified as follows: \( IF \) the response time of a service capable of sending email is at least “fast” \( THEN \) the belonging degree to the acceptable solution set is high, where “fast” for this kind of service is a linguistic variable whose numerical range is defined in \( K_T \) (for simplicity we omit prioritization in the statement). The aim of the specification model is to allow systems to determine whether the current architecture configuration \( C \) satisfies the requirements, and to support the assessment of replacement configurations in case an adaptation is needed. Therefore, we represent the abstract specification model \( S^* \) as a fuzzy multi-criteria decision-making function
\[
S^*(C) = \sum_{i=1}^{I} v_i \left( \sum_{j=1}^{J} w_j^{[i]} \, \delta_{MAC|j}^{[i]} \big( c_j(C) \big) \right) \mathbb{E}(C, SR_i) \tag{3}
\]
where \( V = \{v_1, ..., v_I\} \) and \( W^{[i]} = \{w_1^{[i]}, ..., w_J^{[i]}\} \) are the sets of relative importance of each software requirement \( SR_i \) and, for each software requirement, the relative importance of each quality constraint; \( \mathbb{E}(C, SR_i) \) is an indicator function that returns 1 if there is a service \( s \in C \) providing the functionality \( i \) requested by \( SR_i \), and 0 otherwise; \( c_j(C) \) is a function which returns the current measured value of the service \( s \in C \) from \( K_t \), if applicable; and \( \delta_{MAC|j}^{[i]} \) is a fuzzy function which returns the membership degree of the current measurement of \( s \in C \) in the minimal acceptable class (MAC), which is a linguistic variable defined in \( K_T \).
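The evaluation of equation (3) for one configuration can be sketched as below. All numbers are illustrative: three equally weighted software requirements, one response-time constraint each, with the membership degrees assumed to have already been computed by the \( \delta \) functions.

```python
def s_star(v, w, deltas, indicator):
    # v[i]: importance of SR_i; w[i][j]: importance of its j-th constraint;
    # deltas[i][j]: membership degree of the current measurement;
    # indicator[i]: E(C, SR_i), 1 if some service in C covers SR_i.
    return sum(v[i] * sum(w[i][j] * deltas[i][j] for j in range(len(w[i])))
               * indicator[i]
               for i in range(len(v)))

score = s_star(v=[1/3, 1/3, 1/3],
               w=[[1.0], [1.0], [1.0]],
               deltas=[[1.0], [1.0], [0.69]],  # SR_3's service only partly "fast"
               indicator=[1, 1, 1])
```

A configuration missing a service for some \( SR_i \) contributes nothing for that term, since its indicator is 0.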
C. Subprocess 3: Generating the concrete specification model at runtime
The subprocess marked as 3 in Figure 5 shows a transformation from the abstract specification model \( S^* \) into a concrete specification model \( S \). This transformation is executed each time a new \( K_T \) is available. The main difference between \( S^* \) and \( S \) is that we incorporate the information of the relevant knowledge domain \( K_T \) in order to obtain the numerical values of the parameters of equation (3). This equation has several functions \( \delta_{MAC|j}^{[i]} \) (one for each quality-attribute constraint of each functionally-equivalent set) whose parameters must be obtained from the relevant knowledge domain \( K_T \). Because our model allows the minimal acceptable class to be specified, services belonging to better quality levels must also be considered, with a membership degree of 1. We define each \( \delta_{MAC|j}^{[i]} \) as the fuzzy union of the minimal acceptable class (which is one of the five linguistic variables) with those classes which are better, as follows
\[ \delta_{MAC}^{[i]} = \max(\mu_{MAC}, \mu_{C1}, \ldots, \mu_{CL}) \tag{4} \]
where \( \mu_{C1}, \ldots, \mu_{CL} \) are those linguistic variables whose linguistic meanings are better than \( \mu_{MAC} \). Because we assume triangular fuzzy sets, the function \( \mu_{MAC} \) is defined as a ramp function as follows (assuming \( a_1 \leq a_2 \))
\[ \mu_{MAC}(c_j(C)) = \begin{cases} 0 & \text{if } c_j(C) > a_2 \\ \frac{a_2 - c_j(C)}{a_2 - a_1} & \text{if } a_1 \leq c_j(C) \leq a_2 \\ 1 & \text{if } c_j(C) < a_1 \end{cases} \tag{5} \]
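Equations 4 and 5 can be sketched in a few lines. The code below is a minimal illustration for a "lower is better" quality such as response time, with the class boundaries \( a_1 \) and \( a_2 \) taken as plain parameters (in the approach they come from \( K_T \)).

```python
def ramp_membership(x, a1, a2):
    """Ramp function of Eq. 5 for a 'lower is better' quality such as
    response time; assumes a1 <= a2 (both boundaries come from K_T)."""
    if x < a1:
        return 1.0
    if x > a2:
        return 0.0
    return (a2 - x) / (a2 - a1)

def union_membership(x, memberships):
    """Fuzzy union of Eq. 4: the maximum over the MAC class and every
    better class, so higher-quality services also get degree 1."""
    return max(mu(x) for mu in memberships)
```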
Each time the subprocess explained in subsection III-A generates a new \( K_T \), a new concrete specification model \( S \) will be generated.
D. Subprocess 4: Driving adaptations at runtime
The subprocess marked as 4 in Figure 5 shows how the specification model \( S \) and the current measurements \( K_t \) are used to determine whether the current configuration \( C \) satisfies the specification model \( S \). If it does not, the monitoring component in area 4 of Figure 5 sends the violation to the Analyzer component, which determines whether there is enough evidence to trigger an adaptation. The planner component obtains a new configuration \( C_t \) by using \( S \) and \( K_t \); this new configuration is then applied. How the configuration is applied, or which adaptation strategy to use (instead of replacement), is out of the scope of this paper.
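The monitor-analyze-plan cycle just described can be sketched as a single control step. The `threshold` and `evidence_needed` values below are illustrative placeholders, since the paper leaves both to the SBS owner.

```python
# Sketch of one iteration of the runtime feedback loop: monitor the score of
# the current configuration, accumulate evidence of violations (Analyzer),
# and ask the planner for a replacement once enough evidence is gathered.

def control_step(config, evaluate, plan_new_configuration, state,
                 threshold=0.8, evidence_needed=3):
    score = evaluate(config)          # defuzzified score of Eq. 3 under K_t
    if score >= threshold:
        state["violations"] = 0       # specification satisfied: reset evidence
        return config
    state["violations"] += 1          # monitor reports a violation
    if state["violations"] >= evidence_needed:
        state["violations"] = 0
        return plan_new_configuration()   # planner: replacement configuration
    return config                     # not enough evidence yet
```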
IV. APPROACH IMPLEMENTATION
In order to keep the specification models aware of the market, process 1, explained in Section III-A, must be periodically executed. Figure 7 shows the architecture used to produce new market views from the current observations. The functional crawler component collects Web service descriptors from different Web-based catalogs. The QoS certifier component runs a benchmark tool over the endpoint list obtained by the functional crawler in order to gather the QoS measurements. The functional clustering component clusters the Web services (based on their WSDL descriptor files) according to their functionality (if no valid category information is available). Finally, the QoS fuzzy clustering component clusters each quality aspect of each functionally-equivalent service set into \( c \) classes using a modified fuzzy c-means algorithm (see [9] for deeper details). In this work we set \( c = 5 \).
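As a rough sketch of the QoS fuzzy clustering step, the following toy one-dimensional fuzzy c-means shows how each quality aspect of a functionally-equivalent set could be partitioned into \( c \) classes. The paper uses a modified variant described in [9], so this is only an approximation of the idea.

```python
import random

def fuzzy_c_means(values, c=5, m=2.0, iters=50, centers=None):
    """Toy 1-D fuzzy c-means over one quality aspect of a functionally-
    equivalent service set. Returns the cluster centers and the membership
    matrix u[k][i] of value k to cluster i."""
    if centers is None:
        centers = random.sample(values, c)
    for _ in range(iters):
        u = []
        for x in values:
            # standard FCM membership: inversely related to relative distance
            u.append([1.0 / sum((max(abs(x - ci), 1e-12) /
                                 max(abs(x - cj), 1e-12)) ** (2 / (m - 1))
                                for cj in centers)
                      for ci in centers])
        # update each center as the fuzzy (weighted) mean of all values
        centers = [sum((u[k][i] ** m) * values[k] for k in range(len(values))) /
                   sum(u[k][i] ** m for k in range(len(values)))
                   for i in range(len(centers))]
    return centers, u
```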
Notice that in Figure 5 we have two feedback loops. The first one is located on the market side: it constantly monitors the changes in the market, analyzes whether there is enough evidence to generate a new market view and, in the positive case, generates in the planning stage a new market snapshot, which updates all the parameters of the linguistic variables (fuzzy sets) of each functionally-equivalent service set; these are communicated to the SBS client systems in the executing stage.
V. EXPERIMENTS
In order to show how the approach works, we have developed a basic prototype to study how concrete specification models become obsolete when the market changes, and how new architecture configurations can be derived when this obsolescence is mitigated. The prototype allows (1) specifying a set of software requirements; (2) prioritizing them; (3) specifying, for each software requirement, its quality constraints using linguistic variables; (4) prioritizing these as well; (5) finding a valid architecture configuration which maximizes the satisfaction of the model at time \( t \); and finally (6) showing how new configurations are recommended at \( t + \Delta t \) and at \( t + x\Delta t \) when the configuration recommended at \( t \), \( C_t \), no longer satisfies the specifications \( R \).
In the following subsections we explain the dataset, what experiments we ran and what conclusions we draw from them.
A. Dataset
The dataset consists of a subset of 1500 Web services from the QWS Dataset \(^1\) (all of them valid as of October 2011); the original dataset included 2507 actual Web service descriptors with nine QoS measurements. The quality aspects are response time, availability, throughput, success-ability, reliability, compliance, best practices, latency and documentation.
To emulate the market changes, we have created two new market snapshots, where QoS’ service specifications
\(^1\)http://www.uoguelph.ca/~qmahmoud/qws
are improved in the first snapshot by a random percentage between 0% and 30%, and in the second snapshot by a random percentage between 30% and 50%. These modifications are applied to all services. We do not use a blacklist with the services currently selected by the configurations \( C \) of the SBS.
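The snapshot generation used in the experiments can be emulated with a sketch like the following, which improves every service's quality values by a random percentage in a given range. For simplicity the sketch treats every metric as "lower is better"; that simplification is ours, not the paper's.

```python
import random

def improve_snapshot(services, low, high):
    """Create a new market snapshot in which every service's quality values
    improve by a random percentage in [low, high] (0-30% for K1, 30-50%
    for K2 in the experiments). Treating every metric as 'lower is better',
    an improvement of p means multiplying the value by (1 - p)."""
    snapshot = {}
    for sid, qos in services.items():
        p = random.uniform(low, high)
        snapshot[sid] = {metric: value * (1 - p) for metric, value in qos.items()}
    return snapshot
```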
Our prototype follows the approach presented in our previous work [10] to externalize adaptation capabilities to a third-party application that monitors subscribed contracts (requirements and the current architecture configuration in use) as well as the changes in the market. It acts as a recommender system whose objective is to notify subscribed SBS when an adaptation should be executed because their requirements have recurrently not been satisfied, which probably means the architecture is degrading and it is better to adapt it before it becomes obsolete.
### B. Case study
We have prepared a set of ten case studies, each of which may be composed of multiple software requirements, which in turn may be constrained by multiple quality requirements. Because the objective of these experiments is to study the robustness of the model against market changes, we do not study prioritization here. The reader can assume that if a request is divided into several software requirements, these are prioritized as equally important ("high"), and that if a software requirement is constrained by several quality requirements, these are equally important ("high") as well. For lack of space, we only show four of the ten cases:
- **R1**: one service capable of, given a zip code, returning the country, with at least response time "excellent", at least availability "excellent", and at least throughput "excellent"; and a second service capable of, given a latitude and longitude, returning a map, with at least throughput "excellent", at least reliability "excellent", at least best practices "excellent" and at least latency "excellent".
- **R3**: one service capable of returning the sequence of a protein, with at least response time "excellent", at least throughput "excellent", and at least latency "excellent".
- **R6**: one service capable of, given a phone number, returning its information, with at least response time "excellent", at least throughput "excellent", and at least best practices "excellent"; and a second service capable of sending a text by fax, with at least response time "excellent" and at least availability "excellent".
- **R8**: one service capable of, given a zip code, returning the country, with at least response time "excellent" and at least reliability "excellent"; and a second service capable of, given a country, returning its currency, with at least response time "excellent", at least availability "excellent", and at least best practices "excellent".
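For illustration, a request such as R3 could be encoded as a plain data structure like the following. The field names are our own, not part of the prototype.

```python
# Hypothetical encoding of request R3: a functionality plus quality
# constraints given as (attribute, minimal acceptable class) pairs,
# all prioritized "high".
R3 = {
    "priority": "high",
    "requirements": [
        {
            "functionality": "return the sequence of a protein",
            "priority": "high",
            "constraints": [
                ("response time", "excellent"),
                ("throughput", "excellent"),
                ("latency", "excellent"),
            ],
        },
    ],
}
```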
### C. Experiments and discussion
The objective of this experiment is to show empirically that, by mitigating the obsolescence of the specifications (in other words, by making them market-aware models), specification models maintained at runtime are a valid and effective base for systems to reason about whether their requirements are being satisfied by the current architecture.
Figure 8 shows the user interface of our prototype, where software and quality requirements, prioritization, and minimal acceptable classes are specified. Our prototype computes an architecture at design time (using the first snapshot), choosing one of those whose membership degree to the "acceptable solution" fuzzy set (defuzzifying equation 3) is closest to 1. It is important to notice that we chose a fuzzy approach instead of a deterministic one in order to avoid always recommending the best services; if we did that, we would increase the potential demand some services will face the next time the market is evaluated.
Figure 9 shows the results for the request R3, where the service with id 125858046 was selected to implement the requirement, with a membership degree to the "acceptable solution" set of 1. As we can see in the table below the selected service, it is not the only one with a membership degree equal to 1. Figure 9 also shows the results at runtime: Market 1 (\(K_1\)) and Market 2 (\(K_2\)). In \(K_1\), the service with id 125858046 drops its membership degree to the "acceptable solution" from 1 to 0.8471, and in \(K_2\) it drops even further, to 0.6667. We have to remember that the \(K_1\) and \(K_2\) snapshots are synthetic data whose quality was randomly improved from the previous market view. Hence, it is possible that some services experience no changes in their quality, or that, even when they did, the quality fuzzy set drifted in such a way that the services are still considered, in this case, of "excellent" quality (for instance, the service with id 88047002, with a membership degree of 1 to the "acceptable solution" at design time, maintains the same degree at runtime \(K_1\)).
In Table I we show the results for the four requests specified above. The first column shows the id of each request; the second column shows the hypothetical selected architecture configuration and, in parentheses, its membership degree to the "acceptable solution" set. The third and fourth columns show the results obtained for the first and second runtime snapshots, respectively. For each request we divide the results into three rows. The first row indicates, in the second column, the solution \( C \) selected at design time with its membership degree to the "acceptable solution" set; the third column assesses \( C \) again over \( K_1 \), and the fourth column over \( K_2 \). The second row shows, in the third column, the adaptation recommended for the system under the new \( K_1 \), and in the fourth column the assessment of this recommendation over the future \( K_2 \). The third row shows, in the fourth column, the adaptation recommended for the system under the new \( K_2 \). It is important to notice that we omit deeper details such as how the services are connected or what the expected QoS of the system is when using these services with their particular QoS, because this is part of our current work, discussed in Section VII. Based on this experiment we can conclude that, if the obsolescence of specifications is not mitigated, systems using a model@runtime to drive their architecture adaptation could miss adaptations, because the obsolescence of the specifications hides the fact that requirements are becoming unsatisfied. In 100% of the cases, the recommendations at runtime are encouraged to replace the older configurations, because the current configurations have, at runtime, a membership degree to the "acceptable solution" lower than the threshold. The threshold should be defined by each SBS owner; this is out of the scope of this paper but is part of our future work.
The main contribution of mitigating the obsolescence of specifications against a constantly changing market is that SBS do not miss adaptation opportunities.
### Table I

| ID | design-time | (1st) run-time | (2nd) run-time |
|----|-------------|----------------|----------------|
| R1 | 84193574, | (0.875) | (0.27) |
|    | 4869688 | (0.545) | (0.5) |
|    |  | (1) | (1) |
| R3 | 15103218 (1) | (0.667) | (0.501) |
|    | 87750611 (1) | (0.883) | (0.883) |
| R6 | 78667380, | (1) | (1) |
|    | 23013627 (1) | (0.833) | (0.333) |
|    |  | (0.833) | (0.833) |
| R8 | 112791393, | (0.438) | (0.137) |
|    | 179771826 (1) | (0.417) | (0.417) |
|    | 60150637 | (1) | (1) |
|    | 47714966 (1) | (0.417) | (0.417) |
VI. RELATED WORK
Ramirez et al. [3] have proposed a taxonomy of potential sources of uncertainty at the requirements, design, and execution phases. The authors reported on existing techniques for mitigating specific types of uncertainty. We deal with the uncertainty of the QoS offering of the service market at runtime (a "known unknown"). According to the proposed taxonomy, we are dealing with run-time uncertainty whose source is the incomplete information about the market behavior that we have at design time. Our problem domain could be classified into the kind of concerns tackled by approaches like RELAX [11] and Requirements Reflection [1]. However, no research initiative specifically addresses this kind of uncertainty.
RELAX is a requirements language addressing uncertainty in the specification of requirements of self-adaptive systems; it allows analysts to specify which requirements could be relaxed at runtime when the environment changes. RELAX implements key ideas of Requirements Reflection. Similar to RELAX, we also delay decisions until runtime, and we use a language to mark which parts of the requirements are delayed. However, RELAX works at the level of specifications of adaptive behavior, while we make recommendations of adaptations at the level of adaptive architecture.
Welsh et al. proposed REAssuRE [12], a framework which monitors when assumptions made at design time are falsified by the current conditions, therefore triggering adaptations. With our approach, a different kind of assumption can be monitored: specifications of the market are monitored to see if they are becoming obsolete, and are compared against the QoS offered by the service market. In REAssuRE, claims associated to soft goals are made at design time to determine which operationalization alternative is the most suitable decision. During runtime, claims are monitored to check if they are obsolete according to the current environmental conditions. When claims are falsified, REAssuRE allows systems to decide at runtime if an adaptation is needed (i.e. to change to another operationalization). Similarly, in our case, when the domain knowledge \( K \) changes, our approach allows systems to reason and decide at runtime whether specifications should be synchronized to mitigate their obsolescence, and then to determine if an adaptation is needed (i.e. to change to another proper configuration).
Baresi et al. [13] extended KAOS (Goal-Directed Requirements Acquisition) by including adaptive goals. Goal-based models support the specification of "when" an adaptation should be executed and "what" it means. In addition, the authors proposed a runtime infrastructure [14] which constantly monitors the conditions that trigger adaptations. Baresi et al. [15] formalized this model as FLAGS (Fuzzy Live Adaptive Goals for Self-adaptive systems), which represents requirements as runtime entities, distinguishing between crisp and fuzzy goals. Unfortunately, they rely on stakeholders to define the membership functions, which in our case is not feasible. Under a closed-world assumption [5] their approach works; however, due to the degree of dynamism of the QoS offered by the service market, these specifications would have to be constantly updated by stakeholders, which would be an expensive process.
Filieri et al. [16] proposed a formal approach to adaptive software that continuously assures the satisfaction of non-functional requirements. Similar to our work, they also cast their proposal into the Zave and Jackson approach to requirements [4]. Their approach is exemplified in the context of service-oriented systems, focusing specifically on non-functional requirements; they also assume that the domain knowledge regarding the quality attributes of the services changes, and therefore their approach tries to keep the specifications consistent with the requirements by estimating the knowledge periodically, as a way to determine whether the requirements are becoming unsatisfied. Both proposals allow the system to keep non-functional properties satisfied by adapting its architecture to new conditions. However, we specifically propose a framework which allows analysts to specify requirements in such a way that they are continuously synchronized with the open world in which service-based systems are immersed [5].
VII. CONCLUSIONS AND FURTHER WORK
In this work we have proposed an approach that supports systems in addressing the uncertainty of the QoS of the service market by mitigating the obsolescence of the specification models at runtime. The main contribution of this paper is that our approach allows the system to mitigate the degradation of the adaptation capability of the model used during runtime. Until now, the adaptation capability of these models depended on precise numerical quality specifications that rapidly become obsolete against the QoS offered by the ever-changing market. Our proposal supports the reaction capacity of the system to detect requirements dissatisfaction, and the maintenance of the consistency of the requirements at runtime to drive adaptations.
As future steps in our research we are considering the following topics:
- **Sensibility adaptation index**: in order to compare the adaptation capability we define the sensibility adaptation index of a model as the percentage of the required adaptations which were recommended using the model.
- **Global quality**: until now we have assumed that the global quality of the system under construction and maintenance can be obtained by ensuring that the quality requirements of its parts are achieved. There are several proposals to obtain a global model; for instance, we could let the components interact under a workflow model [17] where the interaction patterns can be known in advance. Our next step in this area is to apply our approach in this specific service-oriented architecture to reach global quality specifications and not only local ones.
- **How often \( K \) should be recalculated**: how much evidence does the monitor component of the market feedback loop need to determine that \( K \) is obsolete and needs to be recalculated?
As part of our future work, we will release a benchmark to the community in order to assess similar models proposed by different authors.
ACKNOWLEDGMENT
This work was partially funded by FONDEF (grant D09i1171), UTFSM DGIP 241167 and BASAL FB0821(FB.02PG.11), the EU Marie Curie Project Requirements@runtime and the EU Connect project.
REFERENCES
Enhancing attendance tracking using animated QR codes: a case study
Mustafa Saad Mohammed, Khamis A. Zidan
Department of Computer Engineering, College of Engineering, Al-Iraqia University, Baghdad, Iraq
ABSTRACT
This research paper explores the effectiveness of quick response (QR) code-based attendance systems with the added security measure of generating two QR codes per second. With traditional attendance tracking methods being time-consuming and inefficient, QR codes have become increasingly popular as a quick and efficient alternative. However, one concern with QR code-based attendance systems is the potential for fraud and misuse. To address this issue, this study proposes generating two QR codes per second to ensure that only the current and legitimate QR code is recognized. The purpose of this study is to assess the impact of this technology on student attendance rates, the accuracy and reliability of attendance data, and the overall user experience for both students and instructors. Through data analysis and surveys, we found that the use of QR codes with the added security measure resulted in increased student attendance rates, improved accuracy and reliability of attendance data, and a positive user experience for both students and instructors. This research provides practical insights for educational institutions considering the implementation of QR code-based attendance systems and contributes to the growing body of literature on the use of QR codes in education.
Keywords: ASP.NET, Authentication, Flutter, MD5, QR code, Secure attendance, Smartphone
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mustafa Saad Mohammed
Department of Computer Engineering, College of Engineering, Al-Iraqia University
Baghdad, Iraq
Email: mustafa.s.mohammed@aliraqia.edu.iq
1. INTRODUCTION
In recent years, the use of quick response (QR) codes has become increasingly prevalent in various aspects of daily life, including education [1]–[3]. A QR code, short for quick response code, is a two-dimensional barcode that can be scanned using a smartphone or QR code reader to access information. It was first developed by the Denso Wave Corporation, a subsidiary of Toyota, in 1994 for the purpose of tracking automotive parts during the manufacturing process [4]. Today, QR codes are widely used for a variety of purposes, including marketing, payment systems, and attendance tracking. QR codes can store more information than traditional barcodes and can be customized with logos and colors. They are a popular and efficient way to share information in a digital age [5], as shown in Figure 1.
QR codes offer a quick and efficient way to access information, and can be used for a variety of purposes, including tracking student attendance. Traditional attendance tracking methods, such as manual sign-in sheets, can be time-consuming and inefficient [6]. With the use of QR codes, students can check in and out of classes quickly and easily, while also providing real-time attendance data for teachers and administrators.
One concern with QR code-based attendance systems is the potential for fraud and misuse. Students could share QR codes with others or generate fake codes, leading to inaccuracies in attendance tracking and compromising the integrity of the system [7]–[9]. To address this issue, this research paper proposes a unique
solution: generating two QR codes per second. This added security measure ensures that only the current and legitimate QR code will be recognized, as any previous codes will have already expired [10]. According to several researchers, including those cited in references [9], [11]–[13], QR codes have been utilized to achieve specific objectives, such as identifying individuals or storing messages. This study aims to evaluate the effectiveness of a QR code-based attendance system that generates two QR codes per second, assessing its impact on attendance rates, accuracy and reliability of data, and user experience for both students and instructors. It also examines the implementation process, challenges, and opportunities of the system. The research aims to contribute to the literature on QR code use in education and provide practical insights for institutions considering implementing this technology.
Figure 1. QR code
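The idea of rapidly expiring, rotating codes can be sketched as follows. This time-slot construction is our own illustration; the actual system, described in Section 3, displays pre-generated server-side MD5 hashes as an animated QR code.

```python
import hashlib
import time

def current_code(session_secret, t=None, rate=2):
    """Rotating check-in code: `rate` new codes per second derived from a
    per-session secret (illustrative sketch, not the deployed design)."""
    slot = int((time.time() if t is None else t) * rate)
    return hashlib.md5(f"{session_secret}:{slot}".encode()).hexdigest()

def is_valid(code, session_secret, t=None, rate=2, grace_slots=1):
    """Accept only the current code (plus `grace_slots` immediately
    preceding slots, to tolerate scan and network delay)."""
    slot = int((time.time() if t is None else t) * rate)
    return any(code == hashlib.md5(f"{session_secret}:{s}".encode()).hexdigest()
               for s in range(slot - grace_slots, slot + 1))
```

Because a scanned code is only accepted within its own half-second slot (plus a short grace period), a shared or replayed code expires almost immediately.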
2. LITERATURE REVIEW
In recent years, the use of QR codes for student attendance tracking has gained increasing attention among educators and researchers. QR codes provide a quick and efficient way to record attendance, enabling students to check in and out of classes quickly and easily, while also providing real-time attendance data for teachers and administrators. In addition, the use of QR codes has been shown to reduce the amount of time and effort required to manually take attendance, allowing teachers to spend more time focusing on instruction and student engagement.
Maciel and Pereira [14] proposed a smart attendance system that utilizes QR codes for secure authentication. The system aims to improve attendance management by using data-hiding algorithms to embed QR codes with student information. This information is scanned by students using their smartphones when the QR code is displayed by the teacher, and attendance is automatically marked based on their user identifier (ID).
Patel et al. [15], in the paper titled "Smart student attendance system using QR code" presented at the 2nd International Conference on Advances in Science and Technology in 2019 (Institute of Engineering and Information Technology, Mumbai, India), propose a smart attendance system using QR code technology. The system uses secure authentication and data-hiding algorithms to embed the QR code, which is displayed by the teacher for students to scan using their smartphones. Attendance is marked automatically according to the user ID, eliminating the possibility of false registrations. The paper highlights the wide range of applications of QR codes in the evolving technology world and proposes a cost-effective solution for attendance management in educational institutions.
Fauzi et al. [16] published a paper titled “Development of web-based smart security door using QR code system” in 2020. The study describes a secure door lock system which employs QR technology and a Raspberry Pi processor to allow access to university classrooms and laboratories. The system enables authorized individuals to access the facility and monitor the activity log via a web-based server. The authors demonstrate the system’s effectiveness and its potential to be extended to other properties and facilities such as offices and laboratories. This study represents a preliminary investigation into the development of a QR code-based smart security door system.
Imanullah and Reswan [17], in the paper "Randomized QR-code scanning for a low-cost secured attendance system" (2022), propose an attendance system that uses random QR codes as one-time passwords (OTPs) to ensure security. The system requires employees to scan the QR code within ten seconds, before it is changed and randomized again. To track attendance, the system utilizes employees' smartphones and MAC addresses as unique identification numbers. The authors conclude that the randomized QR-code scanning approach is effective and relevant for implementing a secure attendance system in workplaces such as offices and factories [17]. In summary, the use of QR codes for student attendance tracking has been shown to provide a reliable and efficient way to track attendance, while also enhancing student engagement and promoting a more collaborative learning environment. Studies conducted since 2018 have consistently demonstrated the
effectiveness of QR codes in improving attendance rates and providing a more streamlined approach to attendance tracking.
3. METHOD
3.1. Overview of proposed system
The proposed system is a digital attendance management system that uses QR code technology and geolocation tracking to automate attendance processes for educational institutions. The system comprises two components: a server-side application built on the Active Server Pages network enabled technologies (ASP.NET) framework using the model view controller (MVC5) architecture, and a mobile application built on the Flutter framework, available on both iOS and Android platforms. The server-side application generates unique QR codes, stores attendance data in a secure database, and processes data received from the mobile app. The mobile app facilitates quick check-ins by scanning the QR code and verifies the student's physical presence within the vicinity of the classroom using geolocation tracking. The proposed system aims to simplify attendance management, improve accuracy, and reduce administrative burdens.
3.2. Workflow of proposed system
The proposed system aims to streamline the attendance-taking process in educational institutions. To illustrate the process of recording attendance, a flowchart was created as shown in Figure 2. This figure displays the steps involved in creating an attendance session, generating unique hashes, and displaying the hashes as an animated QR code. The flowchart serves as a visual representation of the methodology used to capture student attendance.

Figure 2. Flowchart of the secure attendance system using animated QR code
It leverages QR codes and geolocation technology to provide a seamless and efficient attendance tracking experience for both teachers and students. The detailed steps that the system follows are listed below:
Input:
- Teacher's login credentials.
- Classroom geolocation.
- List of available classes.
- Mobile app with camera access.
- Application programming interface (API) to retrieve hashes from the database.
Output:
- Attendance report for each class
Steps:
- Teacher creates an attendance session and sets classroom geolocation.
- Server generates 6 unique message-digest (MD5) hashes and stores them in the database.
- The hashes are displayed as an animated QR code on the classroom's data show.
- Students open the attendance page on the mobile app and select their classroom.
- The app retrieves the hashes from the database and saves them in an array variable called HASHES.
- The app opens the camera and captures the QR codes displayed on the data show.
- The app compares each captured hash against the HASHES array; if a match is found, the app sends the student's attendance data to the server and marks them as present.
- The server records the student's attendance data along with the device geolocation.
- At the end of the session, the teacher can view and download the attendance report for the class.
End of Algorithm
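The hash generation and matching steps above can be sketched compactly. This is an illustrative Python sketch (the actual server and app are written in C# and Dart respectively), and the session identifier and hash-derivation rule are assumptions for illustration, not the paper's exact scheme:

```python
import hashlib

def make_session_hashes(session_id: str, n: int = 6) -> list[str]:
    # Server side: derive n unique MD5 hashes for one attendance session.
    # The "session_id:i" derivation rule is a made-up example.
    return [hashlib.md5(f"{session_id}:{i}".encode()).hexdigest() for i in range(n)]

def check_in(scanned_hash: str, hashes: list[str]) -> bool:
    # App side: compare the hash captured from the QR code against
    # the HASHES array retrieved from the server via the API.
    return scanned_hash in hashes

HASHES = make_session_hashes("CS101-session-01")   # hypothetical session id
assert check_in(HASHES[2], HASHES)                 # a valid scan is accepted
assert not check_in("0" * 32, HASHES)              # a forged hash is rejected
```

The server would additionally record the device geolocation alongside the matched hash before marking the student present.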
Overall, the proposed system offers a secure and efficient method of tracking student attendance through QR codes and geolocation technology. QR codes enable easy check-in for students by scanning codes at class entrances, eliminating manual roll calls and reducing administrative tasks. Geolocation integration ensures students are physically present in the designated area, enhancing attendance accuracy. Real-time reports and notifications enable timely action for unauthorized absences.
3.3. System development
The system development for the QR code attendance system involved various stages and techniques to ensure optimal functionality and security. This included the development of a teacher's control panel, database management, and API development using the C# language, as well as the creation of a student app using Dart and Flutter [18]–[20]. Each component played a critical role in the successful implementation of the system and is discussed in detail in the following sections:
a) Teachers control panel: The teacher's control panel is the primary interface for managing attendance sessions. It is developed using the ASP.NET framework and the C# programming language [21], [22], as shown in Figure 3. The control panel allows teachers to create attendance sessions, as shown in Figure 4, set the geolocation of the classroom, and generate unique hashes using MD5, each displayed as a QR code. The animated QR code in this research refers to a sequence of six unique QR codes generated using the MD5 algorithm and displayed with a 500 millisecond interval between each code, as shown in Figure 5. The animation is designed to enhance the visibility and readability of the QR codes on the classroom's data show, thereby facilitating the students' attendance process. Additionally, it enables teachers to view and download attendance reports for each session, providing a comprehensive overview of the students' attendance history. The user interface is designed to be intuitive and user-friendly, allowing teachers to easily manage attendance sessions and streamline the attendance-taking process, as shown in Figure 6.


Figure 3. Teacher sign-in page
Figure 4. Create attendance sessions page
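The 500 millisecond rotation through the six codes described above reduces to a simple scheduling rule; a minimal Python sketch of that rule follows (rendering each hash as an actual QR image is omitted here):

```python
def current_code_index(elapsed_ms: int, n_codes: int = 6, interval_ms: int = 500) -> int:
    # Which of the n_codes QR codes is on screen after elapsed_ms milliseconds.
    # The display cycles back to the first code after the last one.
    return (elapsed_ms // interval_ms) % n_codes

assert current_code_index(0) == 0       # first code shown at start
assert current_code_index(499) == 0     # still the first code just before 500 ms
assert current_code_index(500) == 1     # second code after one interval
assert current_code_index(3000) == 0    # wraps around after all six codes
```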
b) Database: The attendance system uses Microsoft SQL Server 2016 for secure and reliable data storage, with a database schema that stores attendance data and associated information. The database stores unique hashes generated by the server and retrieves them for use in the student app, as well as records attendance data received from the app and associates it with attendance sessions and geolocation data.
c) Application programming interface (API): The system's API is developed using C# programming language and provides a secure communication channel between the student app and the server [23], [24]. The API is designed to retrieve the unique hashes from the database and send them to the student app.
d) Student App: The student app is developed using Dart and Flutter, providing a cross-platform solution that works on both IOS and Android devices. Students must log in to the system using their email and password, as shown in Figure 7, then select their classroom and retrieve the unique hashes from the server, as shown in Figure 8.
It also provides a camera interface for capturing the animated QR codes displayed on the classroom's data show, as shown in Figure 9. When the camera starts capturing the QR codes, the red circles turn green to indicate that the student's transaction is in progress, as shown in Figure 10. The app then compares the hash value of the QR code with the hashes retrieved from the server, and the geolocation of the student's mobile with the geolocation of the classroom; it marks the student as present if a match is found, as shown in Figure 11, or shows an error if the information is fake, and sends the attendance data to the server through the API.
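The geolocation comparison can be sketched with the haversine formula. This is an illustrative Python sketch; the classroom coordinates and the 50-metre radius are assumed values, not taken from the paper:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two coordinates, in metres.
    R = 6371000.0  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def within_classroom(student, classroom, radius_m=50.0):
    # Accept the check-in only if the phone lies inside the allowed radius.
    return distance_m(*student, *classroom) <= radius_m

room = (33.3152, 44.3661)                              # hypothetical classroom coordinates
assert within_classroom((33.3153, 44.3661), room)      # roughly 11 m away: present
assert not within_classroom((33.3252, 44.3661), room)  # roughly 1.1 km away: rejected
```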
4. RESULTS AND DISCUSSION
In the results phase, the researcher designed a questionnaire survey using the Likert scale [25]–[27] to collect data from three system experts and ten students. The survey consisted of four criteria: utility, reliability, convenience, and effectiveness. The Likert scale used a range of 1 to 5, where 5 represented "strongly agree," 4 represented "agree," 3 represented "neither agree nor disagree," 2 represented "disagree," and 1 represented "strongly disagree," so that higher means indicate higher satisfaction. The collected data were then analyzed and used to evaluate the system's performance and identify areas for improvement, as shown in Table 1.
Table 1. Questionnaire survey results
<table>
<thead>
<tr>
<th>Questions</th>
<th>Mean (X)</th>
<th>Standard deviation (S.D.)</th>
<th>Satisfaction rate</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Section 1 - System Advantages</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Incorporated features for users</td>
<td>4.67</td>
<td>0.58</td>
<td>Satisfy</td>
</tr>
<tr>
<td>The system can be effectively utilized</td>
<td>4.67</td>
<td>0.58</td>
<td>Satisfy</td>
</tr>
<tr>
<td><strong>Section 2 - System Reliability</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>No issues encountered while using the system</td>
<td>3.33</td>
<td>0.58</td>
<td>Neutral</td>
</tr>
<tr>
<td>User-friendly interface</td>
<td>4.33</td>
<td>0.58</td>
<td>Satisfy</td>
</tr>
<tr>
<td>Cheating prevention through date/time</td>
<td>4</td>
<td>1</td>
<td>Satisfy</td>
</tr>
<tr>
<td>Cheating prevention for subjects</td>
<td>4</td>
<td>1</td>
<td>Satisfy</td>
</tr>
<tr>
<td><strong>Section 3 - System Convenience</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Simple page design</td>
<td>4</td>
<td>1</td>
<td>Satisfy</td>
</tr>
<tr>
<td>User-friendly interface</td>
<td>4.33</td>
<td>0.58</td>
<td>Satisfy</td>
</tr>
<tr>
<td>Straightforward system operation</td>
<td>4</td>
<td>1</td>
<td>Neutral</td>
</tr>
<tr>
<td>Requires minimal equipment and has a visually appealing result page</td>
<td>4</td>
<td>1</td>
<td>Satisfy</td>
</tr>
<tr>
<td>GUI used for result page</td>
<td>3.67</td>
<td>0.58</td>
<td>Neutral</td>
</tr>
<tr>
<td>Guaranteed data completeness</td>
<td>4</td>
<td>0.82</td>
<td>Satisfy</td>
</tr>
<tr>
<td><strong>Section 4 - System Efficiency</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Streamlined login procedure</td>
<td>3.67</td>
<td>0.58</td>
<td>Satisfy</td>
</tr>
<tr>
<td>Accurate information provision</td>
<td>4</td>
<td>0</td>
<td>Satisfy</td>
</tr>
<tr>
<td>Overall system efficacy</td>
<td>4.03</td>
<td>0.43</td>
<td>Satisfy</td>
</tr>
</tbody>
</table>
The results suggest that overall, the system was perceived positively by the experts with a mean satisfaction rate of 3.67 out of 5. The system was rated highest in terms of utility and reliability, with mean satisfaction rates of 4.33 and 4.00, respectively. The experts were also satisfied with the convenience and effectiveness of the system, with mean satisfaction rates of 3.67 and 3.80, respectively. However, there were some areas where the system could be improved, such as the complexity of the system (mean satisfaction rate of 3.67) and the graphic and color of the result page (mean satisfaction rate of 3.00). Overall, the results suggest that the system is useful and reliable, but could benefit from some improvements in terms of convenience and simplicity.
In this study, a comparison was made between the proposed attendance system using animated QR codes and the radio frequency identification (RFID) system [28]. The results showed that the proposed system provided better accuracy and reliability compared to the RFID system, as shown in Table 2. The proposed system was able to track attendance in real-time and eliminate the possibility of errors due to illegible handwriting or lost records. In addition, the system was perceived positively by the experts and students in terms of utility, reliability, convenience, and effectiveness.
Table 2. Comparison of attendance tracking systems
<table>
<thead>
<tr>
<th>Criteria</th>
<th>Proposed system (animated QR codes)</th>
<th>RFID system</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cost</td>
<td>Low cost</td>
<td>High cost</td>
</tr>
<tr>
<td>Convenience</td>
<td>Easy to use, no need for special equipment</td>
<td>Requires special equipment, such as RFID readers</td>
</tr>
<tr>
<td>Accuracy</td>
<td>Accurate in tracking attendance</td>
<td>Accurate in tracking attendance</td>
</tr>
<tr>
<td>Speed</td>
<td>Fast and efficient, can track attendance in real-time</td>
<td>Fast and efficient, can track attendance in real-time</td>
</tr>
<tr>
<td>Security</td>
<td>Secure, as the QR codes can be encrypted</td>
<td>Secure, as the RFID tags can be encrypted</td>
</tr>
<tr>
<td>Limitations</td>
<td>May require a stable internet connection for real-time tracking</td>
<td>RFID tags may interfere with other electronic devices</td>
</tr>
</tbody>
</table>
It is important to note that the RFID system has its own advantages such as the ability to track attendance from a distance and the use of a secure authentication process. However, it was found that the proposed system using animated QR codes was more user-friendly and cost-effective compared to the RFID system. Overall, the results suggest that the proposed attendance system using animated QR codes provides a more efficient and accurate method for attendance tracking compared to the RFID system. Future studies could further explore the potential of using different identifier technologies to improve attendance tracking in various settings.
5. CONCLUSION
Overall, the results of this study indicate that the use of a secure attendance system based on an animated QR code was effective in increasing the security and accuracy of attendance tracking in our institution. Our findings suggest that the system is reliable, convenient, and easy to use for both students and instructors. In terms of future work, we recommend conducting further research to explore the potential of using this system in other educational contexts and to evaluate its effectiveness in increasing student attendance rates. Additionally, we suggest investigating the possibility of integrating the system with other technologies, such as facial recognition or biometric authentication, to further enhance the security and reliability of attendance tracking.
REFERENCES
Mustafa Saad Mohammed is a highly skilled computer and software engineer from Iraq. He graduated from the University of Diyala/College of Engineering with a Bachelor's degree in computer engineering and software, where he achieved first place in his section and college. Before even entering college, he had already started his computer programming career, showing a particular interest in programming Nokia mobile applications using the Python programming language. Over the years, Mustafa has gained expertise in various programming languages, including C, C++, JAVA SE, JAVA EE, C#, Visual Basic, HTML, CSS, JavaScript, PHP, ASP.NET, MYSQL, Assembly, and Swift. He has also worked as an iPhone application developer using Swift and has specialized in programming and developing Android applications, with nearly five years of experience in this field. He can be contacted at email: moustafa.alnaimi@gmail.com.
Prof. Dr. Khamis A. Zidan is the chairman or a member of many scientific, supervisory, promotion, and investigative committees, as well as of preparatory committees of scientific conferences held in the Ministry of Higher Education and Iraqi universities. He has participated in many training and development courses in the field of computers, information technology, and communications inside and outside Iraq. He has received numerous certificates of appreciation in the field of computers and information and communication technology from universities and international centers inside and outside Iraq. He can be contacted at email: khamis_zidan@aliraqia.edu.iq.
Generating Fuzzy Term Sets for Software Project Attributes using Fuzzy C-Means and Real Coded Genetic Algorithms
Ali Idri¹, Azeddine Zahi² and Alain Abran³
¹ Department of Software Engineering, ENSIAS, Mohamed V University, Rabat, Morocco, E-mail: {idri, elkouabji}@ensias.ma
² Department of Computer Science, FST, Sidi Mohamed Ben Abdellah University, Fez, Morocco, E-mail: azeddine.zahi@fsr-usmba.ac.ma
³ École de Technologie Supérieure, 1180 Notre-Dame Ouest, Montreal, Canada H3C 1K3, E-mail: aabran@ele.etsmtl.ca
ABSTRACT
This paper investigates the fuzzy representation of software project attributes. The aim is to generate fuzzy sets and their membership functions from numerical data of software project attributes. The proposed fuzzy sets generation process consists of two main steps: first, we use the well-known Fuzzy C-Means algorithm (FCM) and the Xie-Beni validity criterion to decide on the number of fuzzy sets; second, we use a Real Coded Genetic Algorithm (RCGA) to build membership functions for these fuzzy sets. Membership functions can be trapezoidal, triangular or Gaussian. This study uses the software attributes given in the COCOMO'81 dataset.
Keywords : Software project attributes, Fuzzy clustering, Real Coded Genetic Algorithms.
1. INTRODUCTION
Software project attributes are used by estimation models in software engineering to predict some important attributes of future entities such as software development effort, software reliability and programmer productivity. For example, software cost estimation models use as inputs some software project attributes, also called cost drivers, such as software size, software reliability, and the experience of the personnel involved in the software project in order to estimate the required software development effort (Boehm, 1981) (Boehm, 1995) (Burgess, 2001) (Idri, 2002) (Shepperd, 1997) (Vicinanza, 1990) (Wittig, 1997).
In general, many software project attributes are measured on a Nominal or Ordinal scale composed of linguistic values such as low, very low, complex, and important. For example, in the COCOMO II software cost estimation model (Boehm, 1995), 17 of the 23 cost drivers are measured on an Ordinal scale composed of six linguistic values: very low, low, nominal, high, very high, and extra-high. As a consequence, when dealing with linguistic values, handling imprecision, uncertainty and partial truth is unavoidable. However, the software engineering community often uses numbers or classical intervals to represent these linguistic values. Such transformations and representations do not mimic the way in which humans interpret linguistic values and consequently cannot deal with imprecision and uncertainty. To overcome this limitation, we have suggested the use of fuzzy sets rather than classical intervals (or numbers) to represent linguistic values (Idri, 2000) (Idri, 2001) (Idri, 2002). The main motivation of fuzzy set theory, founded by Zadeh in 1965, is the desire to build a formal quantitative framework that captures the vagueness of human knowledge as it is expressed via natural language. Fuzzy set theory (Zadeh, 1965) suggests, through the fuzzy set concept, a more suitable representation of linguistic values. Indeed, a fuzzy set, by contrast to a classical set, is associated with a membership function, which maps the elements of a domain \( W \) into the real interval \( [0,1] \). Thus, a fuzzy set representation captures the vagueness of one linguistic value by the use of a gradual rather than an abrupt-step membership function. In this paper, we investigate the fuzzy representation of linguistic values measuring software project attributes.
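As a minimal illustration of a gradual membership function, consider a hypothetical "low" linguistic value with made-up break points (a sketch for intuition only, not taken from any COCOMO attribute):

```python
def low(x):
    # Trapezoidal membership for a hypothetical "low" linguistic value:
    # fully "low" below 10, not "low" at all above 30, gradual in between.
    if x <= 10:
        return 1.0
    if x >= 30:
        return 0.0
    return (30 - x) / 20

assert low(5) == 1.0     # clearly "low"
assert low(30) == 0.0    # clearly not "low"
assert low(20) == 0.5    # partially "low": the gradual transition
```

A classical interval such as [0, 20] would instead jump abruptly from membership 1 to 0 at the boundary, which is exactly the behaviour fuzzy sets avoid.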
Fuzzy representation of linguistic values has been successfully used in many other fields such as control, image processing, and pattern recognition. An overview of techniques to generate fuzzy sets and their membership functions is presented in (Medasani, 1998). They may be grouped into two major categories: (1) empirical techniques which construct membership functions from expert knowledge (Idri, 2000) (Sicilia, 2005), and (2) automatic techniques, which construct membership functions from historical data using clustering techniques (Liao, 2001) (Chen, 2005) (Guillaume, 2004). In an earlier work (Idri, 2000), we have empirically built fuzzy sets of twelve COCOMO’81 cost drivers based on their descriptions given
in (Boehm, 1981). These fuzzy sets are associated with trapezoidal membership functions. The aim of this work is to generate fuzzy sets and their membership functions using the fuzzy C-Means clustering technique and a Real Coded genetic algorithm.
The proposed fuzzy sets generation process consists of two main steps (Figure 1). First, we use the well-known Fuzzy C-Means algorithm (FCM) and the Xie-Beni validity criterion to decide on the number of clusters (fuzzy sets) (Bezdek, 1981) (Xie and Beni, 1991). Second, we use a Real Coded Genetic Algorithm (RCGA) to build membership functions for these fuzzy sets (Herrera, 2003) (Mühlenbein, 1993). Membership functions can be trapezoidal, triangular or Gaussian. The Fuzzy C-Means algorithm is a fuzzy clustering method used to generate a known number of clusters. The determination of this number is still an open problem in clustering. Often, empirical knowledge or a set of evaluation criteria is used to choose the best set of clusters. In this work, we use the fuzzy cluster validity criteria proposed in (Xie, 1991).
This study uses a dataset that contains 252 historical software projects. This dataset is deduced from the COCOMO'81 dataset (Boehm, 1981). Each project is described by 13 attributes: the software size measured in KDSI (Kilo Delivered Source Instructions), and the remaining 12 attributes are measured on a scale composed of six linguistic values: 'very low', 'low', 'nominal', 'high', 'very high' and 'extra high'. These 12 attributes are related to the software development environment, such as the experience of the personnel involved in the software project, the method used in the development, and the time and storage constraints imposed on the software (Table 1).
### Table 1: COCOMO attributes selected for fuzzification
<table>
<thead>
<tr>
<th>Attributes</th>
<th>Designation</th>
</tr>
</thead>
<tbody>
<tr>
<td>SIZE</td>
<td>Software Size</td>
</tr>
<tr>
<td>DATA</td>
<td>Database Size</td>
</tr>
<tr>
<td>TIME</td>
<td>Execution Time Constraint</td>
</tr>
<tr>
<td>STOR</td>
<td>Main Storage Constraint</td>
</tr>
<tr>
<td>VIRTMIN, VIRTMAJ</td>
<td>Virtual Machine Volatility</td>
</tr>
<tr>
<td>TURN</td>
<td>Computer Turnaround</td>
</tr>
<tr>
<td>ACAP</td>
<td>Analyst Capability</td>
</tr>
<tr>
<td>AEXP</td>
<td>Applications Experience</td>
</tr>
<tr>
<td>PCAP</td>
<td>Programmer Capability</td>
</tr>
<tr>
<td>VEXP</td>
<td>Virtual Machine Experience</td>
</tr>
<tr>
<td>LEXP</td>
<td>Programming Language Experience</td>
</tr>
<tr>
<td>SCED</td>
<td>Required Development Schedule</td>
</tr>
</tbody>
</table>
This paper is organized as follows. Section 2 describes briefly the Fuzzy C-Means algorithm and presents the results of its application to the software project attributes of the COCOMO’81 dataset. Section 3 presents how a Real Coded Genetic Algorithm is used to build membership functions of the fuzzy sets generated by the FCM algorithm. Section 4 presents and discusses the obtained membership functions when applying RCGA to software project attributes of the COCOMO’81 dataset. A conclusion and an overview of future work conclude this paper.
## 2. Fuzzy C-Means Algorithm for Clustering Software Project Attributes
### 2.1 FCM algorithm: An overview
Fuzzy C-means algorithm (FCM) is a fuzzy clustering technique which is different from classical C-means that uses hard partitioning. FCM uses fuzzy partitioning such that a data point can belong to all clusters with different membership grades between 0 and 1. FCM is an iterative algorithm that aims to find cluster centers (centroids) that minimize the following objective function:
$$\min J_m(U, C) = \sum_{i=1}^{c} \sum_{j=1}^{n} (u_{ij})^m \, \|x_j - c_i\|^2 \quad (1)$$

subject to

$$\sum_{i=1}^{c} u_{ij} = 1, \quad \forall j = 1, \ldots, n \quad (2)$$
where
- \( X = \{x_1, ..., x_n\} \) is a data set of points;
- \( c \) is the desired number of clusters;
- \( m \) is the control parameter of fuzziness;
- \( U = \{u_{ij}\} \) is the partition matrix, containing the membership values of all data in all clusters;
- \( C = \{c_i\} \) is the set of cluster centers.
### Figure 1. Fuzzy sets generation process
To obtain a fuzzy partition using the FCM algorithm, the membership matrix \( U \) is randomly initialized according to Equation 2. To reach a minimum of the objective function, two conditions must hold: first, the centers are computed according to Equation 3; second, the matrix \( U \) is updated according to Equation 4. By iteratively updating the cluster centers and the membership grades for each data point, FCM moves the cluster centers to the "right" locations within a data set.
\[ c_i = \frac{\sum_{j=1}^{n} u_{ij}^m x_j}{\sum_{j=1}^{n} u_{ij}^m} \quad (3) \]
\[ u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{\|x_j - c_i\|}{\|x_j - c_k\|} \right)^{2/(m-1)}} \quad (4) \]
The outline of the FCM algorithm can be stated as follows (Bezdek, 1981):
Step 1. Randomly initialize the membership matrix (U) that has constraints in Equation 2.
Step 2. Calculate centroids (c_i) by using Equation 3.
Step 3. Compute the objective function using Equation 1. Stop if its improvement over the previous iteration is below a threshold.
Step 4. Compute a new membership matrix (U) using Equation 4 and return to Step 2.
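The outline above can be sketched in Python/NumPy (the authors used a Matlab prototype); a tiny one-dimensional toy dataset stands in for a COCOMO'81 attribute:

```python
import numpy as np

def fcm(X, c, m=2.0, tol=1e-5, max_iter=100, seed=0):
    # Fuzzy C-Means on 1-D data X, following the equations of Section 2.1.
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n))
    U /= U.sum(axis=0)                         # Equation 2: columns sum to 1
    prev = np.inf
    for _ in range(max_iter):
        um = U ** m
        centers = (um @ X) / um.sum(axis=1)    # Equation 3: centroid update
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        J = np.sum(um * d ** 2)                # Equation 1: objective value
        # Equation 4: membership update; sum over index k on axis 1.
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
        if abs(prev - J) < tol:
            break
        prev = J
    return centers, U

X = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])   # two obvious groups
centers, U = fcm(X, c=2)
assert np.allclose(U.sum(axis=0), 1.0)          # fuzzy partition constraint holds
assert min(centers) < 2.5 and max(centers) > 3.5  # one center per group
```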
2.2 Empirical results
This subsection presents the obtained results when applying the FCM algorithm to the COCOMO'81 software projects attributes. The calculations were made using a software prototype developed with Matlab under a Microsoft Windows PC environment.
For each software project attribute, several experiments were conducted with the FCM algorithm each time using different initial matrix U. The desired number of clusters (c) is varied within the interval \([3, 6]\) because all the COCOMO'81 attributes are evaluated on a scale composed of at most six values (Boehm, 1981). The parameter \(m\) is fixed to 2 in all experiments. As mentioned earlier, we use the Xie-Beni criterion to decide on the number of clusters to be used in the next section. Table 2 shows the variation of the Xie-Beni index according to the number of clusters for each COCOMO'81 attribute.
For each attribute, we choose the number of clusters that minimizes the value of the Xie-Beni criterion (bold cell in Table 2). Figures 2 and 3 show the fuzzy partitions generated by the FCM algorithm for the DATA and TIME attributes respectively.
After generating fuzzy sets (clusters) with their partition by FCM, we use a Real Coded Genetic Algorithm (Herrera, 2003) (Mühlenbein, 1993) to build membership functions for these clusters; membership functions can be trapezoidal, triangular or Gaussian.
### Table 2: Variation of Xie-Beni index according to the number of clusters for the COCOMO'81 attributes
<table>
<thead>
<tr>
<th>Attributes</th>
<th>Number of clusters</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>3</td>
</tr>
<tr>
<td>SIZE</td>
<td>0.012</td>
</tr>
<tr>
<td>DATA</td>
<td>0.012</td>
</tr>
<tr>
<td>TIME</td>
<td>0.076</td>
</tr>
<tr>
<td>STOR</td>
<td>0.072</td>
</tr>
<tr>
<td>VIRTMIN</td>
<td>0.087</td>
</tr>
<tr>
<td>VIRTMAJ</td>
<td>0.078</td>
</tr>
<tr>
<td>TURN</td>
<td>0.144</td>
</tr>
<tr>
<td>ACAP</td>
<td>0.102</td>
</tr>
<tr>
<td>AEXP</td>
<td>0.065</td>
</tr>
<tr>
<td>PCAP</td>
<td>0.077</td>
</tr>
<tr>
<td>VEXP</td>
<td>0.078</td>
</tr>
<tr>
<td>LEXP</td>
<td>0.121</td>
</tr>
<tr>
<td>SCED</td>
<td>0.128</td>
</tr>
</tbody>
</table>
### Figures
- Figure 2: Fuzzy partition for DATA attribute
- Figure 3: Fuzzy partition for TIME attribute
### 3. BUILDING MEMBERSHIP FUNCTIONS OF FUZZY SETS USING REAL CODED GENETIC ALGORITHM
#### 3.1 Problem formulation
Let us suppose we know a partition composed of \(c\) fuzzy clusters generated when applying the FCM algorithm to a given dataset \(X = \{x_1, \ldots, x_n\}\). Consider that
$U = (u_{ij})$ is the partition matrix containing the membership grades of the data $X = \{x_1, ..., x_n\}$ in the $c$ fuzzy clusters, and that $C = (c_i), 1 \leq i \leq c$, are the cluster centers. The problem consists of building a set of membership functions $(\mu_i), 1 \leq i \leq c$, that interpolate the known membership values $u_{ij}$ of the partition matrix $U$; membership functions can be trapezoidal, triangular, or Gaussian. Hence, the problem can be formulated as an optimization problem, which consists of finding the membership functions, $(\mu_i), 1 \leq i \leq c$, minimizing the mean square error defined as follows:
$$MSE(\mu_1, ..., \mu_c) = \frac{1}{n} \sum_{i=1}^{c} \sum_{j=1}^{n} \left[ \mu_i(x_j) - u_{ij} \right]^2$$
subject to the following conditions:
$$\sum_{i=1}^{c} \mu_i(x_j) = 1, \text{ for all } x_j$$
$$\mu_i(x_j) = u_{ij}, 1 \leq i \leq c, 1 \leq j \leq n$$
According to the shape of the membership functions, which are often not differentiable, we suggest approaching this problem with a Real Coded Genetic Algorithm.
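As an illustration of evaluating this objective, the sketch below computes the mean square error of one candidate set of triangular membership functions against a given partition matrix; the toy data, break points, and membership values are assumptions made for the example:

```python
def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b (a < b < c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mse(mfs, X, U):
    # Mean square error between candidate membership functions mfs and the
    # FCM partition matrix U: (1/n) * sum_i sum_j (mu_i(x_j) - u_ij)^2.
    n, c = len(X), len(mfs)
    return sum((mfs[i](X[j]) - U[i][j]) ** 2 for i in range(c) for j in range(n)) / n

# Two triangles that peak exactly where each toy point has full membership.
mfs = [lambda x: tri(x, -1.0, 0.0, 2.0), lambda x: tri(x, 0.0, 2.0, 3.0)]
X = [0.0, 2.0]
U = [[1.0, 0.0], [0.0, 1.0]]
assert mse(mfs, X, U) == 0.0   # a perfect interpolation has zero error
```

In the RCGA, this MSE value serves directly as the (to-be-minimized) fitness of a chromosome encoding the candidate functions.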
### 3.2 Experiment design of a Real Coded Genetic Algorithm to build membership functions
Genetic algorithms (GAs) are stochastic methods based on the principles of genetics and natural evolution (Goldberg, 1989) (Holland, 1975). They are used in search and optimization problems. The main idea is to evolve over time a finite part of the search space, called the population, using three operators: selection, crossover, and mutation, until a termination criterion is reached. Each element in the population is treated as a chromosome and represents a candidate solution to the problem. Furthermore, a chromosome is associated with a value called fitness, which reflects its goodness and its adaptability; it is often calculated from the objective function. When tackling an optimization problem with variables in a continuous domain, GAs are called Real Coded Genetic Algorithms (RCGAs) (Mühlenbein, 1993) (Herrera, 2003). In this case, each chromosome in the search space is coded by a vector of real numbers and specific operators are used. In our case, the use of an RCGA to find the membership functions $\mu_j$ requires the determination of certain parameters such as the coding scheme, the fitness function and the various genetic operators (selection, crossover and mutation).
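A minimal real-coded GA with tournament selection, arithmetic crossover, and uniform mutation can be sketched as follows. The operator choices and parameter values here are illustrative defaults, not the authors' exact configuration, and a toy quadratic stands in for the MSE fitness:

```python
import random

def rcga(fitness, bounds, pop_size=30, gens=100, pc=0.8, pm=0.1, seed=1):
    # Minimal real-coded GA: each chromosome is a real vector, one gene per
    # bounded variation interval; fitness is minimized.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=fitness)
            if rng.random() < pc:                       # arithmetic crossover
                a = rng.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            for k, (lo, hi) in enumerate(bounds):       # uniform mutation
                if rng.random() < pm:
                    child[k] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy objective: recover the point (1, 2) by minimizing squared error.
best = rcga(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2, [(0, 5), (0, 5)])
assert abs(best[0] - 1) < 0.5 and abs(best[1] - 2) < 0.5
```

For the membership-function problem, `bounds` would be the per-gene variation intervals derived from the cluster centers (Tables 3 and 4), and `fitness` the MSE of the decoded membership functions against the FCM partition matrix.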
Concerning the coding scheme, a chromosome in the population of our RCGA, $m_i, 1 \leq i \leq M$, represents the set of the unknown membership functions $(\mu_j), 1 \leq j \leq c$, associated to the $c$ fuzzy sets generated by the FCM. The shape of the membership functions can be trapezoidal, triangular or Gaussian. Thus, each chromosome encodes a set of membership functions in a real vector $(m_i^1, ..., m_i^K)$. The genes $m_i^j$ are obtained from the shape of the membership functions. Furthermore, in order to avoid incoherent situations, such as the peak value of one function being greater than the peak value of the next one, each gene $m_i^j$ (a real value) must be within a fixed interval. These intervals are often determined by experts. Here, we use the cluster centers $C = (c_j), 1 \leq j \leq c$, to decide on these intervals. Taking into account these aspects, we propose three coding schemes of $m_i$ that are associated to the trapezoidal, triangular, and Gaussian shapes respectively.
- For the trapezoidal shape, each membership function, $\mu_j, 2 \leq j \leq c-1$, is represented by 4 parameters $(a_1^j, a_2^j, a_3^j, a_4^j)$; the membership functions $\mu_1$ and $\mu_c$ are represented by 2 parameters, $(a_3^1, a_4^1)$ and $(a_1^c, a_2^c)$ respectively. In order to obtain a fuzzy partition, i.e. $\sum_{j=1}^{c} \mu_j(x) = 1$, the parameters $(a_3^j, a_4^j)$ of each function $\mu_j, 1 \leq j \leq c-1$, must be the same as the parameters $(a_1^{j+1}, a_2^{j+1})$ of the next function $\mu_{j+1}$. Thus, only 2 of the 4 parameters of each function are considered in the coding scheme. Figure 4 shows the structure of a chromosome $m_i$ encoding trapezoidal membership functions. The size of this chromosome is defined by the expression $K = 2c - 2$. The variation intervals associated to a chromosome $m_i = (m_i^1, ..., m_i^K)$ are defined in Table 3.
Table 3: Variation intervals of a chromosome $m_i$ associated to trapezoidal membership functions
<table>
<thead>
<tr>
<th>Gene</th>
<th>Variation interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>$m^1_i$</td>
<td>$[\min(X), c_1 + \frac{c_2 - c_1}{2}]$</td>
</tr>
<tr>
<td>$m^{2l-1}_i, 2 \leq l \leq c-1$</td>
<td>$[c_l, c_l + \frac{c_{l+1} - c_l}{2}]$</td>
</tr>
<tr>
<td>$m^{2l}_i, 1 \leq l \leq c-2$</td>
<td>$[c_l + \frac{c_{l+1} - c_l}{2}, c_{l+1}]$</td>
</tr>
<tr>
<td>$m^K_i$</td>
<td>$[c_{c-1} + \frac{c_c - c_{c-1}}{2}, \max(X)]$</td>
</tr>
</tbody>
</table>
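One way to decode such a chromosome is sketched below; pairing consecutive genes into the transition region between $\mu_l$ and $\mu_{l+1}$ is an assumption matching the interval layout above, and the check confirms the fuzzy-partition property:

```python
def trapezoidal_partition(genes, xmin, xmax):
    """Decode a chromosome (m^1..m^K, K = 2c-2) into c trapezoidal membership
    functions forming a fuzzy partition (memberships sum to 1 everywhere).
    Genes (m^{2l-1}, m^{2l}) bound the transition between mu_l and mu_{l+1}."""
    c = len(genes) // 2 + 1
    def mu(j, x):  # j in 0..c-1
        left = (xmin, xmin) if j == 0 else (genes[2*j - 2], genes[2*j - 1])
        right = (xmax, xmax) if j == c - 1 else (genes[2*j], genes[2*j + 1])
        if x < left[0] or x > right[1]:
            return 0.0
        if left[1] <= x <= right[0]:                      # plateau at 1
            return 1.0
        if x < left[1]:                                   # rising edge
            return (x - left[0]) / (left[1] - left[0])
        return (right[1] - x) / (right[1] - right[0])     # falling edge
    return c, mu

# c = 3 functions on [0, 6] with transitions on [1, 2] and [4, 5]
c, mu = trapezoidal_partition([1.0, 2.0, 4.0, 5.0], 0.0, 6.0)
```

Summing the three memberships at any point of the domain gives 1, which is exactly the constraint the shared parameters enforce.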
- For the triangular shape, each membership function, $\mu_j, 2 \leq j \leq c-1$, is represented by 3 parameters $(a^j_1, a^j_2, a^j_3)$; the membership functions $\mu_1$ and $\mu_c$ are represented by 2 parameters, $(a^1_1, a^1_2)$ and $(a^c_1, a^c_2)$ respectively. In order to obtain a fuzzy partition, the parameter $a^j_2$ (the peak) of each function $\mu_j$ is the same as the parameter $a^{j-1}_3$ of the preceding function $\mu_{j-1}$ and the parameter $a^{j+1}_1$ of the following function $\mu_{j+1}$. Thus, only one parameter per function (the center of a triangular function) is considered in the coding scheme. Figure 5 shows the structure of a chromosome encoding triangular membership functions. The size of this chromosome is given by \( K = c \). The variation intervals associated to a chromosome \( m_i = (m_i^1, \ldots, m_i^K) \) are defined in Table 4.
### Table 4: Variation intervals of a chromosome \( m_i \) associated to triangular membership functions
<table>
<thead>
<tr>
<th>Gene</th>
<th>Variation interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( m_i^1 \)</td>
<td>\( [\min(X), c_1 + \frac{c_2 - c_1}{2}] \)</td>
</tr>
<tr>
<td>\( m_i^l, 2 \leq l \leq c-1 \)</td>
<td>\( [c_{l-1} + \frac{c_l - c_{l-1}}{2}, c_l + \frac{c_{l+1} - c_l}{2}] \)</td>
</tr>
<tr>
<td>\( m_i^K \)</td>
<td>\( [c_{c-1} + \frac{c_c - c_{c-1}}{2}, \max(X)] \)</td>
</tr>
</tbody>
</table>
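The triangular coding can be made concrete in the same way; shouldering the first and last functions at the domain ends is an assumption consistent with the intervals of Table 4:

```python
def triangular_partition(centers, xmin, xmax):
    """c triangular membership functions with peaks at the sorted `centers`;
    mu_1 and mu_c have shoulders reaching xmin and xmax, so the memberships
    sum to 1 on [xmin, xmax]."""
    t, c = list(centers), len(centers)
    def mu(j, x):  # j in 0..c-1, peak at t[j]
        if j > 0 and x <= t[j - 1]:
            return 0.0
        if j < c - 1 and x >= t[j + 1]:
            return 0.0
        if x <= t[j]:   # left shoulder or rising edge
            return 1.0 if j == 0 else (x - t[j - 1]) / (t[j] - t[j - 1])
        # right shoulder or falling edge
        return 1.0 if j == c - 1 else (t[j + 1] - x) / (t[j + 1] - t[j])
    return c, mu

# c = 3 triangles on [0, 6] with peaks encoded by the chromosome (1, 3, 5)
c, mu = triangular_partition([1.0, 3.0, 5.0], 0.0, 6.0)
```

Because each peak coincides with the feet of its neighbours, adjacent memberships always sum to 1.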
---
Figure 4: Structure of a chromosome associated to trapezoidal membership functions
Figure 5: Structure of a chromosome associated to triangular membership functions
• For the Gaussian shape, each membership function \( \mu_j, 1 \leq j \leq c \) is defined by 2 parameters: the width \( \sigma_j \) and the center \( c_j \). Figure 6 shows the structure of a chromosome encoding gaussian membership functions. The size of this chromosome is given by \( K = 2c \). The variation intervals associated to a chromosome \( m_i = (m_i^1, ..., m_i^K) \) are defined in Table 5.
Concerning the fitness function \( F \), we use the following formula:
\[
F(m_i) = \frac{\text{MSE}(m_i)}{\sum_{j=1}^{M} \text{MSE}(m_j)}
\]
(6)
\[
\text{MSE}(m_i) = \frac{1}{n} \sum_{j=1}^{n} \left\| \mu(x_j) - y_j \right\|^2
\]
where \( \mu(x_j) = (\mu_1(x_j), ..., \mu_c(x_j)) \), \( y_j = (u_{j1}, ..., u_{jc}) \), and \( M \) is the size of the population.
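Reading Eq. (6) as the candidate's MSE normalized by the population's total MSE (one plausible reading of the formula), the fitness evaluation can be sketched as:

```python
def mse(mu, data, fcm_u):
    """MSE between the candidate memberships mu(x) (a tuple of c degrees)
    and the FCM membership rows fcm_u (the MSE term above)."""
    return sum(sum((m - u) ** 2 for m, u in zip(mu(x), row))
               for x, row in zip(data, fcm_u)) / len(data)

def fitness(candidates, data, fcm_u):
    """Population-normalized fitness: F(m_i) = MSE(m_i) / sum_j MSE(m_j)."""
    errs = [mse(mu, data, fcm_u) for mu in candidates]
    total = sum(errs)
    return [e / total for e in errs]

# a hypothetical toy example with c = 2 fuzzy sets and n = 2 data points
data = [0.0, 1.0]
fcm_u = [(1.0, 0.0), (0.0, 1.0)]               # FCM membership degrees y_j
candidates = [lambda x: (1.0 - x, x),           # matches the FCM degrees exactly
              lambda x: (0.5, 0.5)]             # ignores the data
F = fitness(candidates, data, fcm_u)
```

The exact candidate gets fitness 0 (the best value when minimizing), and the fitness values of the population sum to 1.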
Table 5: Variation intervals of a chromosome \( m_i \) associated to Gaussian membership functions
<table>
<thead>
<tr>
<th>Gene</th>
<th>Variation interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>center \( m_i^1 \)</td>
<td>\( \left[ \min(X), c_1 + \frac{c_2 - c_1}{2} \right] \)</td>
</tr>
<tr>
<td>centers \( m_i^{2l-1} \), \( 2 \leq l \leq c-1 \)</td>
<td>\( \left[ c_{l-1} + \frac{c_l - c_{l-1}}{2}, c_l + \frac{c_{l+1} - c_l}{2} \right] \)</td>
</tr>
<tr>
<td>center \( m_i^{K-1} \)</td>
<td>\( \left[ c_{c-1} + \frac{c_c - c_{c-1}}{2}, \max(X) \right] \)</td>
</tr>
<tr>
<td>widths \( m_i^{2l} \), \( 1 \leq l \leq c \)</td>
<td>\( [0, L_l] \), where \( L_l \) is the length of the variation interval of the previous gene \( m_i^{2l-1} \)</td>
</tr>
</tbody>
</table>
For the three genetic operators (selection, crossover and mutation), we use operators specific to Real Coded Genetic Algorithms:
• **Selection:** The linear ranking is used as a selection operator (Baker, 1987). Fitness values are first sorted into decreasing order. A chromosome is then randomly selected according to its rank in the population with the probability computed as follows.
\[
P(m_i) = \frac{1}{M} \left( 2 - \eta + 2*(\eta - 1)* \left( \frac{\text{rank}(m_i) - 1}{M - 1} \right) \right)
\]
(7)
where \( M \) is the size of the population, and \( \eta \in [0,1] \).
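The ranking probabilities of Eq. (7) sum to 1 for any \( \eta \); a quick check in Python (the values here are illustrative):

```python
def linear_ranking_probs(M, eta):
    """Selection probability of Eq. (7) for each rank r = 1..M (1 = best)."""
    return [(2 - eta + 2 * (eta - 1) * (r - 1) / (M - 1)) / M
            for r in range(1, M + 1)]

probs = linear_ranking_probs(5, 0.5)
```

With \( \eta < 1 \) the probability decreases linearly with rank, so the best-ranked chromosome is the most likely to be selected.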
• **Crossover:** The line recombination method is considered as a crossover operator (Mühlenbein, 1993). It performs recombination between real coded chromosomes. Let \( P_1 \) and \( P_2 \) be the chromosomes to be crossed, and \( O_1 \) an offspring generated by this operator. \( O_1 \) is constructed gene by gene, and each gene \( O_{1l} \) is the result of combining the genes of the parents according to the expression:
\[
O_{1l} = P_{1l} + \alpha (P_{2l} - P_{1l})
\]
(8)
where \( \alpha \) is a scaling factor chosen uniformly at random, once per pair of parents, in the interval \([-0.25, 1.25]\).
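A sketch of this operator; drawing \( \alpha \) from \([-0.25, 1.25]\) follows the interval commonly given for line recombination:

```python
import random

def line_recombination(p1, p2, rng=random.Random(42)):
    """Offspring gene by gene: o_l = p1_l + alpha * (p2_l - p1_l),
    with a single alpha drawn per pair of parents (Eq. 8)."""
    alpha = rng.uniform(-0.25, 1.25)
    return [g1 + alpha * (g2 - g1) for g1, g2 in zip(p1, p2)]

o = line_recombination([0.0, 0.0], [1.0, 1.0])
```

Because one \( \alpha \) is shared by all genes, the offspring lies on the line through the two parents in chromosome space.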
• **Mutation:** As a mutation operator, we consider the Breeder Genetic Algorithm (Mühlenbein, 1993), which performs a mutation of real coded chromosomes by perturbing each gene \( m_i^l \) of the chromosome \( m_i \) according to the expression:
\[
(m_i^l)' = m_i^l \pm \Delta m_i^l \cdot \delta
\]
(9)
where \( \Delta m_i^l \) is the range of the variation interval associated with \( m_i^l \), the sign (−) or (+) is selected with probability 0.5, and \( \delta \) is a randomly distributed amplitude of the perturbation that favors small values.
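A sketch of this mutation; constructing \( \delta \) as \( \sum_i \alpha_i 2^{-i} \) with sparse \( \alpha_i \in \{0, 1\} \) follows the usual Breeder GA description and is an assumption here:

```python
import random

def bga_mutation(chrom, ranges, rng=random.Random(1), k=16):
    """Breeder GA mutation (Eq. 9): gene' = gene +/- Delta * delta, where
    Delta is the length of the gene's variation interval and
    delta = sum of alpha_i * 2^-i with alpha_i = 1 with probability 1/k,
    so small perturbations are the most likely."""
    out = []
    for g, r in zip(chrom, ranges):
        sign = rng.choice((-1.0, 1.0))          # + or - with probability 0.5
        delta = sum(2.0 ** -i for i in range(k) if rng.random() < 1.0 / k)
        out.append(g + sign * r * delta)
    return out

# two hypothetical genes with variation-interval lengths 1.0 and 2.0
mut = bga_mutation([0.0, 5.0], [1.0, 2.0])
```

Since \( \delta < 2 \), each gene moves by less than twice the length of its variation interval, and usually by much less.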
---
**Figure 6:** Structure of a chromosome associated to Gaussian membership functions.
4. EMPIRICAL RESULTS
This section presents the obtained membership functions when applying the RCGA algorithm to the COCOMO’81 software projects attributes. The calculations were made using a software prototype developed with Matlab under a Microsoft Windows PC environment.
For each software project attribute, we have applied the RCGA algorithm, as designed in the previous section, to the fuzzy clusters generated by the FCM algorithm in order to build their membership functions. The RCGA algorithm is applied with a population size of 300, a mutation probability of 0.9, and 200 generations. For each attribute, the number of membership functions is equal to the number of fuzzy clusters generated by the FCM algorithm with the Xie-Beni criterion (Section 2). Figure 7 and Figure 8 show three different shapes of membership functions associated to the fuzzy sets of the DATA and TIME attributes respectively.
Figure 7: Membership functions associated to the fuzzy sets of the DATA attribute. (a) Trapezoidal. (b) Triangular. (c) Gaussian.
Figure 8: Membership functions associated to the fuzzy sets of the TIME attribute. (a) Trapezoidal. (b) Triangular. (c) Gaussian.
5. CONCLUSION AND FUTURE WORK
In this paper, we have proposed and validated the use of the FCM algorithm and a Real Coded Genetic Algorithm to generate fuzzy sets and their membership functions for software project attributes. The proposed fuzzy sets generation process consists of two main steps. First, we have used the well-known Fuzzy C-Means algorithm (FCM) and the Xie-Beni validity criterion to decide on the number of clusters (fuzzy sets). Second, we have used a Real Coded Genetic Algorithm (RCGA) to build membership functions for these fuzzy sets. Membership functions can be trapezoidal, triangular or Gaussian. This study has used the 13 attributes of the COCOMO'81 dataset.
The obtained fuzzy sets and their membership functions of the 13 attributes of the COCOMO'81 dataset will be used for software cost estimation. Indeed, in some earlier works, we have developed a set of software cost estimation models based on an empirical construction of fuzzy sets (Idri et al., 2000) (Idri et al., 2002). Hence, we are currently investigating the fuzzy sets obtained in this work, in order to compare the accuracy of cost estimation models when using FCM and RCGA rather than empirical knowledge for building fuzzy sets.
3.6 The Cocke-Younger-Kasami parsing algorithm
Recursive descent parsing is a clear and effective method for parsing
LL(1)-grammars; a string of \( n \) symbols can be parsed in \( O(n) \) time.
However, LL(1)-grammars are a fairly limited class; solving the
general parsing problem is not as easy.
In principle the problem could be solved by applying the recursive
descent parsing method, but in practice the large number of
alternatives to be tested becomes a problem (typically \( O(c^n) \) for
some \( c \geq 2 \)).
The Cocke-Younger-Kasami algorithm is a method for parsing strings
in any context-free grammar \( G \). It is based on the general technique
of dynamic programming (tabulating partial solutions). The method
requires \( O(n^3) \) time, where \( n \) is the length of the string to be parsed.
First some grammar transformations are defined.
1. Removing \( \varepsilon \)-productions
Let \( G = (V, \Sigma, P, S) \) be a context-free grammar. The nonterminal
\( A \in V - \Sigma \) is called nullable if \( A \Rightarrow^* \varepsilon \).
Lemma 3.5. Any context free grammar \( G \) can be transformed into an
equivalent grammar \( G' \) where at most the initial symbol is nullable.
Proof. Let \( G = (V, \Sigma, P, S) \). First determine the nullable nonterminals
of \( G \) as follows:
(i) first set
\[
\text{NULL} := \{ A \in V - \Sigma \mid A \rightarrow \varepsilon \text{ is a production of } G \};
\]
(ii) repeat the following expansion of the set \( \text{NULL} \) until it no longer
grows:
\[
\text{NULL} := \text{NULL} \cup \{ A \in V - \Sigma \mid A \rightarrow B_1 \ldots B_k \text{ is a production of } G, \ B_i \in \text{NULL} \text{ for all } i = 1, \ldots, k \}.
\]
Now replace each production \( A \rightarrow X_1 \ldots X_k \) by the set of productions of the form
\[
A \rightarrow \alpha_1 \ldots \alpha_k, \quad \text{where}
\]
\[
\alpha_i = \begin{cases}
X_i, & \text{if } X_i \notin \text{NULL}; \\
X_i \text{ or } \varepsilon, & \text{if } X_i \in \text{NULL}.
\end{cases}
\]
Finally remove all productions of the form \( A \rightarrow \varepsilon \). If the production \( S \rightarrow \varepsilon \) would thereby be removed, we introduce a new initial symbol \( S' \) and the productions \( S' \rightarrow S \) and \( S' \rightarrow \varepsilon \). □
Example. Remove the \( \varepsilon \)-productions from the grammar:
\[
\begin{align*}
S & \rightarrow A \mid B \\
A & \rightarrow aBa \mid \varepsilon \quad (\text{NULL} = \{A, B, S\}) \\
B & \rightarrow bAb \mid \varepsilon
\end{align*}
\]
The construction yields (with the new initial symbol \( S' \)):
\[
\begin{align*}
S' & \rightarrow S \mid \varepsilon \\
S & \rightarrow A \mid B \\
A & \rightarrow aBa \mid aa \\
B & \rightarrow bAb \mid bb.
\end{align*}
\]
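The NULL fixpoint computation can be sketched in Python; encoding productions as (head, body) pairs with an empty body for \( \varepsilon \)-productions is an illustration:

```python
def nullable(productions):
    """Fixpoint computation of NULL = {A | A =>* eps}. `productions` is a
    list of (head, body) pairs, body a tuple of symbols, () for epsilon."""
    null = {a for a, body in productions if body == ()}
    changed = True
    while changed:
        changed = False
        for a, body in productions:
            # a body of nullable symbols makes the head nullable too
            if a not in null and all(s in null for s in body):
                null.add(a)
                changed = True
    return null

# the example grammar: S -> A | B, A -> aBa | eps, B -> bAb | eps
g = [("S", ("A",)), ("S", ("B",)),
     ("A", ("a", "B", "a")), ("A", ()),
     ("B", ("b", "A", "b")), ("B", ())]
```

Running `nullable(g)` reproduces the set NULL = {A, B, S} from the example.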
2. Removing unit productions
A production of the form \( A \rightarrow B \), where \( A \) and \( B \) are nonterminals, is a unit production.
Lemma 3.6. Any context-free grammar \( G \) can be transformed into an equivalent grammar \( G' \) without unit productions.
Proof. Let \( G = (V, \Sigma, P, S) \). First find the “unit successors” of each nonterminal in \( G \) as follows:
(i) at start for each \( A \in V - \Sigma \) let
\[
F(A) := \{ B \in V - \Sigma \mid A \rightarrow B \text{ is a production of } G \};
\]
(ii) repeat the following expansion operation of the \( F \)-sets until they no longer grow:
\[
F(A) := F(A) \cup \bigcup \{ F(B) \mid A \rightarrow B \text{ is a production of } G \}.
\]
After this remove all unit productions from \( G \) and replace them with all possible productions of the form \( A \rightarrow \omega \), where \( B \rightarrow \omega \) is a non-unit production of \( G \) for some \( B \in F(A) \). □
Example. Remove the unit productions from the previous grammar:
\[ S' \rightarrow S \mid \varepsilon \qquad F(S') = \{ S, A, B \}, \; F(S) = \{ A, B \}, \]
\[ S \rightarrow A \mid B \qquad F(A) = F(B) = \emptyset. \]
By replacing the unit productions as described we obtain the grammar:
\[ S' \rightarrow aBa | aa | bAb | bb | \varepsilon \]
\[ S \rightarrow aBa | aa | bAb | bb \]
\[ A \rightarrow aBa \mid aa \]
\[ B \rightarrow bAb \mid bb. \]
(We may note that \( S \) is now “unnecessary”, i.e., it cannot occur in the derivation of any sentence in the grammar. Unnecessary nonterminals can be removed from the grammar by a similar method. Exercise.)
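The \( F \)-set fixpoint of Lemma 3.6 can be sketched likewise, applied to the grammar obtained above (the encoding of productions is an illustration):

```python
def unit_successors(nonterminals, productions):
    """F(A) fixpoint of Lemma 3.6: start with the direct unit successors,
    then repeatedly add F(B) to F(A) for each unit production A -> B."""
    F = {a: {body[0] for h, body in productions
             if h == a and len(body) == 1 and body[0] in nonterminals}
         for a in nonterminals}
    changed = True
    while changed:
        changed = False
        for a in nonterminals:
            for b in list(F[a]):
                if not F[b] <= F[a]:
                    F[a] |= F[b]
                    changed = True
    return F

# the grammar after eps-removal: S' -> S | eps, S -> A | B,
# A -> aBa | aa, B -> bAb | bb
nts = {"S'", "S", "A", "B"}
prods = [("S'", ("S",)), ("S'", ()),
         ("S", ("A",)), ("S", ("B",)),
         ("A", ("a", "B", "a")), ("A", ("a", "a")),
         ("B", ("b", "A", "b")), ("B", ("b", "b"))]
F = unit_successors(nts, prods)
```

The computed sets match the example: F(S') = {S, A, B}, F(S) = {A, B}, F(A) = F(B) = ∅.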
The Chomsky normal form
A context free grammar \( G = (V, \Sigma, P, S) \) is in Chomsky normal form, if no nonterminal other than \( S \) is nullable, and apart from the possible production \( S \rightarrow \varepsilon \) the remaining productions are of the form
\[ A \rightarrow BC \quad \text{or} \quad A \rightarrow a, \]
where \( A, B \) and \( C \) are nonterminals and \( a \) is a terminal.
For simplicity it is additionally required that the initial symbol \( S \) does not appear on the right hand side of any production.
Theorem 3.7. Any context free grammar \( G \) can be transformed into an equivalent grammar \( G' \) that is in Chomsky normal form.
Proof. Let \( G = (V, \Sigma, P, S) \). First remove the \( \varepsilon \)-productions and unit productions from \( G \) by the constructions in Lemmata 3.5 and 3.6. Now all productions in \( G \) are of the form \( A \rightarrow a \) or \( A \rightarrow X_1 \ldots X_k \), \( k \geq 2 \) (or \( S \rightarrow \varepsilon \)).
For each terminal \( a \) add a new nonterminal \( C_a \) and a production \( C_a \rightarrow a \). Then first replace all terminals in productions of the form \( A \rightarrow X_1 \ldots X_k, k \geq 2 \), with the new nonterminals, and then replace the whole production by the set of productions
\[ A \rightarrow X_1 A_1 \]
\[ A_1 \rightarrow X_2 A_2 \]
\[ \vdots \]
\[ A_{k-2} \rightarrow X_{k-1} X_k, \]
where \( A_1, \ldots, A_{k-2} \) are again new nonterminals. \( \square \)
Example. A grammar:
\[ S \rightarrow aBCd \mid bbb \]
\[ B \rightarrow b \]
\[ C \rightarrow c \]
The Chomsky normal form obtained by the previous construction:
\[ S \rightarrow C_a A_1 \mid C_b A_3 \]
\[ A_1 \rightarrow B A_2 \]
\[ A_2 \rightarrow C C_d \]
\[ A_3 \rightarrow C_b C_b \]
\[ B \rightarrow b \]
\[ C \rightarrow c \]
\[ C_a \rightarrow a \]
\[ C_b \rightarrow b \]
\[ C_c \rightarrow c \]
\[ C_d \rightarrow d. \]
The CYK algorithm
Let \( G = (V, \Sigma, P, S) \) be a context-free grammar. By Theorem 3.7 we may assume that \( G \) is in Chomsky normal form. The question whether the string \( x \) is in the language \( L(G) \) can be solved as follows:
If \( x = \varepsilon \), then \( x \in L(G) \) iff \( S \rightarrow \varepsilon \) is a production of \( G \).
Otherwise denote \( x = a_1 \ldots a_n \) and investigate producing various substrings of \( x \).
Let \( N_{ik} \) denote the set of those nonterminals \( A \) from which one can derive the substring of \( x \) that starts at position \( i \) and has length \( k \):
\[ N_{ik} = \{ A \in V - \Sigma \mid A \Rightarrow_G^* a_i \ldots a_{i+k-1} \}, \quad 1 \leq i \leq i+k-1 \leq n. \]
The sets \( N_{ik} \) can be computed by tabulating them from shorter to longer substrings as presented in the following. Clearly \( x \in L(G) \) iff \( S \in N_{1n} \).
Example. A grammar \( G \) in Chomsky normal form:
\[ S \rightarrow AB \mid BC \]
\[ A \rightarrow BA \mid a \]
\[ B \rightarrow CC \mid b \]
\[ C \rightarrow AB \mid a \]
The computation of the CYK algorithm with grammar \( G \) and input \( x = baaba \):
<table>
<thead>
<tr>
<th>\( N_{ik} \)</th>
<th>1:b</th>
<th>2:a</th>
<th>3:a</th>
<th>4:b</th>
<th>5:a</th>
</tr>
</thead>
<tbody>
<tr>
<td>k = 1</td>
<td>B</td>
<td>A, C</td>
<td>A, C</td>
<td>B</td>
<td>A, C</td>
</tr>
<tr>
<td>k = 2</td>
<td>S, A</td>
<td>B</td>
<td>S, C</td>
<td>S, A</td>
<td>−</td>
</tr>
<tr>
<td>k = 3</td>
<td>−</td>
<td>B</td>
<td>B</td>
<td>−</td>
<td>−</td>
</tr>
<tr>
<td>k = 4</td>
<td>−</td>
<td>S, A, C</td>
<td>−</td>
<td>−</td>
<td>−</td>
</tr>
<tr>
<td>k = 5</td>
<td>S, A, C</td>
<td>−</td>
<td>−</td>
<td>−</td>
<td>−</td>
</tr>
</tbody>
</table>
Since the initial symbol \( S \) is in \( N_{15} \), we deduce that \( x \) is in \( L(G) \).
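The tabulation can be sketched in Python; running it on the example grammar reproduces the acceptance of \( x = baaba \):

```python
def cyk(x, productions, start="S"):
    """CYK recognition for a grammar in Chomsky normal form. N[(i, k)] holds
    the nonterminals deriving the substring of length k starting at i."""
    n = len(x)
    N = {}
    for i in range(1, n + 1):   # length-1 substrings: terminal productions
        N[(i, 1)] = {a for a, body in productions if body == (x[i - 1],)}
    for k in range(2, n + 1):   # longer substrings from shorter ones
        for i in range(1, n - k + 2):
            cell = set()
            for a, body in productions:
                if len(body) == 2:
                    B, C = body
                    if any(B in N[(i, j)] and C in N[(i + j, k - j)]
                           for j in range(1, k)):
                        cell.add(a)
            N[(i, k)] = cell
    return start in N[(1, n)]

# the example grammar in Chomsky normal form
grammar = [("S", ("A", "B")), ("S", ("B", "C")),
           ("A", ("B", "A")), ("A", ("a",)),
           ("B", ("C", "C")), ("B", ("b",)),
           ("C", ("A", "B")), ("C", ("a",))]
```

The triple loop over \( k \), \( i \) and the split point \( j \) is what gives the \( O(n^3) \) running time.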
In computing a set \( N_{ik} \) the CYK algorithm runs through the split points \( j = 1, \ldots, k-1 \) and combines the previously computed sets \( N_{ij} \) (down column \( i \)) with the sets \( N_{i+j,k-j} \) (along the diagonal below \( N_{ik} \)): a nonterminal \( A \) belongs to \( N_{ik} \) iff \( G \) has a production \( A \rightarrow BC \) with \( B \in N_{ij} \) and \( C \in N_{i+j,k-j} \) for some \( j \).
### Definition 3.2
A pushdown automaton is a 6-tuple
\[ M = (Q, \Sigma, \Gamma, \delta, q_0, F) \]
where
- $Q$ is the finite set of states;
- $\Sigma$ is the input alphabet;
- $\Gamma$ is the stack alphabet;
- $\delta : Q \times (\Sigma \cup \{\varepsilon\}) \times (\Gamma \cup \{\varepsilon\}) \rightarrow P(Q \times (\Gamma \cup \{\varepsilon\}))$ is the (set-valued) transition function;
- $q_0 \in Q$ is the initial state;
- $F \subseteq Q$ is the set of (accepting) final states.
The interpretation of the value
\[ \delta(q, \sigma, \gamma) = \{(q_1, \gamma_1), \ldots, (q_k, \gamma_k)\} \]
of the transition function is that in state $q$ upon reading the input symbol $\sigma$ and the stack symbol $\gamma$ the automaton may move to one of the states $q_1, \ldots, q_k$ and replace the top element of the stack by one of the symbols $\gamma_1, \ldots, \gamma_k$ respectively. In the general case, pushdown automata are therefore nondeterministic.
If $\sigma = \varepsilon$, the automaton makes a transition without reading an input symbol. If $\gamma = \varepsilon$, the automaton does not read a stack symbol and the new written symbol is put on the top of the stack (a push operation). If the symbol read from the stack is $\gamma \neq \varepsilon$ and the symbol to be written is $\gamma' = \varepsilon$, the element on the top of the stack is removed (a pop operation).
A pushdown automaton is like a finite state automaton, to which a stack of unbounded size has been added. Using the stack is fairly limited: the automaton may read, write, remove or add symbols only at the top of the stack.
The configuration of the automaton is the triple \((q, w, \alpha) \in Q \times \Sigma^* \times \Gamma^*\); in particular, the initial configuration with input \(x\) is the triple \((q_0, x, \varepsilon)\).
Intuition: in configuration \((q, w, \alpha)\) the automaton is in state \(q\), the remaining part of the input string is \(w\), and the stack, read from top to bottom, contains the string \(\alpha\).
The configuration \((q, w, \alpha)\) directly leads to the configuration \((q', w', \alpha')\), denoted by
\[
(q, w, \alpha) \vdash_M (q', w', \alpha'),
\]
if we may write \( w = \sigma w' \), \( \alpha = \gamma \beta \), \( \alpha' = \gamma' \beta \) \((|\sigma|, |\gamma|, |\gamma'| \leq 1)\), such that
\[
(q', \gamma') \in \delta(q, \sigma, \gamma).
\]
Example. A pushdown automaton for the language \(\{a^k b^k \mid k \geq 0\}\):
\[ M = (\{q_0, q_1, q_2\}, \{a, b\}, \{A\}, \delta, q_0, \{q_0, q_2\}), \]
where
\[
\begin{align*}
\delta(q_0, a, \varepsilon) &= \{(q_1, A)\}, \\
\delta(q_1, a, \varepsilon) &= \{(q_1, A)\}, \\
\delta(q_1, b, A) &= \{(q_2, \varepsilon)\}, \\
\delta(q_2, b, A) &= \{(q_2, \varepsilon)\}, \\
\delta(q, \sigma, \gamma) &= \emptyset \quad \text{for other } (q, \sigma, \gamma).
\end{align*}
\]
A graph presentation: (state diagram omitted; the middle state loops on \( a, \varepsilon \rightarrow A \) and the final state loops on \( b, A \rightarrow \varepsilon \))
The computation of the automaton with input \(aabb\):
\[
(q_0, aabb, \varepsilon) \vdash (q_1, abb, A) \vdash (q_1, bb, AA) \vdash (q_2, b, A) \vdash (q_2, \varepsilon, \varepsilon).
\]
Since \(q_2 \in F\) and both the input and the stack are empty, we have \(aabb \in L(M)\).
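A small breadth-first simulator makes the nondeterministic behaviour concrete. The acceptance test (input consumed, stack empty, final state) and the three-state transition table are assumptions matching the example above, and the sketch only terminates for automata whose \( \varepsilon \)-moves cannot loop:

```python
from collections import deque

def npda_accepts(delta, q0, finals, x):
    """Breadth-first simulation of a nondeterministic pushdown automaton.
    delta maps (state, input symbol or '', stack top or '') to a set of
    (state, pushed string) pairs; '' plays the role of epsilon. Accept when
    the input is consumed, the stack is empty and the state is final."""
    todo, seen = deque([(q0, x, "")]), set()
    while todo:
        q, w, st = todo.popleft()
        if (q, w, st) in seen:
            continue
        seen.add((q, w, st))
        if not w and not st and q in finals:
            return True
        for sigma in ({w[0], ""} if w else {""}):      # read a symbol or not
            for gamma in ({st[0], ""} if st else {""}):  # pop a symbol or not
                for q2, push in delta.get((q, sigma, gamma), ()):
                    todo.append((q2, w[1:] if sigma else w,
                                 push + (st[1:] if gamma else st)))
    return False

# a three-state PDA for { a^k b^k | k >= 0 } (a variant of the example above)
delta = {("q0", "a", ""): {("q1", "A")},
         ("q1", "a", ""): {("q1", "A")},
         ("q1", "b", "A"): {("q2", "")},
         ("q2", "b", "A"): {("q2", "")}}
```

The simulator explores every choice of the transition function, mirroring the nondeterminism of the definition.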
Pushdown automata and context-free languages
Theorem 3.8. A language is context-free if and only if it can be recognized by some (nondeterministic) pushdown automaton. □
The proof is omitted here, but the principle in constructing the pushdown automaton M_G that corresponds to the given grammar G is:
1. The stack of the automaton is used to simulate the leftmost derivation;
2. If the top element of the stack is a non-terminal, some production of G is applied and the corresponding symbols pushed onto the stack;
3. If the topmost element is a terminal, it is matched with the next input symbol.
Example. The pushdown automaton that corresponds to the grammar \( \{ S \rightarrow aSbS \mid bSaS \mid \varepsilon \} \):
\[
\begin{align*}
S & \Rightarrow aSbS \\
& \Rightarrow abSaSbS \\
& \Rightarrow abaSbS \\
& \Rightarrow ababS \\
& \Rightarrow abab
\end{align*}
\]
For example there is an accepting computation for the input \( abab \):
\[
(q_0, abab, \varepsilon) \vdash (q, abab, S\#) \vdash (q, abab, aSbS\#) \vdash (q, bab, SbS\#) \vdash (q, bab, bSaSbS\#) \vdash (q, ab, SaSbS\#)
\]
\[
\vdash (q, ab, aSbS\#) \vdash (q, b, SbS\#) \vdash (q, b, bS\#) \vdash (q, \varepsilon, S\#) \vdash (q, \varepsilon, \#) \vdash (q, \varepsilon, \varepsilon).
\]
This corresponds to the leftmost derivation of \( abab \) in the grammar:
\[
S \Rightarrow aSbS \Rightarrow abSaSbS \Rightarrow abaSbS \Rightarrow ababS \Rightarrow abab.
\]
A pushdown automaton \( M \) is deterministic, if every configuration \((q, w, \alpha)\) has at most one possible immediate successor \((q', w', \alpha')\) for which
\[
(q, w, \alpha) \vdash_M (q', w', \alpha').
\]
Unlike for finite state automata, nondeterministic pushdown automata are strictly more powerful than deterministic ones. For example the language \( \{ ww^R \mid w \in \{a, b\}^* \} \) can be recognized by a nondeterministic, but not by a deterministic, pushdown automaton (proof omitted).
A context-free language is deterministic, if it can be recognized by a deterministic pushdown automaton. Deterministic languages can be parsed in \( O(n) \) time; general context-free languages require \( O(n^3) \) time with the standard methods.
Utah Preschool Outcomes Data System
Sandeep Venigalla
Utah State University
Recommended Citation
Venigalla, Sandeep, "Utah Preschool Outcomes Data System" (2011). All Graduate Plan B and other Reports. 81.
https://digitalcommons.usu.edu/gradreports/81
UTAH PRESCHOOL OUTCOMES DATA SYSTEM
by
Sandeep Venigalla
A report submitted in partial fulfillment
of the requirements for the degree
of
MASTER OF SCIENCE
in
Computer Science
Approved:
_______________________ _______________________
Dr. Stephen W. Clyde Dr. Curtis Dyreson
Major Professor Committee Member
_______________________
Dr. Stephen J. Allan
Committee Member
UTAH STATE UNIVERSITY
Logan, Utah
2011
ABSTRACT
UTAH PRESCHOOL OUTCOMES DATA SYSTEM
by
Sandeep Venigalla
Utah State University, 2011
Major Professor: Dr. Stephen W. Clyde
Department: Computer Science
In the State of Utah, both state and federal government agencies work together to provide special education services to eligible children. An important part of this effort is to document the effectiveness of the program, which can be measured in terms of the outcomes, namely, the progress of the individual children. This information can help identify both strengths and weaknesses of special education programs within the state, which in turn can lead to program improvements and better allocation of resources.
This report describes a software system that supports the tracking of child and program outcomes for special education within Utah. Specifically, it provides an overview of the project, the motivation behind the software system, the new technologies used in development, and suggestions for future work.
(37 pages)
ACKNOWLEDGMENTS
I thank Dr. Stephen Clyde for helping me throughout my graduate career and providing his valuable support to me. He not only gave me the technical knowledge but also the inspiration to carry out my work. Dr. Clyde has always taught me good software practice and design, and elegant programming principles.
I am grateful to my committee members, Dr. Curtis Dyreson and Dr. Steve Allan, for their interest in this project and their valuable guidance.
I thank Brian Smith (Vitruvian framework developer) for his co-operation and help. The use of the Vitruvian framework significantly reduced the development time for this project.
I also thank my mother and sister who have always supported me in many ways during my time in school.
Sandeep Venigalla
# CONTENTS
ABSTRACT
ACKNOWLEDGMENTS
1 INTRODUCTION
1.1. Introduction
1.2. Overview of Utah Preschool Outcomes Data System
2 SYSTEM ANALYSIS
2.1. Introduction
2.2. User Goals
2.3. Functional Requirements
2.4. Structural Analysis
3 ARCHITECTURAL DESIGN
4 IMPLEMENTATION DETAILS
4.1. Introduction
4.2. Introduction to Vitruvian DBObjects
4.3. Experience with and Improvement of DBObjects
4.4. Implementation Details and Challenges
5 SOFTWARE TESTING
5.1. Introduction
5.2. Unit Testing
5.3. Integration Testing
5.4. System Testing
5.5. User Acceptance Testing
6 CONCLUSION AND FUTURE WORK
REFERENCES
LIST OF FIGURES
Figure 1: Relationship among BTOTS, TEDI, and Utah Preschool Outcomes Data System
Figure 2: Actors in the system
Figure 3: Use-case - Manage users and teachers
Figure 4: Use-case - Manage assessments and generate reports
Figure 5: Use-case - Child transfers
Figure 6: Analysis level class diagram for the system
Figure 7: Architecture diagram of UPOD system
Figure 8: Relationships among user interface, DBObjects, and database
CHAPTER 1
INTRODUCTION
1.1. Introduction
A child develops many basic life skills during his or her first five years, such as walking, eating, playing, and interacting with others. Some children have disabilities or developmental delays with respect to these basic skills. Children at risk of developing such disabilities are referred by their physicians to participate in an Early Intervention (EI) program [10], which is provided by most states in the USA and aims to meet the special education needs of these children. In Utah, intervention for children ages 0-2 years, known as EI Part C, is under the purview of the Utah Department of Health (UDOH).
Some children only need EI Part C services to catch them up to their peers. Other children, specifically those of ages 3-4 years, need additional services to help them get ready for kindergarten. These children enroll in the special education program provided by the Utah State Office of Education (USOE)\(^1\), also referred to as EI Part B. Children can enroll in EI Part B even if they did not use EI Part C services.
Each child who receives special education and related services has an individualized education program (IEP). As indicated by the name, an IEP is individualized for a child’s specific needs. Tracking the child’s progress is then customized to match the plan and measure his or her skill-level relative to average
\(^1\) This special education program is funded by both state and federal governments.
children of the same age. This information can help educators ensure that the IEP is effective and adjust it, if necessary.
This report describes a web application, called Utah Preschool Outcomes Data (UPOD), for tracking the progress of a child during the implementation of the child’s IEP. This web application is accessible to all special education teachers, supervisors, center staff, and state staff throughout the state. However, the features and level of child information that they can access varies.
1.2. Overview of Utah Preschool Outcomes Data System
The Baby and Toddler Online Tracking System (BTOTS) keeps track of the children receiving EI Part-C services. The data about children who are likely to need preschool special education services are sent from BTOTS to a system within the USOE. This system, called the Transition from Early-Intervention to Preschool Data Input System (TEDI), is responsible for tracking children already in the EI Part-C system who are entering EI Part-B, up to the time the implementation of their IEP is started, i.e., when they start receiving special education from EI Part-B. Once a child starts receiving special education from EI Part-B, the information is loaded into UPOD, and teachers start to track that child's progress. Figure 1 depicts the relationships among these three systems.
Figure 1: Relationship among BTOTS, TEDI and Utah Preschool Outcomes Data System.
The children in TEDI are not immediately admitted into the special education program, but are first evaluated for a suspected disability. The evaluation is undertaken by a group of qualified professionals and parents, who follow the guidelines defined by the Individuals with Disabilities Education Act 1997 (IDEA)\(^1\) to determine whether the child has a disability as defined by IDEA.
If the child is found to have a disability as defined by IDEA, the teachers, special education teachers, and parents meet to write an IEP. Once this is done, the implementation of the IEP is started.
\(^1\) Individuals with Disabilities Education Act 1997 (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education and related services to more than 6.5 million eligible infants, toddlers, children and youth with disabilities [1].
The progress of the child during the implementation of IEP is tracked by the Utah Preschool Outcomes Data (UPOD) system. Special-education teachers evaluate the skills of the child at various stages of the IEP implementation by performing assessments. To support these activities, UPOD provides three general types of assessment:
- **Entry Assessment:** This assessment is done prior to the start of IEP. Performing this assessment is mandatory.
- **Intermediate Assessment:** This is an optional assessment that can be performed by the teacher at any time during the implementation of the IEP.
- **Exit Assessment:** This is done after the implementation of IEP. If the IEP is fully implemented, the child should have an exit assessment.
Each assessment has three or more outcomes that correspond to a set of skills. For each outcome, a teacher rates the child on a scale of 1-7 according to a standard rubric that relates the child’s current skill level to those of average children of the same age. The teacher must also record mechanisms used to determine the ratings. These are called the rating sources. Currently, the national standard for assessment requires tracking of the following three outcomes [2]:
- **Positive social relationship:** This outcome rates the child’s ability to relate with other children as well as with adults, follow group rules, and generally interact with others.
- **Knowledge and skills:** This outcome rates the child’s ability to reason, remember, solve problems, understand symbols, and understand both the physical and social world.
- **Action needs:** This outcome rates the child’s ability to take care of basic needs, use tools, move from place to place, and contribute to his own health and safety.
More outcomes may be included in an assessment as needed. So, UPOD must support the addition of new types of outcome and rating sources.
Also, UPOD needs to track children through multiple, non-contiguous, or relocated episodes of service. For example, if the child moves to a new center within the state, the IEP execution is continued at the new center, and assessments are transferred to the new center. However, if the child moves out of state and then later returns, s/he exits the program and re-enrolls when he/she moves back.
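To make the assessment structure concrete, the following is a minimal sketch, written in Python for brevity (the real system is implemented in C#, and all class and field names here are illustrative assumptions, not UPOD's actual schema):

```python
from dataclasses import dataclass, field

# Illustrative model of the assessment structure described above.
# Names are hypothetical, not from the actual UPOD implementation.

VALID_TYPES = {"entry", "intermediate", "exit"}

@dataclass
class OutcomeScore:
    outcome: str                 # e.g., "Positive social relationship"
    rating: int                  # 1-7, per the standard rubric
    sources: list = field(default_factory=list)  # rating sources used

    def __post_init__(self):
        if not 1 <= self.rating <= 7:
            raise ValueError("rating must be on the 1-7 scale")
        if not self.sources:
            raise ValueError("each outcome needs at least one rating source")

@dataclass
class Assessment:
    kind: str                    # "entry", "intermediate", or "exit"
    outcomes: list               # three or more OutcomeScore objects

    def __post_init__(self):
        if self.kind not in VALID_TYPES:
            raise ValueError("unknown assessment type")
        if len(self.outcomes) < 3:
            raise ValueError("an assessment has three or more outcomes")
```

Because more outcome types may be added later, the outcomes are held in a list rather than as three fixed fields.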
CHAPTER 2
SYSTEM ANALYSIS
2.1. Introduction
This chapter documents the requirements for UPOD using Unified Modeling Language\(^1\) (UML) diagrams and functional requirements. The UML diagrams include use-case diagrams and class diagrams. The use-case diagrams in Section 2.2 provide software developers with a high-level overview of who uses UPOD, as well as those users’ goals. The functional requirements listed in Section 2.3 expand on these goals with a more detailed description, including constraints on the system’s features and behavior. Finally, the class diagrams in Section 2.4 describe the key objects in the system and their relationships to each other from an analysis perspective. This information is provided to help the developer solidify his/her understanding of system components and thus set the stage for database, business logic, and user interface design.
\(^1\) The *Unified Modeling Language*, or UML, provides industry standard mechanisms for visualizing, specifying, constructing and documenting software systems [11]. It includes use-case diagrams, class diagrams, interaction diagrams, state charts, activity charts, and more. Readers who are unfamiliar with UML can refer to any of the many textbooks on the subject, or the official specification published by the *Object Management Group* (OMG).
2.2. User Goals
A use-case defines the interactions between external actors and the system under consideration to accomplish a goal. An actor specifies a role played by a person or system while interacting with the system [3]. There are four types of actors in UPOD: state user, local education agency (LEA) user, program center user, and teacher. See Figure 2. A state user can manage all the children, teachers, and users in the system, but cannot see certain private assessments. An LEA user has access to child, teacher, and user accounts in his/her LEA. Similarly, a program center user can manage the children, teachers and users in his/her program center. A teacher can only manage the information of the children in his/her program center.
Figure 2: Actors in the system.
The use-case diagram in Figure 3 documents the management of teachers and users in the system. State users, LEA users, and program center users can manage teachers and users. Teachers may or may not be associated with a user account. If the teacher has a user account, the management of the teacher extends the management of the associated user account and vice versa.
Teachers without a user account exist in the system so that the assessments can be attributed to them. Consequently, teachers can never be deleted. They can only be set to inactive. However, users can be deleted.
Figure 3: Use-Case - Manage users and teachers.
The use-case diagram in Figure 4 documents the user goals relative to assessments and reports. Teachers, program center users, and LEA users can view, add, and edit assessments for the children. State users can only view assessments. This is because state users are not involved in performing assessments.
Figure 4: Use-Case - Manage assessments and generate reports.
The use-case diagram in Figure 5 documents the user goals associated with transferring children. A state user can transfer any child in the system. However, an LEA user, program center user, or a teacher can only transfer a child they have access to. This means if an LEA user wants to transfer a child into his/her LEA from a different LEA, he/she must place a transfer-in request. This request is then approved or rejected by a state user. Similarly, if a program center user or a teacher wants to transfer-in a child from outside the program center, a transfer-in request is required. If the child is in the same LEA, the LEA user can process the transfer. Otherwise, the state user must process the transfer.
Figure 5: Use-Case - Child transfers.
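The transfer rules above can be summarized as a small decision function. The following Python sketch is purely illustrative (the role names and return values are hypothetical, and the real system is written in C#):

```python
# Decide how a transfer is handled, following the rules described above:
# state users transfer any child directly; other users transfer directly
# only if they can already view the child; otherwise a transfer-in request
# is processed by the LEA user (same LEA) or by a state user (different LEA).

def transfer_action(requester_role, requester_lea, child_lea, child_visible):
    if requester_role == "state":
        return "direct"              # a state user can transfer any child
    if child_visible:
        return "direct"              # the user already has access to the child
    if requester_role in ("center", "teacher") and requester_lea == child_lea:
        return "request:lea"         # the LEA user can process this request
    return "request:state"           # a state user must process the request
```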
2.3. Functional Requirements
Functional requirements describe the services the system should provide. Sometimes the functional requirements state what the system should not do. Functional requirements can be high-level and general or detailed, expressing inputs, outputs, exceptions, and so on [4]. Since the analysis and design are dependent on the functional requirement specifications, having complete and accurate functional requirements is very important. Having an incorrect or incomplete set of requirements can significantly increase the time and effort required to develop a system, in this case the Utah Preschool Outcomes Data System.
The functional requirements for this project are as follows:
1. **Usability:** The system should provide an easy to use interface for managing child data, assessments, users, program centers, and teachers. It should also provide for the creation of reports.
2. **Data integrity:** The system should validate all user entered data. It should prompt for any missing information and show easy to understand error messages when necessary.
3. **Security:** Access to the system should be restricted to authorized users with a valid username and password. The system should be designed to deal with hackers and accidental loss of information.
4. **Users:** The following types of users should be supported:
4.1. **State users:** A state user has access to any child, user, or teacher in the system.
4.2. **LEA users:** An LEA user has access to any child, user, or teacher in his/her LEA.
4.3. **Program center users:** A program center user has access to any child, user, or teacher in his/her program center.
4.4. **Teacher:** A teacher can only access a child in his/her program center.
5. **Teachers:**
5.1. The system must be able to handle multiple teachers in a program center.
5.2. Maintaining a direct relationship between a child and a teacher is not required. An assessment must be related to a specific teacher. This is important for training teachers and attributing an abnormal assessment to a specific teacher.
6. **User and teacher management:**
6.1. The system should allow users to add, modify, and delete users and teachers.
6.2. A teacher may or may not have a user account.
7. **Entry:** A child can enter the system in two ways:
7.1. The child information is loaded into the Utah Preschool Outcomes Data System from the TEDI database; or
7.2. A new child can be added by LEA users, program center users, and teachers.
8. **Child information:**
8.1. The system should store first name, last name, middle initial, birth date, gender, SSID, and LEA student number.
8.2. SSID should be visible to state users only.
9. **Program centers:**
9.1. The system should provide for the management of program centers.
9.2. The program centers in an LEA are managed by LEA users.
9.3. State users can manage all program centers.
9.4. A program center can never be deleted. It is made inactive when no longer needed.
10. **Transfer:**
10.1. When a child is transferred, another program center takes over the enrollment.
10.2. All existing assessments go to the new program center with the child. So, only one enrollment per child is required.
10.3. LEA users, program center users, and teachers should be able to transfer a child to another LEA or to a program center within the same LEA. However, they should not be able to transfer a child to a program center outside their LEA.
10.4. A state user should be able to transfer a child to an LEA but not to a specific program center.
10.5. LEA users, program center users, and teachers should be able to place a transfer-in request for a child they need to transfer but cannot view.
10.6. If a child is out of state for more than six months, the child is exited from the system, and the data goes into a report.
10.7. If the child returns, the original enrollment is considered to be continued. The period for which the child is to be gone will be specified in the notes.
11. Child exit:
11.1. A user who can access a child can exit the child from the program.
11.2. A child can exit without an exit assessment. If the child exits without an exit assessment, this information is captured in the exit reason.
11.3. The exit reason and LEA student number should always be captured before a child exits.
12. Child conflict:
12.1. A child conflict alert is generated when a new child’s first name, middle initial, last name, gender, and birth date match those of an existing child.
12.2. When a user adds a new child with a conflict, the system should display a warning to the user about a possible conflict. If the child is accessible to the user, s/he should be able to view the child. Else, the user should be able to place a transfer-in request for the child.
12.3. Child conflict alerts should be viewed and resolved by state users only.
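As an illustration of requirement 12.1, the conflict check could be sketched as follows (Python is used for brevity; the field names and normalization are assumptions, not the actual implementation):

```python
# A new child conflicts with an existing child when first name, middle
# initial, last name, gender, and birth date all match (requirement 12.1).

def conflict_key(child):
    """Normalize the identifying fields used for conflict detection."""
    return (child["first_name"].strip().lower(),
            child["middle_initial"].strip().lower(),
            child["last_name"].strip().lower(),
            child["gender"],
            child["birth_date"])

def find_conflicts(new_child, existing_children):
    """Return all existing children whose identifying fields match."""
    key = conflict_key(new_child)
    return [c for c in existing_children if conflict_key(c) == key]
```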
13. Disabilities:
13.1. The disabilities of a child are independent of enrollment.
13.2. Users should be able to choose from a standard set of disabilities.
13.3. Two sets of disabilities are captured for a given child:
13.3.1. **Disabilities on entry:** This is the set of disabilities the child has on entry into the preschool special education program.
13.3.2. **Current disabilities:** This is the current list of child’s disabilities.
14. Assessments:
14.1. The system supports entry, intermediate, and exit assessments.
14.2. Teachers, program center users, and LEA users should be able to add, edit, and view assessments.
14.3. A state user, however, should be able to view only entry and exit assessments, not intermediate assessments.
14.4. Each assessment consists of three outcome scores and their sources.
14.5. Outcome score is rated on a 1-7 scale. Users can select from predefined outcome scores and specify the sources.
14.6. The user should be able to enter an assessment date and select a teacher by name.
14.7. For intermediate and exit assessments, the user must specify if progress has been made for each of the outcomes.
14.8. Each child can have a maximum of one entry assessment and one exit assessment. However, there can be multiple intermediate assessments.
14.9. An entry assessment must be entered before an intermediate assessment or an exit assessment is entered.
14.10. For an intermediate assessment, the user can specify if it is “reportable” or “non-reportable”. A non-reportable assessment should not be visible to state users, nor should it affect reports.
14.11. A user needs the ability to enter the sources of information for each outcome in an assessment. The source can be selected from a predefined list and/or be entered as text.
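Requirements 14.8 and 14.9 amount to a simple cardinality and ordering rule, sketched below in Python (illustrative only; the function name and representation are assumptions):

```python
# Enforce requirements 14.8-14.9: at most one entry and one exit assessment
# per child, any number of intermediates, and an entry assessment must
# exist before an intermediate or exit assessment can be added.

def can_add_assessment(kind, existing_kinds):
    has_entry = "entry" in existing_kinds
    if kind == "entry":
        return not has_entry                 # 14.8: at most one entry
    if not has_entry:
        return False                         # 14.9: entry must come first
    if kind == "exit":
        return "exit" not in existing_kinds  # 14.8: at most one exit
    return kind == "intermediate"            # intermediates are unlimited
```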
2.4. Structural Analysis
Object-oriented analysis (OOA) looks at the problem domain, with the aim of producing a conceptual model of the information that exists in the area being analyzed. Analysis models do not consider any implementation constraints that might exist, such as concurrency, distribution, persistence, or how the system is to be built. Implementation constraints are dealt with during object-oriented design (OOD) [5].
Figure 6 shows the analysis-level class diagram for the Utah Preschool Outcomes Data System. The class Child represents a child enrolled in the preschool special education program. This class stores the child’s personal information such as name, SSID\(^1\), etc., and information about child’s enrollment status in a special education program.
A child has to be in a program center to receive special education services. However, the child sometimes can be in an LEA awaiting program center allotment. In such cases, the Child class has to be associated with both a program center and an LEA. The information about transfers of any child between LEAs and program centers is captured by the Transfer History class.
---
\(^1\) Utah SSID, or Utah Statewide Student Identifier System, is an identification number assigned to each child by the Utah State Office of Education (USOE). The USOE maintains a master SSID database of all students who enroll in a public school, along with a few primary attributes (last name, first name, middle name, DOB, gender, school number and LEA student IDs) and a history of the districts and schools in which the student has been enrolled [6].
The child’s developmental delay or disability is captured by the Disability class. There is a predefined set of disabilities that can be attributed to the child. Two sets of disabilities are recorded for each child:
1. Disabilities on entry.
2. Current disabilities.
Teachers assess a child’s abilities from time to time. These assessments are classified as entry assessments, intermediate assessments, and exit assessments. Each assessment currently has three outcome scores – one for each outcome type. More outcome types are a possible enhancement to the system in the future. The teacher performing an assessment relies on various information sources for arriving at the outcome scores. Each outcome score has to include at least one outcome score source to capture the source of the information. There is a pre-defined set of outcome score sources. A category called “other” allows the users to enter text in the event that the pre-defined set proves insufficient.
An LEA user, program center user, or a teacher can enter an assessment into the system. However, only a teacher can perform an assessment. So, the assessment is always associated with a specific teacher. A relation between a child and a teacher is not required to associate an assessment with a teacher.
CHAPTER 3
ARCHITECTURAL DESIGN
The architecture of a program or computing system is the structure or structures of the system, comprising software components, the externally visible properties of those components, and the relationships between them. Documenting software architecture facilitates communication between stakeholders, documents early decisions about high-level design, and allows reuse of design components and patterns between projects [7].
The architecture of the UPOD system is depicted in Figure 7. It consists of three layers: presentation layer, application layer, and domain layer.
The presentation layer contains all GUI classes and has forms for managing all the data and generation of reports. This package uses .NET WebForms for most of the forms. The only exception is the form to add/view/edit assessments. It uses a custom control given that .NET WebForms does not have the required user interface features.
The application package contains the following components:
1. **Data transfer service**: This is a Windows service responsible for transfer of data between the TEDI and UPOD systems.
2. **Data transfer utilities**: This is a collection of utility classes used by the data transfer service for accessing the database and transferring data.
3. **User interface utilities**: This is a collection of classes used by the presentation layer. It contains classes for manipulating the DBObjects retrieved from the database and for data validation.
4. **Report generation utilities**: This is a collection of classes to be used for generating reports. Currently, report generation is not implemented, but a future goal is to use the PDF report-generation features in Vitruvian to do so.
The domain layer contains the DBObjects and DBLists corresponding to the tables in the database. It is a mapping of the relational database to the object model and is generated by using a wizard provided with Vitruvian.
All the components in the system use features from the Vitruvian framework and .NET framework.
Figure 7: Architecture diagram of UPOD system.
CHAPTER 4
IMPLEMENTATION DETAILS
4.1. Introduction
To implement the UPOD System, we used C#, .NET Framework Version 3.5, and Sybase Adaptive Server Enterprise (ASE) 15.0.3. We also used Vitruvian’s DBObjects for database support. USOE stipulates that Sybase must be the database manager since it uses Sybase for most of its other information systems.
4.2. Introduction to Vitruvian DBObjects
One of the problems encountered when mapping an object-oriented language, such as Java or C++, to a declarative language, like SQL, is impedance mismatch. Impedance mismatch is caused by the fact that one object in the application can contain data from multiple tables and multiple rows within a table [8].
There are several techniques for overcoming impedance mismatch. Typically, the developer writes classes for each of the tables or for each of the required objects. Doing so involves writing hundreds, possibly thousands, of lines of code. This process is error prone and therefore requires writing a lot of test cases. And then there is the added problem of maintaining the classes and their test cases.
Using object-relational mapping (ORM) is a more streamlined approach to overcoming impedance mismatch. ORM is a programming technique for converting data between incompatible type systems in relational databases and object-oriented
programming languages. This creates, in effect, a virtual object database that can be used from within the programming language [9].
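The idea can be illustrated with a minimal example (a generic Python/SQLite sketch, not how Vitruvian is implemented): the mapping layer turns rows of a relational table into objects, so the rest of the application works with objects instead of SQL.

```python
import sqlite3

# A hand-rolled sliver of object-relational mapping: rows of a "child"
# table become Child objects. Table and class names are illustrative.

class Child:
    def __init__(self, child_id, first_name, last_name):
        self.child_id = child_id
        self.first_name = first_name
        self.last_name = last_name

def load_children(conn):
    rows = conn.execute("SELECT child_id, first_name, last_name FROM child")
    return [Child(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE child (child_id INTEGER, first_name TEXT, last_name TEXT)")
conn.execute("INSERT INTO child VALUES (1, 'Ada', 'Lovelace')")
children = load_children(conn)   # application code now sees objects, not rows
```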
We used Vitruvian DBObjects for ORM. The use of DBObjects minimizes, and in some cases eliminates, the need to access the database directly. The database is represented and maintained by DBObjects. Data transfers to and from the user interface are handled by DBObjects (See Figure 8). The DBObjects take care of reading and updating the database.
**Figure 8: Relationships among user interface, DBObjects, and database.**
Important features of DBObjects include the following:
1. Automatically generate classes for tables and views in the database.
2. Avoid or minimize writing SQL for create, read, update, and delete (CRUD) operations.
3. Navigate between related objects.
4. Lazy loading of objects.
5. Specify filters and sort order for loading DBObject lists from the database.
Vitruvian provides a wizard for generating DBObject classes and DBObject list classes for tables and/or views. Properties are generated in the classes for each of the columns in the corresponding tables/views. The relationships between tables are captured as properties in either or both of the related classes. One-to-one relationships are represented as DBObjects, while one-to-many relationships are represented as DBObject lists. The
wizard allows us to choose the relationships to be represented in the generated classes. The user can customize the names of classes, their properties, and relationships.
Vitruvian provides the following methods for using the DBObjects:
1. **Load()**: Load the data into the DBObject or DBList. Data can be filtered before loading.
2. **Reload()**: Load the new set of data from database.
3. **Save()**: Save the DBObject to the database.
4. **Delete()**: Delete the DBObject from the database.
5. **ResetValues()**: Reset the values (i.e., all properties) of a DBObject.
6. **RelationalSave()**: Save the DBObject and the children tables of the current DBObjects.
7. **RelationalDelete()**: Delete the DBObject and the children tables of the current DBObjects.
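The flavor of this pattern can be approximated with a rough Python analogue (purely illustrative; Vitruvian's actual generated C# classes and method signatures differ): one class per table, an attribute per column, and persistence methods that hide the SQL.

```python
import sqlite3

# A toy active-record base class in the spirit of DBObjects: subclasses
# declare their table and columns, and load/save are inherited behavior.
# This is a sketch under assumed names, not Vitruvian's implementation.

class DBObject:
    table = None
    columns = ()          # the first column is assumed to be the primary key

    def __init__(self, **values):
        for col in self.columns:
            setattr(self, col, values.get(col))

    def load(self, conn, key):
        cols = ", ".join(self.columns)
        row = conn.execute(
            f"SELECT {cols} FROM {self.table} WHERE {self.columns[0]} = ?",
            (key,)).fetchone()
        for col, val in zip(self.columns, row):
            setattr(self, col, val)
        return self

    def save(self, conn):
        placeholders = ", ".join("?" for _ in self.columns)
        conn.execute(
            f"INSERT OR REPLACE INTO {self.table} VALUES ({placeholders})",
            tuple(getattr(self, c) for c in self.columns))

class Teacher(DBObject):
    table = "teacher"
    columns = ("teacher_id", "name", "active")
```

With such a base class, application code calls `save` and `load` and never writes CRUD SQL by hand, which is the convenience the real DBObjects provide at much greater depth.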
### 4.3. Experience with and Improvement of DBObjects
This section describes my experience with DBObjects and the improvements I made. I discovered the following problems and bugs and reported them to Brian Smith (the Vitruvian developer) so they could be resolved.
Vitruvian initially only supported the use of a persistent connection to the database. This technique is efficient and removes the overhead associated with establishing a connection each time we access the database. However, if the connection to the database failed for some reason, Vitruvian had problems reconnecting to the database. I reported the problem to Brian, and he provided a new way to connect to the database on demand.
Vitruvian caches the recently accessed objects for efficiency. However, a bug in the implementation of the cache caused the application to crash randomly. I also reported this problem to Brian, and he fixed it.
During testing, I discovered an SQL injection vulnerability in DBObjects. I reported the problem to Brian, and he gave me suggestions on how I could fix it. In Sybase, injection is prevented by enclosing String, Text, Char, Date, and DateTime data types in single quotes. The single quotes in the data should be replaced by two single quotes. I implemented this fix in Vitruvian.
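The quoting rule can be shown in a few lines (a generic sketch in Python; the actual fix was made in Vitruvian's C# code, and parameterized queries are the usual modern alternative):

```python
# Escape a string value for inclusion in an SQL statement: wrap it in
# single quotes and double any single quote inside the data so it cannot
# terminate the string literal early.

def quote_literal(value: str) -> str:
    return "'" + value.replace("'", "''") + "'"
```

For example, `quote_literal("O'Brien")` yields `'O''Brien'`, so a value containing a quote can no longer break out of the literal and inject SQL.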
I enjoyed using Vitruvian DBObjects for ORM. In any application with many tables and views, developers tend to spend a significant amount of time and effort to overcome the impedance mismatch. Vitruvian DBObjects is a great tool to minimize or eliminate this problem. With ORM taken care of, I was able to concentrate on the more important and complex parts of the project. I also found that using Vitruvian DBObjects is much more reliable and maintainable than writing SQL statements.
4.4. Implementation Details and Challenges
This section describes the implementation details of the Utah Preschool Outcomes Data System, the challenges I faced, and my solutions to those problems.
Learning to work with Vitruvian, while keeping the code clean and structured, was a key goal. DBObjects and control binding are the features of Vitruvian I used in the project. The DBObject-generation wizard generates three files - [name].cs, [name].auto.cs, and [name].relation.cs - for each table/view. DBObjects need to be regenerated when the database changes. When this is done, the [name].auto.cs and [name].relation.cs files are
regenerated and overwritten. So, I wrote all the custom properties and methods in the [name].cs files.
In the Utah Preschool Outcomes Data System, a teacher may or may not have a user account. So, each time a new teacher is added, the system checks for an existing user account with the same first name and last name in the same program center. If a match is found, the teacher is linked to the user account. When a user account for a teacher is added, the system similarly checks for a matching teacher. If a match is found, the user account is linked to the teacher.
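The matching step might look like the following sketch (Python for brevity; the field names and case-insensitive comparison are assumptions, and the real code operates on DBObjects in C#):

```python
# Find an existing user account for a newly added teacher: same first
# name, same last name, and same program center.

def find_matching_user(teacher, users):
    for user in users:
        if (user["first_name"].lower() == teacher["first_name"].lower()
                and user["last_name"].lower() == teacher["last_name"].lower()
                and user["program_center"] == teacher["program_center"]):
            return user
    return None
```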
The implementation of the add/view/edit assessment form was a challenge. The number of outcomes and the outcome score sources can change, and the form must reflect such changes from the database. There was no .NET control that could handle this, so I wrote a custom control, AssessmentFormCtl, that generates HTML without using .NET controls. The .NET framework incorporates some security checks that help prevent injection attacks. For example, when we use the DropDownList control in a form, .NET makes sure that the value submitted in the form is from the list of allowed values. The problem with generating HTML directly is that we have to incorporate checks to make sure the submitted form values are valid. I incorporated these checks into the custom control.
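Conceptually, the server-side check added to the custom control is an allowed-value test, sketched here in Python (illustrative; the actual control is written in C#):

```python
# Accept a submitted form value only if it appears in the server-side list
# of allowed values, mirroring what the built-in DropDownList enforces.

def validate_submitted(value, allowed_values):
    if value not in allowed_values:
        raise ValueError(f"rejected form value: {value!r}")
    return value
```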
Most children enter the UPOD system through TEDI. I wrote a Windows service to read the data from the TEDI database and write the qualifying child data into the Utah Preschool Outcomes Database.
CHAPTER 5
SOFTWARE TESTING
5.1. Introduction
Software testing is essential to identify problems in the software, verify the fulfillment of requirements, and ensure the quality of the software. Unit testing, integration testing, system testing, and user acceptance testing were all conducted on the UPOD system.
5.2. Unit Testing
Unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application. In procedural programming, a unit may be an individual function or procedure. In object-oriented programming, a unit is usually an interface, such as a class [12]. Typically, unit test cases are written by developers in the early phases of development.
In the UPOD system, we performed unit testing using the NUnit unit testing framework. The use of DBObjects for ORM eliminated a significant amount of unit testing that would otherwise have been required in a typical web application.
I customized the DBObjects by adding functions and properties. This code was tested using unit test cases. The StringUtil class contains methods for processing and manipulating strings. Similarly, the DateUtil class has methods for processing and manipulating DateTime objects. I wrote test cases for thoroughly testing these classes.
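The actual test cases were written in C# with NUnit; the following is a hedged Python analogue showing the style of unit test used for the utility classes. The two functions are stand-ins for methods on StringUtil and DateUtil, and their exact behavior here is illustrative.

```python
# Illustrative utility methods plus assertion-style test cases,
# one per boundary condition.
from datetime import date

def truncate(text, max_len):
    """Stand-in for a StringUtil method: cap a string at max_len chars."""
    return text if len(text) <= max_len else text[:max_len]

def school_year(d):
    """Stand-in for a DateUtil method: school year starting July 1."""
    return d.year if d.month >= 7 else d.year - 1

# Test cases exercise both the normal case and the boundaries.
assert truncate("hello", 10) == "hello"
assert truncate("hello", 3) == "hel"
assert school_year(date(2010, 9, 1)) == 2010
assert school_year(date(2011, 3, 1)) == 2010
```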
5.3. Integration Testing
Integration testing is the level of testing done to ensure that the various components of a system interact and pass data correctly among themselves, as well as function cohesively [13].
During the integration testing of the UPOD system, the test cases focused on the classes supporting the import of data from the TEDI system. I wrote test cases to verify the transfer of child information from the TEDI database to the UPOD database. This was done using a TEDI database populated with random test data.
I also wrote test cases to verify the generation of child conflict alerts to cope with the possibility of duplicate children entering the system while transferring data from TEDI.
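The conflict-alert check being tested can be sketched as below. This is a hedged illustration in Python, and the matching criteria (same name and date of birth) are an assumption for the sketch; the real system's duplicate-detection rules may differ.

```python
# Sketch: flag potential duplicates among children arriving from TEDI by
# comparing against existing records on (first name, last name, DOB).
def conflict_alerts(incoming, existing):
    """Return (incoming, existing) pairs that look like duplicates."""
    index = {}
    for child in existing:
        key = (child["first"].lower(), child["last"].lower(), child["dob"])
        index.setdefault(key, []).append(child)
    alerts = []
    for child in incoming:
        key = (child["first"].lower(), child["last"].lower(), child["dob"])
        for match in index.get(key, []):
            alerts.append((child, match))
    return alerts
```

Each alert pair would then be surfaced to an administrator for manual resolution rather than merged automatically.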
5.4. System Testing
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements [14]. System testing is performed after integration testing to verify the fulfillment of the requirements.
In the initial stages of system testing, I tested the system against the requirements. Later, the system was demonstrated to select officials from USOE. The requirements identified for this release of the system were met, although some changes had to be made to the graphical user interface based on the feedback from the USOE officials.
5.5. User Acceptance Testing
The objective of user acceptance testing is to confirm that the application under test (AUT) meets its business requirements and to provide confidence that the system works correctly and is usable before it is formally “delivered” to the end user(s).
The UPOD system is currently undergoing the user acceptance testing. The system is deployed on a test server, and a subset of the end users is testing the system. The database design and data integrity aspects are being evaluated by a team from the IT department of USOE.
CHAPTER 6
CONCLUSION AND FUTURE WORK
Presently, the Utah Preschool Outcomes Data System allows collection and viewing of data on individual children in the special education program. However, the system needs to be able to generate reports for evaluating the strengths and weaknesses of the program. So, the next step in development should be a feature to enable generation of customizable reports.
The child information in BTOTS and TEDI is currently kept in sync, even after the child transfers to TEDI. However, once the data is transferred from TEDI to the Utah Preschool Outcomes Data System, there is no feature to synchronize the data between the two systems. The data transfer Windows service needs to be enhanced to allow synchronization of data.
Working on this project, I was responsible for the analysis, design, and development of the software. The resulting product is software designed to be flexible to change and easy to maintain. Further, the product fuses streamlined and efficient database design with object-oriented programming.
REFERENCES
On the duality of fault tolerant system structures.
S.K. Shrivastava, L.V. Mancini, B. Randell
An examination of the structure of fault tolerant systems incorporating error recovery, and in particular backward error recovery, indicates a partitioning into two broad classes. Two canonical models, each representing a particular class of systems have been constructed. The first model incorporates objects and actions as the entities for program construction while the second model employs communicating processes. Applications in the areas such as office information and database systems typically use the first model while applications in the area of real time process control are usually based on the second model. The paper claims that the two models are duals of each other and presents arguments and examples to substantiate this claim, which is in effect, an extension of the earlier duality argument presented by Lauer and Needham. An interesting conclusion to be drawn from this study is that there is no inherent reason for selecting one model over the other, but that the choice is governed by the architectural features of the layer over which the system is to be constructed. A pleasing consequence has been the recognition that the techniques which have been developed for one model, turn out to have interesting and hitherto unexplored duals in the other model.
Series Editor: M.J. Elphick
© 1987 University of Newcastle upon Tyne.
Printed and published by the University of Newcastle upon Tyne,
Computing Laboratory, Claremont Tower, Claremont Road,
Newcastle upon Tyne, NE1 7RU, England.
Bibliographical details
SHRIVASTAVA, Santosh Kumar
On the duality of fault tolerant system structures.
Newcastle upon Tyne: University of Newcastle upon Tyne, Computing Laboratory, 1987.
(University of Newcastle upon Tyne, Computing Laboratory, Technical Report Series, no. 248.)
About the authors
Professor Shrivastava joined the Computing Laboratory in August 1975, where he is now a Professor.
Mr. Mancini has been at the Computing Laboratory since May, 1985 as a Research Associate.
Professor Randell has been a Professor of Computing Science at the Computing Laboratory of the University of Newcastle upon Tyne since 1969.
Suggested keywords
DISTRIBUTED SYSTEMS
FAULT TOLERANCE
OBJECT BASED SYSTEMS
OPERATING SYSTEMS
REAL TIME SYSTEMS
RELIABILITY
Suggested classmarks (primary classmark underlined)
Dewey (18th): 001.64404
U.D.C.: 519.887
1. Introduction
An investigation of backward error recovery based fault tolerance techniques employed in a variety of systems reveals two general classifications. We propose two models, each embodying the major characteristics of the corresponding class of systems. One widely used technique of introducing fault tolerance - particularly in distributed systems - is based on the use of *atomic actions* (atomic transactions) for structuring programs [1]. An atomic action possesses the properties of serializability, failure atomicity and permanence of effect. Atomic actions operate on *objects* (instances of abstract data types). The class of applications where such an *object and action* (OA) based model has found usage include banking, office information, airline reservation and database systems. A number of other applications - typically concerned with real time control - are structured as concurrent processes communicating via messages. Some examples are process control, avionics and telephone switching systems. Fault tolerance in such systems is introduced through a controlled use of *checkpoints* by processes. We will refer to this way of structuring an application as employing the *process and message* (PM) model.
In this paper we claim that the OA and PM approaches to the provision of fault tolerance are duals of each other and present arguments and examples to substantiate our claim. As a result of this observation, we can state that there is no *inherent* reason for favouring one approach over the other; rather the choice is largely dictated by the architectural features of the underlying layer. Indeed, we would now claim that the differences between the two approaches are basically a matter of viewpoint and terminology. Our investigations have been influenced by the well known duality paper of Lauer and Needham [2] which puts forward the notion that within the context of operating systems, procedure based systems and message based systems are duals of each other. The authors observed that (1) a program or subsystem constructed strictly according to the primitives defined by one model can be mapped directly into a dual program or subsystem which fits the other model; (2) the dual programs or subsystems are logically identical to each other, and they can also be made textually very similar; and (3) the performance of a program or subsystem from one model will be identical to its counterpart. The present work may be considered as an extension of the ideas put forward in that paper with regard to fault tolerance.
The paper is structured as follows: sections two and three describe the essential aspects of OA and PM models respectively. Section four contains the arguments intended to establish the duality between OA and PM. Section five contains a few simple examples, and the concluding section summarizes the paper and discusses possible implications of the duality claim. Throughout the paper, we will assume a distributed system composed out of a number of nodes connected by some communication medium.
2. Object and Action Model
Objects are instances of abstract data types. An object encapsulates some data and provides a set of operations for manipulating the data, these operations being the only means of object manipulation. In most object based fault tolerant systems that we know (see [3-8] for a representative sample), an operation is performed by invoking an object with a remote procedure call (RPC), which passes value parameters to the object and returns the results of the operation to the caller. Programs which operate on objects are executed as atomic actions with the properties of (i) serializability, (ii) failure atomicity and (iii) permanence of effect [1]. The first property ensures that concurrent executions of programs are free from interference (i.e. a concurrent execution can be shown to be equivalent to some serial order of execution [9,10]). The second property ensures that a computation can either be terminated normally, producing the intended results or be aborted, producing no results. This property is obtained by appropriate use of backward error recovery, which is invoked whenever a failure that cannot be masked occurs. Typical failures causing an action to be aborted are node crashes and communication failures such as lost messages. It is reasonable to assume that once a computation terminates normally, the results produced are not destroyed by subsequent node crashes. This is the third property - permanence of effect - which ensures that state changes produced are recorded on stable storage which can survive node crashes with a high probability of success. A two-phase commit protocol is required during the termination of an action to ensure that either all the objects updated within the action have their new states recorded on stable storage (normal termination), or no updates get recorded (aborted termination).
A variety of concurrency control techniques for atomic actions to enforce the serializability property have been reported in the literature. A very simple and widely used approach is to regard all operations on objects to be of type read or write, which must follow the well known locking rule permitting concurrent reads but only exclusive writes. In a classic paper [9], Eswaran et al. proved that actions must follow a two-phase locking policy (see Fig. 1). During the first phase, termed the growing phase, a computation can acquire locks on objects but not release them. The tail end of the computation constitutes the shrinking phase, during which time held locks can be released but no locks can be acquired. Now suppose that an action in its shrinking phase is to be aborted, and that some updated objects have been released. If some of these objects have been locked by other actions, then abortion of the action will require these actions to be aborted as well. To avoid this cascade abort problem, it is necessary to make the shrinking phase instantaneous, as indicated by the dotted lines.
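The two-phase discipline can be made concrete with a small sketch. The following Python lock manager is purely illustrative (it belongs to no cited system): shared read locks, exclusive write locks, and an error on any acquisition attempted after the transaction's first release.

```python
# Sketch of two-phase locking: once a transaction releases any lock it
# enters its shrinking phase and may acquire no more.  Releases here are
# instantaneous (all at once), avoiding the cascade-abort problem.
class TwoPhaseLockError(Exception):
    pass

class LockManager:
    def __init__(self):
        self.readers = {}       # object -> set of txns holding read locks
        self.writer = {}        # object -> txn holding the write lock
        self.shrinking = set()  # txns past their first release

    def acquire(self, txn, obj, mode):
        if txn in self.shrinking:
            raise TwoPhaseLockError("acquire after release violates 2PL")
        if mode == "read":
            w = self.writer.get(obj)
            if w is not None and w != txn:
                return False              # exclusive writer present
            self.readers.setdefault(obj, set()).add(txn)
            return True
        # write: needs exclusivity over writers and other readers
        if self.writer.get(obj) not in (None, txn):
            return False
        if self.readers.get(obj, set()) - {txn}:
            return False
        self.writer[obj] = txn
        return True

    def release_all(self, txn):
        """Instantaneous shrinking phase: drop every lock at once."""
        self.shrinking.add(txn)
        for obj in list(self.writer):
            if self.writer[obj] == txn:
                del self.writer[obj]
        for holders in self.readers.values():
            holders.discard(txn)
```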
Any atomic action can be viewed at a lower level as constructed out of more primitive atomic actions - this is illustrated in Fig. 2 which also introduces the action diagram which will be used in this paper (this notation is based on that used by Davies [11]). According to Fig. 2, action B's constituents are actions B_1, B_2, B_3 and B_4. A directed arc from an action (e.g. A) to some other
action (e.g. B) indicates that B uses objects released by A. Optionally, an arc can be labelled, naming the objects used by the action. In Fig. 2, B uses objects a, b and c and C uses object a which has been released by B. Actions such as B₂ and B₃ are executed concurrently. Nested actions give rise to nested recovery. Suppose time has advanced up to the point shown by the vertical arrow, and an error is detected in B₃ causing it to be aborted. What happens after B₃’s recovery? The question must be resolved within the scope of B - the enclosing action. B can provide a specific exception handler to deal with this particular eventuality (such exception handling techniques have been discussed by Taylor [12]). If no handler is available, then a failure of B₃ will cause B to be aborted.
One of the most important aspects of the OA model from our point of view is the fact that objects and actions are the two primary entities from which an application program is constructed. Any implementation of actions and objects will require processes (clients and servers) for carrying out the required functions. However, the role played by processes is hidden at the application level. Similarly, there is no explicit use of message passing between entities, since RPCs hide the details of message interactions between clients and servers. For example, in the Argus programming system [3], the implementation of guardians (objects) requires a number of processes for receiving and executing calls from clients - but processes are not visible entities to be used explicitly by an application program. Taylor [12] describes a number of ways of implementing atomic actions using different process structures. In the OA model, objects are long lived entities and are the main repositories for holding system states, while actions are short lived entities.
3. Process and Message Model
In contrast to the OA model, where processes and messages play at most a secondary role, the PM model has them as the primary entities for structuring programs. An application is structured out of a number of concurrent and interacting processes. A notation for describing the PM model that has received much attention is the communicating sequential processes (CSP) notation [13] which can be used for specifying a concurrent system by a fixed number of processes interacting via synchronous message passing. The topic of backward error recovery among interacting processes has been studied extensively, e.g. [14-18], beginning with the study reported in [19].
The PM model will be assumed to have the following characteristics: (1) processes do not share memory, at least explicitly, and communicate via messages sent over the underlying communication medium; (2) appropriate communication protocols ensure that processes can send messages reliably such that they reach their intended destinations uncorrupted and in the sent order; (3) a process can take a checkpoint to save its current state on some reliable storage medium (stable storage). If a process fails, it is rolled back to its latest checkpoint.
The notion of a consistent global state of a system is central when considering the recovery of interacting processes. A global state of a system is the set of local states, one from each process (a precise formulation is presented in [20]). The interactions among processes can be depicted using a time diagram, such as that shown in Fig. 3. Here, horizontal lines are time axes of processes and sloping arrows represent messages. A global state is a cut dividing the time diagram into two halves. A cut in the time diagram is consistent (consistent global state) if no arrow starts on the right hand side of the cut and ends on the left hand side of it. Cut C₁ in the figure is consistent; but cut C₂ is not, since it indicates that process q has received a message which has not yet been sent by r.
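The consistency test for a cut can be stated as a small predicate. The sketch below is an informal Python rendering (not the precise formulation of [20]): a cut assigns each process a local time, and the cut is consistent iff no message is received at or before the cut while being sent after it, i.e. no arrow crosses the cut from right to left.

```python
# Sketch: check whether a cut of a time diagram is consistent.
def is_consistent(cut, messages):
    """cut: {process: local_time of the cut on that process's axis};
    messages: iterable of (sender, send_time, receiver, recv_time)."""
    for sender, send_t, receiver, recv_t in messages:
        received_before_cut = recv_t <= cut[receiver]
        sent_after_cut = send_t > cut[sender]
        if received_before_cut and sent_after_cut:
            return False   # arrow crosses the cut right-to-left
    return True
```

In the Fig. 3 example, cut C2 fails this test precisely because q has received a message that r has not yet sent.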
In a system of interacting processes, the recovery of one process to its checkpoint can create an inconsistent global state, unless some other relevant processes are rolled back as well. This leads to the notion of a consistent set of checkpoints or a recovery line [21]: a set of checkpoints, one from each process, is consistent if the saved states form a consistent global state. Fig. 4 illustrates the notions of consistent and inconsistent sets of checkpoints where opening square brackets on process axes indicate checkpoints. Suppose process p fails at the point indicated by the vertical arrow and is rolled back to its latest checkpoint. The global state of the system as represented by cut C₂ is clearly inconsistent; the set of checkpoints on recovery line C₁ is however consistent. Thus a failure of p can cause a cascade rollback of all the four processes – this is the domino effect.
mentioned in [19]. The dynamic determination of a recovery line is a surprisingly hard task; the reader should consult [17] for a clear exposition.
The domino effect can be avoided if processes coordinate the checkpointing of their states. A well known scheme of coordinated checkpoints is the conversation scheme [15,19]. The set of processes which participate in a conversation may communicate freely between each other but with no other processes. Processes may enter the conversation at different times but, on entry, each must establish a checkpoint (see Fig. 5). In Fig. 5, a closing bracket indicates that all participating processes must exit at the same time (brackets will not be explicitly drawn in the subsequent diagrams). If a process within a conversation fails then all the participating processes are rolled back to the respective checkpoints established at the start of the conversation. Conversations can be nested as shown in the figure.
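The conversation rules can be sketched operationally. This is a deliberately simplified Python illustration, with flat per-process state dictionaries as an assumption: participants checkpoint on entry, may communicate only among themselves, roll back together on failure, and exit together.

```python
# Sketch of the conversation discipline of [15,19]: entry checkpoints,
# communication restricted to participants, joint rollback on failure.
class Conversation:
    def __init__(self, processes):
        # processes: {name: state dict}; checkpoint each state on entry.
        self.states = processes
        self.checkpoints = {p: dict(s) for p, s in processes.items()}

    def send(self, src, dst, key, value):
        if src not in self.states or dst not in self.states:
            raise ValueError("communication outside the conversation")
        self.states[dst][key] = value

    def fail(self):
        # Any participant's failure rolls every participant back to its
        # entry checkpoint.
        for p in self.states:
            self.states[p] = dict(self.checkpoints[p])

    def exit(self):
        # Synchronized exit: all participants leave together, committing
        # their current states.
        return self.states
```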
Conversations provide a convenient structuring concept for introducing fault tolerance in a large class of real time systems [22]. The need to respond promptly to changes in the external environment dictates that most real time systems have an iterative nature. The PM model provides a natural way of expressing such systems in the form of interacting cyclic processes with synchronization points usually associated with timing constraints. A study of real time system structure for avionic systems by Anderson and Knight [22] indicated that synchronization of processes in such a system stems from the need to synchronize with the events in the external environment, rather than from any inherent needs of processes themselves. Fig. 6 depicts a typical synchronization requirement. An informal interpretation of such a synchronization graph
is as follows (see [22] for a precise formulation): process $P_1$ repeatedly initiates a computation at time $T_1$ which must finish by time $T_3$ ($T_3 > T_1$); processes $P_2$, $P_3$, and $P_4$ complete two iterations in the interval $T_1$ to $T_3$. Any interactions between $P_2$, $P_3$, and $P_4$ can be performed within the confines of two conversations: one starting at $T_1$ and finishing at $T_2$ and the other starting at $T_2$ and finishing at $T_3$. The use of conversations for introducing fault tolerance in the manner indicated here is discussed at length in [22].
The most important aspects of the PM model relevant to this paper are summarized below. An application is programmed in terms of a number of processes interacting via message passing. If processes establish checkpoints in an arbitrary manner then there can be a danger of cascade rollback, which is usually undesirable. Conversations provide a coordinated means of managing checkpoints to avoid the danger of such a cascade rollback. However, a conversation requires the participating processes to synchronize such that they exit from the conversation simultaneously. A large class of applications, typically concerned with process control or real time control, traditionally employs the PM model for structuring applications. Conversations can be imposed on such applications by exploiting naturally occurring synchronization points among interacting processes. In the PM model, processes are long lived entities and main repositories for holding system states, while conversations are short lived entities.
4. Duality
The canonical models discussed in the previous two sections are representative of the corresponding class of fault tolerant systems. Given a description of any fault tolerant system, it is usually straightforward to work out its representative model, despite the fact that the terminology used for the description may differ somewhat from that used here. The duality between the OA and PM models can be established by considering objects and actions to be the duals of processes and conversations respectively. Further, RPCs can be considered duals of messages [2]. A given conversation diagram (e.g. Fig. 7.a), can be translated into an action diagram quite simply (e.g. Fig. 7.b) by replacing each conversation \( C_i \) with a corresponding action \( A_i \), and adding an arrow from \( A_i \) to \( A_j \) if \( C_i \) and \( C_j \) have at least one process in common and that process enters \( C_j \) after exiting from \( C_i \). An arc from one action to the other is labelled with the objects representing the processes common to the corresponding conversations. A reverse mapping is possible by replacing distinct objects named in the action diagram by processes. An action is replaced by the corresponding conversation, with the set of processes in the conversation determined by the set of objects named in all the incoming and outgoing arcs of the action.
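The forward translation can be sketched directly. The sketch below makes two simplifying assumptions not in the text: each conversation is represented as a (name, participants, start, end) tuple with a single interval, and an arc is added towards every later conversation sharing a process, not only the immediately following one.

```python
# Sketch: translate a conversation diagram into an action diagram.
# Each conversation C_i becomes an action A_i; an arc A_i -> A_j is
# added when the conversations share a process that enters C_j after
# exiting C_i, and the arc is labelled with the shared processes
# (viewed as objects).
def to_action_diagram(conversations):
    """conversations: list of (name, set_of_processes, start, end).
    Returns {(A_i, A_j): sorted list of shared process names}."""
    arcs = {}
    for name_i, procs_i, _, end_i in conversations:
        for name_j, procs_j, start_j, _ in conversations:
            if name_i == name_j or end_i > start_j:
                continue                  # C_j does not follow C_i
            shared = procs_i & procs_j
            if shared:
                arcs[("A_" + name_i, "A_" + name_j)] = sorted(shared)
    return arcs
```

The reverse mapping would recover the participant set of each conversation from the objects named on the action's incoming and outgoing arcs.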
In order to support our hypothesis, we will discuss the way in which three major properties of a fault tolerant computation, namely, (1) freedom from interference, (2) backward recovery capability, and (3) crash resistance, are embodied in the OA and PM models.
(1) Freedom from interference. In the OA model, this requirement is ensured by the serializability property of actions and enforced by some concurrency control technique, such as two phase locking. In the PM model, freedom from interference between multiprocess computations structured as conversations is ensured by the two conversation rules, (i) a process can only communicate with those processes that are in the same conversation; and (ii) a process can only be inside a single conversation at a time (this rule can be relaxed under certain conditions, see later). The two phase locking discipline for
actions corresponds to entering a conversation (growing phase) and leaving a conversation (shrinking phase).
(2) **Backward recovery capability.** An action in progress can be aborted (recovered) without affecting any other ongoing actions. This recovery property of an action is enforced in conjunction with the concurrency control technique in use. In the case of two phase locking, this means that all the held locks are released simultaneously. This corresponds to the synchronized (simultaneous) exit from a conversation which is required from all the participating processes. The act of taking checkpoints at the start of a conversation has its dual in the OA model, and consists of the requirement of maintaining recovery data for objects used within an action. It was indicated earlier that the serializability property of actions can be maintained even if - for two phase locking - locks are released gradually (rather than simultaneously) during the shrinking phase of locking; however this has the danger of *cascade aborts* (recovery of an action can cause some other actions to be aborted as well). A similar observation can be made for conversations: the synchronized exit requirement is necessary to prevent cascade aborts. Fig. 8 illustrates that if
Fig. 8. Cascade aborts.
"conversations" $C_1$ and $C_2$ do not observe the rule of synchronized exit, and if time has advanced up to the point shown by the vertical arrow, and $C_1$ is to be aborted, then $C_2$ will have to be aborted as well.
(3) **Crash resistance.** A two phase commit protocol is employed in action based systems to ensure that despite the presence of failures such as node crashes, an action terminates either normally, with all the updated objects made stable to their new states, or abnormally with no state changes. A similar protocol will be required to ensure that the states of all the processes participating in a conversation are made stable.
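The commit protocol common to both models can be sketched as follows. This is a toy Python illustration of the two-phase pattern only, with a hypothetical participant interface; real protocols must also log decisions to stable storage and handle coordinator crashes.

```python
# Sketch of two-phase commit at action (or conversation) termination:
# phase 1 collects votes, phase 2 commits only if every vote was "yes".
def two_phase_commit(participants):
    """participants: objects with prepare()/commit()/abort().
    Returns True if the action committed, False if it aborted."""
    votes = [p.prepare() for p in participants]   # phase 1
    if all(votes):
        for p in participants:                     # phase 2: commit
            p.commit()
        return True
    for p in participants:                         # phase 2: abort
        p.abort()
    return False

class Participant:
    """Toy participant: stages a provisional new state, then votes."""
    def __init__(self, value, can_commit=True):
        self.value, self.pending, self.ok = value, None, can_commit
    def prepare(self):
        self.pending = self.value + 1   # stage the new state
        return self.ok                  # vote yes/no
    def commit(self):
        self.value, self.pending = self.pending, None
    def abort(self):
        self.pending = None             # discard the staged state
```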
A striking benefit of establishing the duality is that the body of knowledge and techniques developed for one model can be mapped and applied to the other model. We illustrate this with the help of the following two examples.
(1) **Read only requests.** A number of optimizations are possible if an action uses some or all of its objects in read only mode. Read locks can be released during the shrinking phase and need not be held till the end of the action, without the danger of cascade aborts. Further, no recovery data need be maintained for read only objects and they need not be involved in the two phase commit protocol since they do not change state. Such optimization strategies have been studied extensively within the context of database systems, e.g. [23]. However, to our knowledge, no such strategies have been studied for conversations, although they can be developed quite easily. Essentially, processes inside a conversation that do not update their states need not synchronize their exit from the conversation, nor do they need to take checkpoints at the start of the conversation. Consider a simple example. An action performs the following computation: \( x := y + z \). Here \( y \) and \( z \) will be read locked; the commit protocol will involve only making object \( x \) stable to its new state and the action need generate no recovery data for \( y \) and \( z \). Fig. 9 shows a possible conversation to perform the same computation. In this particular case it is only necessary for process \( x \) to establish a checkpoint. Message \( m_1 \) (\( m_2 \)) is a request to \( y \) (\( z \)) for some value, and message \( m_3 \) (\( m_4 \)) contains the value sent by \( y \) (\( z \)).
Note that even though there is a two-way exchange of messages between \( x \) and \( y \) (\( z \)), \( x \) can recover without affecting \( y \) (\( z \)), since message \( m_1 \) (\( m_2 \)) is a read request. Indeed, \( y \) and \( z \) can take part in other conversations, while still in \( C_1 \), provided those conversations also involve only read requests directed to \( y \) and \( z \). This is obviously the dual of the shared read lock mode rule applicable in the OA model. It is worth noting that, just as locking can cause deadlocks among actions, similar problems can occur in conversations.
(2) **Programmed exception handling.** So far we have examined the duality from the point of view of backward error recovery, which involves abandoning the current state for a prior state. In contrast, forward error recovery involves selective corrections to the current state to obtain an
acceptable state [21]. Programmed exception handling is a means of incorporating this form of forward recovery. A widely accepted exception handling strategy is as follows: if during the execution of a computation an error is detected (an exception is detected) for which a specifically programmed handler is available, then that handler is invoked; if there is no programmer-provided handler available then a default handler is invoked whose function is to invoke backward recovery. Thus, exception handling can provide a uniform means of incorporating both forward and backward error recovery strategies [24,25]. A recent paper [26] proposes an exception handling strategy for concurrent processes with conversations and describes how processes can resolve concurrent exceptions through the use of exception trees. To keep this paper brief, we will not describe this strategy; instead we note here that these exception handling ideas, although developed using the PM model, have since been applied by Taylor [12] to the OA model.
A summary of the various characteristics of the two models for which duality has been established is presented in Table 1.
<table>
<thead>
<tr>
<th>Object-Action Model</th>
<th>Process-Message Model</th>
</tr>
</thead>
<tbody>
<tr>
<td>Objects</td>
<td>Processes</td>
</tr>
<tr>
<td>Actions</td>
<td>Conversations</td>
</tr>
<tr>
<td>RPCs</td>
<td>send-receive messages</td>
</tr>
<tr>
<td>concurrency control for serializability</td>
<td>conversation rules ensuring no outside communication</td>
</tr>
<tr>
<td>stable objects</td>
<td>stable processes</td>
</tr>
<tr>
<td>growing phase (2-phase locking)</td>
<td>processes entering a conversation</td>
</tr>
<tr>
<td>shrinking phase (2-phase locking)</td>
<td>processes leaving a conversation</td>
</tr>
<tr>
<td>read locks</td>
<td>read only request messages</td>
</tr>
</tbody>
</table>
Table 1. Duality Mapping.
5. Examples
This section contains two further examples, one taken from the database area and normally programmed using objects and actions and the other taken from the process control area and normally programmed using processes and messages. It will be shown that programs written using the primitives of one model have duals in the other. Simple and self-explanatory notation will be used for program description.
Banking application. An example often used to illustrate the properties of an action concerns transferring a sum of money from one bank account to another. The failure atomicity
property, for example, will ensure that either the sum of money is debited from one account and credited to the other, or no state changes are produced. For the sake of illustration, the application has been structured to invoke nested actions, even though simpler, non-nested solutions are clearly possible.
Two types of objects will be assumed: standing-order, and credit-debit:
```plaintext
type standing-order = object
- - object variables - -
action transfer (to, from: credit-debit; amount: dollars)
cobegin
authority (to, from);
to.credit (amount);
from.debit (amount)
coend
end action
- - other actions, e.g. authority - -
end standing-order;
type credit-debit = object
- - current account variables - -
action credit (amount: dollars)
- - add amount - -
end action
action debit (amount: dollars)
- - subtract amount - -
end action
- - other actions - -
end credit-debit;
```
Specific instances of these objects can be created:
```plaintext
order : standing-order;
acc1, acc2 : credit-debit;
```
An invocation of `order.transfer` will give rise to a nested computation shown in Fig. 10. Any exceptions during the execution of transfer will cause that action to be aborted.
The same program can be recoded quite easily in terms of communicating processes.
```plaintext
type standing-order = process
- - process variables - -
select
conversation transfer (to, from: credit-debit; amount: dollars)
cobegin
send (self, authority, to, from);
send (to, credit, amount);
send (from, debit, amount)
coend
end conversation
- - other selections, e.g. authority - -
end select
end standing-order;
type credit-debit = process
- - current account variables - -
select
conversation credit (amount: dollars)
- - add amount - -
end conversation
- - other selections, e.g. debit - -
end select
end credit-debit;
```
Specific instances of these processes can be created:
```plaintext
order : standing-order;
acc1, acc2 : credit-debit;
```
A transfer conversation can be initiated by sending a message to order:
```plaintext
send (order, transfer, parameters)
```
The transfer conversation is shown in Fig. 11.
The second example is taken from a process control application in the coal mining industry [27]. Fig. 12 shows a simplified pump installation. It is used to pump mine-water collected in the sump at the shaft bottom to the surface. The pump is enabled by a command from the control room. Once enabled, it works automatically, controlled by water level sensors; detection of a high
level causes the pump to run until a low level is indicated. For safety reasons, the pump must not run if the percentage of methane exceeds a certain safety limit. Some other parameters of the environment are also monitored by the monitoring station.
The control software can be structured as five communicating processes, namely: pump controller, surface, level, pump and monitor. Some sketchy details are given here for the pump controller.
_Pump controller process._ Some of its functions are to receive start/stop command from the _surface_ process (representing the control room), receive water level reports from the _level_ process and to receive an alarm signal from the _monitor_ process. The _controller_ process can send start/stop commands to the _pump_ process which controls the pump.
A study of process structure discussed in [27] reveals that the overall behaviour of the other processes have a similar structure to the pump-controller, either receiving requests to carry out certain functions and/or sending messages to other processes to request certain functions to be performed. These interactions can be organized as conversations. A highly simplified program fragment for the pump controller is given in Fig. 13.a.
<table>
<thead>
<tr>
<th>(a) Process-Message Model</th>
<th>(b) Object-Action Model</th>
</tr>
</thead>
<tbody>
<tr>
<td>type pump-controller = process</td>
<td>type pump-controller = object</td>
</tr>
<tr>
<td>- - process variables - -</td>
<td>- - object variables - -</td>
</tr>
<tr>
<td>select</td>
<td></td>
</tr>
<tr>
<td>conversation on/off(...)</td>
<td>action on/off(...)</td>
</tr>
<tr>
<td>send start/stop command to the pump process</td>
<td>send start/stop command to the pump process</td>
</tr>
<tr>
<td>end conversation</td>
<td>end action</td>
</tr>
<tr>
<td>- - other selections - -</td>
<td>- - other actions - -</td>
</tr>
<tr>
<td>end select</td>
<td></td>
</tr>
<tr>
<td>end pump-controller</td>
<td>end pump-controller</td>
</tr>
</tbody>
</table>
Fig. 13. Pump-controller example.
A command to enable or disable the pump from the surface process starts a conversation containing the pump-controller and the pump process: if the conversation terminates normally, the pump will have changed state accordingly. It is fairly easy to reprogram this example in terms of objects and actions, with the five processes replaced by the corresponding objects. For the sake of illustration, the program for the pump-controller object is shown in Fig. 13.b.
These examples provide further empirical support to our claim by illustrating that close similarity exists between the two classes of programs. Given a program constructed from the
primitives defined by one model, it can be mapped directly into a dual program of the second model.
6. Concluding Remarks
After examining the structure of a variety of systems, two canonical models of fault tolerant systems were developed, one of which is representative of the techniques and terminology used within the database and office information systems community, the other of which is more closely allied to the real time and process control applications area. These models were shown to be duals of each other. Although, in retrospect, this may not appear to be a surprising conclusion, particularly given the Lauer and Needham paper, we had not before realized how direct and complete the relationship between the two models was, and are not aware of any earlier literature explaining and exploiting this duality. Instead, one finds that fault tolerant systems are constructed and described using the concepts and terminology applicable to just one of the two models, with no apparent realization of the potential relevance of systems and the literature describing them which make use of the other model. However, we must admit that the duality that we have discussed is sometimes obscured by the fact that many process control applications are structured as a small and fixed number of processes, whereas it is more usual to find object based systems which contain a large and dynamically varying number of objects.
Our arguments to support the duality claim were based on an examination of three properties of a fault tolerant computation, namely: freedom from interference, backward recovery capability and crash resistance. It was shown that mechanisms employed to implement a given property in one model have duals in the other. Similarly, any particular behaviour observed in one model has its dual in the other. Examples presented in the paper show that programs developed using the primitives of one model can be mapped easily to the programs of the other model. Indeed, we would claim that the differences between the two models are principally a matter of view point and terminology.
The establishment of the equivalence between the two approaches to fault tolerance has several interesting implications, some of which are enumerated here.
(1) There seems to be no inherent reason for favouring one approach over the other. For example, there is no obvious reason why a real time system must be designed using the primitives of the PM model. In fact, one is led to state that the choice of a model to adopt for a given system should not be dictated by the application area but by the architectural features of the layer over which the system is to be built.
(2) It can also be stated that a single system architecture based on either model can, in principle, support both classes of applications.
(3) We further speculate that, were sufficient representative systems of each class available for detailed evaluation and comparison, we would find that the observation made in [2] regarding the invariance of operating system performance under two classes of systems also applies to this fault tolerance duality.
(4) Techniques and mechanisms which happen to have been developed within the domain of just one of the models can be mapped and applied to the other model. Two examples were presented to illustrate this observation. It was shown that optimization techniques developed for read operations of actions can be applied to optimize conversations. A second example indicated that the exception handling framework developed for the PM model can be applied to the OA model.
(5) We put forward another proposal for further investigation. There is a large body of literature on the topic of replicated object management for increasing availability. We believe that interesting techniques for replicated process management can be developed from these studies and applied to process control systems that have been developed using the PM model.
(6) The ideas from this paper can be used for the design of fault tolerant systems with a minimum set of compatible concepts, thus allowing several degrees of freedom in the design process to be eliminated, leading to well structured systems.
(7) Finally, given that, as discussed in [28], there is the prospect of using certain kinds of fault tolerance techniques to provide increased security and not just increased reliability, it appears that the duality mapping presented here can be extended and applied to clarify and illuminate at least some of the literature discussing various approaches to building multi-level secure systems. This however is a topic which will not be explored further in this paper.
Acknowledgements.
The authors have had discussions and arguments with several of their colleagues over a period of many years on the subject matter reported here. Those we would like to mention specifically include Graham Wood, David Taylor and Roy Campbell. Written comments from Tom Anderson and Robert Stroud on an early draft of the paper are also gratefully acknowledged. This work was supported in part by research grants from SERC/Alvey and the UK Ministry of Defence.
References
16. Review 2 Exercises
Problem: Table: R(A,B,C,D,E)
F= \{ \{A\} \rightarrow \{B, C\}, \{B\} \rightarrow \{A, C\}, \{A, D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\}\}
1] Which of the following sets of functional dependencies is a canonical cover of F?
a) \{\{A\} \rightarrow \{B, C\}, \{B\} \rightarrow \{A,C\}, \{A,D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\}\}
b) \{\{A\} \rightarrow \{B\}, \{A\} \rightarrow \{C\}, \{B\} \rightarrow \{C\}, \{A,D\} \rightarrow \{E\}\}
c) \{\{A\} \rightarrow \{B,C\}, \{B\} \rightarrow \{A\}, \{A,D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\}\}
d) \{\{A\} \rightarrow \{B\}, \{B\} \rightarrow \{A,C\}, \{A,D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\}\}
e) Both c) and d)
2] Which of the following functional dependencies is not in the closure F+?
a) \{A\} \rightarrow \{B\}
b) \{B\} \rightarrow \{B, C\}
c) \{C\} \rightarrow \{A\}
d) \{A, D\} \rightarrow \{C, E\}
e) \{A\} \rightarrow \{C\}
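Question 2] can be checked mechanically with the standard attribute-closure algorithm. The sketch below (function and variable names are ours) encodes F from the problem as (lhs, rhs) pairs of sets:

```python
# Attribute closure: repeatedly apply every FD whose left side is
# already contained in the result, until nothing new is added.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

F = [({'A'}, {'B', 'C'}), ({'B'}, {'A', 'C'}),
     ({'A', 'D'}, {'E'}), ({'E'}, {'D'})]
# {X} -> {Y} is in F+ iff Y is a subset of closure(X, F)
```

Here closure({C}) = {C}, so \{C\} \rightarrow \{A\} is not in F+ — option c) is the answer to question 2].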
Table: R(A,B,C,D,E)
F = \{ \{A\} \rightarrow \{B, C\}, \{B\} \rightarrow \{A, C\}, \{A, D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\} \}
3] Which of the following set is a subset of \{A, D\}+?
a) \{A, B\}
b) \{B, C, D\}
c) \{E\}
d) All of the above
e) None of the above
4] Which of the following is a superkey for R?
a) \{A\}
b) \{AB\}
c) \{BC\}
d) \{ACD\}
e) \{ABC\}
Table: R(A,B,C,D,E)
F = \{ \{A\} \rightarrow \{B, C\}, \{B\} \rightarrow \{A, C\}, \{A, D\} \rightarrow \{E\}, \{E\} \rightarrow \{D\} \}
5] Which of the following is a candidate key for R?
a) AD
b) AE
c) BD
d) BE
e) All of the above
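Candidate keys can be enumerated by brute force over attribute subsets, smallest first, keeping only minimal superkeys. A small sketch (names are ours):

```python
from itertools import combinations

def closure(attrs, fds):
    # standard attribute-closure algorithm
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def candidate_keys(attributes, fds):
    """All minimal attribute subsets whose closure covers the relation."""
    keys = []
    for size in range(1, len(attributes) + 1):
        for combo in combinations(sorted(attributes), size):
            if closure(set(combo), fds) == set(attributes):
                # minimal because smaller subsets were tried first
                if not any(set(k) <= set(combo) for k in keys):
                    keys.append(combo)
    return {frozenset(k) for k in keys}

R = {'A', 'B', 'C', 'D', 'E'}
F = [({'A'}, {'B', 'C'}), ({'B'}, {'A', 'C'}),
     ({'A', 'D'}, {'E'}), ({'E'}, {'D'})]
```

For this F the result is exactly {AD, AE, BD, BE}, confirming answer e) for question 5].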
6] Consider the following decomposition: \{A, B, C\}, \{A, D, E\}. Which of the following statements is true?
a) The decomposition is 3NF, lossless join and dependency preserving
b) The decomposition is 3NF, lossless join but not dependency preserving
c) The decomposition is 3NF, dependency preserving, but not lossless join
d) The decomposition is lossless join, dependency preserving but not 3NF
e) The decomposition is 3NF, but neither lossless join nor dependency preserving
7] Consider the following decomposition: \{A, B, C\}, \{A, E\}, \{D, E\}. Which of the following statements is true?
a) The decomposition is BCNF, lossless join and dependency preserving
b) The decomposition is BCNF, lossless join but not dependency preserving
c) The decomposition is BCNF, dependency preserving, but not lossless join
d) The decomposition is lossless join, dependency preserving but not BCNF
e) The decomposition is BCNF, but neither lossless join nor dependency preserving
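For a two-way decomposition, the lossless-join property has a simple test: the closure of the common attributes must contain all of R1 or all of R2. A sketch (names are ours):

```python
def closure(attrs, fds):
    # standard attribute-closure algorithm
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def lossless_binary(r1, r2, fds):
    """Binary decomposition is lossless iff closure(R1 ∩ R2) ⊇ R1 or ⊇ R2."""
    c = closure(set(r1) & set(r2), fds)
    return set(r1) <= c or set(r2) <= c

F = [({'A'}, {'B', 'C'}), ({'B'}, {'A', 'C'}),
     ({'A', 'D'}, {'E'}), ({'E'}, {'D'})]
```

For question 6], \{A,B,C\} ∩ \{A,D,E\} = \{A\} and closure(\{A\}) = \{A,B,C\} ⊇ R1, so that decomposition is lossless. The three-way decomposition of question 7] needs the chase, or repeated application of this binary test.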
Problem – Normal Forms
- Consider table R(A, B, C, D, E). Given the functional dependencies in the first column of the following form, complete the form accordingly.
- In the second column, list all candidate keys for R.
- In the third column, provide a maximal decomposition of R in 3NF (we only decompose when there is violation of 3NF) – if R is already in 3NF just write R(A, B, C, D, E) instead of a decomposition.
- In the fourth column, do the same for BCNF decomposition. If there are multiple options, choose any dependency preserving decomposition.
- Give the results without comments. The first row is given as an example. Each answer is 2%.
<table>
<thead>
<tr>
<th>Functional Dependencies</th>
<th>Candidate Keys for R</th>
<th>Decompose R in 3NF</th>
<th>Decompose R in BCNF</th>
</tr>
</thead>
<tbody>
<tr>
<td>{A \rightarrow B, C, D, E}</td>
<td>{A}</td>
<td>R(A, B, C, D, E)</td>
<td>R(A, B, C, D, E)</td>
</tr>
</tbody>
</table>
Problem – Normal Forms
<table>
<thead>
<tr>
<th>Functional Dependencies</th>
<th>Candidate Keys for R</th>
<th>Decompose R in 3NF</th>
<th>Decompose R in BCNF</th>
</tr>
</thead>
<tbody>
<tr>
<td>{ C → D }</td>
<td>ABCE</td>
<td>R1(A,B,C,E)</td>
<td>R1(A,B,C,E)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>R2(C,D)</td>
<td>R2(C,D)</td>
</tr>
<tr>
<td>{ C→D, D→C }</td>
<td>ABCE, ABDE</td>
<td>R(A,B,C,D,E)</td>
<td>R1(C,D)</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>R2(A,B,C,E) or R1(C,D)</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>R2(A,B,D,E)</td>
</tr>
<tr>
<td>{ A → B, C→D}</td>
<td>ACE</td>
<td>R1(A,B)</td>
<td>R1(A,B)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>R2(C,D)</td>
<td>R2(C,D)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>R3(A,C,E)</td>
<td>R3(A,C,E)</td>
</tr>
<tr>
<td>{ A → BC, D → AE}</td>
<td>D</td>
<td>R1(A,B,C)</td>
<td>R1(D,A,E)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>R2(D,A,E)</td>
<td>R2(A,B,C)</td>
</tr>
</tbody>
</table>
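The decompositions in the table can be sanity-checked by testing whether each applicable FD's left side is a superkey of the relation at hand. The sketch below (ours) restricts F to the relation's attributes rather than computing a full FD projection — a simplification that suffices for these examples:

```python
def closure(attrs, fds):
    # standard attribute-closure algorithm
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def bcnf_violations(relation, fds):
    """Nontrivial FDs (restricted to the relation) whose left side
    is not a superkey of the relation."""
    rel = set(relation)
    local = [(l, r & rel) for l, r in fds if l <= rel and (r & rel) - l]
    return [(l, r) for l, r in local if not rel <= closure(l, local)]

F1 = [({'C'}, {'D'})]   # the first FD set in the table above
```

With F1, C → D violates BCNF in R(A,B,C,D,E) (C is not a superkey), while R2(C,D) has no violation — matching the table's decomposition.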
Consider the file
- Sailors (sname, sid, age, rating)
\( n_{\text{sailors}} = 10,000 \) records, \( b_{\text{sailors}} = 1,000 \) pages (10 records/page)
\( V(\text{rating}, \text{Sailors}) = 10 \) and \( V(\text{age}, \text{Sailors}) = 100 \)
- Query:
```sql
SELECT name
FROM Sailors
WHERE rating = 7 AND age = 40
```
Describe alternative plans to process the query and estimate their cost assuming uniformity and attribute independence.
How many records do we expect in the result?
\[ 10,000 \times \frac{1}{10} \times \frac{1}{100} = 10 \text{ records} \]
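The estimate is a direct product of the table size and the two selectivities; as a one-line check (statistics taken from the problem statement):

```python
# Expected result size under uniformity and attribute independence.
n_sailors = 10_000
V_rating, V_age = 10, 100
expected = n_sailors * (1 / V_rating) * (1 / V_age)   # 10 records
```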
Single Relation Plans
1] **Sequential (Linear) Scan.** Read the whole Sailors file and select the records that satisfy both conditions.
Cost = \( b_{\text{Sailors}} = 1,000 \) pages
2] **Binary Search.** If the file is sorted on Rating (or Age), do binary search to find the first record satisfying the condition Rating=7 (or Age=40). Retrieve the remaining matching records sequentially and filter out non-qualifying tuples.
• *Question:* Which sort order makes this cheaper: Rating or Age?
Cost if sorted on Rating = \( \lceil \log_2 1000 \rceil + 99 = 109 \) (the 1,000 matching sailors occupy 100 pages)
Cost if sorted on Age = \( \lceil \log_2 1000 \rceil + 9 = 19 \) (the 100 matching sailors occupy 10 pages)
3] **Single-index.** Use an index for one of the two conditions and check the other condition in memory: *if we have an index on Rating we use this index to find records where Rating=7. For each such record we check the condition Age=40.*
• Not necessarily good solution if index is not clustered. There are 1000 sailors with Rating=7. If index on Rating is not clustered, we need 1,000 random page accesses to retrieve these sailors. Better to do sequential scan.
• If the index is clustered, all sailors with Rating=7 are in 100 consecutive pages.
Additional Single Relation Plans
4] **Multidimensional index.** If we have a multidimensional index (e.g., Grid file) on rating and age we can use this index to retrieve the records that satisfy both conditions. **Cost depends on the index.**
5] **Multiple-indexes.** Use more than one index and do intersection of record pointers (Rids) before retrieval of actual records: *use the index on Age and find all Rids of records where Age=40. Then use an index on Rating to find all Rids where Rating=7. Find the intersection of these two sets of Rids (using sorting) and only retrieve records that belong to the intersection. Applicable only if both indexes contain record pointers.*
6] **Covering Index.** If all attributes mentioned in the query match a dense index we can do an index only scan without accessing the actual file. *If, for instance, we have a B+-tree on <name, age, rating> we can answer the query by doing a sequential scan of the index. If the B+-tree is on <age, rating, name> we only need to read the part of the index for age = 40 and rating = 7.*
Multi-Relation Plans - Estimation of Output Size
Assume that there are 10,000 Sailors, 100,000 Reservations, 1,000 Boats and $V(\text{date, Reserves})=1000$, $V(\text{color, Boat})=10$
What is the expected number of records in the result of the query?
```sql
SELECT *
FROM Sailor S, Reserves R, Boats B
WHERE S.sid = R.sid AND R.bid = B.bid
  AND R.date = '1.1.2005' AND B.color = 'red'
```
There are 100 reservations ($n_{\text{reserves}}/V(\text{date, Reserves})$) on 1.1.2005. 10% of these reservations (i.e., 10) are on red boats. Thus, the expected result contains 10 records.
Alternative solution by estimating the join result before applying selections:
- S JOIN R contains 100,000 records
- (S JOIN R) JOIN B contains 100,000 records
- The output size of $\sigma_{\text{R.date}=1.1.2005 \text{ and B.color}=\text{red}}$ (S JOIN R JOIN B) is $100,000 \times \text{Selectivity}_{\text{R.date}=1.1.2005} \times \text{Selectivity}_{\text{color}=\text{red}} = 100,000 \times 1/1000 \times 1/10 = 10$
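The join-first derivation above is the same product in code (numbers from the problem statement; uniformity and independence assumed):

```python
# S ⋈ R ⋈ B keeps one row per reservation (key/foreign-key joins),
# then the two selections scale the result down.
n_reserves = 100_000
V_date, V_color = 1_000, 10
join_size = n_reserves
expected = join_size * (1 / V_date) * (1 / V_color)   # 10 records
```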
Evaluation/execution Plans
Assume that for all files there are 10 records per page and you have the following indexes.
1] Hash index on S.sid for Sailors (no overflow buckets)
2] Clustered B+-tree on R.date for Reserves (2 levels)
3] Hash index on B.bid for Boats (no overflow buckets)
Describe different plans for processing the query and estimate their cost.
Alternative join orders after heuristic optimization (pushing the selections down)
1] (Sailor JOIN \( \sigma_{\text{R.date}=1.1.2005} \) Reserves) JOIN \( \sigma_{\text{color}=\text{red}} \) Boats
2] Sailor JOIN (\( \sigma_{\text{R.date}=1.1.2005} \) Reserves JOIN \( \sigma_{\text{color}=\text{red}} \) Boats)
Total cost: $C_1 + C_2 + C_3$
$C_1$: Cost of computing $\text{Temp1} = (\text{Sailor JOIN } \sigma_{R.\text{date}=1.1.2005} \text{Reserves})$
$C_2$: Cost of computing $\text{Temp2} = \sigma_{\text{color}=\text{red}} \text{Boats}$
$C_3$: Cost of $\text{Temp1 JOIN Temp2}$
In order to estimate $C_1$, we need to determine the best sub-plan $P_1$ for computing $\text{Temp1}$
Some alternatives:
1. BNL using Sailor as the outer relation
2. BNL using Reserves as the outer relation
3. Sort-merge join (we have to sort both tables on sid)
4. Hash join
5. Index nested loop with Reserves as the outer relation. Sailors contains a hash index on the join attribute sid. Furthermore, we have a selective condition $(\sigma_{R.\text{date}=1.1.2005} \text{Reserves})$ and a clustering index on $\text{Reserves.date}$.
Best Option
Estimation of $C_1$
$\text{Temp1} = \text{Sailor JOIN } \sigma_{R.\text{date}=1.1.2005} \text{Reserves}$
**Sub-Plan P1**: Use clustering B+-tree on Reserves.date to find all reservations on 1.1.2005.
How many?
$n_{\text{Reserves}}/V(\text{Date,Reserves})=100,000/1,000=100$
How many pages do I need to access, in order to find these reservations: $2+10$ (2 in the index and 10 in the file – the reservations are ordered on the date)
For each reservation, I retrieve the corresponding sailor record using the hash index on Sailor.sid, with cost 2 per reservation.
Total cost:
$12 + 100 \times 2 = 212$
- Do I need to consider other algorithms that produce results in interesting orders?
In this case **no**, because the only useful order would be on the Reserves.bid, which cannot be generated by any algorithm in this example.
Estimation of $C_2 - \text{Temp2} = \sigma_{\text{color}=\text{red}} \text{Boats}$ and $C_3 - \text{Temp1 JOIN Temp2}$
$C_2$ using Sub-Plan $P_2$: For Boats, I only have a hash index on bid (no index on color). Therefore, in order to find red boats, I need to scan the entire (1,000 boat records) file with cost $b_{\text{Boats}} = 100$ pages. Since only 10% of the boats are red, I expect to retrieve 100 records.
$C_3$ using Sub-Plan $P_3$ (materialization). Store the intermediate results of $P_2$ and $P_1$, then read them and join them on the bid using any join algorithm. Very expensive.
$C_3$ using Sub-Plan $P_3$ (pipelining). Recall that during $P_1$, we generate 100 (Sailor JOIN $\sigma_{\text{R.date}=1.1.2005} \text{Reserves}$) records. When each such record is generated, we find the corresponding Boat, using the hash index on Boats.bid with cost 2. If the color is not red I discard the record (i.e., I perform the selection on the fly without the need for $P_2$). This corresponds to the Index Nested Loops algorithm and has cost $100 \times 2$.
Total cost for 1st expression: $C_1 + C_3 = 212 + 200 = 412$.
Evaluation plan for 1st expression
\((\text{Sailor JOIN } \sigma_{\text{R.date}=1.1.2005} \text{Reserves}) \text{ JOIN } \sigma_{\text{color}=\text{red}} \text{Boats})\)
1. Use B+-tree on R.date
- Will retrieve 100 records with cost 2+10
2. Index nested loop for every reservation on 1.1.2005
- Use hash index on S.sid to find information about the sailor
- Cost 100(1+1)
- Output contains 100 records
3. Index nested loop for every record of step 2
- Use hash index on B.bid to find information about the boat
- Filter B.color condition in memory
- The cost is 1+1 per reservation, i.e., total cost 200 for all reservations of step 1
- The output contains only 10 records (10% of the reservations are on red boats)
Total cost = 12+200+200=412
Evaluation plan for 2nd expression
Sailor JOIN (σ_{R.date=1.1.2005} Reserves JOIN σ_{color=red} Boats) using the same concepts
Total cost = 12 + 200 + 20 = 232
1. Use B+-tree on R.date: retrieves the 100 reservations on 1.1.2005 with cost 2+10.
2. Index nested loop: for every reservation that satisfies the date condition, use the hash index on B.bid to find information about the boat and filter the B.color condition in memory. The cost is 1+1 per reservation, i.e., total cost 200 for all reservations of step 1. The output contains only 10 records (10% of the reservations are on red boats).
3. Index nested loop: for every record that survives steps 1 and 2, use the hash index on S.sid to find information about the sailor. The cost is 1+1 per record, i.e., total cost 20 for the 10 records of step 2.
The optimizer will choose to execute the last plan, i.e., for Sailor JOIN (\(\sigma_{R.\text{date}=1.1.2005}\) Reserves JOIN \(\sigma_{\text{color}=\text{red}}\) Boats), because it is cheaper. Intuitively, this is expected because this expression first performs the joins on the tables that involve selections. Real optimizers follow similar principles but include additional factors such as CPU time, and the difference between random and sequential I/O operation.
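As a sanity check, both plan costs can be recomputed from the parameters quoted above (selection cost 12, 100 matching reservations, 1+1 per index probe, 10% red boats). The class and method names below are invented for this illustration; the numbers are the ones from the exercise, nothing here is measured:

```java
// Sketch: recompute the two plan costs from the parameters in the text.
public class PlanCost {
    static final int SELECT_DATE = 12;        // B+-tree on R.date: 2 + 10
    static final int MATCHING_RESERVATIONS = 100;
    static final int PROBE = 2;               // hash index lookup: 1 + 1
    static final double RED_FRACTION = 0.10;  // 10% of reservations involve red boats

    // (Sailor JOIN sel(Reserves)) JOIN sel(Boats): probe Sailors for every
    // reservation, then probe Boats for every joined record.
    static int firstPlan() {
        return SELECT_DATE
             + MATCHING_RESERVATIONS * PROBE   // join with Sailors
             + MATCHING_RESERVATIONS * PROBE;  // join with Boats
    }

    // Sailor JOIN (sel(Reserves) JOIN sel(Boats)): probe Boats first, which
    // shrinks the intermediate result to 10 records before joining Sailors.
    static int secondPlan() {
        int afterBoatFilter = (int) (MATCHING_RESERVATIONS * RED_FRACTION); // 10
        return SELECT_DATE
             + MATCHING_RESERVATIONS * PROBE  // join with Boats
             + afterBoatFilter * PROBE;       // join with Sailors
    }

    public static void main(String[] args) {
        System.out.println(firstPlan());   // 412
        System.out.println(secondPlan());  // 232
    }
}
```

The 180-unit gap comes entirely from probing Sailors 100 times instead of 10.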
Is the schedule R2(A) R1(A) W1(A) W2(B) C2 C1 recoverable and cascadeless?
Both – no transaction reads items written by the other.
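The claim can be checked mechanically. The sketch below (operation encoding such as "R2A" and the class name are invented for this illustration) walks the schedule and flags any read of an item last written by a different, still-uncommitted transaction; the absence of such dirty reads is exactly what makes the schedule cascadeless, and a cascadeless schedule is also recoverable:

```java
import java.util.*;

public class ScheduleCheck {
    // ops like "R2A" = transaction 2 reads item A; "W1A" = transaction 1
    // writes item A; "C1" = transaction 1 commits.
    static boolean cascadeless(String[] ops) {
        Map<Character, Integer> uncommittedWriter = new HashMap<>();
        for (String op : ops) {
            char kind = op.charAt(0);
            int txn = op.charAt(1) - '0';
            if (kind == 'C') {
                // a commit clears this transaction's pending writes
                uncommittedWriter.values().removeIf(w -> w == txn);
            } else {
                char item = op.charAt(2);
                Integer writer = uncommittedWriter.get(item);
                if (kind == 'R' && writer != null && writer != txn) {
                    return false;  // dirty read -> cascading rollback possible
                }
                if (kind == 'W') {
                    uncommittedWriter.put(item, txn);
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String[] schedule = {"R2A", "R1A", "W1A", "W2B", "C2", "C1"};
        System.out.println(cascadeless(schedule));  // true: no dirty reads occur
    }
}
```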
Rewrite the schedule R2(A) R1(A) W1(A) W2(B) according to 2PL protocol (i.e., add **lock-S, lock-X, unlock** statements below). Explain briefly whether the schedule will terminate or fail and why?
<table>
<thead>
<tr>
<th>T1</th>
<th>T2</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>lock_S(A)</td>
</tr>
<tr>
<td></td>
<td>READ(A)</td>
</tr>
<tr>
<td>lock_S(A)</td>
<td></td>
</tr>
<tr>
<td>READ(A)</td>
<td></td>
</tr>
<tr>
<td>lock_X(A) - cannot be granted; T1 waits</td>
<td></td>
</tr>
<tr>
<td></td>
<td>lock_X(B)</td>
</tr>
<tr>
<td></td>
<td>WRITE(B)</td>
</tr>
<tr>
<td></td>
<td>unlock(A), unlock(B)</td>
</tr>
<tr>
<td>lock_X(A) - granted</td>
<td></td>
</tr>
<tr>
<td>WRITE(A)</td>
<td></td>
</tr>
<tr>
<td>unlock(A)</td>
<td></td>
</tr>
</tbody>
</table>
The schedule terminates. T1 blocks when it tries to upgrade its lock on A, because T2 still holds a shared lock on A. There is no deadlock: T2 waits for nothing, so it acquires lock_X(B), writes B, and releases its locks, after which T1 obtains lock_X(A) and completes.
• Rewrite the schedule R2(A) R1(A) W1(A) W2(B) according to **TS-ordering** protocol (i.e., add **read_TS** and **write_TS** statements) assuming that the timestamps of T1, T2 are 2, 1, respectively. The initial timestamps of A and B are 0. Explain briefly whether the schedule will terminate or fail and why?
<table>
<thead>
<tr>
<th>T1, timestamp=2</th>
<th>T2, timestamp=1</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>READ(A)</td>
</tr>
<tr>
<td></td>
<td>read_TS(A)=1</td>
</tr>
<tr>
<td>READ(A)</td>
<td></td>
</tr>
<tr>
<td>read_TS(A)=2</td>
<td></td>
</tr>
<tr>
<td>WRITE(A)</td>
<td></td>
</tr>
<tr>
<td>write_TS(A)=2</td>
<td></td>
</tr>
<tr>
<td></td>
<td>WRITE(B)</td>
</tr>
<tr>
<td></td>
<td>write_TS(B)=1</td>
</tr>
</tbody>
</table>
Every operation passes its timestamp test (e.g., for W1(A): TS(T1)=2 ≥ read_TS(A)=2 and TS(T1)=2 ≥ write_TS(A)=0), so the schedule terminates without any rollback.
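The timestamp tests above can be replayed with a minimal sketch of the basic TS-ordering rules (no Thomas write rule). The class and method names are invented for this illustration; the rules are the standard read/write tests:

```java
import java.util.*;

// Basic TS-ordering: a read fails if the item was overwritten by a younger
// transaction; a write fails if a younger transaction already read or wrote it.
public class TsOrdering {
    final Map<Character, Integer> readTS = new HashMap<>();
    final Map<Character, Integer> writeTS = new HashMap<>();

    int rts(char x) { return readTS.getOrDefault(x, 0); }
    int wts(char x) { return writeTS.getOrDefault(x, 0); }

    // Returns false when the check fails (the transaction would roll back).
    boolean read(int ts, char x) {
        if (ts < wts(x)) return false;
        readTS.put(x, Math.max(rts(x), ts));
        return true;
    }

    boolean write(int ts, char x) {
        if (ts < rts(x) || ts < wts(x)) return false;
        writeTS.put(x, ts);
        return true;
    }

    public static void main(String[] args) {
        TsOrdering s = new TsOrdering();
        boolean ok = s.read(1, 'A')    // R2(A), TS(T2)=1
                  && s.read(2, 'A')    // R1(A), TS(T1)=2
                  && s.write(2, 'A')   // W1(A)
                  && s.write(1, 'B');  // W2(B)
        System.out.println(ok);        // true: the schedule terminates
    }
}
```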
Multiversion Timestamp Protocol
- Rewrite the schedule R2(A) R1(A) W1(A) W2(B) according to multiversion **TS-ordering** protocol (i.e., add **read_TS** and **write_TS** statements and specify the versions of the items) assuming that the timestamps of T1, T2 are 1, 2, respectively and that the initial versions of items are A0, B0. Complete the correct version numbers (e.g., READ(A0) instead of READ(A)).
<table>
<thead>
<tr>
<th>T1, timestamp=1</th>
<th>T2, timestamp=2</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>READ(A0)</td>
</tr>
<tr>
<td></td>
<td>read_TS(A0)=2</td>
</tr>
<tr>
<td>READ(A0)</td>
<td></td>
</tr>
<tr>
<td>read_TS(A0)=2</td>
<td></td>
</tr>
<tr>
<td>WRITE(A) - failure</td>
<td></td>
</tr>
<tr>
<td></td>
<td>WRITE(B2)</td>
</tr>
<tr>
<td></td>
<td>write_TS(B2)=2</td>
</tr>
</tbody>
</table>
W1(A) fails because the version it would overwrite, A0, has already been read by the younger transaction T2: read_TS(A0)=2 > TS(T1)=1, so T1 is rolled back. W2(B) succeeds and creates the new version B2.
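The decisive check is the multiversion write test. A minimal sketch (class and method names invented for this illustration):

```java
// Multiversion TS write rule, reduced to the single test that matters here:
// a write by a transaction with timestamp ts on a version already read by a
// younger transaction (read_TS of that version > ts) is rejected.
public class MvccCheck {
    static boolean writeAllowed(int ts, int readTsOfVersion) {
        return ts >= readTsOfVersion;
    }

    public static void main(String[] args) {
        int readTsA0 = 2;                              // set by R2(A0) and R1(A0)
        System.out.println(writeAllowed(1, readTsA0)); // false: W1(A) fails, T1 rolls back
        System.out.println(writeAllowed(2, 0));        // true:  W2(B) creates version B2
    }
}
```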
COGNITIVE COMPLEXITY
A new way of measuring understandability
By G. Ann Campbell,
Product Owner - SonarSource SA
Abstract
Cyclomatic Complexity was initially formulated as a measurement of the “testability and maintainability” of the control flow of a module. While it excels at measuring the former, its underlying mathematical model is unsatisfactory at producing a value that measures the latter. This white paper describes a new metric that breaks from the use of mathematical models to evaluate code in order to remedy Cyclomatic Complexity’s shortcomings and produce a measurement that more accurately reflects the relative difficulty of understanding, and therefore of maintaining methods, classes, and applications.
A note on terminology
While Cognitive Complexity is a language-neutral metric that applies equally to files and classes, and to methods, procedures, functions, and so on, the Object-Oriented terms “class” and “method” are used for convenience.
# Table of Contents
- **Introduction**
- **An illustration of the problem**
- **Basic criteria and methodology**
- **Ignore shorthand**
- **Increment for breaks in the linear flow**
  - Catches
  - Switches
  - Sequences of logical operators
  - Recursion
  - Jumps to labels
- **Increment for nested flow-break structures**
- **The implications**
- **Conclusion**
- **References**
- **Appendix A: Compensating Usages**
- **Appendix B: Specification**
- **Appendix C: Examples**
- **Change log**
Introduction
Thomas J. McCabe's Cyclomatic Complexity has long been the de facto standard for measuring the complexity of a method's control flow. It was originally intended “to identify software modules that will be difficult to test or maintain”[1], but while it accurately calculates the minimum number of test cases required to fully cover a method, it is not a satisfactory measure of understandability. This is because methods with equal Cyclomatic Complexity do not necessarily present equal difficulty to the maintainer, leading to a sense that the measurement “cries wolf” by over-valuing some structures, while under-valuing other constructs.
At the same time, Cyclomatic Complexity is no longer comprehensive. Formulated in a Fortran environment in 1976, it doesn't include modern language structures like try/catch and lambdas.
And finally, because each method has a minimum Cyclomatic Complexity score of one, it is impossible to know whether any given class with a high aggregate Cyclomatic Complexity is a large, easily maintained domain class, or a small class with a complex control flow. Beyond the class level, it is widely acknowledged that the Cyclomatic Complexity scores of applications correlate to their lines of code totals. In other words, Cyclomatic Complexity is of little use above the method level.
As a remedy for these problems, Cognitive Complexity has been formulated to address modern language structures, and to produce values that are meaningful at the class and application levels. More importantly, it departs from the practice of evaluating code based on mathematical models so that it can yield assessments of control flow that correspond to programmers’ intuitions about the mental, or cognitive effort required to understand those flows.
An illustration of the problem
It is useful to begin the discussion of Cognitive Complexity with an example of the problem it is designed to address. The two following methods have equal Cyclomatic Complexity, but are strikingly different in terms of understandability.
```java
int sumOfPrimes(int max) { // +1
int total = 0;
OUT: for (int i = 1; i <= max; ++i) { // +1
for (int j = 2; j < i; ++j) { // +1
if (i % j == 0) { // +1
continue OUT;
}
}
total += i;
}
return total;
} // Cyclomatic Complexity 4
String getWords(int number) { // +1
switch (number) {
case 1: // +1
return "one";
case 2: // +1
return "a couple";
case 3: // +1
return "a few";
default:
return "lots";
}
} // Cyclomatic Complexity 4
```
The mathematical model underlying Cyclomatic Complexity gives these two methods equal weight, yet it is intuitively obvious that the control flow of `sumOfPrimes` is more difficult to understand than that of `getWords`. This is why Cognitive Complexity abandons the use of mathematical models for assessing control flow in favor of a set of simple rules for turning programmer intuition into numbers.
Basic criteria and methodology
A Cognitive Complexity score is assessed according to three basic rules:
1. Ignore structures that allow multiple statements to be readably shorthanded into one
2. Increment (add one) for each break in the linear flow of the code
3. Increment when flow-breaking structures are nested
Additionally, a complexity score is made up of four different types of increments:
A. Nesting - assessed for nesting control flow structures inside each other
B. Structural - assessed on control flow structures that are subject to a nesting increment, and that increase the nesting count
C. Fundamental - assessed on statements not subject to a nesting increment
D. Hybrid - assessed on control flow structures that are not subject to a nesting increment, but which do increase the nesting count
While the type of an increment makes no difference in the math - each increment adds one to the final score - making a distinction among the categories of features being counted makes it easier to understand where nesting increments do and do not apply.
These rules and the principles behind them are further detailed in the following sections.
**Ignore shorthand**
A guiding principle in the formulation of Cognitive Complexity has been that it should incent good coding practices. That is, it should either ignore or discount features that make code more readable.
The method structure itself is a prime example. Breaking code into methods allows you to condense multiple statements into a single, evocatively named call, i.e. to “shorthand” it. Thus, Cognitive Complexity does not increment for methods.
Cognitive Complexity also ignores the null-coalescing operators found in many languages, again because they allow short-handing multiple lines of code into one. For example, both of the following code samples do the same thing:
```java
// Without the null-coalescing operator:
MyObj myObj = null;
if (a != null) {
    myObj = a.myObj;
}

// With the null-coalescing operator:
MyObj myObj = a?.myObj;
```
The meaning of the version on the left takes a moment to process, while the version on the right is immediately clear once you understand the null-coalescing syntax. For that reason, Cognitive Complexity ignores null-coalescing operators.
**Increment for breaks in the linear flow**
Another guiding principle in the formulation of Cognitive Complexity is that structures that break code’s normal linear flow from top to bottom, left to right require maintainers to work harder to understand that code. In acknowledgement of this extra effort, Cognitive Complexity assesses structural increments for:
- Loop structures: `for`, `while`, `do while`, ...
- Conditionals: ternary operators, `if`, `#if`, `#ifdef`, ...
It assesses hybrid increments for:
- `else if`, `elif`, `else`, ...
No nesting increment is assessed for these structures because the mental cost has already been paid when reading the `if`.
These increment targets will seem familiar to those who are used to Cyclomatic Complexity. In addition, Cognitive Complexity also increments for:
Catches
A catch represents a kind of branch in the control flow just as much as an if. Therefore, each catch clause results in a structural increment to Cognitive Complexity. Note that a catch only adds one point to the Cognitive Complexity score, no matter how many exception types are caught. try and finally blocks are ignored altogether.
Switches
A switch and all its cases combined incurs a single structural increment.
Under Cyclomatic Complexity, a switch is treated as an analog to an if-else if chain. That is, each case in the switch causes an increment because it causes a branch in the mathematical model of the control flow.
But from a maintainer’s point of view, a switch - which compares a single variable to an explicitly named set of literal values - is much easier to understand than an if-else if chain because the latter may make any number of comparisons, using any number of variables and values.
In short, an if-else if chain must be read carefully, while a switch can often be taken in at a glance.
Sequences of logical operators
For similar reasons, Cognitive Complexity does not increment for each binary logical operator. Instead, it assesses a fundamental increment for each sequence of binary logical operators. For instance, consider the following pairs:
```
a && b
a && b && c && d

a || b
a || b || c || d
```
Understanding the second line in each pair isn’t that much harder than understanding the first. On the other hand, there is a marked difference in the effort to understand the following two lines:
```
a && b && c && d
a || b && c || d
```
Because boolean expressions become more difficult to understand with mixed operators, Cognitive complexity increments for each new sequence of like operators. For instance:
```java
if (a          // +1 for `if`
    && b       // +1
    && c
    || d       // +1
    || e
    && f)      // +1

if (a          // +1 for `if`
    && !(b && c))  // +2 (two sequences of `&&`)
```
While Cognitive Complexity offers a “discount” for like operators relative to Cyclomatic Complexity, it does increment for all sequences of binary boolean operators such as those in variable assignments, method invocations, and return statements.
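The "one increment per sequence of like operators" rule can be illustrated on an already-tokenized operator list. The class and method names below are invented for this sketch; it deliberately ignores parentheses and negation and only shows the sequence-counting idea:

```java
import java.util.*;

public class OperatorSequences {
    // One increment each time a maximal run of like binary logical
    // operators begins.
    static int increments(List<String> operators) {
        int count = 0;
        String previous = null;
        for (String op : operators) {
            if (!op.equals(previous)) {
                count++;           // a new sequence of like operators begins
            }
            previous = op;
        }
        return count;
    }

    public static void main(String[] args) {
        // a && b && c || d || e && f  ->  &&-run, ||-run, &&-run
        System.out.println(increments(List.of("&&", "&&", "||", "||", "&&"))); // 3
        // a && b && c && d  ->  a single run
        System.out.println(increments(List.of("&&", "&&", "&&"))); // 1
    }
}
```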
**Recursion**
Unlike Cyclomatic Complexity, Cognitive Complexity adds a fundamental increment for each method in a recursion cycle, whether direct or indirect. There are two motivations for this decision. First, recursion represents a kind of “meta-loop”, and Cognitive Complexity increments for loops. Second, Cognitive Complexity is about estimating the relative difficulty of understanding the control flow of a method, and even some seasoned programmers find recursion difficult to understand.
**Jumps to labels**
goto, and break or continue to a label add fundamental increments to Cognitive Complexity. But because an early return can often make code much clearer, no other jumps or early exits cause an increment.
Increment for nested flow-break structures
It seems intuitively obvious that a linear series of five if and for structures would be easier to understand than that same five structures successively nested, regardless of the number of execution paths through each series. Because such nesting increases the mental demands to understand the code, Cognitive Complexity assesses a nesting increment for it.
Specifically, each time a structure that causes a structural or hybrid increment is nested inside another such structure, a nesting increment is added for each level of nesting. For instance, in the following example, there is no nesting increment for the method itself or for the try because neither structure results in either a structural or a hybrid increment:
```java
void myMethod () {
try {
if (condition1) { // +1
for (int i = 0; i < 10; i++) { // +2 (nesting=1)
while (condition2) { … } // +3 (nesting=2)
}
}
}
catch (ExcepType1 | ExcepType2 e) { // +1
if (condition2) { … } // +2 (nesting=1)
}
} // Cognitive Complexity 9
```
However, the if, for, while, and catch structures are all subject to both structural and nesting increments.
Additionally, while top-level methods are ignored, and there is no structural increment for lambdas, nested methods, and similar features, such methods do increment the nesting level:
```java
void myMethod2 () {
Runnable r = () -> {
if (condition1) { … } // +2 (nesting=1)
};
} // Cognitive Complexity 2
```
The implications
Cognitive Complexity was formulated with the primary goal of calculating method scores that more accurately reflect methods’ relative understandability, and with secondary goals of addressing modern language constructs and producing metrics that are valuable above the method level. Demonstrably, the goal of addressing modern language constructs has been achieved. The other two goals are examined below.
Intuitively ‘right’ complexity scores
This discussion began with a pair of methods with equal Cyclomatic Complexity but decidedly unequal understandability. Now it is time to re-examine those methods and calculate their Cognitive Complexity scores:
```java
int sumOfPrimes(int max) {
int total = 0;
OUT: for (int i = 1; i <= max; ++i) { // +1
for (int j = 2; j < i; ++j) { // +2
if (i % j == 0) { // +3
continue OUT; // +1
}
}
total += i;
}
return total;
} // Cognitive Complexity 7
```
```java
String getWords(int number) {
switch (number) { // +1
case 1:
return "one";
case 2:
return "a couple";
case 3:
return "a few";
default:
return "lots";
}
} // Cognitive Complexity 1
```
The Cognitive Complexity algorithm gives these two methods markedly different scores, ones that are far more reflective of their relative understandability.
Metrics that are valuable above the method level
Further, because Cognitive Complexity does not increment for the method structure, aggregate numbers become useful. Now you can tell the difference between a domain class - one with a large number of simple getters and setters - and one that contains a complex control flow by simply comparing their metric values. Cognitive Complexity thus becomes a tool for measuring the relative understandability of classes and applications.
Conclusion
The processes of writing and maintaining code are human processes. Their outputs must adhere to mathematical models, but they do not fit into mathematical models themselves. This is why mathematical models are inadequate to assess the effort they require.
Cognitive Complexity breaks from the practice of using mathematical models to assess software maintainability. It starts from the precedents set by Cyclomatic Complexity, but uses human judgment to assess how structures should be counted, and to decide what should be added to the model as a whole. As a result, it yields method complexity scores which strike programmers as fairer relative assessments of understandability than have been available with previous models. Further, because Cognitive Complexity charges no “cost of entry” for a method, it produces those fairer relative assessments not just at the method level, but also at the class and application levels.
References
[1] Thomas J. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, December 1976.
Appendix A: Compensating Usages
Cognitive Complexity is designed to be a language-agnostic measurement, but it cannot be ignored that different languages offer different features. For instance, there is no else if structure in COBOL, and until recently JavaScript lacked a class-like structure. Unfortunately, those deficits don’t prevent developers from needing those structures or from trying to construct something analogous with the tools at hand. In such cases, a strict application of Cognitive Complexity’s rules would result in disproportionately high scores.
For that reason, and in order not to penalize the use of one language over another, exceptions may be made for language deficits, i.e. structures which are commonly used, and expected in most modern languages, but missing from the language under consideration, such as COBOL’s missing else if.
On the other hand, when a language innovates to introduce a feature, such as Java 7’s ability to catch multiple exception types at once, the lack of that innovation in other languages should not be considered a deficit, and thus there should be no exception.
This implies that if catching multiple exception types at once becomes a commonly-expected language feature, an exception might be added for “extra” catch clauses in languages that do not offer the ability. This possibility is not excluded, but evaluations of whether or not to add such future exceptions should err on the side of conservatism. That is, new exceptions should come slowly.
On the other hand, if a future version of the COBOL standard adds an “else if” structure, the tendency should be to drop the COBOL “else … if” exception (described below) as soon as is practical.
To date, two exceptions have been identified:
COBOL: Missing else if
For COBOL, which lacks an else if structure, an if as the only statement in an else clause does not incur a nesting penalty. Additionally, there is no increment for the else itself. That is, an else followed immediately by an if is treated as an else if, even though syntactically it is not.
For example:
```cobol
IF condition1 // +1 structure, +0 for nesting
...
ELSE
IF condition2 // +1 structure, +0 for nesting
...
ELSE
IF condition3 // +1 structure, +0 for nesting
statement1
IF condition4 // +1 structure, +1 for nesting
...
END-IF
END-IF
END-IF
END-IF.
```
JavaScript: Missing class structures
Despite the recent addition of classes to JavaScript by the ECMAScript 6 specification, the feature is not yet widely adopted. In fact, many popular frameworks require the continued use of the compensating idiom: the use of an outer function as a stand-in to create a kind of namespace or faux class. So as not to penalize JavaScript users, such outer functions are ignored when they are used purely as a declarative mechanism, that is when they contain only declarations at the top level.
However, the presence at the top level of a function (i.e. not nested inside a sub-function) of statements subject to structural increments indicates something other than a pure declarative usage. Consequently, such functions should receive a standard treatment.
For example:
```javascript
function(...) { // declarative; ignored
var foo;
bar.myFun = function(...) { // nesting = 0
if(condition) { // +1
...
}
}
} // total complexity = 1
function(...) { // non-declarative; not ignored
var foo;
if (condition) { // +1; top-level structural increment
...
}
bar.myFun = function(...) { // nesting = 1
if(condition) { // +2
...
}
}
} // total complexity = 3
```
Appendix B: Specification
The purpose of this section is to give a concise enumeration of the structures and circumstances that increment Cognitive Complexity, subject to the exceptions listed in Appendix A. This is meant to be a comprehensive listing without being language-exhaustive. That is, if a language has an atypical spelling for a key word, such as `elif` for `else if`, its omission here is not intended to omit it from the specification.
B1. Increments
There is an increment for each of the following:
- `if`, `else if`, `else`, ternary operator
- `switch`
- `for`, `foreach`
- `while`, `do while`
- `catch`
- `goto LABEL`, `break LABEL`, `continue LABEL`
- sequences of binary logical operators
- each method in a recursion cycle
B2. Nesting level
The following structures increment the nesting level:
- `if`, `else if`, `else`, ternary operator
- `switch`
- `for`, `foreach`
- `while`, `do while`
- `catch`
- nested methods and method-like structures such as lambdas
B3. Nesting increments
The following structures receive a nesting increment commensurate with their nested depth inside `B2` structures:
- `if`, ternary operator
- `switch`
- `for`, `foreach`
- `while`, `do while`
- `catch`
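The interaction between B1 (increments), B2 (nesting level), and B3 (nesting increments) can be sketched on a toy control-flow tree. The node kinds and tree-building API below are invented for this illustration, and only a subset of the structures is covered; the counting rules themselves follow the specification above:

```java
import java.util.*;

public class CognitiveSketch {
    // B1/B3 (subset): structures that increment and take nesting penalties.
    static final Set<String> INCREMENTS =
        Set.of("if", "switch", "for", "while", "catch");
    // B2 (subset): structures that raise the nesting level for their children.
    static final Set<String> NESTING =
        Set.of("if", "switch", "for", "while", "catch", "lambda");

    record Node(String kind, List<Node> children) {
        Node(String kind, Node... children) { this(kind, List.of(children)); }
    }

    static int score(Node n, int nesting) {
        int total = 0;
        if (INCREMENTS.contains(n.kind())) {
            total += 1 + nesting;    // structural increment + nesting increment
        }
        int childNesting = nesting + (NESTING.contains(n.kind()) ? 1 : 0);
        for (Node c : n.children()) {
            total += score(c, childNesting);
        }
        return total;
    }

    public static void main(String[] args) {
        // if { for { while {} } }  ->  1 + 2 + 3 = 6
        Node nested = new Node("if", new Node("for", new Node("while")));
        System.out.println(score(nested, 0)); // 6
        // a flat series of if, for, while  ->  1 + 1 + 1 = 3
        Node flat = new Node("block",
            new Node("if"), new Node("for"), new Node("while"));
        System.out.println(score(flat, 0)); // 3
    }
}
```

The two calls reproduce the intuition from the text: five nested structures cost far more than the same five in a row.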
Appendix C: Examples
From org.sonar.java.resolve.JavaSymbol.java in the SonarJava analyzer:
```java
@Nullable
private MethodJavaSymbol overriddenSymbolFrom(ClassJavaType classType) {
if (classType.isUnknown()) { // +1
return Symbols.unknownMethodSymbol;
}
boolean unknownFound = false;
List<JavaSymbol> symbols = classType.getSymbol().members().lookup(name);
for (JavaSymbol overrideSymbol : symbols) { // +1
if (overrideSymbol.isKind(JavaSymbol.MTH) // +2 (nesting = 1)
&& !overrideSymbol.isStatic()) { // +1
MethodJavaSymbol methodJavaSymbol = (MethodJavaSymbol)overrideSymbol;
if (canOverride(methodJavaSymbol)) { // +3 (nesting = 2)
Boolean overriding = checkOverridingParameters(methodJavaSymbol,
classType);
if (overriding == null) { // +4 (nesting = 3)
if (!unknownFound) { // +5 (nesting = 4)
unknownFound = true;
}
} else if (overriding) { // +1
return methodJavaSymbol;
}
}
}
}
if (unknownFound) { // +1
return Symbols.unknownMethodSymbol;
}
return null;
} // total complexity = 19
```
```java
private void addVersion(final Entry entry, final Transaction txn)
        throws PersistitInterruptedException, RollbackException {
    final TransactionIndex ti = _persistit.getTransactionIndex();
    while (true) {
        try {
            synchronized (this) {
                if (frst != null) {
                    if (frst.getVersion() > entry.getVersion()) {
                        throw new RollbackException();
                    }
                    if (txn.isActive()) {
                        for (Entry e = frst; e != null; e = e.getPrevious()) {
                            final long version = e.getVersion();
                            final long depends = ti.wwDependency(version,
                                txn.getTransactionStatus(), 0);
                            if (depends == TIMED_OUT) {
                                throw new WWRetryException(version);
                            }
                            if (depends != 0 && depends != ABORTED) {
                                throw new RollbackException();
                            }
                        }
                    }
                }
                entry.setPrevious(frst);
                frst = entry;
                break;
            }
        } catch (final WWRetryException re) {
            try {
                final long depends = _persistit.getTransactionIndex()
                    .wwDependency(re.getVersionHandle(), txn.getTransactionStatus(),
                        SharedResource.DEFAULT_MAX_WAIT_TIME);
                if (depends != 0 && depends != ABORTED) {
                    throw new RollbackException();
                }
            } catch (final InterruptedException ie) {
                throw new PersistitInterruptedException(ie);
            }
        } catch (final InterruptedException ie) {
            throw new PersistitInterruptedException(ie);
        }
    }
} // total complexity = 35
```
From org.sonar.api.utils.WildcardPattern.java in SonarQube:
```java
private static String toRegexp(String antPattern, String directorySeparator) {
  final String escapedDirectorySeparator = '\\' + directorySeparator;
  final StringBuilder sb = new StringBuilder(antPattern.length());
  int i = antPattern.startsWith("/")             // +1 for `||`
      || antPattern.startsWith("\\") ? 1 : 0;    // +1 for `?:`
  while (i < antPattern.length()) {              // +1
    final char ch = antPattern.charAt(i);
    if (SPECIAL_CHARS.indexOf(ch) != -1) {       // +2 (nesting = 1)
      sb.append('\\').append(ch);
    } else if (ch == '*') {                      // +1
      if (i + 1 < antPattern.length()            // +3 (nesting = 2)
          && antPattern.charAt(i + 1) == '*') {  // +1
        if (i + 2 < antPattern.length()          // +4 (nesting = 3)
            && isSlash(antPattern.charAt(i + 2))) {  // +1
          sb.append("(?:.*").append(escapedDirectorySeparator).append("|)");
          i += 2;
        } else {                                 // +1
          sb.append(".*");
          i += 1;
        }
      } else {                                   // +1
        sb.append("[^").append(escapedDirectorySeparator).append("]*?");
      }
    } else if (ch == '?') {                      // +1
      sb.append("[^").append(escapedDirectorySeparator).append("]");
    } else if (isSlash(ch)) {                    // +1
      sb.append(escapedDirectorySeparator);
    } else {                                     // +1
      sb.append(ch);
    }
    i++;
  }
  sb.append('$');
  return sb.toString();
} // total complexity = 20
```
Copyright SonarSource S.A., 2016-2017, Switzerland. All content is copyright protected.
From `model.js` in YUI
```javascript
save: function (options, callback) {
var self = this;
if (typeof options === 'function') { // +1
callback = options;
options = {};
}
options || (options = {}); // +1
self._validate(self.toJSON(), function (err) {
if (err) { // +2 (nesting = 1)
callback && callback.call(null, err); // +1
return;
}
self.sync(self.isNew() ? 'create' : 'update', options, function (err, response) { // +2 (nesting = 1)
var facade = {
options : options,
response: response
},
parsed;
if (err) { // +3 (nesting = 2)
facade.error = err;
facade.src = 'save';
self.fire(EVT_ERROR, facade);
} else { // +1
if (!self._saveEvent) { // +4 (nesting = 3)
self._saveEvent = self.publish(EVT_SAVE, {
preventable: false
});
}
if (response) { // +4 (nesting = 3)
parsed = facade.parsed = self._parse(response);
self.setAttrs(parsed, options);
}
self.changed = {};
self.fire(EVT_SAVE, facade);
}
callback && callback.apply(null, arguments); // +1
});
});
return self; // total complexity = 20
}
```
Change log
Version 1.1
6 February 2017
• Update the section on recursion to include indirect recursion
• Add the "Hybrid" increment type and use it to clarify the handling of `else` and `else if`; they are not subject to a nesting increment, but do increase the nesting level
• Clarify that Cognitive Complexity is only concerned with binary boolean operators
• Correct `getWords` to truly have a Cyclomatic Complexity of 4
• Add Appendix A: Compensating Usages
• Update the copyright
• Initiate Change log
Version 1.2
19 April 2017
• Textual adjustments and corrections, such as the use of the word "understandability" instead of "maintainability".
• Add explanation for why a hybrid increment is assessed on `else if` and `else` instead of a structural increment.
• Add Appendix B: Specification
• Add Appendix C: Examples
|
olmocr_science_pdfs
|
2024-12-06
|
2024-12-06
|
f8ed2a09cd3d5c5645611ddc4eb099ec347acd56
|
ABSTRACT
Expressing program correctness often requires relating program data throughout (different branches of) an execution. Such properties can be represented using CTL+FO, a logic that allows mixing temporal and first-order quantification. Verifying that a program satisfies a CTL+FO property is a challenging problem that requires both temporal and data reasoning. Temporal quantifiers require discovery of invariants and ranking functions, while first-order quantifiers demand instantiation techniques. In this paper, we present a constraint-based method for proving CTL+FO properties automatically. Our method makes the interplay between the temporal and first-order quantification explicit in a constraint encoding that combines recursion and existential quantification. By integrating this constraint encoding with an off-the-shelf solver we obtain an automatic verifier for CTL+FO.
1. Introduction
In specifying the correct behaviour of systems, relating data at various stages of a computation is often crucial. Examples include program termination [6] (where the value of a rank function should be decreasing over time), correctness of reactive systems [12] (where each incoming request should be handled in a certain timeframe), and information flow [10] (where for all possible secret input values, the output should be the same). The logic CTL+FO offers a natural specification mechanism for such properties, allowing temporal and first-order quantification to be mixed freely. First-order quantification makes it possible to capture variables dependent on the current system state, and temporal quantifiers relate this data to system states reached at a later point. While CTL+FO and similar logics have been identified as a specification language before, no fully automatic method to check CTL+FO properties on infinite-state systems has been developed. Hence, the current state of the art is either to produce verification tools specific to small subclasses of properties, or to use error-prone program modifications that explicitly introduce and initialize ghost variables, which are then used in (standard) CTL specifications.
In this paper, we present a fully automatic procedure to transform a CTL+FO verification problem into a system of existentially quantified recursive Horn clauses. Such systems can be solved by leveraging recent advances in constraint solving [2], blending first-order and temporal reasoning. Our method benefits from the simplicity of the proposed proof rule and the ability to leverage ongoing advances in Horn constraint solving.
Related Work.
Verification of CTL+FO and its decidability and complexity have been studied (under various names) in the past. Bohn et al. [4] presented the first model-checking algorithm. Predicates partitioning a possibly infinite state space are deduced syntactically from the checked property, and represented symbolically by propositional variables. This allows to leverage the efficiency of standard BDD-based model checking techniques, but the algorithm fails when the needed partition of the state space is not syntactically derivable from the property.
Working on finite-state systems, Hallé et al. [9], Patthak et al. [14] and Rensink [15] discuss a number of different techniques for quantified CTL formulas. In these works, the finiteness of the data domain is exploited to instantiate quantified variables, thus reducing the model checking problem for quantified CTL to standard CTL model checking.
Hodkinson et al. [12] study the decidability of CTL+FO and some fragments on infinite state systems. They show the general undecidability of the problem, but also identify certain decidable fragments. Most notably, they show that by restricting first order quantifiers to state formulas and only applying temporal quantifiers to formulas with at most one free variable, a decidable fragment can be obtained. Finally, Da Costa et al. [7] study the complexity of checking properties over propositional Kripke structures, also providing an overview of related decidability and complexity results. In temporal epistemic logic, Belardinelli et al. [3] show that checking FO-CTLK on a certain subclass of infinite systems can be reduced to finite systems. In contrast, our method directly deals with quantification over infinite domains.
2. Preliminaries
Programs.
We model programs as transition systems. A program $P$ consists of a tuple of program variables $v$, an initial condition $init(v)$, and a transition relation $next(v, v')$. A state is a valuation of $v$. A computation $\pi$ starting in a state $s$ is a maximal sequence of states $s_1, s_2, \ldots$ such that $s_1 = s$ and for each pair of consecutive states $(s, s')$ we have $next(s, s')$. The set of computations of \( P \) starting in \( s \) is denoted by \( \Pi_P(s) \).
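As a concrete illustration of this model, the following Python sketch (our own, not from the paper; the initial condition and transition relation are made-up examples over a single integer variable `v`) represents a program by its `init` and `next` predicates and unrolls a bounded prefix of one computation:

```python
# A minimal sketch of a program as a transition system: init and next_rel are
# predicates over states, and a computation is a maximal sequence of states
# related by next_rel and starting in an initial state.

def init(v):
    return v == 0              # example initial condition

def next_rel(v, v2):
    return v2 == v + 1         # example transition relation: increment v

def bounded_computation(v0, steps):
    """Unroll a length-`steps` prefix of a computation starting in v0."""
    assert init(v0)
    trace = [v0]
    for _ in range(steps):
        v2 = trace[-1] + 1     # successor chosen to satisfy next_rel
        assert next_rel(trace[-1], v2)
        trace.append(v2)
    return trace

print(bounded_computation(0, 4))  # [0, 1, 2, 3, 4]
```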
CTL+FO syntax and semantics.
The following definitions are standard, see e.g. [4, 13].
Let \( T \) be some first order theory and \( \models \) denote its satisfaction relation that we use to describe sets and relations over program states. Let \( c \) range over assertions in \( T \) and \( x \) range over variables. A CTL+FO formula \( \varphi \) is defined by the following grammar using an auxiliary notion of a path formula \( \phi \).
\[
\varphi ::= \forall x : \varphi \mid \exists x : \varphi \mid c \mid \varphi \land \varphi \mid \varphi \lor \varphi \mid \varphi \implies \varphi \mid A\phi \mid E\phi \\
\phi ::= X\varphi \mid G\varphi \mid \varphi \, U \varphi
\]
As usual, we define \( F\varphi = (true \; U \; \varphi) \). The satisfaction relation \( P \models \varphi \) holds if and only if for each \( s \) such that \( init(s) \) we have \( P, s \models \varphi \). We define \( P, s \models \varphi \) as follows using an auxiliary satisfaction relation \( P, \pi \models \phi \). Note that \( d \) ranges over values from the corresponding domain.
\[
P, s \models \forall x : \varphi \iff \text{for all } d \text{ holds } P, s \models \varphi[d/x] \\
P, s \models \exists x : \varphi \iff \text{exists } d \text{ such that } P, s \models \varphi[d/x] \\
P, s \models c \iff s \models c \\
P, s \models \varphi_1 \land \varphi_2 \iff P, s \models \varphi_1 \text{ and } P, s \models \varphi_2 \\
P, s \models \varphi_1 \lor \varphi_2 \iff P, s \models \varphi_1 \text{ or } P, s \models \varphi_2 \\
P, s \models A\phi \iff \text{for all } \pi \in \Pi_P(s) \text{ holds } P, \pi \models \phi \\
P, s \models E\phi \iff \text{exists } \pi \in \Pi_P(s) \text{ such that } P, \pi \models \phi \\
P, \pi \models X\varphi \iff \pi = s_1, s_2, \ldots \text{ and } P, s_2 \models \varphi \\
P, \pi \models G\varphi \iff \pi = s_1, s_2, \ldots \text{ and for all } i \geq 1 \text{ holds } P, s_i \models \varphi \\
P, \pi \models \varphi_1 U\varphi_2 \iff \pi = s_1, s_2, \ldots \text{ and exists } j \geq 1 \text{ such that } P, s_j \models \varphi_2 \text{ and } P, s_i \models \varphi_1 \text{ for } 1 \leq i < j
\]
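For intuition, on a *finite* transition system the semantics of EF can be evaluated directly by graph search; the sketch below (ours, with a made-up three-state successor map) does exactly that, which is the brute-force counterpart of what the constraint encoding in Section 3 achieves symbolically for infinite domains:

```python
from collections import deque

# On a finite system, P, s |= EF p holds iff some state satisfying p is
# reachable from s, which breadth-first search decides directly.
edges = {0: [1], 1: [2, 0], 2: [2]}    # hypothetical successor map next(s, s')

def holds_EF(s, prop):
    """P, s |= EF prop: a state satisfying prop is reachable from s."""
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        if prop(u):
            return True
        for v in edges[u]:
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

print(holds_EF(0, lambda s: s == 2))   # True: 0 -> 1 -> 2
```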
Quantified Horn constraints.
Our method uses the EHSF [2] solver for forall-exists Horn constraints extended with well-foundedness conditions. We omit the syntax and semantics of the constraints solved by EHSF; see [2] for details. Instead, we consider an example:
\[
x \geq 0 \implies \exists y : x \geq y \land \text{rank}(x, y) \land \text{wf}(\text{rank}).
\]
These constraints are an assertion over the interpretation of the “query symbol” rank (the predicate \( \text{wf} \) is not a query symbol, but requires well-foundedness). A solution maps the query symbol into a constraint. Specifically, the example above has a solution that maps \( \text{rank}(x, y) \) to the constraint \((x \geq 0 \land y \leq x - 1)\).
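To see that this solution indeed works, the following sketch (a sanity check of ours, not part of the paper's toolchain) tests the clause with the witness \( y = x - 1 \) over a range of sample values and exhibits the finiteness of a maximal rank-chain:

```python
# Sanity check that rank(x, y) := x >= 0 and y <= x - 1 satisfies the clause
#   x >= 0  ==>  exists y : x >= y  and  rank(x, y)
# using y = x - 1 as witness, and that rank admits no infinite chains.

def rank(x, y):
    return x >= 0 and y <= x - 1

for x in range(0, 100):
    y = x - 1                      # witness for the existential quantifier
    assert x >= y and rank(x, y)

# Well-foundedness: following rank from 50 with the maximal successor x - 1
# strictly decreases and leaves rank's domain at -1, so the chain is finite.
chain = [50]
while rank(chain[-1], chain[-1] - 1):
    chain.append(chain[-1] - 1)
print(chain[0], chain[-1], len(chain))  # 50 -1 52
```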
EHSF resolves clauses like the above using a CEGAR scheme to discover witnesses for existentially quantified variables. The refinement loop collects a global constraint that declaratively determines which witnesses can be chosen. The chosen witnesses are used to replace existential quantification, and then the resulting universally quantified clauses are passed to a solver over decidable theories, e.g., HSF [8] or \( \mu Z \) [11]. Such a solver either finds a solution, i.e., a model for the uninterpreted relations constrained by the clauses, or returns a counterexample, which is a resolution tree (or DAG) representing a contradiction. EHSF turns the counterexample into an additional constraint on the set of witness candidates, and continues with the next iteration of the refinement loop.
For the existential clause above, EHSF introduces a witness/Skolem relation \( sk \) over variables \( x \) and \( y \), i.e., \( x \geq 0 \land sk(x, y) \implies x \geq y \land \text{rank}(x, y) \). In addition, since for each \( x \) such that \( x \geq 0 \) holds we need a value for \( y \), we require that \( sk \) is total on that domain, i.e., \( x \geq 0 \implies \exists y : sk(x, y) \).
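The refinement loop can be caricatured as follows. This is a drastically simplified sketch of ours (finite sampling in place of constraint solving, and the candidate witness terms are hypothetical), meant only to convey how counterexamples prune witness candidates:

```python
# Toy caricature of witness refinement for the clause
#   x >= 0  ==>  exists y : x >= y  and  rank(x, y),  rank(x, y) := y <= x - 1:
# propose a candidate witness term for y, test it on samples, and record the
# counterexample that rules it out before trying the next candidate.

CANDIDATES = [lambda x: x, lambda x: 0, lambda x: x - 1]  # hypothetical terms

def counterexample(witness):
    """Return an input x falsifying the clause under `witness`, or None."""
    for x in range(0, 50):
        y = witness(x)
        if not (x >= y and y <= x - 1):
            return x
    return None

found, blocked = None, []
for w in CANDIDATES:
    cex = counterexample(w)
    if cex is None:
        found = w          # clause holds on all samples: witness accepted
        break
    blocked.append(cex)    # in EHSF, cex becomes a constraint on candidates

assert found is not None
print(found(10), blocked)  # 9 [0, 0]
```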
\[
\begin{align*}
&\text{GEN}(\varphi_0, v_0, \text{init}(v_0), \text{next}(v_0, v'_0)) = \text{match } \varphi_0 \text{ with} \\
| \forall x : \varphi_1 &\Rightarrow \\
&\text{let } v_1 = (v_0, x) \text{ in} \\
&\text{GEN}(\varphi_1, v_1, \text{init}(v_0), \text{next}(v_0, v'_0) \land x' = x) \\
| \exists x : \varphi_1 &\Rightarrow \\
&\text{let } v_1 = (v_0, x) \text{ in} \\
&\text{let } aux = \text{fresh symbol of arity } |v_1| \text{ in} \\
&\text{init}(v_0) \rightarrow \exists x : aux(v_1), \\
&\text{GEN}(\varphi_1, v_1, aux(v_1), \text{next}(v_0, v'_0) \land x' = x) \\
| c &\Rightarrow \\
&\text{init}(v_0) \rightarrow c \\
| EF\varphi_1 &\Rightarrow \\
&\text{let } inv, aux = \text{fresh symbols of arity } |v_0| \text{ in} \\
&\text{let } rank = \text{fresh symbol of arity } |v_0| + |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow inv(v_0), \\
&inv(v_0) \land \neg aux(v_0) \rightarrow \exists v'_0 : \text{next}(v_0, v'_0) \land inv(v'_0) \land rank(v_0, v'_0), \\
&\text{wf}(rank), \\
&\text{GEN}(\varphi_1, v_0, aux(v_0), \text{next}(v_0, v'_0))
\end{align*}
\]

Figure 1: Constraint generation rules for first-order quantifiers and the temporal quantifier EF.
3. Constraint generation
In this section we present our algorithm \( \text{GEN} \) for generating constraints that characterize the satisfaction of a CTL+FO formula. We also consider its complexity and correctness and present an example.
See Figure 1. \( \text{GEN} \) performs a top-down, recursive descent through the syntax tree of the given CTL+FO formula. It introduces auxiliary predicates and generates a sequence of implication and well-foundedness constraints over these predicates. We use “,” to denote the concatenation operator on sequences of constraints. At each level of recursion, \( \text{GEN} \) takes as input a CTL+FO formula \( \varphi_0 \), a tuple of variables \( v_0 \) that are considered to be in scope and define a state, and assertions \( \text{init}(v_0) \) and \( \text{next}(v_0, v_0') \) that describe a set of states and a transition relation, respectively. We assume that variables bound by first-order quantifiers in \( \varphi_0 \) do not shadow other variables. To generate constraints for checking whether \( P = (v, \text{init}(v), \text{next}(v, v')) \) satisfies \( \varphi \) we execute \( \text{GEN}(\varphi, v, \text{init}(v), \text{next}(v, v')) \).
Handling first-order quantification.
When \( \varphi_0 \) is obtained from some \( \varphi_1 \) by universally quantifying over \( x \), we directly descend into \( \varphi_1 \) after adding \( x \) to the scope. Hence, the recursive call to \( \text{GEN} \) uses \( v_1 = (v_0, x) \). Since \( \text{init}(v_0) \) defines a set of states over \( v_1 \) in which \( x \) ranges over arbitrary values, the application \( \text{GEN}(\varphi_1, v_1, \text{init}(v_0), \ldots) \) implicitly requires that \( \varphi_1 \) holds for
arbitrary $x$. Since the value of $x$ is arbitrary but fixed within $\varphi_1$, we require that the transition relation considered by the recursive calls does not modify $x$ and thus extend next to $\text{next}(v_0, v'_0) \land x' = x$ in the last argument.
When $\varphi_0$ is obtained from some $\varphi_1$ by existentially quantifying over $x$, we use an auxiliary predicate $aux$ that implicitly serves as witness for $x$. A first constraint connects the set of states $\text{init}(v_0)$ on which $\varphi_0$ needs to hold with $aux(v_1)$, which describes the states on which $\varphi_1$ needs to hold. We require that for every state $s$ allowed by $\text{init}(v_0)$, a choice of $x$ exists such that the extension of $s$ with $x$ is allowed by $aux(v_1)$. Then, the recursive call $\text{Gen}(\varphi_1, v_1, aux(v_1), \ldots)$ generates constraints that keep track of satisfaction of $\varphi_1$ on arbitrary $x$ allowed by $aux(v_1)$. Thus, $aux(v_1)$ serves as a restriction of the choices allowed for $x$. Again, we enforce rigidity of $x$ by adding $x' = x$ to the next relation.
Handling temporal quantification.
We use a deductive proof system for CTL [13] and consider its proof rules from the perspective of constraint generation.
When $\varphi_0$ is a background theory assertion, i.e., does not use path quantification, $\text{Gen}$ produces a constraint that requires $\varphi_0$ to hold on every initial state.
When $\varphi_0$ requires that there is a path on which $\varphi_1$ eventually holds, then $\text{Gen}$ uses an auxiliary predicate $aux(v_0)$ to describe those states in which $\varphi_1$ holds. $Gen$ applies a combination of inductive reasoning together with well-foundedness to show that $aux(v_0)$ is eventually reached from the initial states. The induction hypothesis is represented as $\text{inv}(v_0)$ and is required to hold for every initial state and whenever $aux(v_0)$ is not reached yet. Then, the well-foundedness condition $\text{wf}$, which requires that it is not possible to remain within the induction hypothesis forever, ensures that eventually we reach a “base case” in which $aux(v_0)$ holds. Hence, eventually $\varphi_1$ holds on some computation.
Note that the induction hypothesis $\text{inv}(v_0)$, the well-founded relation $\text{rank}(v_0, v'_0)$, and the predicate $aux(v_0)$ are left for the solver to be discovered.
See Appendix A for the remaining rules that describe the full set of CTL temporal quantifiers.
Complexity and correctness.
$Gen$ performs a single top-down descent through the syntax tree of the given CTL+FO formula $\varphi$. The running time and the size of the generated sequence of constraints is linear in the size of $\varphi$. Finding a solution for the generated constraints is undecidable in general. In practice however, the used solver often succeeds in finding a solution (cf. Sect. 4). We formalize the correctness of $Gen$ in the following theorem.
**Theorem 1.** For a given program $P$ with $\text{init}(v)$ and $\text{next}(v, v')$ over $v$ and a CTL+FO formula $\varphi$ the application $\text{Gen}(\varphi, v, \text{init}(v), \text{next}(v, v'))$ computes a constraint that is satisfiable if and only if $P \models \varphi$.
**Proof.** (sketch) We omit the full proof here for space reasons. It proceeds by structural induction over the formula, analogous to the constraint generation of the algorithm $Gen$. Intuitively, first-order quantifiers are handled by performing a program modification that allows to keep track of the value of quantified variables explicitly, exploiting their rigidity. The recursive descent into $\varphi$ allows to collect the variables in scope, embedding them into the quantification used in the constraint system.
Formally, we prove that the constraints generated by $Gen(\varphi_0, v_0, \text{init}(v_0), \text{next}(v_0, v'_0))$ have a solution if and only if the program $P = (v_0, \text{init}(v_0), \text{next}(v_0, v'_0))$ satisfies $\varphi_0$. The base case, i.e., $\varphi_0$ is an assertion $c$ from our background theory $T$, is trivial.
As example for an induction step, we consider the case $\varphi_0 = \exists x : \varphi_1$. To prove soundness, we consider the case that the generated constraints have a solution. For the predicate $aux$, this solution takes the form of a relation $S_{aux}$ that satisfies all constraints generated for $aux$. For each $s$ with $\text{init}(s)$, we choose $\tau_s$ such that $(s, \tau_s) \in S_{aux}$. As we require $\text{init}(v_0) \rightarrow \exists x : aux(v_0, x)$, this element is well-defined. We now apply the induction hypothesis for $P' = ((v_0, x), aux(v_0, x), \text{next}(v_0, v'_0) \land x' = x)$ and $\varphi_1$. Then for all $s$ with $\text{init}(s)$, we have $P', (s, \tau_s) \models \varphi_1$, and as $P'$ is not changing $x$ by construction, also $P', (s, \tau_s) \models \varphi_1[\tau_s/x]$. From this, $P, s \models \varphi_0$ directly follows.
For completeness, we proceed analogously. If $P \models \varphi_0$, then a suitable instantiation $\tau_s$ of $x$ can be chosen for each $s$ with $\text{init}(s)$, and from these choices we can construct a solution for $aux(v_0, x)$.
**Example.**
We illustrate $Gen$ (see Figure 1) on a simple example. We consider a property that the value stored in a register $v$ can grow without bound on some computation.
$$\forall x : v = x \rightarrow \text{EF}(v > x)$$
This property can be useful for providing evidence that a program is actually vulnerable to a denial of service attack. Let $\text{init}(v)$ and $\text{next}(v, v')$ describe a program over a single variable $v$.
We apply $Gen$ on the property and the program description and obtain the following application trace (here, we treat $\rightarrow$ as expected, exploiting that its left-hand side is a background theory atom).
$Gen(\forall x : v = x \rightarrow \text{EF}(v > x),\ v,\ \text{init}(v),\ \text{next}(v, v'))$
$Gen(v = x \rightarrow \text{EF}(v > x),\ (v, x),\ \text{init}(v),\ \text{next}(v, v') \land x' = x)$
$Gen(\text{EF}(v > x),\ (v, x),\ aux(v, x),\ \text{next}(v, v') \land x' = x)$
This trace yields the following constraints.
$\text{init}(v) \rightarrow (v = x \rightarrow aux(v, x))$
$aux(v, x) \rightarrow inv(v, x)$
$inv(v, x) \land \neg(v > x) \rightarrow \exists v', x' : \text{next}(v, v') \land x' = x \land inv(v', x') \land \text{rank}(v, x, v', x')$
$\text{wf}(\text{rank})$
Note that there exists an interpretation of $aux$, $inv$, and $\text{rank}$ that satisfies these constraints if and only if the program satisfies the property.
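As a finite spot-check of this claim, consider the concrete program with \( \text{init}(v): v = 0 \) and \( \text{next}(v, v'): v' = v + 1 \), which satisfies the property. The interpretations below are our own guesses, not computed by EHSF; the script verifies all constraint instances on a small sample grid:

```python
# Hypothetical interpretations for aux, inv, rank; the loop checks the
# generated constraints on a sample grid for init(v): v = 0, next: v' = v + 1.

def aux(v, x):
    return v == x

def inv(v, x):
    return v >= x

def rank(v, x, v2, x2):
    # well-founded: the measure x - v is >= 0 on rank's domain and strictly
    # decreases from (v, x) to (v2, x2)
    return v <= x and v2 == v + 1 and x2 == x

for v in range(-5, 6):
    for x in range(-5, 6):
        # init(v) -> (v = x -> aux(v, x)), with init(v): v = 0
        if v == 0 and v == x:
            assert aux(v, x)
        # aux(v, x) -> inv(v, x)
        if aux(v, x):
            assert inv(v, x)
        # inv(v, x) /\ not (v > x) -> a next-step to (v + 1, x) that
        # re-establishes inv and is related by rank
        if inv(v, x) and not v > x:
            assert inv(v + 1, x) and rank(v, x, v + 1, x)

print("all constraint instances hold on the sample grid")
```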
4. Evaluation
In this section, we present CTLFO, a CTL+FO verification engine. CTLFO implements the procedure $Gen$ and applies EHSF [2] to solve the resulting clauses.
We run CTLFO on the examples OS frag.1, ..., OS frag.4 from industrial code from [5, Figure 7]. Each example consists of a program and a CTL property that we are interested in proving about the program. We have modified the given properties to lift the CTL formula to CTL+FO. As example, consider the property $AG(a = 1 \rightarrow AF(r = 1))$. To lift it to CTL+FO, we apply the existential introduction
<table>
<thead>
<tr>
<th>Property $\phi$</th>
<th>$\phi$</th>
<th>$\neg \phi$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$\exists x : \text{AG}(a = x \rightarrow AF(r = 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\forall x : \text{EF}(a = x \rightarrow AF(r = 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\exists x : \text{AG}(a = x \rightarrow EF(r = 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\forall x : \text{EF}(a = x \rightarrow AF(r = 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\forall x : \text{EF}(a = x \rightarrow AG(r \neq 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\exists x : \text{AG}(a = x \rightarrow AF(r = 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
<tr>
<td>$\forall x : \text{EF}(a = x \rightarrow AG(r \neq 1))$</td>
<td>✓</td>
<td>×</td>
</tr>
</tbody>
</table>
Table 1: Evaluation of CTLFO on industrial benchmarks from [5].
rule, one of the natural deduction rules for first-order logic. One modified property to check could be $\exists x : \text{AG}(a = x \rightarrow AF(r = 1))$, and another one is $\text{AG}(\exists x : (a = x \rightarrow AF(r = 1)))$. By applying similar satisfiability-preserving transformations to the properties of all the example programs, we get a set of programs whose properties are specified in CTL+FO, as shown in Table 1. For programs P1 to P12, we have considered two CTL+FO properties per program, whereas for programs P13 to P16 we have considered only one. For each pair of a program and a CTL+FO property $\phi$, we generated two verification tasks: proving $\phi$ and proving $\neg \phi$. While the existence of a proof for a property $\phi$ implies that $\neg \phi$ is violated by the same program, we consider both properties to demonstrate the correctness of our tool.
We report the results in Table 1. ✓ (resp. ×) marks the cases where CTLFO was able to prove (resp. disprove) a CTL+FO property. T/O marks the cases where CTLFO was not able to find either a solution or a counter-example in 600 seconds.
CTLFO is able to find proofs for all the correct programs except for P10, and counterexamples for all incorrect programs except for P16. Currently, CTLFO models the control flow symbolically using a program counter variable, which we believe is the most likely reason for the solving procedure to time out. Efficient treatment of control flow along the lines of explicit analysis as performed in the CPAchecker framework could lead to significant improvements for dealing with programs with large control-flow graphs. An executable of CTLFO together with the examples can be found at [https://www7.in.tum.de/~beyene/ctlfo.zip](https://www7.in.tum.de/~beyene/ctlfo.zip).
For cases where the property contains nested path quantifiers and the outer temporal quantifier is $F$ or $U$, our implementation may generate non-Horn clauses following the proof system from [13]. While a general algorithm for solving non-Horn clauses is beyond the scope of this paper, we used a simple heuristic to seed solutions for queries appearing under the negation operator.
5. Conclusion
This paper presented an automated method for proving program properties written in the temporal logic CTL+FO, which combines universal and existential quantification over time and data. Our approach relies on a constraint generation algorithm that follows the formula structure to produce constraints in the form of Horn constraints with forall/exists quantifier alternation. The obtained constraints can be solved using an off-the-shelf constraint solver, thus resulting in an automatic verifier.
6. References
APPENDIX
A. Remaining rules
In this section we present the remaining rules of GEN, which deal with the complete set of temporal quantifiers. See Figure 2.
\[
\begin{align*}
| AX \varphi_1 &\Rightarrow \\
&\text{let } aux = \text{fresh symbol of arity } |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow \exists v'_0 : \text{next}(v_0, v'_0), \\
&\text{init}(v_0) \land \text{next}(v_0, v'_0) \rightarrow aux(v'_0), \\
&\text{GEN}(\varphi_1, v_0, aux(v_0), \text{next}(v_0, v'_0)) \\
| EX \varphi_1 &\Rightarrow \\
&\text{let } aux = \text{fresh symbol of arity } |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow \exists v'_0 : \text{next}(v_0, v'_0) \land aux(v'_0), \\
&\text{GEN}(\varphi_1, v_0, aux(v_0), \text{next}(v_0, v'_0)) \\
| AG \varphi_1 &\Rightarrow \\
&\text{let } inv = \text{fresh symbol of arity } |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow inv(v_0), \\
&\text{inv}(v_0) \land \text{next}(v_0, v'_0) \rightarrow inv(v'_0), \\
&\text{GEN}(\varphi_1, v_0, inv(v_0), \text{next}(v_0, v'_0)) \\
| EG \varphi_1 &\Rightarrow \\
&\text{let } inv = \text{fresh symbol of arity } |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow inv(v_0), \\
&\text{inv}(v_0) \rightarrow \exists v'_0 : \text{next}(v_0, v'_0) \land inv(v'_0), \\
&\text{GEN}(\varphi_1, v_0, inv(v_0), \text{next}(v_0, v'_0)) \\
| A(\varphi_1 \, U \, \varphi_2) &\Rightarrow \\
&\text{let } inv, aux_1, aux_2 = \text{fresh symbols of arity } |v_0| \text{ in} \\
&\text{let } rank = \text{fresh symbol of arity } |v_0| + |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow inv(v_0), \\
&\text{inv}(v_0) \land \neg aux_2(v_0) \rightarrow aux_1(v_0) \land \exists v'_0 : \text{next}(v_0, v'_0), \\
&\text{inv}(v_0) \land \neg aux_2(v_0) \land \text{next}(v_0, v'_0) \rightarrow inv(v'_0) \land rank(v_0, v'_0), \\
&\text{wf(rank),} \\
&\text{GEN}(\varphi_1, v_0, aux_1(v_0), \text{next}(v_0, v'_0)), \text{GEN}(\varphi_2, v_0, aux_2(v_0), \text{next}(v_0, v'_0)) \\
| E(\varphi_1 \, U \, \varphi_2) &\Rightarrow \\
&\text{let } inv, aux_1, aux_2 = \text{fresh symbols of arity } |v_0| \text{ in} \\
&\text{let } rank = \text{fresh symbol of arity } |v_0| + |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow inv(v_0), \\
&\text{inv}(v_0) \land \neg aux_2(v_0) \rightarrow aux_1(v_0) \land \exists v'_0 : \text{next}(v_0, v'_0) \land inv(v'_0) \land rank(v_0, v'_0), \\
&\text{wf(rank),} \\
&\text{GEN}(\varphi_1, v_0, aux_1(v_0), \text{next}(v_0, v'_0)), \text{GEN}(\varphi_2, v_0, aux_2(v_0), \text{next}(v_0, v'_0)) \\
| (A/E) F \varphi_1 &\Rightarrow \text{GEN}((A/E)(\text{true} \, U \, \varphi_1), v_0, \text{init}(v_0), \text{next}(v_0, v'_0)) \\
| \varphi_1 \land / \lor \varphi_2 &\Rightarrow \\
&\text{let } aux_1, aux_2 = \text{fresh symbols of arity } |v_0| \text{ in} \\
&\text{init}(v_0) \rightarrow aux_1(v_0) \land / \lor aux_2(v_0), \\
&\text{GEN}(\varphi_1, v_0, aux_1(v_0), \text{next}(v_0, v'_0)), \text{GEN}(\varphi_2, v_0, aux_2(v_0), \text{next}(v_0, v'_0))
\end{align*}
\]
Figure 2: Remaining rules of constraint generation algorithm GEN.
An Automatic System Partitioning Algorithm for Mixed-Criticality Systems
Emilio Salazar¹ and Alejandro Alonso¹
¹Universidad Politécnica de Madrid
¹{esalazar, aalonso}@dit.upm.es
Abstract
The continuous improvement of processors' computational power and the demand for additional functionality are changing the way embedded systems are built. Applications with different safety requirements are executed on the same processor, giving rise to mixed-criticality systems. The use of partitioned systems is a way of preventing undesirable interference among applications. Still, the development of partitioned systems requires additional, error-prone development activities, often done by the system integrator. This paper describes an algorithm aimed at supporting the development of partitioned systems by generating a system partitioning automatically, taking into account application and platform characteristics.
1 Introduction
Traditionally, applications with different safety requirements have been allocated to different computers. However, processor power now allows a single multi-core computer to integrate a large number of applications. As a consequence, there is a significant reduction in costs, volume, weight, and power consumption. In addition, certification for safety-critical systems is a very expensive process that involves the whole system, regardless of the safety requirements of each application. Moreover, the certification must be redone after every single change to any of the applications.
The use of virtual machines or partitions, provided by a hypervisor, is an approach aimed at reducing costs and making the certification procedures easier. In a partitioned system, applications of different criticality levels run in different partitions. The hypervisor ensures temporal and spatial isolation between partitions: a faulty application does not interfere with the behavior of other applications. In this way, it should be possible to certify applications independently.
The development of partitioned systems, however, requires additional activities, such as the partitioning of the system. It requires the system integrator to consider a large number of features related to the applications and the execution platform. On the other hand, partitioned systems are a fairly new approach, and accordingly there is a lack of tools supporting their development. As a consequence, and despite their complexity, these error-prone activities are often crafted by hand by the system integrator.
In the framework of the MultiPARTES [1] project, a toolset aimed at supporting the development of partitioned mixed-criticality systems was developed. A key part of this toolset is the algorithm for automatically generating a system partitioning consistent with the input models.
This paper describes the proposed algorithm and the tests and analyses that have been made in order to validate it. In addition, it details the heuristic search performed to find an optimal solution and the mechanisms provided to make the algorithm more general and flexible.
2 Related Work
A substantial number of research projects have addressed partitioned mixed-criticality systems.
The ASSERT [7] project produced a toolset [8] which has been extended to support partitioned systems [2]. This tool, however, requires the system partitioning to be provided in advance.
CERTAINTY [3] dealt with the certification process for mixed-critical embedded systems. It proposed a unified semantics for systems and languages with mixed-criticality concerns [9]. Its main outcomes [11] were a scalable interference analysis framework, a scheduling policy for mixed-criticality multi-core systems based on flexible time-triggering, and a resource sharing and virtualization mechanism. Additionally, a WCET analysis tool was extended to cover more architectures [10]. Nonetheless, partitions are crafted in advance.
RECOMP [4] studied how to enable cost-efficient certification and re-certification of safety-critical
systems and mixed-criticality systems. To that end, several validation, verification and timing analyses for component validation were chosen and extended [12, 13]. Moreover, several operating systems [14] for a number of hardware platforms [15] were extended. In addition, a set of tools was selected to create the different tool-chains supporting the development and certification life-cycles [16]. As in [2, 3], RECOMP assumes that the partitions are already defined.
In short, research projects so far have aimed at improving the support for the development of partitioned mixed-criticality systems. However, there is still a lack of research on the automatic generation of system partitionings.
EMC² [5] aims at finding solutions for dynamic adaptability in open mixed-criticality real-time systems through the entire life-cycle. A toolset addressing the modeling and analysis of multi-domain mixed-criticality applications is to be created. This toolset currently supports schedulability analysis based on Response Time Analysis and C code generation. It will also support a complete implementation for both single- and multiprocessor platforms.
DREAMS [6] aims at virtualized mixed criticality systems on multicore platforms. It will deliver architectural concepts, meta-models, virtualization technologies, model-driven development methods, tools, adaptation strategies and validation, verification and certification methods.
Ongoing research projects such as EMC² and DREAMS also assume that the partitions are defined and fixed from the beginning. Of course, as ongoing projects, they have not yet produced all of their outcomes and, accordingly, this assumption might change.
For this reason, one of the goals of the toolset developed in the MultiPARTES project was to automatically generate a ready-to-run partitioned system where the system partitioning has been deduced based on the input models provided by the engineer.
To the best of the authors knowledge, the closest approach regarding automatic system partitioning algorithms is published in [17]. In this paper, Tamas et al. proposed a Tabu Search based algorithm that creates a partitioning schema where the development costs are minimized and the tasks are schedulable.
Contrary to Tamas' approach, the algorithm presented in this paper decouples the partitioning logic from the scheduling policy. As a consequence, changes to either of these critical processes do not require modifying the algorithm, which, in turn, decreases the cost of adapting the algorithm to new scenarios. Moreover, parameters such as operating system, core affinity, processor family, hardware requirements, etc. are supported and properly managed to assure a consistent system partitioning. In addition, a wider range of partitioning requirements is supported.
3 System Modeling
In mixed-criticality systems, applications run on a particular execution platform regardless of the application criticality. An application is a software component that provides a well-defined functionality. The impact on the system mission of a fault in a specific functionality determines its criticality.
The execution platform is the environment where applications run. The most important actors of the execution platform are the hardware, the hypervisor and the operating system. The hardware comprises all the computational devices (e.g. processor, memory, I/O devices, etc.) on which the system executes.
The hypervisor is a layer of low-level software that provides virtual machines (i.e. partitions) where a fixed set of applications run on a specific operating system with a known set of resources such as processor budget and memory. The hypervisor also provides temporal and spatial isolation. This crucial feature in mixed-criticality systems means that a fault or misbehavior in a partition does not impact the behavior of the other partitions.
In addition to applications and the execution platform, there may exist a number of non-functional requirements (NFRs) that the final system must meet. NFRs may have a significant impact on the final system's behavior, correctness and efficiency. Relevant examples of such requirements are real-time, safety and security.
Following the Model-Driven Architecture (MDA) [20], all these data are captured by means of models. The system is thus described by a set of models based on specifically designed meta-models, allowing a complete description of these entities.
System models are the input of the partitioning algorithm. These models hold the most relevant information to the partitioning algorithm:
• **Operating System.** This model holds partitioning-relevant information about the operating system, such as its version, library paths or the operating system processor family.
• **Hypervisor.** The goal of this model is to describe important information about the hypervisor which impacts on the partitioning algorithm. For instance the processor family or the library paths.
• **Hardware Platform.** This model describes the underlying hardware. Remarkable data held in this model is the amount of cores, memory and I/O devices.
• **Application.** This model contains application information relevant to the partitioning. Relevant examples of these data are application real-time parameters (i.e. period, deadline and computation time), application criticality, application operating system, application hardware requirements, etc. It is worth mentioning that, in order to provide backward compatibility, there are two different application models:
– **Basic application.** This model provides only the basic required data for the partitioning algorithm and it is intended to model legacy applications which have no available documentation.
– **UML-MARTE application.** This model provides a highly detailed vision of the internal structure of the application which enables additional features such as automatic code generation or time analysis.
• **Partitioning constraints.** This model is intended to model the NFRs of the system which must be met in the final system partitioning. It is further discussed in section 5.1.
Further details about the modeling of the partitioning algorithm inputs can be found in [18, 19].
4 The Problem of Partitioning Mixed-Criticality Systems
The result of partitioning a mixed-criticality system is a system partitioning. A system partitioning is a particular allocation of the system applications to partitions. This allocation must however meet a number of conditions to be considered valid:
• All applications are allocated to, at least, one partition.
• All partitions host, at least, one application.
• Resources allocated to partitions do not exceed those available.
• All non-functional requirements are met.
Owing to the size and complexity of the partitioning problem, an approach based on dealing with the problem as a whole does not seem to be practical. Instead, a divide-and-conquer approach has been taken by breaking down the partitioning problem into three smaller problems:
• **Allocate applications to partitions.** The problem is how to group the initial set of applications into different partitions. The allocation must meet a number of partitioning requirements in order to assure a valid result.
• **Allocate partitions to processing resources.** Once the partitions are defined, they must be allocated to different processing resources. This problem is analogous to the bin packing problem which is, in turn, a well known NP-Hard problem.
• **Schedule partitions.** Partitions must be scheduled so that all applications meet their deadlines but taking into account that there are two scheduling levels. On one hand, each partition hosts a number of applications that are executed according to a local scheduling policy. On the other hand, each partition is scheduled based on a global scheduling policy. This problem is called hierarchical scheduling and it is a studied NP-Hard problem.
This paper is focused on the first of the above points. For this reason, hereinafter the **partitioning problem** shall refer to the problem of allocating applications to partitions. Accordingly, the **partitioning algorithm** shall refer to the algorithm created to handle the partitioning problem.
5 Modeling the Allocation of Applications to Partitions
The partitioning problem has been formally modeled in order to make the use of mathematical algorithms easier. This representation is also used as internal representation in the partitioning algorithm.
The choice of the mathematical representation is of critical importance, as it is the basis for the rest of the algorithm. For this reason, a study of the representations used in other fields for solving similar problems has been carried out. As a result of this study, the allocation of intermediate variables to machine registers emerged as a very close problem. This problem, known as the **register allocation problem**, has been and still is profusely studied by the compiler research community [24, 25, 26, 27, 23, 31, 32, 33], as it is of great practical importance.
The approach introduced by [25] and then improved by [26, 27] is probably one of the most used approaches in modern compilers. The key idea behind this approach is to model register allocation as a graph (i.e. an interference graph) and then k-color this graph, where k is the number of available machine registers.
A vertex colored graph is a graph where vertices may be tagged (or colored) according to a given policy. One of the most usual coloring policies is the proper vertex coloring. In a proper vertex colored graph, no two adjacent vertices share the same color. A k-coloring of a graph is a coloring that uses, at most, k colors for proper coloring the graph.
The partitioning algorithm presented in this paper is based on the same idea: modeling the problem of allocating applications to partitions as a vertex colored graph which is built as follows:
- **Vertices.** Each vertex represents an application of the system.
- **Edges.** Each edge links two applications that cannot be allocated to the same partition.
- **Color.** Each color represents a partition.
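The graph construction above can be sketched with a plain adjacency-set representation. This is a minimal illustration under assumed names (`PartitioningGraph`, `adj`, `banned`); it is not code from the MultiPARTES toolset.

```python
# Minimal sketch of the vertex-colored graph of Section 5.
# Vertices are applications, edges mark applications that must not share
# a partition, and colors stand for partitions.

class PartitioningGraph:
    def __init__(self):
        self.adj = {}      # application -> set of conflicting applications
        self.color = {}    # application -> assigned partition (color)
        self.banned = {}   # application -> set of banned partitions

    def add_application(self, app):
        self.adj.setdefault(app, set())
        self.banned.setdefault(app, set())

    def add_conflict(self, a, b):
        # Edge: a and b cannot be allocated to the same partition.
        self.add_application(a)
        self.add_application(b)
        self.adj[a].add(b)
        self.adj[b].add(a)
```

A conflict edge is symmetric, so it is recorded in the adjacency sets of both applications.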
### 5.1 Partitioning Constraints
Applications (i.e. vertices) and partitions (i.e. colors) are properly represented in the graph. However, the modeling of the non-functional requirements (NFR) needs further considerations.
There are a number of NFRs that are highly relevant to the allocation of applications to partitions. Some relevant examples are safety, real-time and security requirements. A common factor of these requirements is that they imply a set of restrictions on the allocation of applications to partitions. Two applications with different safety criticality levels cannot be allocated to the same partition. The same holds for applications handling sensitive information.
The proposed approach is to create a set of simpler partitioning constraints that specify the effects of a NFR on the final allocation of applications to partitions. In other words, the idea is to design a set of simple constraints that makes it possible to generate automatically a set of partitioning constraints to ensure the fulfillment of specific NFR. This approach provides several advantages:
- **NFR-agnostic partitioning algorithm.** Virtually any type of NFR can be processed with the same, unmodified partitioning algorithm, as long as each NFR can be expressed in terms of partitioning constraints.
- **Composition of NFRs.** Fulfilling all the partitioning constraints implies fulfilling all the NFRs as well.
- **Easier integration.** Individually, all of the partitioning constraints are intentionally very simple and, in addition, their number is small. Therefore, the partitioning constraints can be used by different third-party tools as a common language to express the impact of additional NFRs on the system partitioning.
#### 5.1.1 Constraints Sources
- **Implicit constraints.** The resulting system works only if all of these constraints are met. For instance, two applications with different operating systems cannot be allocated to the same partition, as each partition hosts only a single operating system. Often, these requirements are automatically deduced from data extracted from the input models.
- **Explicit constraints.** As a rule, contrary to the implicit constraints, these constraints must be explicitly provided. Two main kinds of explicit constraints are:
- **Non-functional constraints.** System properties such as real-time, safety or security are provided with these constraints. For this reason, they are crucial to make the final system work properly.
- **System integrator constraints.** These constraints deliver the engineer background and expertise.
#### 5.1.2 Constraints Types
- **Basic constraints.** After studying several use cases, the following constraints arose as the most important ones. So far, these constraints have been sufficient to describe all of the needed requirements. In addition, these are the sole constraints that have a direct impact on the graph.
- **Application A must not be allocated along with B.** This constraint forces the algorithm not to allocate A to the same partition as B. In terms of the graph, it is represented as an edge between vertices A and B.
– **Application A must be allocated to partition P.** This constraint forces the algorithm to allocate the application A to the partition P. In the graph, it is represented by pre-coloring the vertex A with the color that stands for the partition P.
– **Application A must be allocated together with B.** This constraint forces the algorithm to allocate applications A and B to the same partition. This is translated to the graph by pre-coloring both applications with the same color.
– **Application A must not be allocated to partition P.** This constraint indicates the algorithm to avoid allocating application A to the partition P. In the graph, the color P is added to the banned colors list of vertex A.
**Combined constraints.** These convenient constraints are sets of basic constraints that represent a number of common partitioning requirements.
– **Application A must be allocated to core C.**
– **Application A must be allocated to processor P.**
– **Application A must be allocated to a core of the processor family F.**
– **Application A must be allocated at the address X.**
– **Application A requires access to hardware H.**
– **Application A must run on hypervisor H.**
– **Application A requires partition system privileges.**
– **Application A requires floating point support.**
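The four basic constraints map directly onto graph operations, as described above. The following is a sketch under an assumed dict-based graph representation; the helper names are hypothetical, not the toolset's API.

```python
# How the four basic constraints of Section 5.1.2 might translate to the graph.
# adj: app -> set of conflicting apps; color: app -> pre-assigned partition;
# banned: app -> set of forbidden partitions.

def must_not_share(adj, a, b):
    """'A must not be allocated along with B' -> edge between A and B."""
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def must_use_partition(color, a, p):
    """'A must be allocated to partition P' -> pre-color vertex A with P."""
    color[a] = p

def must_share(color, a, b, p):
    """'A must be allocated together with B' -> pre-color both with one color."""
    color[a] = p
    color[b] = p

def must_avoid_partition(banned, a, p):
    """'A must not be allocated to partition P' -> ban color P for vertex A."""
    banned.setdefault(a, set()).add(p)
```

A combined constraint such as "A must be allocated to core C" would expand into a set of these basic operations before the graph is colored.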
6 The Partitioning Algorithm
The goal of the partitioning algorithm is to find a valid allocation of applications to partitions based on the provided mathematical model (i.e. graph) described in section 5.
Furthermore, there are several principles that have driven the design of the partitioning algorithm:
– **General.** It must be able to handle a wide range of systems from different domains. A way of achieving the desired level of generality is by modeling the problem at a high level of abstraction. To that end, as stated in section 3, MDA proposes a methodology where models are the center of the development process. According to the MDA strategy, relevant information is captured by a number of models, while the algorithm itself is embedded in different model-to-model transformations.
– **Flexible.** The algorithm must be able to model and process all of the NFRs regardless of their origin. This is achieved by the partitioning constraints described in subsection 5.1.
– **Adaptable.** The nature of partitioned systems makes it difficult to provide a general definition of what a correct and optimal system partitioning is. For this reason, the algorithm shall provide means for customizing both concepts according to the parameters defined in each system.
6.1 Correctness
As stated in section 4, there is a minimum set of necessary conditions that a system partitioning must meet to be considered valid.
The proposed approach to find a valid system partitioning is to **proper vertex color the graph built in section 5.** Provided that this graph was properly built, the resulting coloring is equivalent to a system partitioning where all of the aforesaid conditions are met:
– **All vertices are colored.** By definition, all of the vertices of a proper colored graph are colored with, at least, one color. From the point of view of the partitioning problem, this property ensures that all of the applications are allocated to, at least, one partition.
– **Each color is used in, at least, one vertex.** By definition, all colors created for proper coloring a colored graph have been used to color, at least, one vertex. Analogously, in the partitioning problem, this property ensures that there is no empty partitions or, in other words, it ensures that all partitions host, at least, one application.
– **There are no two adjacent vertices with the same color.** By definition, a proper colored graph cannot use the same color on two adjacent vertices. As a consequence, a proper colored graph is equivalent to a system partitioning that, by definition, meets all the partitioning constraints.
Furthermore, a solution is guaranteed, as there always exists a (naive) proper coloring (i.e. each vertex is colored with a different color), provided that the graph has no vertices connected directly back to themselves. The naive coloring is usually not the desired result, though, as discussed in section 6.2.
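These correctness properties can be checked mechanically on a candidate coloring. The sketch below is illustrative only (not the project's implementation), again assuming an adjacency-set graph.

```python
# Check the correctness properties of Section 6.1 on a candidate coloring.
# adj: app -> set of conflicting apps; color: app -> assigned partition.

def is_valid_partitioning(adj, color):
    # 1. Every application is allocated to a partition (every vertex colored).
    if not all(v in color for v in adj):
        return False
    # 2. No two conflicting applications share a partition
    #    (i.e. the coloring is a proper vertex coloring).
    for v, neighbours in adj.items():
        for n in neighbours:
            if color[v] == color[n]:
                return False
    # Empty partitions cannot occur here: every used color belongs to a vertex.
    return True
```

Because colors are only introduced by assigning them to vertices, the "no empty partition" condition holds by construction and needs no explicit check.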
### 6.1.1 Color Filters
Several NFR can be difficult to model by means of a colored graph. For instance, in the partitioning of a high-integrity system, it may be important to guarantee that no partition uses more than a given maximum amount of processor time. In such cases, the algorithm provides *color filters*.
A color filter is a function that rejects the use of a specific color for coloring a particular vertex, according to a user-specified policy. Each vertex is colored only with colors that have passed all the filters. In addition, there is no limit on the number of color filters, so a set of color filters may be provided in order to ensure the fulfillment of different NFRs.
In the high-integrity system example, a color filter can be provided for avoiding the allocation of applications to partitions that have exceeded the maximum allowed processor time.
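Such a filter can be expressed as a closure over the current per-partition load. This is a sketch under assumed names (`load_filter`, `utilisation`); the toolset's actual filter interface may differ.

```python
# A color filter in the spirit of Section 6.1.1: reject a partition (color)
# whose accumulated processor utilisation would exceed a bound.
# partition_load is maintained by the coloring loop after each assignment.

def load_filter(partition_load, utilisation, max_load):
    def accept(vertex, color):
        current = partition_load.get(color, 0.0)
        return current + utilisation[vertex] <= max_load
    return accept
```

The filter itself is pure: it only vetoes a color, and the coloring loop commits the load once a color is actually assigned.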
### 6.2 Optimality
A graph may have multiple proper colorings and consequently multiple valid system partitioning may be produced for a single system. For this reason it is also important to define criteria for selecting the optimal partitioning for a given system.
The notion of an optimal system partitioning is not absolute. It depends on the specific requirements of a particular system. In some cases, it may be interesting to minimize the number of partitions. In other cases, core load balancing can be more relevant.
For this reason, the partitioning algorithm does not impose any particular optimal search function but rather it provides a customizable function. If no specific optimal function is provided, the default optimal in the partitioning algorithm is the smallest amount of partitions (see subsection 6.2.1). This optimal makes sense in many partitioned systems as it is commonly desirable to retrieve the smallest, fastest and simplest system possible. Yet, as said, there are scenarios where the amount of partitions is not a priority (e.g. high-integrity systems) and where a different optimal search function may be provided.
### 6.2.1 Smallest Amount of Partitions
Generating a system partitioning with the smallest amount of partitions is equivalent to coloring a graph with the smallest amount of colors. However, coloring a graph with the minimal amount of colors is a known NP-Hard problem [34]. In other words, there is no known algorithm that provides optimal solutions in polynomial time.
Heuristic algorithms do not ensure an optimal solution but often run in polynomial time. For this reason, they are commonly used to attack NP-hard problems. There is a vast number of heuristics for coloring a graph with the smallest number of colors but, as a rule, a coloring heuristic proceeds as follows:
1. Sort vertices according to a given heuristic in a list of vertices.
2. Color the first vertex of the list.
3. Remove the colored vertex from the list.
4. If the list is empty, finish. Otherwise, if the heuristic has dynamic vertex ordering, go to 1; otherwise, go to 2.
Despite the amount of heuristics, in practice [35], three heuristics are the most commonly used:
- **Largest-First (LF)** [36] is a static vertex ordering heuristic that sorts vertices in decreasing order of degree. LF starts coloring with the most connected vertex (i.e. the highest-degree vertex) because it is probably the most difficult vertex to color, given that it has the largest number of neighbors.
- **Largest-Saturation-First (LSF)** [37] is a dynamic vertex ordering heuristic that colors the vertex with the highest saturation. The saturation of a vertex is the number of different colors used by the vertex neighbors.
- **Smallest-Last (SL)** [38] is a static vertex ordering heuristic based on the same idea as LF. However, the vertices sorting process is refined which avoids certain faults of LF.
After studying a number of use cases, it was observed that the number of vertices is very rarely large (i.e. more than 100 vertices). Therefore, given that today's computing power makes the time required for coloring a graph of fewer than 100 vertices insignificant, the default behavior of the proposed algorithm is to color the graph using the three aforementioned heuristics and choose the best result.
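The generic loop above, combined with the Largest-First ordering, can be sketched as follows (a minimal illustration; banned color lists and color filters are omitted, and this is not the toolset's code):

```python
# Largest-First greedy coloring: sort vertices by decreasing degree, then
# give each vertex the smallest color not used by its already-colored
# neighbours. adj: vertex -> set of adjacent vertices.

def largest_first_coloring(adj):
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        taken = {color[n] for n in adj[v] if n in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color
```

LSF would differ only in that the order is recomputed after each assignment (dynamic ordering), picking the vertex with the most distinctly colored neighbours.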
6.3 Alternative System Partitioning
There is no upper bound on the number of candidate colors of a vertex. This means that, regardless of the optimal search function and the number of color filters, a vertex may have multiple candidate colors when the graph is proper colored.
Each vertex is colored with a single color for each system partitioning that is generated. However, when a vertex has multiple candidate colors, the preferred color is selected and removed from the candidate colors list. The preferred color is the first color of the vertex candidate colors list. The default order is LIFO but if it is needed, the system engineer may provide a specific vertex candidate colors sorting function.
When an alternative system partitioning is requested, the graph is re-colored starting by the first vertex with multiple candidate colors that was found in the previous coloring. Then, the preferred color (i.e. the first color of the vertex candidate colors list) is chosen for coloring the vertex and then removed from the list of candidate colors of the vertex.
The subsequent vertices with multiple candidate colors are colored using their preferred colors but, unlike for the first vertex, the preferred color is not removed from the list. The reason is that different colorings of the first vertex with multiple candidate colors may or may not affect the rest of the vertices with multiple candidate colors: it might happen that the sole difference between two system partitionings is the coloring of that first vertex.
The resulting colored graph represents thus an alternative system partitioning which has also assured the correctness as it is a valid proper coloring of the initial graph. The level of optimality of the different alternative system partitioning depends on the provided optimal search function and candidate colors sorting function.
An alternative system partitioning may be very helpful in the development of complex systems, where many analyses aimed at verifying the correctness of the final system are involved. This feature enables the system integrator to ask for an alternative system partitioning if the original one is rejected by later non-functional analyses such as schedulability or performance.
7 Algorithm Synthetic Tests
These tests have been developed with the following goals in mind:
- Determine the scalability of the algorithm with the default optimal search function.
- Determine the impact on the generated system partitioning of the color filters using the default optimal search function (i.e. least amount of partitions).
- Verify the correct implementation of the different procedures used in the algorithm.
7.1 Generation of a Test Case
A model representing the system to be partitioned (i.e. project model) is created with a random number of application models, hypervisor models, operating system models, hardware platform models (i.e. cores, processors and I/O devices) and explicit partitioning constraints.
In turn, each of these components is randomly created. For instance, each hardware platform models a random number of processors that have a variable number of cores working at different frequencies.
Upper and lower bounds are defined to control the project model complexity. For example, if a complex project model is desired, a lower bound of 50 applications may be set. If a mono-core hardware platform is required, an upper bound for processors and cores may be defined.
7.2 Scalability
These tests are aimed at defining the average execution time of the algorithm in projects of different sizes and complexities.
Notwithstanding that scalability is always a desirable feature, it is not actually a priority in the partitioning algorithm design, since it is very unlikely that a system has more than 100 applications. In addition, even in the case of hundreds of applications, the execution times of the algorithm are of a few seconds, as the tests have shown.
Figure 1 depicts how the amount of applications impacts on the global execution time of the algorithm.
In this particular case, the test case is composed of 200 randomly generated projects with sizes between 5 and 200 applications each of them. In addition, in order to stress the algorithm, a complex environment is generated: a hardware platform with 6 mono-core processors, four different operating systems and 2 hypervisors.
Such a complex scenario ensures an important number of implicit partitioning constraints which, in the end, leads to a complex graph to color. The
heuristic used for coloring the graph is LSF without any additional color filter.
Figure 1 shows that the number of vertices (i.e. applications) has an important impact on the execution time: while 100 vertices require less than 200 ms, twice as many require almost 900 ms.
However, figure 1 also shows that even in the case of huge projects with 200 applications, the execution times stay below one second. One second is a very reasonable time for an algorithm aimed at improving the development process of partitioned systems, not at being executed on-line in the final partitioned system.
7.3 Color Filters
These tests are aimed at determining the impact of introducing color filters in the algorithm.
Some high-integrity systems must work even in overload situations. The approach commonly taken is to define a maximum processor time for each partition so that none of the partitions can use more than the defined processor time. This ensures that, even in case of overload, all partitions will have available time to produce their results.
This test case analyzes the impact of limiting the processor time available for each partition. The scenario and the used heuristic is the same as in subsection 7.2.
Figure 2 clearly depicts the impact of limiting the processor time on the generated partitions. Triangles show the number of generated partitions when the processor load of each partition is limited to 5%. Lines show the number of generated partitions when the processor time is limited to 85%.
As expected, the lower the available processor time, the higher the number of generated partitions. When partitions have up to 85% of processor time, the number of partitions stays under 15. However, if the available processor time goes down to 5%, the number of required partitions grows up to 30.
References
Figure 2: Amount of generated partitions when the processor time of each partition is bounded.
[6] DREAMS (Distributed Real-time Architecture for Mixed Criticality Systems). EU FP7-ICT project ref. 610640. http://www.dreams-project.eu
[16] D2.5 - Guidelines for developing certifiable systems and integration with existing tool flows. http://atcproyectos.ugr.es/recomp
qCrypt Key Management Server and qClient KMIP C SDK
Performance Measurement and Optimisation Tips
Disclaimer
QuintessenceLabs makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. QuintessenceLabs shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
This document contains proprietary information, which is protected by copyright. No part of this document may be photocopied, reproduced, or translated into another language without the prior written consent of QuintessenceLabs. The information is provided “as is” without warranty of any kind and is subject to change without notice. The only warranties for QuintessenceLabs products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. QuintessenceLabs shall not be liable for technical or editorial errors or omissions contained herein.
Overview
This document discusses methods of optimising performance of applications that communicate with the qCrypt Key Manager over KMIP. Performance measurements and qClient C SDK source code examples are presented.
Intended Audience: Application developers, system architects and operations personnel.
Assumptions: Familiarity with networking, access to VM or physical deployment of qCrypt appliances, and an ability to build and run C programs using the qClient C SDK.
Introduction
The qCrypt Key Manager conforms to the OASIS KMIP standard. This requires that KMIP client-server communication is protected over a mutually authenticated TLS channel. During setup of a TLS session, handshake messages are exchanged that require public key cryptographic (PKC) operations. PKC operations are relatively complex due to the nature of the algorithms involved, and as such, can account for a significant portion of the time taken for the exchange of short messages. If the qCrypt appliance uses an HSM, then PKC operations involving the server’s private key are performed within the HSM. This adds additional delay to the handshake.
This document presents performance measurements for a number of KMIP operations, and shows how load sharing, and connection pooling can dramatically improve performance.
Connection Establishment Overview
In order for a KMIP client to communicate with a KMIP server, a mutually authenticated TLS session must be established over a TCP connection. During the TLS handshake, both the client and server perform a number of PKC operations to authenticate each other. Additionally, the establishment of the secret key used to encrypt the communications between client and server involves PKC operations.
PKC operations use mathematical operations that are relatively CPU intensive. If a Hardware Security Module (HSM) is used to perform the PKC operations, then there can be additional relatively significant delays in moving data between host memory and the HSM.
For short-lived sessions, the time to complete the TLS handshake can significantly impact system performance. This type of delay is not unique to KMIP and TLS. This is a well-known issue that has been around since the dawn of electronic communications, and later, computing. Commonly used solutions to this problem include load sharing, and connection pooling. These solutions can also be applied to improve KMIP’s TLS session establishment.
In the case of load sharing, more resources are used. This can be thought of as simply providing more aggregate compute power that is shared across the entire load. If one resource is busy, another resource can provide service, concurrently.
In the case of connection pooling, also called connection caching, several connections are established, kept open, and re-used many times so that a new connection establishment handshake is not required for each transaction that takes place between the client and server.
Establishing a connection using the qClient SDK
The SDK function, `qlc_connect_key_manager()`, is used to establish a mutually authenticated TLS session between the client and KMIP server. `qlc_connect_key_manager()` takes two arguments. The first is the address of a pointer to a `qlc_km_ctx_t` (i.e. a `qlc_km_ctx_t **`). The context is an opaque structure that holds information related to the session between the client and server.
The second argument is an unsigned char pointer that contains connection information, including server IP address and port number, client private key, client certificate, and trusted CA certificate. In qClient sample code, this information is often stored in a file which is indicated by pre-pending an “@” symbol to the connection information.
```
Modules={
Search_Path= 'E:\usr\local\lib':'C:\Windows\System32'},
Presentation={
Protocol=KMIP, Version='1.3':'1.2':'1.1':'1.0', Format=TTLV},
Session = {
Protocol=SSL,
Host='kmip.mydomain.com',
Port='5696',
Certificate=PEM:'E:/usr/local/etc/certs/my-cert.pem',
CA_Cert=PEM:'E:/usr/local/etc/certs/ca-cert.pem',
Public=PEM:'E:/usr/local/etc/certs/my-cert.pem',
Private=PEM:'E:/usr/local/etc/certs/my-key.pem':'password',
Authenticate
}
```
Example qClient SDK configuration information
The code fragment below shows how `qlc_connect_key_manager()` is used.
```c
qlc_node_t *rsp = NULL;
qlc_km_ctx_t *ctx = NULL;
int ret = QLC_ERR_NONE;
rsp = qlc_connect_key_manager(&ctx, "@kmip.cfg");
if ((ret = qlc_ok(rsp)) != QLC_ERR_NONE)
{
printf("Connection failed, error: %s\n", qlc_explain(rsp));
goto end;
}
printf("Connection established with server\n");
qlc_release(rsp);
rsp = NULL;
```
Connection establishment example code fragment
When this code executes, qClient opens and parses the configuration file, and establishes a TCP connection with the server on the specified port – 5696 has been assigned by IANA for KMIP. After establishing the TCP connection, qClient begins the TLS handshake, and uses the credential files identified in the configuration file for mutual authentication. Additionally, qClient sends a KMIP Discover Versions request to the server. The response to this request tells the client the versions of KMIP that are supported by the server.
After successful establishment of a TLS session, the `ctx` structure can be used in `qlc_perform()`, and `qlc_execute()` functions to send KMIP requests to the server using the established TLS session.
When `qlc_disconnect_key_manager()` is called, the TLS session and TCP connection are shut down. In order for the client to communicate again with the server, it must establish a new TLS session by calling `qlc_connect_key_manager()`.
Deployment Architectures
Typically, for operational reasons, compute, storage, and network infrastructure are built to support high availability and resilience through replication of equipment in geographically separated data centers. Key management systems built using qCrypt key management servers can consist of one or more qCrypt appliances. Like other IT resources, replication of key management servers and geographic separation is recommended for availability and service resilience.
An added consideration with key management is the risk of key loss. The risk of loss of keys, as well as loss of operations is greatest when only a single key manager is deployed. Network isolation, power failure, and device malfunction can all lead to loss of service. In the worst case, keys may be lost as well, potentially leading to large losses of encrypted data.

qCrypt key managers can be deployed in replicated pairs to support active-active, and active-standby system architectures. In this scenario, one key manager provides key management service, while a second operates as a hot standby. To mitigate the risk of key loss in failure scenarios, an active qCrypt key manager, when deployed for replication, always has one synchronous replication partner. The impact of network bandwidth and latency on performance needs to be considered carefully.
Additionally, for a two-node replication deployment, if one key management appliance goes offline – whether for maintenance, or due to network or device failure – the remaining key manager will not have a replication partner. The system will become vulnerable to key loss, just as in a single node deployment, in this failure scenario.
Recommended best practice for system architectures is deployment of no less than four qCrypt key managers. This supports service continuation in case of multiple failures, and also permits active-active service, either with two active nodes in one data center, or one active node in each of two data centers.
qCrypt fully supports load balancers, resulting in both automatic switching of client traffic to currently active key management nodes, and sharing of resources to improve overall system service.
The diagram above illustrates a “two-plus-two” deployment of qCrypt key managers, with two qCrypt nodes in each of two data centers. Load balancers and/or DNS can be used to switch traffic to active nodes. Any two nodes (two in primary data center, two in secondary data center, or one in each data center), can be configured to operate as active key managers, with the remaining two operating as hot standby nodes. To maintain best performance, synchronous replication should be configured between an active key manager node and a collocated key manager node.
Example of two-plus-two deployment configuration
The screen shot above shows an example of a two-plus-two configuration. Two nodes are deployed in a data center in Atlanta, and two nodes are deployed in a data center in London. At the time of the screen capture, one node in Atlanta (atlanta-001), and one node in London (london-001), were operating as masters. The atlanta-002 node, collocated with atlanta-001, is operating as a synchronous slave to atlanta-001. The london-002 node, collocated with london-001, is operating as a synchronous slave to london-001.
Performance Measurement and Results
Test environment
A “2+2” load-balanced test environment was deployed in VMware on a laptop. Device and software specifications were as follows:
1. Host computer
a. Dell XPS 15 laptop
b. Processor: Intel Core i7-3632QM CPU @ 2.20GHz
c. RAM: 16.0 GByte
d. OS: Windows 10, 64-bit
e. SSD: 256 GByte SSD, plus 2TByte SSD
2. VMware Workstation 12 Pro version 12.5.7 build-5813279
3. Load balancer
a. F5 BIG-IP version 12.1.2 Build 0.0.249
b. VM guest
i. Memory: 4 GByte
ii. Processors: 2
iii. HDD 1: 142 GByte
iv. HDD 2: 20 GByte
v. Network Adaptor: 4 x NAT
4. qCrypt-VM x 4
a. Release: 1.6
b. VM guest
i. Memory: 1 GByte
ii. Processors: 1
iii. HDD: 40 GByte
iv. Network Adaptor: NAT (Client, Management)
v. Network Adaptor: LAN Segment (Replication)
Test configurations
The following test configurations were used:
1. All connections to the same KM server, connections not cached
2. All connections to the same KM server, connections cached
3. Connections load-balanced across two KM servers, connections not cached
4. Connections load-balanced across two KM servers, connections cached
In all cases a “2+2” configuration of qCrypt was used; i.e. a total of four qCrypt-VM nodes, two nodes providing KM service (master nodes), two nodes running as slave-only nodes. Replication enabled from each master node to all other nodes.
Performance tests
The qClient sample program, s_speed, was used to test the performance of the following operations:
1. Get AES-256 key
2. Wrap key using NIST AES key wrap with 256-bit key
3. Create AES-256 key
4. Create RSA-2048 key pair
5. Create ECDSA Curve P-256 key pair
s_speed supports a number of command line options:
usage: s_speed <option>
where options are:
- `-help` - Display program usage.
- `-?` - Display program usage.
- `<configuration file>` - Connection command configuration file name.
- `-repeat <count>` - (Optional) Repeat the operation count times. Default is 100 times. Count must be an integer greater than 0.
- `-no_reuse` - (Optional) Do not re-use the connection; i.e. establish a new TLS session prior to each operation. Default behaviour is to re-use an already established connection.
- `-server` - Query server vendor and information.
- `-versions` - Discover protocol versions supported by the server.
- `-get` - Get a symmetric key.
- `-create` - Create symmetric keys.
- `-create_rsa` - Create RSA key pair.
- `-create_ecc` - Create ECC key pair.
- `-wrap` - Perform wrap operation.
Help output of s_speed
In all performance tests, the repeat count value was left at the default value of 100.
Tests were run with the no_reuse switch included, and with it omitted. In the results tables and graphs, this is indicated by “(not cached)”, and “(cached)” comments respectively.
Tests were run against a single active qCrypt server, as well as against a pair of active qCrypt servers. For the former case, the client connected directly over TLS to the qCrypt server. For the latter case, qClient connected to the VIP address of the F5 BIG-IP load balancer which was set to round-robin mode.
At all times, four qCrypt-VM nodes were connected in a replication group.
Each test was run with from one to ten concurrent client instances, in increments of one. Concurrent instances were run on the Dell XPS-15 host machine from a Cygwin bash shell. Examples:
```bash
./s_speed $PERFORMANCE_CLIENT -create_ecc
```
Example of single instantiation of s_speed test for ECC
```bash
./s_speed $PERFORMANCE_CLIENT -create_ecc & ./s_speed $PERFORMANCE_CLIENT -create_ecc & ./s_speed $PERFORMANCE_CLIENT -create_ecc & ./s_speed $PERFORMANCE_CLIENT -create_ecc & ./s_speed $PERFORMANCE_CLIENT -create_ecc &
```
Example of five concurrent instantiations of s_speed test for ECC
Performance test results
Get AES-256 symmetric key
For the get key operation, connection caching improved performance by approximately an order of magnitude; e.g. for five concurrent clients connecting to a single server, performance increased from 840 to 8,100 operations per minute, and for two load-balanced servers performance increased from 1,200 to 13,860 operations per minute.
For two or more concurrent clients using cached connections, performance was approximately 50% higher for two load-balanced servers versus a single server.
For a single client process, performance was higher for the single server case than for the two load-balanced server case. This is most likely due to the overhead introduced by the load balancer in the data path.
A final observation is that with two or more concurrent clients, performance remained fairly flat. This indicates that the server(s) are easily able to serve the load presented by the clients. It would be interesting to extend the tests beyond ten concurrent clients to determine when server capacity is reached. For the test configuration used (i.e. a single laptop computer running multiple virtual machines) it is likely that laptop performance limitations would impact results, and therefore the tests were limited to just the ten concurrent clients.
<table>
<thead>
<tr>
<th>Number of concurrent client processes</th>
<th>Single server (not cached)</th>
<th>Single server (cached)</th>
<th>Two load-balanced servers (not cached)</th>
<th>Two load-balanced servers (cached)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>672</td>
<td>8,520</td>
<td>540</td>
<td>6,660</td>
</tr>
<tr>
<td>2</td>
<td>870</td>
<td>8,400</td>
<td>840</td>
<td>11,220</td>
</tr>
<tr>
<td>3</td>
<td>900</td>
<td>8,160</td>
<td>900</td>
<td>12,720</td>
</tr>
<tr>
<td>4</td>
<td>888</td>
<td>8,700</td>
<td>960</td>
<td>13,140</td>
</tr>
<tr>
<td>5</td>
<td>840</td>
<td>8,100</td>
<td>1,200</td>
<td>13,860</td>
</tr>
<tr>
<td>6</td>
<td>864</td>
<td>8,040</td>
<td>1,080</td>
<td>12,300</td>
</tr>
<tr>
<td>7</td>
<td>882</td>
<td>7,920</td>
<td>1,260</td>
<td>13,920</td>
</tr>
<tr>
<td>8</td>
<td>864</td>
<td>7,500</td>
<td>960</td>
<td>12,540</td>
</tr>
<tr>
<td>9</td>
<td>864</td>
<td>8,100</td>
<td>1,080</td>
<td>12,300</td>
</tr>
<tr>
<td>10</td>
<td>900</td>
<td>7,860</td>
<td>1,080</td>
<td>12,120</td>
</tr>
</tbody>
</table>
NIST AES key wrap
For the NIST AES key wrap operation, relative performance was very similar to get key, i.e. cached connections improved performance by approximately an order of magnitude, two load-balanced servers provided approximately 50% greater transaction rate than a single server, and beyond two concurrent clients, performance remained flat.
<table>
<thead>
<tr>
<th>Number of concurrent client processes</th>
<th>Single server (not cached)</th>
<th>Single server (cached)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>594</td>
<td>5,640</td>
</tr>
<tr>
<td>2</td>
<td>756</td>
<td>5,760</td>
</tr>
<tr>
<td>3</td>
<td>774</td>
<td>6,120</td>
</tr>
<tr>
<td>4</td>
<td>648</td>
<td>5,340</td>
</tr>
<tr>
<td>5</td>
<td>750</td>
<td>5,700</td>
</tr>
<tr>
<td>6</td>
<td>756</td>
<td>5,580</td>
</tr>
<tr>
<td>7</td>
<td>756</td>
<td>5,460</td>
</tr>
<tr>
<td>8</td>
<td>720</td>
<td>5,280</td>
</tr>
<tr>
<td>9</td>
<td>756</td>
<td>5,400</td>
</tr>
<tr>
<td>10</td>
<td>720</td>
<td>5,400</td>
</tr>
</tbody>
</table>

**Wrap key (AES-NIST-KEYWRAP)**: transaction rates for single server and two load-balanced servers, with and without connection caching
Create AES-256 key
For the create AES-256 key operation, lower relative performance between cached and non-cached cases was observed. This is due to the fact that the time taken for the create key operation is more significant relative to TLS connection setup time. That is, the relative contribution to total transaction time by TLS setup is reduced. Even so, cached configurations provided significantly higher transaction rates.
Using two load-balanced servers combined with caching of connections increased performance over a single connection-cached server by approximately a factor of two.
<table>
<thead>
<tr>
<th>Number of concurrent client processes</th>
<th>Single server (not cached)</th>
<th>Single server (cached)</th>
<th>Two load-balanced servers (not cached)</th>
<th>Two load-balanced servers (cached)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>234</td>
<td>1,320</td>
<td>366</td>
<td>1,146</td>
</tr>
<tr>
<td>2</td>
<td>450</td>
<td>1,320</td>
<td>492</td>
<td>1,338</td>
</tr>
<tr>
<td>3</td>
<td>468</td>
<td>1,260</td>
<td>624</td>
<td>2,070</td>
</tr>
<tr>
<td>4</td>
<td>480</td>
<td>1,200</td>
<td>684</td>
<td>2,160</td>
</tr>
<tr>
<td>5</td>
<td>420</td>
<td>1,200</td>
<td>678</td>
<td>2,160</td>
</tr>
<tr>
<td>6</td>
<td>432</td>
<td>1,080</td>
<td>708</td>
<td>2,280</td>
</tr>
<tr>
<td>7</td>
<td>462</td>
<td>1,260</td>
<td>672</td>
<td>1,860</td>
</tr>
<tr>
<td>8</td>
<td>432</td>
<td>960</td>
<td>648</td>
<td>1,920</td>
</tr>
<tr>
<td>9</td>
<td>432</td>
<td>1,080</td>
<td>648</td>
<td>1,860</td>
</tr>
<tr>
<td>10</td>
<td>420</td>
<td>1,200</td>
<td>624</td>
<td>2,040</td>
</tr>
</tbody>
</table>
Create symmetric key (AES-256)
Create RSA-2048 key pair
For the create RSA-2048 key pair operation, connection caching provided only marginal performance improvement. This is because key pair creation time dominates the transaction time.
Significant performance increases were seen when connection caching was combined with two load-balanced servers.
<table>
<thead>
<tr>
<th>Number of concurrent client processes</th>
<th>Single server (not cached)</th>
<th>Single server (cached)</th>
<th>Two load-balanced servers (not cached)</th>
<th>Two load-balanced servers (cached)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>186</td>
<td>180</td>
<td>156</td>
<td>246</td>
</tr>
<tr>
<td>2</td>
<td>204</td>
<td>180</td>
<td>210</td>
<td>252</td>
</tr>
<tr>
<td>3</td>
<td>192</td>
<td>198</td>
<td>246</td>
<td>474</td>
</tr>
<tr>
<td>4</td>
<td>192</td>
<td>198</td>
<td>264</td>
<td>438</td>
</tr>
<tr>
<td>5</td>
<td>210</td>
<td>216</td>
<td>246</td>
<td>462</td>
</tr>
<tr>
<td>6</td>
<td>186</td>
<td>252</td>
<td>276</td>
<td>468</td>
</tr>
<tr>
<td>7</td>
<td>210</td>
<td>252</td>
<td>276</td>
<td>462</td>
</tr>
<tr>
<td>8</td>
<td>192</td>
<td>246</td>
<td>252</td>
<td>420</td>
</tr>
<tr>
<td>9</td>
<td>162</td>
<td>216</td>
<td>270</td>
<td>432</td>
</tr>
<tr>
<td>10</td>
<td>180</td>
<td>246</td>
<td>264</td>
<td>432</td>
</tr>
</tbody>
</table>

Create ECDSA-Curve-P-256 key pair
For the create ECDSA Curve P-256 key pair operation, connection caching only marginally improved performance. As with the create RSA-2048 key pair operation, this is because key pair creation time dominates the transaction time.
Significant performance increases were seen when connection caching was combined with two load-balanced servers.
<table>
<thead>
<tr>
<th>Number of concurrent client processes</th>
<th>Single server (not cached)</th>
<th>Single server (cached)</th>
<th>Two load-balanced servers (not cached)</th>
<th>Two load-balanced servers (cached)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>360</td>
<td>576</td>
<td>258</td>
<td>588</td>
</tr>
<tr>
<td>2</td>
<td>408</td>
<td>624</td>
<td>336</td>
<td>948</td>
</tr>
<tr>
<td>3</td>
<td>414</td>
<td>564</td>
<td>426</td>
<td>1002</td>
</tr>
<tr>
<td>4</td>
<td>408</td>
<td>552</td>
<td>456</td>
<td>1038</td>
</tr>
<tr>
<td>5</td>
<td>390</td>
<td>540</td>
<td>450</td>
<td>1026</td>
</tr>
<tr>
<td>6</td>
<td>396</td>
<td>468</td>
<td>438</td>
<td>1044</td>
</tr>
<tr>
<td>7</td>
<td>378</td>
<td>462</td>
<td>420</td>
<td>1002</td>
</tr>
<tr>
<td>8</td>
<td>384</td>
<td>480</td>
<td>390</td>
<td>888</td>
</tr>
<tr>
<td>9</td>
<td>378</td>
<td>486</td>
<td>432</td>
<td>900</td>
</tr>
<tr>
<td>10</td>
<td>360</td>
<td>480</td>
<td>420</td>
<td>1020</td>
</tr>
</tbody>
</table>
Conclusions
The results show that for all operations tested, both connection caching and load balancing across two servers improved performance. The most significant gains from connection caching were seen when the operation time was short relative to the TLS session establishment time.
Sharing load between two servers consistently improved performance, in some cases by up to a factor of two.
The relative performance numbers published in this application note provide useful information for architectural design and deployment; e.g. connection caching in clients reduces total transaction time, and load balancing of multiple servers increases aggregate throughput.
The actual performance numbers published in the application note were generated from a system implemented using virtual machines on a single Windows laptop, and therefore are not representative of a typical production environment. However, the performance numbers do provide indicative relative performance that should be representative of production systems, and therefore, the conclusions drawn in this document should also be applicable to those systems.
Replication was enabled at all times, so the relative impact of replication traffic, and synchronous versus asynchronous replication was not tested. Even so, it is logical to expect that latency on synchronous replication links would impact performance.
To maximise KMIP operation performance, the following recommendations are made:
1. Enable connection caching in the client;
2. Deploy multiple servers behind a load balancer; and
3. Configure replication node priorities so that a slave node collocated with its master node has highest priority; i.e. synchronous replication traffic flows over the lowest latency, highest bandwidth link, and other replication traffic flows asynchronously.
About QuintessenceLabs
QuintessenceLabs’ portfolio of modular products addresses the most difficult security challenges, helping implement robust security strategies to protect data today and in the future. For more information on QuintessenceLabs’ data protection solutions, please visit www.quintessencelabs.com
Strings
**Problem:**
A direct-mail advertising agency has decided to personalize its sweepstakes offers. It has prepared a basic letter with a particular customer's name, address, spouse's name, and other personal information. The company would like a computer program to make the appropriate changes to address each customer individually. As a first test, the company would like the program to make one set of changes: to replace all occurrences of
1. Smith by Johnson
2. Mr. by Ms.
3. 421 Main St. by 18 Windy Lane
4. wife by husband
5. Susan by Robert
6. her by his
Here is the basic letter:
Congratulations, Mr. Smith! The Smith family of 421 Main St. may have already won a new One-million-dollar house!! Your neighbors at 421 Main St. will be so surprised when you, Mr. Smith, move into your new house with your wife, Susan! And her eyes will light up with joy at her fabulous new home! Enter the sweepstakes now, and you and the Smith family may win it all!!
Write a C program that reads in the text line by line, displays each line as it is read in, makes the designated changes, and displays the revised text.
• A **string** is a sequence of elements of the **char** data type.
• a string must end (or terminate) in the **null character** ("\0").
• **A string literal** is a sequence of characters enclosed by **double** quotation marks. (Note: a string literal automatically inserts the null character.) A string literal is a **constant**.
• **Declaring String Variables:**
A string is declared like an array of characters. You must leave room for the null character.
```
char name[21]; // this can hold a string up to length 20
```
• **Initializing a String:**
```
char first[10] = { 't', 'a', 'b', 'l', 'e', '\0' };
char second[10] = "table";
```
• The **length** of a string is the number of characters stored in the string up to, but not including, the null character.
• The name of a string can be viewed as a **constant pointer** to a string.
• **Variable Pointer to a String:**
```
char * sptr;
```
• Legal Characters:
Each occurrence of double quotation marks or a backslash (or any other special character) must be preceded by the escape character (\) to tell the compiler that this is a character and not a control character.
```
char quotes[20] = "the \"king\" of rock";
char filename[20] = "c:\\hwork\\prob9.c";
```
• Initializing a String within the Code:
```c
char city[15];
city[0] = 'L';
city[1] = '.';
city[2] = 'A';
city[3] = '.';
city[4] = '\0';
city = "L.A."; // illegal
if (city == "L.A." ) // illegal
...
```
• A String as an Array of char:
```c
char c,d;
char str[5] = "wing";
char item[10] = "computer";
c = str[3]; // c == 'g'
d = str[0]; // d == 'w'
item[4] = 'u'; // item == "computer"
```
• Printing a String (the %s conversion specification):
```c
printf("The name is %s\n",first);
```
• Reading Strings Using scanf:
```c
char name[16];
printf("Enter a name of up to 15 characters: ");
scanf("%s",name);
```
Note: DO NOT TYPE IN THE QUOTATION MARKS!
• Problems with Using scanf:
- scanf stops reading at the first whitespace character (e.g., space, tab, newline)!
- This is true even if the string entered is larger than the maximum string length. (It will corrupt subsequent memory locations.)
• Reading Multiple Strings Using scanf:
- It is recommended that each variable value be entered using a separate call to scanf.
```c
char first[16];
char last[16];
int number;
printf("Enter a first name of up to 15 characters: ");
scanf("%s",first);
printf("Enter a last name of up to 15 characters: ");
scanf("%s",last);
scanf("%s %s",first,last);     // considered bad style
scanf("%d %s",&number,last);   // also considered bad style
```
The string.h Standard Library Functions
- **Using string.h:**
```c
#include <string.h>
```
- **Assigning a Value to a String Variable - strcpy():**
```c
strcpy(dest_str,source_str);
```
- copies the source_str to the dest_str.
- **Examples:**
```c
char first[16];
char last[16];
strcpy(first,"Wayne Smith");
strcpy(last,first);
strcpy(last,&first[6]);
strcpy(last,first+6);
```
Note: strcpy continues to copy up to and including the first null character.
• Determining the length of a String - strlen():
```c
length = strlen(str);
```
- returns the current length of str as an integer.
• The sizeof Operator:
```c
size = sizeof item;       // returns size in bytes
// or
size = sizeof(item);      // returns size in bytes
// or
size = sizeof(typename);  // returns number of bytes
                          // allocated to that type
```
• Examples:
```c
int length, size;
char dest[25];
char source[30];
scanf("%s", source);
length = strlen(source);
size = sizeof dest;
if (length < size)
    strcpy(dest, source);
else
    printf("won't fit\n");
```
Comparing Strings - strcmp():
```c
result = strcmp(first_str,second_str);
```
- strings are compared according to their ASCII values.
- strings are compared character by character until either a null character or a difference in characters is found.
- returns an integer result of the comparison:
```
result > 0   if first_str > second_str
result == 0  if first_str == second_str
result < 0   if first_str < second_str
```
- A < Z < a < z
- " " < "Car" < "Cat" < "car" < "cat" < "cats" < "cub"
Examples:
```c
int result;
char first[15] = "cat";
char second[15] = "car";
result = strcmp("cat","car");      // result > 0
result = strcmp("big","little");   // result < 0
result = strcmp("ABC","abc");      // result < 0
result = strcmp("ab","ab");        // result == 0
result = strcmp("pre","prefix");   // result < 0
result = strcmp("potato","pot");   // result > 0
result = strcmp("cat","cat");      // result == 0
result = strcmp(first,second);     // result > 0
result = strcmp(first,"catalog");  // result < 0
scanf("%s",first);
scanf("%s",second);
if (strcmp(first,second) == 0)
    printf("they are equal\n");
```
• **Concatenating Two Strings - strcat():**
```c
strcat(first_str,second_str);
```
- This function concatenates the second string onto the end of the first string. (first_str must be large enough to hold the combined result.)
• **Example:**
```c
char bigstr[1024];
char dest[30] = "computer ";   // note the trailing space
char second[15] = "programming";
strcat(dest,second);   // dest == "computer programming"
strcpy(bigstr,dest);   // bigstr == "computer programming"
strcat(bigstr," is fun and very demanding");
// bigstr == "computer programming is fun and very demanding"
```
• **Example:**
```c
char dest[30] = "computer";
char second[15] = "programming";
if (strlen(dest) + strlen(second) < sizeof(dest))
    strcat(dest,second);
else
    printf("error: can't concatenate - dest too small\n");
```
Substring Functions
• Comparing Substrings - strncmp():
```c
result = strncmp(address_1, address_2, numchars);
```
- Compares up to **numchars** characters from two strings, starting at the addresses specified (address_1 and address_2).
- Returns an integer representing the relationship between the strings, as per strcmp().
- Strings are compared character by character until numchars characters are compared or either a null character or a difference in characters is found.
• Example:
```c
char first[30] = "strong";
char second[10] = "stopper";
if (strncmp(first, second, 4) == 0)
    printf("first four characters are alike\n");
else if (strncmp(first, second, 4) < 0)
    printf("first four characters of first string are less\n");
else
    printf("first four characters of first string are more\n");
if (strncmp(&first[2],"ron",3) == 0)
    printf("ron is found in position 2\n");
else
    printf("ron is not found in position 2\n");
```
• **Copying a Substring - strncpy():**
```c
strncpy(dest_str,source_str,numchars);
```
- copies up to numchars characters from the source_str to the dest_str. (If source_str is numchars characters or longer, no null character is appended; the example adds one by hand.)
• **Example:**
```c
char dest[10];
char source[20] = "computers are fun";
char result[18] = "I have a cat";
char insert[10] = "big ";
int len;
strncpy(dest,source+3,3);
dest[3] = '\0';                  // dest == "put"
printf("%s\n",dest);
len = strlen(insert);
strcpy(result+9+len,result+9);   // make room for insertion
strncpy(result+9,insert,len);    // result == "I have a big cat"
```
String Input/Output Functions
- **Printing a String - puts()**
```c
puts(str);
```
- sends str to **stdout**, the standard output stream.
- puts() automatically prints a newline character after str.
- **Example:**
```c
char str[28] = "This is a string to display";
puts(str);
```
- **Reading a value into a Character Array - gets():**
```c
result = gets(str);
```
- str is a pointer to a character array.
- fills the array str from **stdin**, the standard input stream.
- gets() continues reading (including whitespace characters) until a newline character is encountered (ENTER).
- gets() returns a value of type **char** * (or NULL if it fails to read).
- **Example:**
```c
char str[128];
char * instring;
gets(str);
puts(str);
instring = gets(str);
puts(instring);
```
The NULL Pointer
- In C, there is a special value for a pointer to indicate that it is currently not pointing at anything. This value is NULL.
- The gets() function returns NULL when it fails to read in a value.
- A user can interactively signal NULL by entering CTRL-Z ENTER in Windows (or CTRL-D in UNIX).
**Example:**
```c
char str[128];
char * instring;
instring = gets(str);
while (instring != NULL) {
    puts(str);   // or puts(instring);
    instring = gets(str);
}
```
```
while ((instring = gets(str)) != NULL) // equivalent code
puts(str); // or puts(instring);
```
```
while (gets(str) != NULL) // equivalent code
puts(str);
```
- **Problems with gets():**
- `gets()` does not check to see if the destination has room for the string being read in.
Writing String Functions
- **The Function length()**: Write a function that returns the length of a string.
```c
/* function to find and return length of a string */
int length(char * str)
{
    int i = 0;
    while (str[i] != '\0')
        i++;
    return(i);
}
```
- **The Function countchar()**: Write a function that counts how many times a particular character appears within a string.
```c
/* function to find number of occurrences of a particular character within a string */
int countchar(char str[], char let)
{
    int i = 0, count = 0;
    while (str[i] != '\0') {
        if (str[i] == let)
            count++;
        i++;
    }
    return(count);
}
```
The Function findchar():
- Write a function to determine the position of a particular character within a string.
```c
/* function to return the position of a particular character within a string. Returns -1 if not found. */
int findchar(char * str, char let)
{
    int i = 0;
    while (str[i] != '\0') {
        if (str[i] == let)
            return(i);
        i++;
    }
    return(-1);
}
```
Alternate Code:
```c
int findchar(char * str, char let)
{
    int i = 0, found = 0;
    while (*(str+i) != '\0' && !found)
        if (*(str+i) == let)
            found = 1;
        else
            i++;
    if (!found)
        i = -1;
    return(i);
}
```
Functions that Return a Value of char *
**The Function classify():**
```c
/* classifies monthname into one of four seasons */
char * classify(char * monthname)
{
    if (strcmp(monthname, "December") == 0 ||
        strcmp(monthname, "January") == 0 ||
        strcmp(monthname, "February") == 0)
        return("winter");
    else if (strcmp(monthname, "March") == 0 ||
        strcmp(monthname, "April") == 0 ||
        strcmp(monthname, "May") == 0)
        return("spring");
    else if (strcmp(monthname, "June") == 0 ||
        strcmp(monthname, "July") == 0 ||
        strcmp(monthname, "August") == 0)
        return("summer");
    else if (strcmp(monthname, "September") == 0 ||
        strcmp(monthname, "October") == 0 ||
        strcmp(monthname, "November") == 0)
        return("fall");
    else
        return("error");
}
```
**Calling Program:**
```c
char * season;
char month[10];
gets(month);
season = classify(month);
if (strcmp(season,"error") != 0)
    printf("%s is in the %s\n",month,season);
else
    printf("%s is not a valid month\n",month);
```
• **The Function split():**
- Write a function to split a string into two parts at its first blank.
```c
/* function to split a string into two parts at the first occurrence of a blank character. Both parts are to be returned to the calling program. */
int split(char * stringtosplit, char * first, char * second)
{
    int pos;
    pos = findchar(stringtosplit,' ');
    if (pos != -1) {
        strncpy(first,stringtosplit,pos);
        *(first + pos) = '\0';   // first[pos] = '\0';
        strcpy(second,stringtosplit + pos + 1);
    }
    return pos;
}
```
• **Calling Program:**
```c
char * stringtosplit;
int result;
char buffer[50], first[50], second[50];
stringtosplit = gets(buffer);
result = split(stringtosplit,first,second);
if (result != -1)
    printf("%s %s\n",second,first);
else
    printf("no blank in string\n");
```
Returning to Our Problem
• Pseudocode:
while there is a line of the letter to read
read in a line of the original letter
print the original line
replace the old strings in the line by the new ones
print the new line
• Main Program:
```c
/* program to read a letter and replace all occurrences of old
 * strings with new strings.
 */
#include <stdio.h>
#include <string.h>
#define LINESIZE 120
#define REPSIZE 15
/* Function Prototypes Go Here */
void main()
{
    char text[LINESIZE];
    while (gets(text) != NULL) {
        puts(text);
        replace(text);   // MUST STILL BE WRITTEN
        puts(text);
    }
}
```
Pseudocode for replace():
while there are data values
read in a set of replacements (oldstr & newstr)
while oldstr occurs in text
search for next occurrence of oldstr in text
replace oldstr by newstr
Revised Pseudocode for replace():
while there are data values
read in a set of replacements (oldstr & newstr)
while oldstr occurs in text
call pos() to search for next occurrence of oldstr in text
call splitup() to break up text and remove oldstr
call reassemble() to reconstruct text and insert newstr
Function replace():
```c
/* ... */
void replace(char * text)
{
    int p, lenold;
    char part1[LINESIZE], part2[LINESIZE];
    char * oldstr, * newstr;
    char oldin[REPSIZE], newin[REPSIZE];
    while ((oldstr = gets(oldin)) != NULL) {
        newstr = gets(newin);
        lenold = strlen(oldstr);
        while ((p = pos(text,oldstr)) != -1) {
            splitup(text,lenold,part1,part2,p);
            reassemble(text,newstr,part1,part2);
        }
    }
    return;
}
```
Finding one String Within Another String - pos():
```c
/* Function pos:
 * Input:
 *   oldstr - string to search for
 *   text - string in which to search
 * Process:
 *   finds position of first occurrence of oldstr in text
 * Output:
 *   if found, returns position; if not found, returns -1
 */
int pos(char * text, char * oldstr)
{
    int lenold, result, i = 0;
    lenold = strlen(oldstr);
    while (text[i] != '\0') {
        result = strncmp(&text[i],oldstr,lenold);
        if (result == 0)
            return(i);
        i++;
    }
    return(-1);
}
```
Splitting the String - splitup():
```c
/* Function splitup:
 * Input:
 *   text - string to split
 *   lenold - length of old string
 *   p - position of old string
 *   part1, part2 - strings to fill
 * Process:
 *   splits text at positions p and p+lenold
 *   part1 gets text prior to oldstr
 *   part2 gets text after oldstr
 * Output:
 *   part1 and part2 get new values
 */
void splitup(char * text, int lenold, char * part1, char * part2, int p)
{
    strncpy(part1,text,p);
    part1[p] = '\0';
    strcpy(part2,&text[p+lenold]);
    return;
}
```
Putting the String back Together - reassemble():
```c
/* Function reassemble:
 * Input:
 *   newstr - the replacement string
 *   part1, part2 - first and last parts of the original string
 * Process:
 *   reassembles text using concatenation of part1, newstr, & part2
 * Output:
 *   text has new value of part1 + newstr + part2
 */
void reassemble(char * text, char * newstr, char * part1, char * part2)
{
    strcpy(text, part1);
    strcat(text, newstr);
    strcat(text, part2);
    return;
}
```
Revised Main Program:
```c
/* program to read a letter and replace all occurrences of old
 * strings with new strings.
 */
#include <stdio.h>
#include <string.h>
#define LINESIZE 120
#define REPSIZE 15
/* Function Prototypes */
void replace(char *);
int pos(char *,char *);
void splitup(char *,int,char *,char *,int);
void reassemble(char *,char *,char *,char *);
void main()
{
    char text[LINESIZE];
    while (gets(text) != NULL) {
        puts(text);
        replace(text);
        puts(text);
    }
}
```
Additional Character Related Functions
- The Function getchar():
```c
c = getchar();
```
- This function returns the next character read from stdin as an int value. On end of input (or error), EOF is returned.
- The variable c should be of type int (not of type char) so that it can also hold the value EOF.
- Note: a value in the range of type char (0…255) can be interpreted as either a character or an integer.
- The Function putchar():
```c
putchar(ch);
```
- The putchar() function sends a value of type char to stdout.
- Example:
```c
int c, count = 0;
c = getchar();
while (c != EOF) {
    getchar();   // throw out ENTER from buffer
    putchar(c);
    count++;
    c = getchar();
}
printf("there are %d characters in the input\n", count);
```
Testing the type of a char Value
- Selected Functions from ctype.h:
<table>
<thead>
<tr>
<th>Function</th>
<th>Checks</th>
</tr>
</thead>
<tbody>
<tr>
<td>isalpha(ch)</td>
<td>is the parameter alphabetic (A..Z or a..z)</td>
</tr>
<tr>
<td>isdigit(ch)</td>
<td>is the parameter a digit (0..9)</td>
</tr>
<tr>
<td>isalnum(ch)</td>
<td>is the parameter alphabetic or a digit</td>
</tr>
<tr>
<td>isspace(ch)</td>
<td>is the parameter whitespace (space ' ', tab, newline, etc.)</td>
</tr>
<tr>
<td>ispunct(ch)</td>
<td>is the parameter a punctuation mark</td>
</tr>
<tr>
<td>islower(ch)</td>
<td>is the parameter a lowercase alphabetic (a..z)</td>
</tr>
<tr>
<td>isupper(ch)</td>
<td>is the parameter an uppercase alphabetic (A..Z)</td>
</tr>
</tbody>
</table>
- All these functions return nonzero (true) if the test holds and 0 (false) otherwise.
- Example:
```c
int ch;
int alpha = 0, digit = 0, space = 0, punct = 0;
while ((ch = getchar()) != EOF) {
    getchar();   // throw out ENTER from buffer
    if (isalpha(ch))
        alpha++;
    else if (isdigit(ch))
        digit++;
    else if (isspace(ch))
        space++;
    else if (ispunct(ch))
        punct++;
}
```
- The Functions toupper() & tolower():
```c
chout = toupper(ch);
chout = tolower(ch);
```
- These functions force the case of the input character.
- **Example:**
```c
int ch;
do {
...
printf("Do you want to continue? (y/n): ");
ch = getchar();
getchar(); // throw out ENTER from buffer
} while (toupper(ch) == 'Y');
```
- **Example:**
```c
char str[15];
int i= 0;
strcpy(str,"ALPHABETIC");
while (str[i] != '\0') {
str[i] = tolower(str[i]);
i++;
}
```
Arrays of Strings
• Declaring an Array of Strings:
```c
char identifier[ARRAYSIZE][STRINGSIZE];
```
• Example:
```c
char months[12][10] = {"January","February","March",
    "April","May","June","July","August","September",
    "October","November","December"};
int i;
for (i = 0; i < 12; i++)
    printf(" %s\n",months[i]);
```
• Example:
```c
char str[10][20];
int i = 0;
while (i < 10 && scanf(" %s",str[i]) != EOF) {
    printf(" %s\n",str[i]);
    i++;
}
```
(Note: the bound i < 10 must be tested *before* reading, or the eleventh read would overflow the array.)
• Example:
```c
char animal[3][12];
int i, result;
strcpy(animal[0],"giraffe");
strcpy(animal[1],"tiger");
strcpy(animal[2],"rhinoceros");
for (i = 0; i <= 2; i++) {
    result = strcmp(animal[i],"tiger");
    if (result == 0) {
        printf("tiger was found in position %d\n",i);
        break;
    }
}
```
• Example:
```c
/* Function classify:
 * finds monthname in array months and classifies its
 * position into one of four seasons
 */
char * classify(char * monthname)
{
    char months[12][10] = {"January","February","March",
        "April","May","June","July","August","September",
        "October","November","December"};
    int i, found = 0;
    for (i = 0; i <= 11 && !found; i++)
        if (strcmp(monthname, months[i]) == 0) found = 1;
    if (!found) return("error");
    switch (i-1) {
        case 11:
        case 0:
        case 1:
            return("winter");
        case 2:
        case 3:
        case 4:
            return("spring");
        case 5:
        case 6:
        case 7:
            return("summer");
        case 8:
        case 9:
        case 10:
            return("autumn");
    }
}
```
Variability-Resistant Software Through Improved Sensing & Modeling:
Compiler Directed Strategies
Rajesh K. Gupta
Outline
• Why variability?
• Expedition’s view of UNO Machines
– Sensors, Circuits, Instructions, Procedures, Tasks
– Error rates, vulnerabilities, classifications
• Between sense & adapt and model & predict
– Compile time optimization
– Runtime adaptive guardbanding
• WIP results and summary.
Caveats: A limited view (entirely work by Abbas Rahimi, UCSD)
A very Expedition-centric view, not comprehensive, or even representative of Expeditions.
Variability is about Scale and Cost
- Variability in transistor characteristics is a major challenge in nanoscale CMOS, **PVTA**
- Static **Process** variation: effective transistor channel length and threshold voltage
- Dynamic variations: **Temperature** fluctuations, supply **Voltage droops**, and device **Aging** (NBTI, HCI)
- To handle variations → designers use conservative **guardbands** → loss of operational efficiency 😞
The real effect of variability is uncertainty
- **Two dimensions**
- \{Spatial, Temporal, Dynamic\} x \{Deterministic, Stochastic\}
- **Spatial**
- Manufacturing process variations, random defects
- Affect yield right after production
- **Temporal**
- Aging effects (HCI, NBTI, Soft Breakdown,...)
- EM, TDDB, Corrosion,...
- **Dynamic**
- Workload, temperature variations, EMI events
- How the IC is used.
Deterministic/Stochastic: function of how physics is captured.
Temporal and Functional Uncertainties
• Temporal uncertainties are quite familiar to real-time systems community
– Measures that span architectural simplification to OS simplification, structuring computation as precise and imprecise and combining with real-time OS models (FG/BG).
• PL: from performance to correctness to reliability
• Lately PL community has taken on fault-tolerant computations
– How to avoid BSD? Decompose, Calibrate, Acceptance Tests
– Probabilistic Accuracy Bound & Early Phase Termination, [Rinard, ICS 2006]
– Principled Approximation [Baek/Chilimbi MSR 2009]
• Programmer approximates expensive functions, build a model of QoS loss by the approximation during calibration phase
– Use model during operational phase to save energy by an adaptation function that monitors runtime behavior.
Figure 11. The tradeoff between QoS loss and the improvement in performance and energy consumption of Eon.
Figure 13. The tradeoff between QoS loss and the improvement in performance and energy consumption of CGA.
Figure 15. The tradeoff between QoS loss and the improvement in performance and energy consumption of DFT.
The most immediate manifestations of variability are in path delay and power variations.
Path delay variations has been addressed extensively in delay fault detection by test community.
With Variability, it is possible to do better by focusing on the actual mechanisms
– For instance, major source of timing variation is voltage droops, and errors matter when these end up in a state change.
Combine these two observations and you get a rich literature in recent years for handling variability induced errors: Razor, EDA, TRC, ...
Variability Expeditions: UNO Computing Machines use both Modeling & Sensing
- Variability manifestations:
- faulty cache bits
- delay variation
- power variation
- Variability signatures:
- cache bit map
- cpu speed
- power map
- memory access time
- ALU error rates
- Metadata Mechanisms: Reflection, Introspection
- Sensors
- Models
Variability manifestations:
- faulty cache bits
- delay variation
- power variation
UnO Computing Machines: Taxonomy of Underdesign
- Parametric Underdesign
- Functional Underdesign
- Errored Operation
- Errorfree Operation
Nominal Design
Manufacturing
Performance Constraints
Manufactured Die
Die Specific Adaptation
Hardware Characterization Tests
Signature Burn In
Manufactured Die With Stored Signatures
Puneet Gupta/UCLA
Task Ingredients:
Model, Sense, Predict, Adapt
I. Sense & Adapt
Observation using in situ monitors (Razor, EDS) with cycle-by-cycle corrections (leveraging CMOS knobs or replay)
II. Predict & Prevent
Relying on external or replica monitors → Model-based rule → derive adaptive guardband to prevent error
By the time we get to TLV, we are in a parallel software context: instruct the OpenMP scheduler, and even create an abstraction for programmers to express irregular and unstructured parallelism (code refactoring).
Monitor manifestations from instructions levels to task levels.
Methodology
- Characterize effects of Dynamic Voltage and Temperature Variation
- Estimate their effects on instruction executions
- Instruction-level Vulnerability (ILV)
- Sequence-level Vulnerability (SLV)
- Classify instructions, and sequences of instructions
- Use ILV, SLV
- Compile time optimization
- Runtime adaptive guardbanding
Characterize
• Characterize LEON3 in 65nm TSMC across full range of operating conditions: (-40°C−125°C, 0.72V−1.1V)
• Dynamic variations contain both HF and LF components locally as well as across the die.
Dynamic variations cause the critical path delay to increase by a factor of \(6.1\times\).
One First Challenge: How do we make the leap to Software?
WAIT! DID WE MISS A STEP?
Connecting the dots: Delay and Pipestages
Observe:
The *execute* and *memory* parts are sensitive to V/T variations, and also exhibit a large number of critical paths in comparison to the rest of processor.
Hypothesis:
We anticipate that the instructions that significantly exercise the *execute* and *memory* stages are likely to be more vulnerable to V/T variations→ Instruction-level Vulnerability (ILV)
Method for ISA-level & Sequence-level Characterization
- For SPARC V8 instructions (V, T, F) are varied and
- $ILV_i$ is evaluated for every instruction, with random operands
- $SLV_i$ is evaluated for a high-frequency sequence of instructions
Generate ILV, SLV “Metadata”
- The ILV (SLV) for each instruction_i (sequence_i) at every operating condition is quantified:
\[
ILV(i,V,T,\text{cycle}_\text{time}) = \frac{1}{N_i} \sum_{j=1}^{N_i} \text{Violation}_j \\
SLV(i,V,T,\text{cycle}_\text{time}) = \frac{1}{M_i} \sum_{j=1}^{M_i} \text{Violation}_j
\]
\[
\text{Violation}_j = \begin{cases}
1 & \text{If any stage violates at cycle}_j \\
0 & \text{otherwise}
\end{cases}
\]
– where \( N_i (M_i) \) is the total number of clock cycles in Monte Carlo simulation of instruction_i (sequence_i) with random operands.
– \( \text{Violation}_j \) indicates whether there is a violated stage at clock cycle_j or not.
- ILV_i (SLV_i) is defined as the total number of violated cycles divided by the total number of simulated cycles for instruction_i (sequence_i).
Now, I am going to make a jump over characterization data...
1 Classify Instructions in 3 Classes
**ILV at 0.88V, while varying temperature:**
<table>
<thead>
<tr>
<th>(V, T)</th>
<th>(0.88V, -40°C)</th>
<th>(0.88V, 0°C)</th>
<th>(0.88V, 125°C)</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Cycle time (ns)</strong></td>
<td>1</td>
<td>1.02</td>
<td>1.06</td>
</tr>
<tr>
<td><strong>Logical & Arithmetic</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>add</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>and</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>or</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sll</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sra</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>srl</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sub</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>xor</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td><strong>Mem</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>load</td>
<td>1</td>
<td>0.824</td>
<td>0</td>
</tr>
<tr>
<td>store</td>
<td>1</td>
<td>0.847</td>
<td>0</td>
</tr>
<tr>
<td><strong>Mul. & Div.</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>mul</td>
<td>1</td>
<td>0.996</td>
<td>0.064</td>
</tr>
<tr>
<td>div</td>
<td>1</td>
<td>0.991</td>
<td>0.989</td>
</tr>
</tbody>
</table>
- Instructions are partitioned into three main classes: (i) Logical & arithmetic; (ii) Memory; (iii) Multiply & divide.
- The 1st class shows abrupt behavior when the clock cycle is slightly varied, mainly because the path distribution of the logic exercised by this class is such that most paths have nearly the same length. The result is an all-or-nothing effect: either all instructions within this class fail, or all of them meet timing.
2 Check them across temperature
ILV at 0.72V, while varying temperature:
<table>
<thead>
<tr>
<th>Corners</th>
<th>(0.72V, -40°C)</th>
<th>(0.72V, 0°C)</th>
<th>(0.72V, 125°C)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cycle time (ns)</td>
<td>4.10</td>
<td>4.12</td>
<td>4.14</td>
</tr>
<tr>
<td>add</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>and</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>or</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sll</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sra</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>srl</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>sub</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>xor</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>xnor</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>load</td>
<td>1</td>
<td>0.823</td>
<td>0.823</td>
</tr>
<tr>
<td>store</td>
<td>1</td>
<td>0.847</td>
<td>0.847</td>
</tr>
<tr>
<td>mul</td>
<td>1</td>
<td>0.995</td>
<td>0.995</td>
</tr>
<tr>
<td>div</td>
<td>1</td>
<td>0.995</td>
<td>0.995</td>
</tr>
</tbody>
</table>
- All instruction classes behave similarly across the wide range of operating conditions: as the cycle time increases gradually, ILV drops to 0, first for the 1st class, then for the 2nd class, and finally for the 3rd class.
- For every operating condition:
\[
\text{ILV (3rd Class)} \geq \text{ILV (2nd Class)} \geq \text{ILV (1st Class)}
\]
### 3 Classify Instruction Sequences
SLV at (0.81V, 125C)
<table>
<thead>
<tr>
<th>CT (ns)</th>
<th>Seq1</th>
<th>Seq2</th>
<th>Seq3</th>
<th>Seq4</th>
<th>Seq5</th>
<th>Seq6</th>
<th>Seq7</th>
<th>Seq8</th>
<th>Seq9</th>
<th>Seq10</th>
<th>Seq11</th>
<th>Seq12</th>
<th>Seq13</th>
<th>Seq14</th>
<th>Seq15</th>
<th>Seq16</th>
<th>Seq17</th>
<th>Seq18</th>
<th>Seq19</th>
<th>Seq20</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.26</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.27</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.28</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.29</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.30</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.31</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.32</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.33</td>
<td>0.878</td>
<td>0.811</td>
<td>0.881</td>
<td>0.880</td>
<td>0.884</td>
<td>0.892</td>
<td>0.877</td>
<td>0.859</td>
<td>0.879</td>
<td>0.758</td>
<td>0.883</td>
<td>0.883</td>
<td>0.811</td>
<td>0.811</td>
<td>0.952</td>
<td>0.811</td>
<td>0.805</td>
<td>0.810</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>1.34</td>
<td>0.366</td>
<td>0.811</td>
<td>0.515</td>
<td>0.512</td>
<td>0.393</td>
<td>0.429</td>
<td>0.859</td>
<td>0.03</td>
<td>0.403</td>
<td>0.407</td>
<td>0.811</td>
<td>0.811</td>
<td>0.811</td>
<td>0.811</td>
<td>0.811</td>
<td>0.811</td>
<td>0.805</td>
<td>0.810</td>
<td>0</td>
<td></td>
</tr>
<tr>
<td>1.35</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
- The top 20 most frequent sequences (Seq1-Seq20) are extracted from 80 billion dynamic instructions of 32 benchmarks.
- Sequences are classified into two classes based on their similarities in SLV values:
- **Class I** (Seq20) only consists of the arithmetic/logical instructions.
- **Class II** (Seq1-Seq19) is a mixture of all types of instructions including the memory, arithmetic/logical, and control instructions.
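The class membership above can be sketched as a simple rule (a hypothetical helper; the instruction-category sets below are illustrative assumptions drawn from the integer SPARC V8 ISA, not the authors' tool):

```python
# Sketch: classify an instruction sequence into Class I or Class II
# following the rule above. Category sets are illustrative assumptions.
ARITH_LOGIC = {"add", "sub", "and", "or", "xor", "xnor", "sll", "srl",
               "sra", "mul", "div"}
MEMORY = {"load", "store"}        # Class II contributors
CONTROL = {"branch", "call"}      # Class II contributors

def classify_sequence(seq):
    """Class I: only arithmetic/logical instructions; Class II: a mix
    that also touches memory or control-flow instructions."""
    if all(op in ARITH_LOGIC for op in seq):
        return "Class I"
    return "Class II"

print(classify_sequence(["add", "xor", "sub"]))      # Class I
print(classify_sequence(["load", "add", "branch"]))  # Class II
```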
### Classification of Sequence of Instructions (2/3)
#### SLV at (0.81V, -40°C)
The same trend holds despite a 165°C temperature variation.
| CT (ns) | Seq1 | Seq2 | Seq3 | Seq4 | Seq5 | Seq6 | Seq7 | Seq8 | Seq9 | Seq10 | Seq11 | Seq12 | Seq13 | Seq14 | Seq15 | Seq16 | Seq17 | Seq18 | Seq19 | Seq20 |
|---------|------|------|------|------|------|------|------|------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1.36 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.475 |
| 1.37 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1.38 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1.39 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1.40 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1.41 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1.42 | 0.878| 0.811| 0.881| 0.880| 0.884| 0.892| 0.877| 0.859| 0.988| 0.758 | 0.882 | 0.883 | 0.811 | 0.811 | 0.815 | 0.870 | 0.811 | 0.807 | 0.810 |
| 1.43 | 0.01 | 0.01 | 0.479| 0.396| 0.06 | 0.04 | 0.01 | 0.01 | 0.901| 0.01 | 0.01 | 0.01 | 0.811 | 0.811 | 0.811 | 0.811 | 0.810 | 0.805 | 0.131 |
| 1.44 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
#### (V,T) = (0.81V, 125°C)
<table>
<thead>
<tr>
<th>CT (ns)</th>
<th>Class II</th>
<th>Class I</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.26</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.27</td>
<td>1</td>
<td>0.69</td>
</tr>
<tr>
<td>1.28</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.29</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.30</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.31</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.32</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.33</td>
<td>0.81</td>
<td>0</td>
</tr>
<tr>
<td>1.34</td>
<td>0.81</td>
<td>0</td>
</tr>
<tr>
<td>1.35</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
#### (V,T) = (0.72V, 125°C)
<table>
<thead>
<tr>
<th>CT (ns)</th>
<th>Class II</th>
<th>Class I</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.78</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1.79</td>
<td>1</td>
<td>0.58</td>
</tr>
<tr>
<td>1.80</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.81</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.82</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.83</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1.84</td>
<td>0.81</td>
<td>0</td>
</tr>
<tr>
<td>1.85</td>
<td>0.13</td>
<td>0</td>
</tr>
<tr>
<td>1.86</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1.87</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
For every operating condition:
**SLV (Class II) ≥ SLV (Class I)**
Sequences in *Class II* need higher guardbands than *Class I* because, in addition to the ALU's critical paths, the critical paths of the memory (for the load/store instructions) and of the integer condition codes (for the control instructions) are activated as well.
For every operating condition:
ILV (3rd Class) ≥ ILV (2nd Class) ≥ ILV (1st Class)
SLV (*Class II*) ≥ SLV (*Class I*)
ILV and SLV classification for the integer SPARC V8 ISA.
I. Error-tolerant Applications
- Duplication of critical instructions
- Satisfying the fidelity metric
II. Error-intolerant Applications
- Increasing the percentage of the sequences of Class I, i.e., increasing the number of arithmetic instructions with regard to the memory and control flow instructions, e.g., through loop unrolling technique
• Adaptive clock scaling for each class of sequences mitigates the conservative inter- and intra-corner guardbanding.
• At the runtime, in every cycle, the PLUT module sends the desired frequency to the adaptive clocking circuit utilizing the characterized SLV metadata of the current sequence and the operating condition monitored by CPM.
• Applying loop unrolling produces a longer chain of ALU instructions; as a result, the percentage of sequences of *Class I* increases by up to 41% (31% on average).
• Hence, adaptive guardbanding benefits from this compiler transformation to further reduce the guardband for sequences of *Class I*.
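The per-class adaptation can be sketched as a small PLUT-style lookup. The cycle times below are read from the (0.81V, 125°C) table above (the smallest CT at which the characterized SLV reaches 0); the table structure itself is an assumption, not the authors' hardware:

```python
# Sketch of a PLUT-style lookup: (operating condition, sequence class)
# -> smallest safe cycle time in ns. Values come from the
# (0.81V, 125°C) SLV table above; the dict layout is an assumption.
PLUT = {
    ("0.81V", "125C"): {"Class I": 1.28, "Class II": 1.35},
}

def safe_cycle_time(condition, seq_class):
    """Return the smallest cycle time (ns) at which the characterized
    SLV is 0 for this class under the monitored condition."""
    return PLUT[condition][seq_class]

# Class I (ALU-only sequences) tolerates a shorter cycle time.
print(safe_cycle_time(("0.81V", "125C"), "Class I"))   # 1.28
```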
Effectiveness of Adaptive Guardbanding
• Using online SLV coupled with offline compiler techniques enables the processor to achieve $1.6 \times$ average speedup for error-intolerant applications, compared to recent work [Hoang'11], by adapting the cycle time to dynamic variations (inter-corner) and different instruction sequences (intra-corner).
• Adaptive guardbanding achieves up to $1.9 \times$ performance improvement for error-tolerant (probabilistic) applications in comparison to the traditional worst-case design.
Example: Procedure Hopping in a Clustered CPU, each core with its own voltage domain
- Statically characterize procedure for PLV
- A core increases voltage if monitored delay is high
- A procedure hops from one core to another if its voltage variation is high
- Less than 1% cycle overhead on EEMBC benchmarks.
\[
\text{VA-}V_{DD}\text{-Hopping}: \; V_{DD} \in \{0.81\text{V}, 0.99\text{V}\}
\]
HW/SW Collaborative Architecture to Support Intra-cluster Procedure Hopping
- The code is easily accessible via the shared-L1 I$.
- The data and parameters are passed through the shared stack in TCDM.
- A procedure hopping information table (PHIT) keeps the status for a migrated procedure.
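The PHIT can be sketched as a small map from procedure id to its current core and migration status (field names and the `hop` helper are illustrative assumptions, not the authors' layout):

```python
# Sketch: a procedure hopping information table (PHIT) keeping the
# status of migrated procedures. Field names are assumptions.
phit = {}

def hop(proc_id, src_core, dst_core):
    """Record that a procedure hopped from src_core to dst_core."""
    phit[proc_id] = {"core": dst_core,
                     "migrated_from": src_core,
                     "status": "migrated"}

hop("fft_kernel", src_core=0, dst_core=3)  # hypothetical procedure
print(phit["fft_kernel"]["core"])  # 3
```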
NOT SURE HOW FAR WE CAN PUSH THIS SENSING. REMEMBER ILP?
The model takes into account:
1. PVTA parameter variations
2. Clock frequency
3. Physical details of Placed-and-Routed FUs in 45nm TSMC technology
- Analyzed FUs:
- 10 32-bit integer
- 15 single precision floating-point (fully compatible with the IEEE 754 standard)
- A full permutation of PVTA parameters and clock frequency are applied.
For each FU_i working at t_{clk} under given PVTA variations, we define the Timing Error Rate (TER):
\[ \text{TER}(FU_i, t_{clk}, V, T, P, A) = \frac{\sum \text{CriticalPaths}(FU_i, t_{clk}, V, T, P, A)}{\sum \text{Paths}(FU_i)} \times 100 \]
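The TER definition can be sketched directly: count the paths whose delay exceeds the clock period and express them as a percentage of all paths (the path delays below are made-up numbers, not characterization data):

```python
# Sketch: TER as the percentage of a unit's paths that violate timing
# at a given clock period. Path delays are hypothetical.
def ter(path_delays_ns, t_clk_ns):
    """Timing Error Rate: share of paths whose delay under the current
    PVTA condition exceeds the clock period, in percent."""
    critical = sum(1 for d in path_delays_ns if d > t_clk_ns)
    return 100.0 * critical / len(path_delays_ns)

delays = [0.73, 0.90, 1.10, 1.25, 1.32]  # hypothetical FU paths (ns)
print(ter(delays, 1.0))  # 60.0 -> three of five paths are critical
```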
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Start Point</th>
<th>End Point</th>
<th>Step</th>
<th># of Points</th>
</tr>
</thead>
<tbody>
<tr>
<td>Voltage</td>
<td>0.88V</td>
<td>1.10V</td>
<td>0.01V</td>
<td>23</td>
</tr>
<tr>
<td>Temperature</td>
<td>0°C</td>
<td>120°C</td>
<td>10°C</td>
<td>13</td>
</tr>
<tr>
<td>Process (σ_{WID})</td>
<td>0%</td>
<td>9.6%</td>
<td>3.2%</td>
<td>4</td>
</tr>
<tr>
<td>Aging (ΔV_{th})</td>
<td>0mV</td>
<td>100mV</td>
<td>25mV</td>
<td>5</td>
</tr>
<tr>
<td>t_{clk}</td>
<td>0.2ns</td>
<td>5.0ns</td>
<td>0.2ns</td>
<td>25</td>
</tr>
</tbody>
</table>
We used supervised learning (linear discriminant analysis) to generate a parametric model at the FU level that relates PVTA parameter variations and $t_{clk}$ to classes of TER.
On average, for all FUs the resubstitution error is $0.036$, meaning the models classify nearly all data correctly.
For extra characterization points, the model makes correct estimates for $97\%$ of out-of-sample data. The remaining $3\%$ is misclassified to the high-error rate class, $C_H$, thus will have safe guardband.
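The resubstitution error is simply the fraction of training points the fitted model misclassifies. The sketch below uses a nearest-centroid classifier as a stand-in for the paper's linear discriminant analysis, on toy 1-D data (all names and numbers are illustrative):

```python
# Sketch: resubstitution error of a classifier, with a nearest-centroid
# stand-in for linear discriminant analysis. Data is a toy example.
def centroids(samples, labels):
    """Per-class mean of a 1-D feature."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(c, x):
    """Assign x to the class with the nearest centroid."""
    return min(c, key=lambda y: abs(c[y] - x))

# Toy 1-D feature with TER class labels (C_H = high-error-rate class).
xs = [0.1, 0.2, 0.3, 2.0, 2.1, 2.2]
ys = ["C_0", "C_0", "C_0", "C_H", "C_H", "C_H"]
c = centroids(xs, ys)
errors = sum(predict(c, x) != y for x, y in zip(xs, ys))
print("resubstitution error:", errors / len(xs))  # 0.0 on this toy set
```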
During design time the delay of the FP adder has a large uncertainty of [0.73ns,1.32ns], since the actual values of PVTA parameters are unknown.
The question is: what mix of monitors would be useful?
Sensor overheads:
✓ *In-situ* PVT sensors impose 1–3% area overhead [Bowman'09]
✓ Five replica PVT sensors increase area by 0.2% [Lefurgy’11]
✓ The banks of 96 NBTI aging sensors occupy less than 0.01% of the core's area [Singh’11]
• 24% (P_sensor)
• 28% (PA_sensors)
• 44% (PATV_sensors)
Online Utilization of HFG
- The control system tunes the clock frequency through an online model-based rule.
- To support fast controller computation, the parametric model generates distinct Look Up Tables (LUTs) for every FU.
- We apply HFG to the architecture at two granularities:
1. Fine-grained granularity: instruction-by-instruction monitoring and adaptation, in which the PATV sensor signals come from individual FUs
2. Coarse-grained granularity: kernel-level monitoring, which uses a representative set of PATV sensors for the entire execution stage of the pipeline
1. With kernel-level monitoring, the throughput increases on average by 70% when the PE moves from the P_sensor-only scenario to the PATV_sensors scenario. The target TER is set to “0”, as preferred for error-intolerant applications.
2. Instruction-by-instruction monitoring and adaptation improves the throughput by $1.8 \times$–$2.1 \times$, depending on the PATV sensor configuration and the kernel's instructions.
Thank You!
The Variability Expedition
http://variability.org
A NSF Expeditions in Computing Project
Architecture Level Prediction of Software Maintenance
PerOlof Bengtsson & Jan Bosch
Department of Computer Science and Business Administration
University of Karlskrona/Ronneby
S-372 25 Ronneby, Sweden
+46 457 787 41
[ PerOlof.Bengtsson | Jan.Bosch ] @ide.hk-r.se
Abstract
A method for the prediction of software maintainability during software architecture design is presented. The method takes (1) the requirement specification, (2) the design of the architecture (3) expertise from software engineers and, possibly, (4) historical data as input and generates a prediction of the average effort for a maintenance task. Scenarios are used by the method to concretize the maintainability requirements and to analyze the architecture for the prediction of the maintainability. The method is formulated based on extensive experience in software architecture design and detailed design and exemplified using the design of software architecture for a haemo dialysis machine. Experiments for evaluation and validation of the method are ongoing and future work.
1 Introduction
One of the major issues in software development today is the software quality. Rather than designing and implementing the correct functionality in products, the main challenge is to satisfy the software quality requirements, e.g. performance, reliability, maintainability and flexibility. The notion of software architecture has emerged during the recent years as the appropriate level to deal with software qualities. This because, it has been recognized [1,2] that the software architecture sets the boundaries for the software qualities of the resulting system.
Traditional object-oriented software design methods, e.g. [5,14,21] focus primarily on the software functionality and give no support for software quality attribute-oriented design, with the exception of reusability and flexibility. Other research communities focus on a single quality attribute, e.g. performance, fault-tolerance or real-time. However, real-world software systems are never just a real-time system or a fault-tolerant system, but generally require a balance of different software qualities. For instance, a real-time system that is impossible to maintain or a high-performance computing system with no reliability is of little use.
To address these issues, our ongoing research efforts aim on developing a method for designing software architectures, i.e. the ARCS method [6]. In short, the method starts with an initial architecture where little or no attention has been given to the required software qualities. This architecture is evaluated using available techniques and the result is compared to the requirements. Unless the requirements are met, the architect transforms the architecture in order to improve the software quality that was not met. Then the architecture is again evaluated and this process is repeated until all the software quality requirements have been met or until it is clear that no economically or technically feasible solution exists.
The evaluation of software architectures plays a central role in architectural design. However, software architecture evaluation is not well understood and few methods and techniques exist. Notable exceptions are the SAAM method discussed in [15] and the approach described in [10]. In this paper, we propose a method for predicting maintainability of a software system based on its architecture. The method defines a maintenance profile, i.e. a set of change scenarios representing perfective and adaptive maintenance tasks. Using the maintenance profile, the architecture is evaluated using so-called scenario scripting and the expected maintenance effort for each change scenario is evaluated. Based on this data, the required maintenance effort for a software system can be estimated. The method is based on our experience in architectural design and its empirical validation is part of ongoing and future work.
The remainder of this paper is organized as follows. In the next section, the maintenance prediction method is presented in more detail. The architecture used as an example is discussed in section 3 and the application of the method in section 4. Related work is discussed in section 5 and the paper is concluded in section 6.
2 Maintenance Prediction Method
The maintenance prediction method presented in this paper estimate the required maintenance effort for a software system during architectural design. The estimated effort can be used to compare two architecture alternatives or to balance maintainability against other quality attributes.
The method has a number of inputs: (1) the requirement specification, (2) the design of the architecture (3) expertise from software engineers and, possibly, (4) historical maintenance data. The main output of the method is, obviously, an estimation of the required maintenance effort of the system built based on the software architecture. The maintenance profile is a second output from the method. The profile contains a set of scenario categories and a set of scenarios for each category with associated weighting and analysis (scripting) results.
The maintenance prediction method consists of six steps:
1. Identify categories of maintenance tasks
2. Synthesize scenarios
3. Assign each scenario a weight
4. Estimate the size of all elements.
5. Script the scenarios
6. Calculate the predicted maintenance effort.
The steps are discussed in more detail in the following sections.
2.1 Identify categories of maintenance tasks
Software maintainability is defined by IEEE [13] as:
The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment.
This definition renders three categories of maintenance, i.e. corrective, perfective and adaptive. The prediction method focuses only on perfective and adaptive maintenance and does not predict the effort required for corrective maintenance. However, the remaining categories are too abstract to be relevant in this process step. Instead, the categories are defined based on the application or domain description. For example, a haemo dialysis machine might have maintenance scenarios concerned with treatment maintenance, hardware changes, safety regulation changes, etc. These categories reflect the meaning of the maintainability requirement in the context of this application or domain and give the designer a better understanding of the requirements posed on the architecture.
2.2 Synthesize scenarios
For each of the maintenance categories, a representative set of concrete scenarios is defined. The software architect, or the domain expert, is responsible for selecting the scenarios such that the set is representative of the maintenance category. The number of scenarios in the set depends on the application and the domain, but in our experience about ten scenarios per category suffice. Scenarios should define very concrete situations; scenarios that specify types of maintenance cases or sub-categories are to be avoided. For example, a scenario could be, “Due to changed safety regulations, a temperature alarm must be added to the hydraulic module in the dialysis machine”. Another example is, “Due to a new type of pump, the pump interface must be changed from duty cycle into a digital interface, with a set value in kPa (kilopascal)”.
It is important to note that our use of the term ‘scenarios’ is different from Object-Oriented design methods where the term generally refers to use-case scenarios, i.e. scenarios describing system behavior. Instead, our scenarios describes an action, or sequence of actions that might occur related to the system. Hence, a change scenario describes a certain maintenance task. In addition, due to reasons of space, in this paper we present scenarios as vignettes [15] rather than in their full size representation.
2.3 Assign each scenario a weight
Change scenarios have different likelihood of actually occurring during the lifetime of the system. In order to generate an accurate measure for maintainability, the prediction method requires probability estimates, i.e. weights, for each scenario. These probabilities are used for balancing the impact on the prediction of more occurring and less occurring maintenance tasks. We define the weight measure as the relative probability of this scenario resulting in a maintenance task during a particular time interval, e.g. a year, or between two releases. Consequently, scenarios that describe often-recurring maintenance tasks will get higher probabilities and therefore impact the predicted value more and the architecture will generally be optimized for incorporating those maintenance tasks with minimal effort.
The weight of scenarios is produced in two ways. If no historical maintenance data is available from similar applications or earlier releases, the software architect, or the domain expert, estimates the scenario weights. If empirical data about maintenance of the product exists in the organization, the probability data of earlier maintenance efforts should be used as basis for weighting. Based on the probability data for the individual scenarios, it is possible to calculate a probability figure for each category as well. The exact calculation of probabilities is illustrated in section 4.
2.4 Estimate the size of all elements
To estimate the maintenance effort, the size of the architecture needs to be known and the sizes of the affected components need to be known. The component size influences the effort required to implement a change in the component. At least three techniques can be used for estimating the size of components. First, the size of every component can be estimated using the estimation technique of choice. Secondly, an adaptation of an Object-Oriented metric (SIZE2 [8]) metric may be used (SIZE2’) [2]. Finally, when historical data from similar applications or earlier releases is available, existing size data can be used and extrapolated to new components.
2.5 Script the scenarios
Based on the selected scenarios from each maintenance category, the weights defined for each scenario and category, and the size data for the components, we estimate the maintainability of the architecture by scripting [16] the scenarios. For each scenario, the impact of realizing that scenario in the architecture and its components is evaluated: which components are affected, and to what extent will they be changed.
For example, implementing the earlier described scenario of adding a temperature alarm in the dialysis machine would require changes to the hydraulic module component and addition of three new components of type device and controlling algorithm. In addition, the components for system definition and the protective system need to be changed.
2.6 Calculate the predicted maintenance effort
The prediction value is a weighted average for the effort (expressed as size of modification) for each maintenance scenario. Based on that, one can calculate an average effort per maintenance task. To predict the required maintenance effort for a period of time, an estimation or calculation of the number of maintenance tasks has to be done. That figure is then multiplied with the average effort per maintenance task. Note that the above is only necessary when predicting maintenance effort for a period of time. When comparing two alternative architectures, it is sufficient to compare the weighted average effort per maintenance task.
\[ M_{tot} = \sum_{n=1}^{k_s} \left( P(S_n) \cdot \sum_{m=1}^{k_c} V(S_n, C_m) \right) \]
- \( P(S_n) \) the probability weight of scenario \( n \)
- \( V(S_n, C_m) \) the affected volume of component \( m \) in scenario \( n \)
- \( k_s \) = number of scenarios
- \( k_c \) = number of components in the architecture
Figure 1: Assessment Calculation Equation
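The calculation in figure 1 can be sketched directly; the scenario weights and affected volumes below are invented numbers, used only to show the shape of the computation:

```python
# Sketch of the assessment calculation: weighted average of the affected
# volume (modification size) per change scenario. All numbers invented.
def predicted_effort(scenarios):
    """scenarios: list of (probability_weight, [affected volume per
    component]) pairs; returns the weighted average modification size."""
    return sum(p * sum(volumes) for p, volumes in scenarios)

profile = [
    (0.5, [120.0, 30.0]),   # frequent scenario touching two components
    (0.3, [400.0]),         # less likely but larger change
    (0.2, [10.0]),          # rare, small change
]
print(predicted_effort(profile))  # 197.0
```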
3 Example Application Architecture
Haemo dialysis systems present an area in the domain of medical equipment where competition has been increasing drastically during recent years. The aim of a dialysis system is to remove water and certain natural waste products from the patient’s blood. Patients that have, generally serious, kidney problems and consequently produce little or no urine use this type of system. The dialysis system replaces this natural process with an artificial one.
An overview of a dialysis system is presented in figure 2. The system is physically separated into two parts by the dialysis membrane. On the left side the dialysis fluid circuit takes the water from a supply of a certain purity (not necessarily sterile), dialysis concentrate is added using a pump. A sensor monitors the concentration of the dialysis fluid and the measured value is used to control the pump. A second pump maintains the flow of dialysis fluid, whereas a third pump increases the flow and thus reduces the pressure at the dialysis fluid side. This is needed to pull the waste products from the patient’s blood through the membrane into the dialysis fluid. A constant flow of dialysis fluid is maintained by the hydro mechanic devices denoted in the figure with rectangles with curls.
On the right side of figure 2, the extra corporal circuit, i.e. the blood-part, has a pump for maintaining a specified blood flow on its side of the membrane. The patient is connected to this part through two needles usually located in the arm that take blood to and from the patient. The extra corporal circuit uses a number of sensors, e.g. for identifying air bubbles, and actuators, e.g. a heparin pump to avoid cluttering of the patients blood while it is outside the body.
However, these details are omitted since they are not needed for the discussion in this paper.
Figure 2: Schematic of Haemo Dialysis Machine
The dialysis process, or treatment, is by no means a standard process. A fair collection of treatments exists including, for example, Haemo Dialysis Filtration (HDF), Ultra Filtration (UF) and other variations, such as single needle/single pump, double needle/single pump. Treatments are changed due to new research results but also since the effectiveness of a particular treatment decreases when it is used too long for a patient. Although the abstract function of a dialysis system is constant, a considerable set of variations exists already. Based on experience, the involved company anticipates several additional changes to the software, hardware and mechanical parts of the system that will be necessary in response to developments in medical research.
3.1 Requirements
The aim during architectural design is to optimize the potential of the architecture (and the system built based on it) to fulfill the software quality requirements. For dialysis systems, the driving software quality requirements are maintainability, reusability, safety, real-timeliness and demonstrability. Below, we elaborate on the maintainability requirement.
Maintainability. Past haemo dialysis machines produced by our partner company have proven to be hard to maintain. Each release of software with bug corrections and function extensions has made the software harder to comprehend and maintain. One of the major requirements for the software architecture of the new dialysis system family is that maintainability should be considerably better than in the existing systems, with respect to corrective but especially adaptive maintenance:
- **Corrective maintenance** has been hard in the existing systems since dependencies between different parts of the software have been hard to identify and visualize.
- **Adaptive maintenance** is initiated by a constant stream of new and changing requirements. Examples include new mechanical components such as pumps, heaters and AD/DA converters, but also new treatments, control algorithms and safety regulations. All these new requirements need to be introduced into the system as easily as possible. Changes to the mechanics or hardware of the system almost always require changes to the software as well. In the existing system, all these extensions have deteriorated the structure, and consequently the maintainability, of the software, making subsequent changes harder to implement. Adaptive maintainability was perhaps the most important requirement on the system.
### 3.2 Logic Archetypes
One of our main concerns when we designed the software architecture for the haemo dialysis machine was maintainability. The logical archetypes are based on a device hierarchy (figure 3). The archetypes are central to the design and important for understanding the haemo dialysis application architecture when doing the scripting, i.e. change impact analysis.
Device. The system is modeled as a device hierarchy, starting with the entities close to the hardware as leaves, ending with the complete system as the root. For every device, there are zero or more sub-devices and a controlling algorithm. The device is either a leaf device or a logical device.
ControllingAlgorithm. In the device archetype, information about relations and configuration is stored. Computation is done in a separate archetype, the ControllingAlgorithm, which is used to parameterize Device components.
Normaliser. To convert from and to different units of measurement the normalization archetype is used.
AlarmDetectorDevice. A specialization of the Device archetype. Components of the AlarmDetectorDevice archetype are responsible for monitoring the sub-devices. When threshold limits are crossed, an AlarmHandler component is invoked.
AlarmHandler. The AlarmHandler is the archetype responsible for responding to alarms by returning the haemo dialysis machine to a safe-state or by addressing the cause of the alarm.
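The relationships between these archetypes can be sketched in code. The sketch below is purely illustrative: the class names follow the archetypes above, but all method signatures, the threshold semantics and the safe-state message are our assumptions, not the partner company's implementation.

```python
class ControllingAlgorithm:
    """Computation is kept in a separate archetype that parameterizes Devices."""
    def compute(self, measured, target):
        # Trivial proportional step, purely illustrative.
        return target - measured

class Normaliser:
    """Converts between units of measurement, e.g. PT100 resistance to Celsius."""
    def __init__(self, convert):
        self.convert = convert

class Device:
    """A node in the device hierarchy: zero or more sub-devices plus a
    controlling algorithm. A leaf device wraps hardware; a logical device
    aggregates sub-devices."""
    def __init__(self, name, algorithm=None, sub_devices=()):
        self.name = name
        self.algorithm = algorithm
        self.sub_devices = list(sub_devices)

    def is_leaf(self):
        return not self.sub_devices

class AlarmHandler:
    """Responds to alarms by returning the machine to a safe state."""
    def handle(self, device, value):
        return f"safe-state: {device.name} at {value}"

class AlarmDetectorDevice(Device):
    """Specialised Device that monitors sub-devices against threshold limits
    and invokes an AlarmHandler when a limit is crossed."""
    def __init__(self, name, low, high, handler, sub_devices=()):
        super().__init__(name, sub_devices=sub_devices)
        self.low, self.high, self.handler = low, high, handler

    def check(self, value):
        if not (self.low <= value <= self.high):
            return self.handler.handle(self, value)
        return None

# Example: a temperature alarm wrapping a small device hierarchy.
temp_sensor = Device("TempSensor")
alarm = AlarmDetectorDevice("OverHeatAlarm", low=35.0, high=39.0,
                            handler=AlarmHandler(), sub_devices=[temp_sensor])
assert alarm.check(37.0) is None          # within limits
assert "safe-state" in alarm.check(42.0)  # limit crossed, handler invoked
```

Note how the hierarchy keeps configuration in Device while computation lives in ControllingAlgorithm, which is what makes replacing an algorithm (scenario C10) a local change.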
### 3.3 Scheduling Archetypes
Haemo dialysis machines are required to operate in real time. However, haemo dialysis is a slow process that makes the deadline requirements on the system less tough to adhere to. A treatment typically takes a few hours and during that time the system is normally stable. Since the timing requirements are not that tight we designed the concurrency using the Periodic Object pattern [19]. It has been used successfully in earlier embedded software projects.
Scheduler. The scheduler archetype is responsible for scheduling and invoking the periodic objects. Only one scheduler element in the application may exist and it handles all periodic objects of the architecture. The scheduler accepts registrations from periodic objects and then distributes the execution between all the registered periodic objects.
Periodic object. A periodic object is responsible for implementing its task using non-blocking I/O and using only the established time quanta. The tick() method will run to its completion and invoke the necessary methods to complete its task.
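A minimal sketch of the Periodic Object pattern as described above: the single Scheduler keeps a registry of periodic objects and distributes execution by calling each object's tick(), which must run to completion using only non-blocking I/O. The round-robin policy and the counter used for demonstration are assumptions on our part.

```python
class PeriodicObject:
    def __init__(self, name):
        self.name = name
        self.ticks = 0

    def tick(self):
        # Runs to completion within its time quantum; no blocking calls.
        self.ticks += 1

class Scheduler:
    """Only one scheduler exists in the application; it accepts registrations
    and invokes every registered periodic object once per scheduling round."""
    def __init__(self):
        self._objects = []

    def register(self, obj):
        self._objects.append(obj)

    def run(self, rounds):
        for _ in range(rounds):
            for obj in self._objects:
                obj.tick()

pump = PeriodicObject("FluidPrePump")
heater = PeriodicObject("FluidHeater")
sched = Scheduler()
sched.register(pump)
sched.register(heater)
sched.run(rounds=3)
assert pump.ticks == 3 and heater.ticks == 3
```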
### 3.4 Connector Archetypes
Causal connections [18] implement the communication between the architecture elements. The principle is similar to the Observer pattern [11] and the Publisher-Subscriber pattern [7]. The usage of the connection allows for dynamic reconfiguration of the connection, i.e. push or pull (figure 5).
**Target.** Maintains information that other entities may be dependent on. The target is responsible for notifying the link when its state changes.
**Observer.** Depends on the data or change of data in the target. Is either updated by a change or by own request.
**Link.** Maintains the dependencies between the target and its observers. Also holds the information about the type of connection, i.e. push or pull. It would be possible to extend the connection model with periodic updates.
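The three connector archetypes can be sketched as follows. The mode-switching API is an assumption for illustration; the point is that the Link, not the Target or Observer, owns both the dependency and the connection type, so push/pull can be reconfigured dynamically.

```python
class Target:
    """Maintains information other entities may depend on; notifies its
    link when its state changes."""
    def __init__(self, value=None):
        self._value = value
        self._link = None

    def attach(self, link):
        self._link = link

    def set(self, value):
        self._value = value
        if self._link is not None:
            self._link.state_changed(self)   # target notifies the link

    def get(self):
        return self._value

class Observer:
    """Depends on the target's data; updated by a change or by own request."""
    def __init__(self):
        self.seen = None

    def update(self, value):
        self.seen = value

class Link:
    """Maintains the target-observer dependencies and the connection type
    (push or pull), reconfigurable at run time."""
    def __init__(self, target, mode="push"):
        self.target = target
        self.mode = mode
        self.observers = []
        target.attach(self)

    def subscribe(self, observer):
        self.observers.append(observer)

    def state_changed(self, target):
        if self.mode == "push":
            for obs in self.observers:
                obs.update(target.get())

    def pull(self, observer):
        observer.update(self.target.get())

temp = Target(37.0)
link = Link(temp, mode="push")
display = Observer()
link.subscribe(display)
temp.set(38.5)
assert display.seen == 38.5      # pushed on change

link.mode = "pull"               # dynamic reconfiguration
temp.set(36.0)
assert display.seen == 38.5      # no push in pull mode
link.pull(display)
assert display.seen == 36.0      # updated by own request
```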
### 3.5 Application Architecture
The archetypes represent the building blocks that we may use to model the application architecture of a haemodialysis machine. In figure 4, the application architecture is presented. The archetypes allow for the application architecture to be specified in a hierarchical way, with the alarm devices being orthogonal to the control systems device hierarchy. The description serves as input for scenario scripting, which is architecture level impact analysis in the maintainability case.
This also allows for a layered view of the system, which does not mean that the architecture itself is layered. For example, to specify a treatment we only have to interface with the layer of devices closest to the HaemoDialysisMachine device (figure 4). There is no need to understand or interface with the lowest layer.
### 4 Prediction Example
In this section, we will present an example prediction for the architecture presented in section 3. It is presented to illustrate the practical usage of the method, rather than to give a perfect prediction of this particular case.
#### 4.1 Scenario Categories
In the example domain we identify the following categories of maintenance tasks:
1. Hardware changes, i.e. additions and replacements of hardware require changes to software.
2. Algorithm changes, i.e. algorithms become obsolete and are replaced by new, improved ones.
3. Safety changes, i.e. safety standards change and set new requirements on the system.
4. Medical advances require changes, i.e. new treatments and parameters are introduced.
5. Communication and I/O change.
6. Market driven changes. Different markets or countries require certain functionality.
We use these broad categories of maintenance tasks in the next step of the method to ensure that we include all the important aspects in a broad sense.
#### 4.2 Change Scenarios
When we have the categories, we list a number of scenarios for each category that describe concrete maintenance tasks that may occur during the next maintenance phase.
Scenarios describe a possible situation and change scenarios, in particular, describe possible change situations that will cause the maintenance organization to perform changes in the software and/or hardware. For reasons of space, the scenarios are presented very briefly. In our real-world application of the method, the scenarios are generally more verbose.
This list presented in table 1 represents a maintenance profile, i.e. it profiles the relevant interpretation of software maintenance for the resulting system.
<table>
<thead>
<tr>
<th>Category</th>
<th>Scenario Description</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>Market Driven</td>
<td>C1 Change measurement units from Celsius to Fahrenheit for temperature in a treatment.</td>
<td>0.043</td>
</tr>
<tr>
<td>Hardware</td>
<td>C2 Add second concentrate pump and conductivity sensor.</td>
<td>0.043</td>
</tr>
<tr>
<td>Safety</td>
<td>C3 Add alarm for reversed flow through membrane.</td>
<td>0.087</td>
</tr>
<tr>
<td>Hardware</td>
<td>C4 Replace duty-cycle controlled heater with digitally interfaced heater using percent of full effect.</td>
<td>0.174</td>
</tr>
<tr>
<td>Medical Advances</td>
<td>C5 Modify treatment from linear weight loss curve over time to inverse logarithmic.</td>
<td>0.217</td>
</tr>
<tr>
<td>Medical Advances</td>
<td>C6 Change alarm from fixed flow limits to follow treatment.</td>
<td>0.087</td>
</tr>
<tr>
<td>Medical Advances</td>
<td>C7 Add sensor and alarm for patient blood pressure</td>
<td>0.087</td>
</tr>
</tbody>
</table>
Table 1: Maintenance Profile
In section 2.2, ten scenarios per category were suggested. For reasons of space and illustrativeness, we will, however, use only ten scenarios in total in this example.
### 4.3 Assign Weights
Each scenario has a certain likelihood of appearing during the next phase of maintenance. Each scenario is therefore assigned a weight: the probability that an arbitrary maintenance task from the maintenance phase will be of this kind (see table 1, column 3). The sum of all the weights must be exactly 1.
For assigning each weight we can use two approaches. First, we can make qualified guesses that some changes are more likely than others. Domain experts or software engineers can support the estimation with experiences from the earlier maintenance phases. Second, we can collect and categorize historical data from other similar development projects.
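The second approach can be sketched as a small helper: given categorized counts of similar historical maintenance tasks, the weights are simply the normalized frequencies, guaranteeing that they sum to 1. The historical counts below are invented for illustration.

```python
def weights_from_history(counts):
    """counts: mapping scenario-id -> number of similar historical tasks.
    Returns weights normalized so they sum to exactly 1."""
    total = sum(counts.values())
    return {sid: n / total for sid, n in counts.items()}

# Hypothetical historical task counts per scenario.
history = {"C1": 1, "C2": 1, "C3": 2, "C4": 4, "C5": 5,
           "C6": 2, "C7": 2, "C8": 2, "C9": 1, "C10": 3}
w = weights_from_history(history)
assert abs(sum(w.values()) - 1.0) < 1e-9
assert abs(w["C5"] - 5 / 23) < 1e-9   # most likely scenario gets largest weight
```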
### 4.4 Component Size Estimates
There are two ways of estimating the size of components. First, the component sizes are estimated using the estimation technique of choice. In most organizations, some estimation technique is already in use and could also be used for the method presented in this paper. In many cases the project planning already uses and requires size estimates of the system for work division. These estimates are either equivalent to those, or the estimates for the architecture are one level more fine-grained.
Second, a prototype implementation or a previous release may be available which can be used as basis for the
**Table 1 (continued): Maintenance Profile**
<table>
<thead>
<tr>
<th>Category</th>
<th>Scenario Description</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hardware</td>
<td>C8 Replace blood pumps using revolutions per minute with pumps using actual flow rate (ml/s).</td>
<td>0.087</td>
</tr>
<tr>
<td>Com. and I/O</td>
<td>C9 Add function for uploading treatment data to patient’s digital journal.</td>
<td>0.043</td>
</tr>
<tr>
<td>Algorithm Change</td>
<td>C10 Change controlling algorithm for concentration of dialysis fluid from PI to PID.</td>
<td>0.132</td>
</tr>
<tr>
<td></td>
<td>Sum</td>
<td>1.0</td>
</tr>
</tbody>
</table>
**Table 2: Estimated Component Size**
<table>
<thead>
<tr>
<th>Component</th>
<th>Size (LOC)</th>
</tr>
</thead>
<tbody>
<tr>
<td>HDFTreatment</td>
<td>200</td>
</tr>
<tr>
<td>HaemoDialysisMachine</td>
<td>500</td>
</tr>
<tr>
<td>ConcentrationDevice</td>
<td>100</td>
</tr>
<tr>
<td>TemperatureDevice</td>
<td>100</td>
</tr>
</tbody>
</table>
**Table 2 (continued): Estimated Component Size**
<table>
<thead>
<tr>
<th>Component</th>
<th>Size (LOC)</th>
</tr>
</thead>
<tbody>
<tr>
<td>WeightlossDevice</td>
<td>150</td>
</tr>
<tr>
<td>DialysisFluidFlowDevice</td>
<td>150</td>
</tr>
<tr>
<td>ConcCtrl</td>
<td>175</td>
</tr>
<tr>
<td>TempCtrl</td>
<td>30</td>
</tr>
<tr>
<td>SetCtrl</td>
<td>30</td>
</tr>
<tr>
<td>AcetatPump</td>
<td>100</td>
</tr>
<tr>
<td>ConductivitySensor</td>
<td>100</td>
</tr>
<tr>
<td>FluidHeater</td>
<td>100</td>
</tr>
<tr>
<td>TempSensor</td>
<td>100</td>
</tr>
<tr>
<td>FlowdifferentialPump</td>
<td>100</td>
</tr>
<tr>
<td>FluidPrePump</td>
<td>100</td>
</tr>
<tr>
<td>FluidPostPump</td>
<td>100</td>
</tr>
<tr>
<td>mSTomMol</td>
<td>20</td>
</tr>
<tr>
<td>JouleToPercent</td>
<td>20</td>
</tr>
<tr>
<td>PT100toCelsius</td>
<td>40</td>
</tr>
<tr>
<td>FrequencyToRevolutions</td>
<td>40</td>
</tr>
<tr>
<td>OverHeatAlarm</td>
<td>50</td>
</tr>
<tr>
<td>ReversedFlowAlarm</td>
<td>300</td>
</tr>
<tr>
<td>FluidAlarmHandler</td>
<td>200</td>
</tr>
<tr>
<td>Sum</td>
<td>2805</td>
</tr>
</tbody>
</table>
estimation. The size estimates presented in table 2 are synthesized using the size data from an early prototype implementation.
### 4.5 Script the Scenarios
The scenario scripting, or change impact analysis, is done by investigating the required changes to the components of the application architecture and the severity of each change in percent. At this stage we have not investigated whether any particular scripting method is preferable to others. An introduction to change impact analysis can be found in [4]. The result of scripting the scenarios in our example is shown in table 3.
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Dirty Components</th>
<th>Volume</th>
</tr>
</thead>
<tbody>
<tr>
<td>C1</td>
<td>HDFTreatment (20% change) + new Normaliser type component</td>
<td>0.2*200 + 20 = 60</td>
</tr>
<tr>
<td>C2</td>
<td>ConcentrationDevice (20% change) + ConcCtrl (50% change) + reuse with 10% modification of AcetatePump and ConductivitySensor</td>
<td>0.2*100 + 0.5*175 + 0.1*100 + 0.1*100 = 127.5</td>
</tr>
<tr>
<td>C3</td>
<td>HaemoDialysisMachine (10% change) + new AlarmHandler + new AlarmDevice</td>
<td>0.1*500 + 200 + 100 = 350</td>
</tr>
<tr>
<td>C4</td>
<td>Fluidheater (10% change), remove DutyCycleControl and replace with reused SetCtrl</td>
<td>0.1*100 = 10</td>
</tr>
<tr>
<td>C5</td>
<td>HDFTreatment (50% change)</td>
<td>0.5*200 = 100</td>
</tr>
<tr>
<td>C6</td>
<td>AlarmDetectorDevice (50% change) + HDFTreatment (20% change) + HaemoDialysisMachine (20% change)</td>
<td>0.5*100 + 0.2*200 + 0.2*500 = 190</td>
</tr>
<tr>
<td>C7</td>
<td>see C3</td>
<td>= 350</td>
</tr>
<tr>
<td>C8</td>
<td>new ControllingAlgorithm + new Normaliser</td>
<td>100 + 20 = 120</td>
</tr>
<tr>
<td>C9</td>
<td>HDFTreatment (20% change) + HaemoDialysisMachine (50% change)</td>
<td>0.2*200 + 0.5*500 = 290</td>
</tr>
<tr>
<td>C10</td>
<td>Replacement with new ControllingAlgorithm</td>
<td>= 100</td>
</tr>
</tbody>
</table>
### 4.6 Calculation
The prediction is calculated using the formula presented in figure 1:
$$0.043*60 + 0.043*127.5 + 0.087*350 + 0.174*10 + 0.217*100 + 0.087*190 + 0.087*350 + 0.087*120 + 0.043*290 + 0.132*100 = 145 \text{ LOC / Change}$$
Given that we estimate around 20 maintenance tasks for the predicted period of time, either from first to second release or for the coming year, and assuming that we also have estimated or historical data on maintenance productivity, we are able to extrapolate the estimate from this method to a total maintenance effort estimate. We assume a perfective maintenance productivity similar to the median reported in [12], i.e. 1.7 LOC/day, which amounts to about 0.2 LOC/hour. Then we get the following estimate:
20 changes × 145 LOC/change = 2900 LOC
2900 LOC / 0.2 LOC/hour = 14 500 hours of effort
This would represent a medium project of about 6-7 persons working around 2300 hours per year.
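The whole calculation can be reproduced programmatically. The weights and impact volumes below are taken directly from tables 1 and 3; the productivity figure is the median from [12] as used above.

```python
# Scenario weights from table 1.
weights = {"C1": 0.043, "C2": 0.043, "C3": 0.087, "C4": 0.174, "C5": 0.217,
           "C6": 0.087, "C7": 0.087, "C8": 0.087, "C9": 0.043, "C10": 0.132}
# Impact volumes (LOC changed) from table 3.
volumes = {"C1": 60, "C2": 127.5, "C3": 350, "C4": 10, "C5": 100,
           "C6": 190, "C7": 350, "C8": 120, "C9": 290, "C10": 100}

# Weighted average impact: the expected LOC changed per maintenance task.
loc_per_change = sum(weights[s] * volumes[s] for s in weights)
assert round(loc_per_change) == 145

tasks = 20          # estimated maintenance tasks in the predicted period
productivity = 0.2  # LOC/hour (~1.7 LOC/day, median from [12])
effort_hours = tasks * loc_per_change / productivity
assert round(effort_hours, -2) == 14500
```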
### 5 Related work
Architecture assessment is important for achieving the required software quality attributes. A well-known method is the scenario-based architecture assessment method (SAAM) [15]. The SAAM method of assessing software architecture is primarily intended for assessing the final version of the software architecture and involves all stakeholders in the project. The method we propose differs in that it does not involve all stakeholders, and thus requires fewer resources and less time, but instead provides an instrument to the software architects that allows them to repeatedly evaluate the architecture during design. We recognize the need for stakeholder commitment and believe that these two methods should be used in combination.
In addition, a method based on an ISO standard has been proposed in [10], which suggests a rigorous metrics approach to the problem of software quality evaluation of software architectures. The method makes a clear distinction between internal and external views, where the external view is the view important to, or seen by, the clients of the resulting products. The rigorous ambition makes it hard to believe that the method will be suitable for use in every cycle of an iterative and incremental software architecture design process.
Within the software maintenance community, efforts have been made to predict maintainability. A set of object-oriented metrics was validated in [17] as a good predictor of the software maintenance effort for each module in a software system. However, the metrics suite used requires data that can only be collected from the source code and thus cannot be used for software architecture when no source, or only prototype source, exists.
Software change impact analysis is an established research area within the software maintenance community [4]. A variety of models and techniques exist. However, the techniques are often based on having the software available and its source code and this prohibits their application to software architectures. To the best of our knowledge, no impact analysis method exists that is specific to software architecture.
### 6 Conclusions
We have presented a method for prediction of maintainability from software architecture. The method provides a number of benefits: First, it is practical and has been used during architectural design. Second, its use provides benefits for more than just the prediction, e.g. improved requirements understanding. Third, it combines the usage of design expertise and historical data for validation of scenario profiles. This way the method more efficiently incorporates the uniqueness of the changes for the predicted period of time. Fourth, the method is very slim in terms of effort and produced artifacts. Finally, it is suitable for design processes that iterate frequently with evaluation in every iteration, e.g. as in the ARCS method [3].
Weaknesses of the method include its dependency on a representative maintenance profile and the problem of validating that a profile is representative. In our future work we aim to address this in a number of ways. First, we are planning a study investigating how individual knowledge and expertise affect the representativeness of a maintenance profile, and thus how the activities concerned with generating maintenance profiles should be staffed. Second, we will continue to study industrial maintenance practice and intend to incorporate that knowledge into the method. Finally, we intend to study the sensitivity of the method to variation of the input variables, e.g. whether the method is more or less sensitive to the representativeness of the maintenance scenario profile than we currently think, or whether the size estimates are more significant for the results.
### References
Service-Oriented Security
An Oracle White Paper
July 2010
EXECUTIVE OVERVIEW
Service-Oriented Architecture (SOA) has become an integral part of enterprise software by providing a framework to efficiently develop software as services that are easily shared, reused, and integrated. Nowhere is the need more apparent than in the Identity Management space. Welcome to the age of Service-Oriented Security (SOS).
INTRODUCTION
Today’s applications deal with many facets of security – from data encryption to data access, from policy management to user lifecycle, from industry standards to government regulations – and the list goes on. Application vendors and customers have solved many of these problems in their own ways. Such solutions may work perfectly when the application lives in its own silo. But once integrated into an enterprise environment, these individual silos quickly break down. Customers are faced with duplicate and scattered functionality. Integration with existing customer infrastructure can be challenging and at times impossible.
We have made attempts to bring some of these silos together. For example, system management software exists today spanning across application vendors and their software. LDAP directories have become a common piece in presenting the corporate population in a way that can be consumed by LDAP-enabled applications. Provisioning solutions provide a way to tie in the different user lifecycles needed by individual applications through centralized user management. These bolt-on solutions work to some extent, but are susceptible to changes. With the constant emergence of new standards, new requirements and new applications, it is difficult to keep up.
In taking a holistic view, it becomes obvious that these facets of security problems are in fact common to vendors and customers alike. A set of standards-based security services must be made available to provide the standards, guidelines and necessary infrastructure to support the entire application lifecycle – and in the process, lower the overall cost of developing, integrating, administering, monitoring and maintaining these applications and the security infrastructure. That is the ultimate goal of Service-Oriented Security.
EXTERNALIZING “IDENTITY”
From an Identity Management perspective, a key step towards SOS is “Identity” Externalization – the externalization of user and security policy data from the applications themselves. Service-Oriented Security enables the creation of an identity layer – providing a platform on which all identity-enabled applications are built. The externalization is needed to solve a number of the underlying problems both customers and vendors are facing with today’s approach when dealing with development, deployment and emerging trends.
Application Development Nightmare
Consider an application developer and what she has to deal with on a day-to-day basis regarding identity-related issues.
Gone are the days when “identity” in application development simply meant dealing with usernames and passwords, user tables and profile management screens. Today’s application developer is faced with a myriad of issues covering many different aspects of identity.
Authentication is no longer about simple username/password based schemes. Instead applications are faced with having to support different types of authentication mechanisms ranging from the simple to the exotic depending on deployment and security needs.
Authorization schemes have evolved from the simple ACL based models of yester-year into rich models that rely in complex ways on the very data that they are meant to protect.
Roles are now a fundamental part of application security and functionality, and have evolved from the simple group-like structures into complex business objects that rely on both context and relationships.
LDAP provided a means to externalize users and groups for developers to rely on. In pushing the limits of LDAP, many developers have become experts in LDAP, leveraging massive directories with complex user schemas to handle application requirements.
For a developer, this presents a dilemma. On the one hand, she needs to address these problems. On the other hand, she has very limited knowledge, if any, of the customer infrastructure. The ideal solution would allow the application developer to satisfy all her requirements with the flexibility to integrate with and leverage any customer’s infrastructure. In reality, it is a difficult juggling act for the developer, who must balance building too proprietary a solution against not having all of her requirements satisfied.
This shortcoming extends to application deployment. The application now lies in the hands of the application administrator. He must ensure that the application satisfies all the intended requirements through integration with his company’s existing infrastructure. It can be challenging for the application administrator to fully capture all that is intended by the application vendor. Furthermore, he may have to battle with limitations and additional requirements specific to his enterprise that are not known to the application vendor. These customer issues in turn become challenges that developers and application vendors must now address.
In dealing with the evolution of identity management products, new emerging technologies, standards, corporate policies and government regulations – today’s developer is overburdened with responsibilities way beyond that of fulfilling the business requirements as an application developer.
**Application Black Box**
Many applications have been developed in a silo fashion. Once deployed in an enterprise environment, they can no longer function in a silo manner. Enterprises are increasingly looking at ways to centralize their management and administration, especially in the area of identity management – from identity lifecycle to policy management. This is often hindered by proprietary policies and frameworks from individual applications and vendors.
Furthermore, emerging audit and compliance needs mean that applications can no longer work in a black-box mode. It has become essential for auditors to understand what is going on inside an application, so they can understand the application’s enforcement of controls and policies (or lack thereof).
The management and configuration of these policies have also ceased being the responsibility of IT professionals, and instead became the domain of business administrators – requiring the policies to be presented in an appropriate business-level context.
This meant that policy logic previously embedded within application code must now be taken out of that code and put into some logical container for administration and audit purpose.
These application black boxes and silos force enterprises to take a bolt-on approach when it comes to identity management. Applications are not always readily integrated in a heterogeneous fashion with existing customer infrastructure. Similar functionality existing in each application silo often leads to redundancies in administrative functions. Integration among the various applications and identity management products also becomes a nightmare – resulting in identity and security information often being duplicated and scattered across the enterprise.
Lack of a centralized view of such information not only affects administrators, but presents an even greater challenge for auditors and security officers. In the end, a customer is left with a complex and often rigid solution catered only to what they have today.
**User-Centric Identity Management**
One of the newest trends in identity management is User-Centric Identity, a concept that attempts to put the user in the middle of identity related transactions, and provide greater control over the transaction and their own privacy. It relies on a combination of technology and business process to make sure that the user is involved in the exchange of identity data between interested parties.
As more applications are being built to be part of and interact with the wider internet infrastructure, the need to support user-centric identity is becoming a critical requirement for these applications. Applications would like to avoid the headache of dealing with the increasingly stringent audit and privacy requirements facing identity-based applications if they can. It also means that these applications no longer want to be restricted to the identities in their own repositories, and want to be able to work with identities coming in from external sources. All of this means that applications need to be able to adopt some new and emerging technologies into their business processes.
**INTRODUCING IDENTITY SERVICES**
The concept of Identity Services builds on the basic principles of SOA and forms the building blocks for Service-Oriented Security. It takes all the functionality of an identity management solution that would be bolted onto applications and turns the whole thing inside out, making them available as services in an SOA. Applications following SOA guidelines would be able to leverage these services without worrying about how these services are being provided. It enables enterprises to make identity a transparent, ubiquitous part of their applications, while maintaining consistency in the 4 A’s of identity management - Authentication, Authorization, Administration and Auditing.

The focal point of Identity Services is a set of well-defined identity management providers providing a logical set of services across the identity infrastructure.
**Authentication Service**
The goal of the Authentication Service is to provide an application the right level of assurance regarding the identity of the interacting user. This is the most commonly externalized identity service today, thanks to the ubiquity of many Single Sign-On solutions, Federation solutions, and the standardized security APIs available in development frameworks (like JAAS in J2EE). Increasingly, applications are accessed across federated domains. Standards such as Security Assertion Markup Language (SAML) and WS-Trust allow Authentication Service to protect applications deployed beyond the intranet boundaries.
But today’s authentication service is still stuck thinking of authentication as a binary scheme – as far as an application is concerned, a user is either authenticated or unauthenticated. In reality, the needs of modern applications are well beyond this rudimentary capability.
For example, applications now demand the ability to perform multi-level risk-based authentication. A user may log into an application simply to view their data or manage their profile through password-based authentication. But if she tries to initiate a higher value transaction (such as approving a requisition above a certain pre-defined limit), the application may request the authentication service to further authenticate the user with a stronger token like a biometric token.
Authentication Service must also cater to emerging user-centric technologies like Microsoft CardSpace and OpenID.
Proper authentication is important, but may not be enough in a world where malicious attacks are happening at all times. Authentication Service must extend beyond authentication itself and provide the capability to detect potential fraud and react to it by enforcing additional levels of authentication and raising proper alerts when needed.
**Identity Provider**
The goal of the Identity Provider is to enable the externalization of identity data from the application itself. Identity data is, by its very nature, de-centralized – employee data can be in an HR system, customer data in a CRM system, and there can be an untold number of databases, LDAP systems, even spreadsheets, that hold information about specific identity populations like contractors, vendors, partners, etc. Without a single authoritative source for identity data, applications have to build and maintain user tables to hold the same identity data. In addition, many applications end up building proprietary ways to handle identity lifecycle such as user creation capabilities – causing a huge management overhead.
Centralizing identities into a meta-directory and provisioning are two of the approaches commonly used by enterprises today. The divergence of data and the synchronization cost in a meta-directory solution are overwhelming, and the cost of building and maintaining connectors to all the target systems in a provisioning solution is equally large. Neither solution provides great support to tackle the compliance and privacy issues inherent in any data replication strategy.
The Identity Provider brings order, security and compliance to the identity universe where an application can go to when it wants to retrieve identity data for any identity it cares about.
Data Virtualization presents a rationalized, unified and up-to-date profile of a user to consuming applications independent of where the data resides. This is done through virtualization of data from the underlying authoritative sources - eliminating the need for complicated synchronization. The provider will also enforce minimal disclosure of identity data through a combination of features and controls to satisfy the Principle of Least Knowledge - a key characteristic that enables compliance with security and privacy needs by making identity data available to consumers on a need-to-know basis. The Identity Provider should have the ability to enforce policy-based controls over the identity data.
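The virtualization and least-knowledge ideas above can be sketched as follows. This is a hedged illustration, not a directory protocol: the backing stores, attribute names and the per-consumer disclosure policy are all hypothetical.

```python
# Illustrative sketch of identity virtualization: one profile is assembled
# from several authoritative sources at read time (no synchronization), and
# each consuming application sees only the attributes it needs to know.

HR_SYSTEM  = {"alice": {"name": "Alice Ng", "dept": "Finance"}}
CRM_SYSTEM = {"alice": {"email": "alice@example.com"}}
PAYROLL_DB = {"alice": {"salary_band": "B3"}}

# Need-to-know policy: which attributes each consuming application may see.
DISCLOSURE = {
    "portal_app":  {"name", "email"},
    "finance_app": {"name", "dept", "salary_band"},
}

def virtual_profile(user):
    """Merge attributes from every authoritative source; nothing is copied or synced."""
    profile = {}
    for source in (HR_SYSTEM, CRM_SYSTEM, PAYROLL_DB):
        profile.update(source.get(user, {}))
    return profile

def get_profile(user, consumer):
    """Return the unified profile, filtered to the consumer's entitlement."""
    allowed = DISCLOSURE.get(consumer, set())
    return {k: v for k, v in virtual_profile(user).items() if k in allowed}

print(get_profile("alice", "portal_app"))   # name and email only
```

The point of the design is that the portal never learns the salary band exists, even though the same virtual profile serves the finance application.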
**Role Provider**
The goal of the Role Provider is to enable the externalization of roles from the application itself. Roles have become an integral part of every application’s architecture, from being part of their authorization model to being a business construct used in various workflows and task flows. Roles are often used as an abstract container for users to which business objects can be attached, making those users part of some particular application logic or decision flow (like connecting them to an approval workflow, or assigning them certain privileges).
In today’s world, LDAP groups are often used as an enterprise role system. While still valuable, over time this model has proven to be too simplistic to support the more advanced role requirements.
The Role Provider is a centralized system that supports both enterprise and application roles. Role Management happens at the enterprise role level, which is a simpler, more understandable role structure. In turn, these enterprise roles impact applications in very granular, very specific ways as intended in the application design through the relationship between the enterprise and application roles. Application roles can also be shared with other applications, allowing for easy integration and functional continuity across applications in the enterprise.
The Role Provider can also support more exotic concepts that LDAP groups are simply not capable of handling...
A major goal for a centralized Role Provider is to allow an enterprise to put in place the right controls to ensure integrity of the system by enabling the enforcement of Segregation of Duty (SoD) rules not just within an application, but across related applications as well. It also allows for approval controls over role assignments related to sensitive privileges. As a result, the overall compliance is improved.
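The enterprise-to-application role relationship and the cross-application SoD check described above can be sketched as follows. The role names, mappings and the SoD rule are hypothetical.

```python
# Illustrative sketch: enterprise roles fan out to granular application roles,
# and a Segregation-of-Duty rule is enforced at assignment time, across
# applications rather than within a single one.

ENTERPRISE_TO_APP = {
    "AP_Clerk":   [("payables", "invoice_entry")],
    "AP_Manager": [("payables", "invoice_approval"), ("gl", "journal_view")],
}

# Roles that may not be held together (entering and approving invoices).
SOD_RULES = [{"AP_Clerk", "AP_Manager"}]

def app_roles(enterprise_roles):
    """Resolve enterprise roles to the (application, role) pairs they imply."""
    out = []
    for r in enterprise_roles:
        out.extend(ENTERPRISE_TO_APP.get(r, []))
    return out

def assign(current_roles, new_role):
    """Grant new_role unless the result would violate an SoD rule."""
    proposed = set(current_roles) | {new_role}
    for rule in SOD_RULES:
        if rule <= proposed:
            raise ValueError("SoD violation: " + " + ".join(sorted(rule)))
    return proposed
```

Because the check runs on enterprise roles, the conflict is caught even though the conflicting privileges live in two different applications.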
**Authorization Service**
The goal of the Authorization Service is to enable the externalization of authorization checks and decisions from the application itself into a centralized framework. Being able to decouple authorization from the core application logic allows the application developer to concentrate on their application logic, and frees them from having to rewrite their application every time the authorization needs change. The application developer simply defines the permission checks or entitlements that they care about, and publishes these entitlements to the external service. Authorization policies can now be deployed in the externalized service, with the application simply asking for the appropriate permission check. The system is no longer constrained by what the application developer was able to support in terms of policy capabilities. The authorization policies can now be as complex as needed, since the authorization service has far greater capabilities than any individual application.
This externalized Authorization Service supports both entitlement modeling and fine-grained authorization. The emergence of the eXtensible Access Control Markup Language (XACML) standards allows entitlements to be easily defined in application terms, and complex policy criteria to be defined that rely on both identity and application data. These policy definitions can further rely on other components of the Identity Services, especially the Role Provider. Roles are a key part of authorization policy definitions, and having access to the powerful role concepts in the Role Provider service means that authorization decisions can also be more granular and meet the increasingly complex business needs.
The centralization of authorization decisions allows for better compliance by providing control points in the application environment where SoD checks, auditing and enforcement of policies can be handled in a uniform manner.
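The decoupling described in this section can be sketched as a policy decision point that the application merely queries. This is an illustrative sketch, not Oracle's engine or the XACML wire format; the entitlement name and context attributes are hypothetical.

```python
# Illustrative sketch of externalized authorization: the application only asks
# "is this entitlement permitted in this context?"; the policy logic lives in
# a central decision point and can change without touching application code.

class PolicyDecisionPoint:
    def __init__(self):
        # Policies keyed by entitlement name; each is a predicate over the request context.
        self.policies = {}

    def deploy(self, entitlement, predicate):
        self.policies[entitlement] = predicate

    def decide(self, entitlement, context):
        rule = self.policies.get(entitlement)
        return "Permit" if rule and rule(context) else "Deny"

pdp = PolicyDecisionPoint()

# Deployed centrally, possibly long after the application shipped:
# approvers may approve requisitions only up to their own limit.
pdp.deploy("requisition.approve",
           lambda ctx: "approver" in ctx["roles"] and ctx["amount"] <= ctx["limit"])

# Inside the application, the permission check is a one-liner:
decision = pdp.decide("requisition.approve",
                      {"roles": {"approver"}, "amount": 800, "limit": 1000})
print(decision)   # Permit
```

The application published only the entitlement name; how complex the policy behind it becomes is now the service's concern, not the developer's.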
**Provisioning Service**
The goal of the Provisioning Service is to enable applications to become collaborators in the identity process instead of simply being consumers of identity data. It exposes various services that allow applications to also be involved in the administration of the IAM context.
The Provisioning Service in the Identity Services layer keeps in place those same business controls, even as the other services eliminate the need for data flow. This service provides a centralized administration framework by enabling delegated administration and approval-based administration. It combines the administrative needs of the other Identity Services (like identity and role creation, role membership requests, etc.) and pushes them through a centralized approval, audit and attestation environment. It ensures that those other services have all the data they need, while enabling full enforcement of SoD checks, audit and regulatory policies. Thus,
- Users of an application can request a particular role or entitlement.
- An application can expose self-registration features that add identities to the enterprise environment.
- An auditor can view and attest to a person’s access across the entire enterprise.
In some sense, the Provisioning Service acts as the bus for the connected Identity Lifecycle process.
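The bus metaphor can be sketched as follows: every request, whoever raises it, flows through one approval-and-audit pipeline before any identity service is updated. The approval policy, role names and class shape are hypothetical.

```python
# Illustrative sketch of the Provisioning Service as a bus: role requests are
# routed through approval, every outcome is audited, and an auditor can view
# a person's resulting access in one place.

class ProvisioningBus:
    def __init__(self, approvers):
        self.approvers = approvers   # hypothetical policy: approver -> roles they may grant
        self.audit = []              # every outcome is recorded for attestation
        self.grants = {}             # user -> roles actually provisioned

    def request_role(self, user, role, approver):
        ok = role in self.approvers.get(approver, set())
        if ok:
            self.grants.setdefault(user, set()).add(role)
        self.audit.append((user, role, approver, "granted" if ok else "rejected"))
        return ok

    def attest(self, user):
        """An auditor's view of a person's access across the enterprise."""
        return sorted(self.grants.get(user, set()))

bus = ProvisioningBus({"manager1": {"expense_approver"}})
bus.request_role("alice", "expense_approver", "manager1")   # within approver's scope
bus.request_role("alice", "db_admin", "manager1")           # outside it: rejected, but audited
print(bus.attest("alice"))
```

Note that the rejected request still lands in the audit trail; the controls are enforced and evidenced in the same pass.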
**Controls & Audit Service**
The goal of the Controls & Audit Service, as the name implies, is twofold. Segregation of Duty has shown up numerous times in the services mentioned before. Due to the increasing focus on regulatory requirements and corporate security, internal controls have become a key concern in the enterprise. The Controls Service provides the ability to centrally manage and enforce internal controls and other compliance related activities. When integrated with other Identity Services, Controls Service provides the support to enforce, for example, SoD rules on security policies, to protect access to the application and other crucial data.
The criticality of internal controls means that the management of such controls must be carefully monitored. A comprehensive Audit Service must be available to provide a common service through which to audit the events that are happening within the application. The service can then (at deployment time) be hooked up to a centralized or distributed audit repository as necessary. The service can de-normalize and correlate audit data, providing the enterprise such key features as Event Correlation, Tamper-Proof Audit Trails, Activity Monitoring and Fraud Detection.
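Event correlation of the kind mentioned above can be sketched minimally: events from several applications are correlated per user, and a simple suspicious pattern is flagged. The event shape and the detection rule are hypothetical.

```python
# Illustrative sketch of a central audit service: correlate events for one
# user across applications, and flag a simple fraud pattern (several login
# failures immediately followed by a success).

def correlate(events, user):
    """This user's events, in order, regardless of which application emitted them."""
    return [e for e in events if e["user"] == user]

def looks_suspicious(events, user, threshold=3):
    """Flag `threshold` or more failures followed directly by a success."""
    fails = 0
    for e in correlate(events, user):
        if e["action"] == "login_fail":
            fails += 1
        elif e["action"] == "login_ok":
            if fails >= threshold:
                return True
            fails = 0
    return False

stream = [
    {"app": "portal", "user": "eve", "action": "login_fail"},
    {"app": "portal", "user": "eve", "action": "login_fail"},
    {"app": "crm",    "user": "eve", "action": "login_fail"},
    {"app": "crm",    "user": "eve", "action": "login_ok"},
]
print(looks_suspicious(stream, "eve"))   # True
```

The value of centralization is visible even in this toy: the pattern spans two applications and would be invisible to either one's local log.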
**SERVICE-ORIENTED SECURITY**
As a key concept to SOS, these Identity Services take us towards our vision of Application-Centric Identity Management. Starting from development, through deployment, to end user access, administration and maintenance, the application lifecycle exercises many different aspects of these services. From an application-centric view, the success of Identity Services lies upon its ability to tackle the requirements at each stage of the application lifecycle.
**Design/Development**
The role of an application designer is to outline the business functionality that the application intends to provide and conceptualize the interfaces through which these features are implemented and exposed to the various types of users of the application. These interfaces include UIs, page-flows, tabs, buttons, APIs, web services, etc. From a security standpoint, the designer should carve out a security model to protect these interfaces through a common authorization framework.
The developer is expected to function in an Integrated Development Environment (IDE). The developer’s role is to incorporate the authorization policies defined in the design phase to the actual code. She should also be able to further refine the existing policies. Lastly, the application must be tested in the IDE environment by simulating real world usages.
Identity Services must define a standard format to capture the authorization model - a model that is used not only in the design and development phase, but through the remainder of the application lifecycle. Appropriate tooling and IDE integration must let designers and developers articulate the security policies and their association with the actual code, and allow sharing of the authorization policies through appropriate import/export facilities. And finally, the IDE must provide, in conjunction with Identity Services, the ability to test these security policies in the application before it is packaged.
**Packaging**
A typical IDE provides the ability to package an application for deployment purpose. In the J2EE world, applications can be packaged as different types of archives (JAR, WAR, EAR, etc). At this point, the application contains not only the code artifacts but also the entire footprint of the security artifacts and its association with the code artifacts as implemented by the developers.
Identity Services must define a standard format to package the security artifacts into an application archive. The standard must capture all the security requirements during the design and development stage. Furthermore, such a standard must be understood by the deployment mechanism to carry these security artifacts into the runtime environment when the application is finally deployed. Using this standard, IDEs can package these security artifacts into the application archive in a format that can be consumed at deployment time. The set of security artifacts will essentially be the out-of-the-box security seed data when the application is first deployed.
**Deployment**
Once packaged, an application may be deployed in multiple ways. A release product may be deployed through an installer. A customer application may be deployed through the middleware framework. For any security-aware applications, deployment means more than having the bits up and running. Identity Services must ensure that the security artifacts are correctly deployed and wired in the runtime environment.
Identity Services must define a standard format that is understood by the deployment mechanism to deploy the security artifacts during application deployment. Such format provides the bridge carrying the security artifacts from the design and development phase to the runtime environment. The standard provides the handshake between the IDEs and the different runtime container vendors in a well-known format that adequately captures the security artifacts on both sides.
A runtime environment can be a customer development environment, a staging environment, a QA environment or an actual production environment. A deployed application must be configured to leverage the available identity services before it can be fully functional. Identity Services must provide tools to configure the deployment to complete this runtime identity infrastructure wiring. In addition, Identity Services should provide mechanisms to integrate with external components such as a 3rd party Single Sign-On product, an external policy engine such as an XACML engine, or an external SoD engine.
Identity Services must also handle other application lifecycle activities such as patching, upgrade and migration by ensuring that security artifacts are correctly patched and that any extensions or customizations post-deployment are preserved.
**Runtime Infrastructure**
The runtime infrastructure for an application stretches from interfaces exposed to end users down to the database tier. Identity Services provides, not only the services themselves, but a framework to integrate an application with these services in a provider-agnostic manner.
Wherever a service provider acts as the authoritative source (e.g. the authoritative identity repository or role repository), Identity Services must ensure that the service provides a standard mechanism to expose its data for runtime consumption by other applications in a secured manner through APIs, Web Services, etc.
These service providers must also focus on scalability, performance, high availability, back-up and recovery mechanisms – both for the backend storage and the corresponding services provided for accessing this data.
**Administration**
The administration of an enterprise application is handled by many different types of users. A security administrator may change the underlying security artifacts to alter the behaviour of an application. A delegated administrator may revoke somebody's access to certain parts of the application by changing his roles. An end user may reset her password through self-service. An auditor may want to review evidence of certain policy enforcements protecting sensitive data in the enterprise. Providing such a platform for administration is a key goal of Identity Services.
**Policy Management**
Policy Management deals with the authorization policies themselves. Immediately after deployment of the application, the seeded security artifacts packaged with the application become available in the policy repository. Until these policies are associated with actual end users, the applications driven by these authorization policies will not function properly.
Identity Services must define a standard format to represent security policies in a runtime environment. The standard captures the policies and how these policies are associated with end users during runtime. For example, a role in the application might be mapped to an LDAP group or to a business role in the enterprise role management system.
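The runtime binding mentioned in the example above can be sketched as a mapping table consulted at check time. This is an illustration only; the application role names, the LDAP group DN and the enterprise role are hypothetical.

```python
# Illustrative sketch: at deployment, the application's abstract roles are
# bound to runtime principals - an LDAP group for one role, an enterprise
# role for another - without changing the application's permission checks.

ROLE_MAPPING = {
    # application role  ->  runtime principal it resolves to
    "expense_approver": {"kind": "ldap_group",      "name": "cn=FinanceApprovers"},
    "report_viewer":    {"kind": "enterprise_role", "name": "Analyst"},
}

def user_has_app_role(user_principals, app_role):
    """Resolve the mapped principal and test the user's membership at runtime."""
    target = ROLE_MAPPING.get(app_role)
    return target is not None and target["name"] in user_principals

principals = {"cn=FinanceApprovers", "Employee"}
print(user_has_app_role(principals, "expense_approver"))   # True
print(user_has_app_role(principals, "report_viewer"))      # False
```

Remapping a role to a different group is a deployment-time edit to the table, not a code change - which is the point of standardizing the mapping format.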
The Authorization Service provides a central policy repository along with a centralized framework for policy management to allow administrators the ability to view, modify and create new security policies. It also provides support for audit and compliance such as SoD support where appropriate. The definition of roles and role hierarchy will require this support as it alters the relationship between users and permissions.
**User/Role Management**
In an application-centric view, user management primarily deals with lifecycle of an identity. The Identity Provider provides a central authority on identity data. If multiple user repositories are required, the Identity Provider provides the necessary virtualization to abstract the details from the consuming applications.
Identity Services must define a standard format to represent and to access identity information. The standard should allow an application to specify its own identity requirements and be understood by the Identity Provider to expose the appropriate identity data.
Identity Services also provide the underlying support for user creation, self-registration, self-service, delegated administration, proxy user support and the like, along with the necessary compliance support (such as SoD checks and Restricted Party Screening) and thorough auditing and reporting capabilities in this area. Where provisioning is required, the Provisioning Provider provides support to and from various targets.
Similar to the Identity Provider, the Role Provider provides a central role repository and a framework for role management, role requests, role assignments, role catalogues, etc. As roles become an integral part of the authorization framework, advanced features in the Role Provider provide administrators with greater flexibility and control. As with user management, the Role Provider must provide support for compliance, such as SoD checks during any request, approval and auto-provisioning of role assignments, along with auditing and reporting capabilities on role assignments and role changes.
**Governance, Risk and Compliance Administration**
The need for SoD is just a steppingstone into the broad arena of IT Governance, Risk and Compliance. The ability to enforce security policies within an application is obviously crucial. But more importantly, Identity Services must provide the ability to properly manage and monitor the lifecycle of these security policies and other activities that may alter an individual’s access. Controls and Audit Service provides not only the underlying infrastructure for applications and identity services to plug into, but also the administrative framework to manage policies and controls. This allows the administrator to control and track critical changes to policies and controls; to administer and monitor enforcement of crucial policies such as SoD; to detect and be alerted of any suspicious activities or policy violation. Such reporting and alerting capability provides the necessary input for auditors and security officers alike.
**Hot Pluggable**
One of the goals for Identity Services is to simplify and reduce the number of moving parts in the identity infrastructure. That said, the complex nature of the identity infrastructure makes this a challenging problem. With the heterogeneous nature of today’s enterprise deployment, a Hot-Pluggable framework at every stage of the application lifecycle is essential for Identity Services to succeed.
A developer must be able to use the IDE of her choice. The development framework defined in Identity Services must be portable to all the common IDEs. Identity Services will also not assume a particular runtime environment for the application. In the J2EE world, this implies the ability to run in any vendors’ containers.
In the heterogeneous world, Identity Services must allow customers the flexibility to use any identity management components of their choice where appropriate, provided that they satisfy the provider functionalities required by the services themselves.
**CURRENT OFFERINGS**
Aligning with our vision of Service-Oriented Security, **Oracle Fusion Middleware 11g** brings a comprehensive stack of products through **Oracle WebLogic Server**, **Oracle SOA suite** and **Oracle Identity Management Suite** - addressing many of the key security areas touched upon in this document.
**Development and Packaging**
**Oracle Platform Security Services** (OPSS) provides enterprise product development teams, systems integrators, and independent software vendors with a standards-based, portable, integrated, enterprise-grade security framework for Java SE and Java EE applications. Building upon Java SE and Java EE security, OPSS provides an abstraction layer in the form of standards-based APIs and insulates developers from security and identity management implementation details. It acts as the underlying security platform for many Oracle Fusion Middleware products including Oracle WebLogic Server, SOA Suite, Oracle WebCenter, Oracle Application Development Framework, Oracle Entitlements Server, etc.
**Oracle JDeveloper and Oracle Application Development Framework** (ADF) delivers a powerful development and packaging platform. Oracle JDeveloper provides an award-winning IDE of choice for developing J2EE applications while Oracle ADF provides a visual and declarative approach to Java EE development through Oracle JDeveloper 11g. Together with OPSS, these three components cover all the phases of an application’s life cycle. An application designer can easily model security into the application when building Oracle ADF task flows using Oracle JDeveloper through various security wizards to create the required configuration. Oracle JDeveloper also provides an authorization editor that allows developers to create authorization policies for ADF taskflows and pages without writing a single line of code.
Oracle JDeveloper also contains an embedded application container allowing a developer to test her application and the authorization policies within the IDE. From a packaging standpoint, Oracle JDeveloper packages all the security artifacts as part of the application archive, carrying them to the runtime deployment environment. Once the application has been archived and ready, it can be deployed to a remote WebLogic Server domain using Oracle Enterprise Manager Fusion Middleware Control (FMWControl). OPSS is integrated with FMWControl to allow application security policies and credentials migration to be configured during application deployment.
**Runtime**
**Oracle WebLogic Server (WLS)** provides the J2EE runtime environment. Authorization policies defined during the development phase are made available to the runtime container during deployment. The policy data can be directly deployed or migrated into LDAP to benefit from the performance, scalability and high-availability benefits of an enterprise directory.
The security layer of Oracle WLS including OPSS is also able to leverage many other Identity Management infrastructure components. For single sign-on, Oracle WLS can be configured to use **Oracle Access Manager** as well as other 3rd party single sign-on solutions. Enterprise Identity Stores in the form of enterprise LDAP, including **Oracle Internet Directory** and **Oracle Virtual Directory** and other 3rd party LDAP such as Microsoft Active Directory, can be used as the identity store for authentication and authorization purpose. They can also be used as an LDAP-based policy store by OC4J.
On the authentication front, **Oracle Enterprise Single Sign-On** provides support for desktop single sign-on. **Oracle Identity Federation** delivers a comprehensive multi-protocol federation solution for cross-domain single sign-on. **Oracle Adaptive Access Manager** brings in strong multi-factor and mutual authentication capability. In addition, it provides a risk management capability to analyze real-time data and detect potential fraud.
**Oracle Entitlements Server** delivers a standards-based (XACML 2.0) fine-grained authorization engine that externalizes, unifies, and simplifies the management of complex entitlement policies away from applications themselves. These fine-grained policies can be used to protect user interfaces, business logic, and even databases at runtime. In addition, Oracle Entitlements Server integrates with existing Oracle Identity and Access Management products (such as Oracle Access Manager and Oracle Adaptive Access Manager) to provide a complete end-to-end access management solution covering a wide span of entitlement use cases.
**Administration**
On the user management front, **Oracle Identity Manager (OIM)** brings a well-defined set of identity management and enterprise provisioning functionalities such as self-registration, self-service and delegated administration. OIM provides the ability to provision to various backend targets such as RDBMS, LDAP servers, operating systems, and applications such as SAP, PeopleSoft and Oracle's own E-Business Suite R11/R12. Custom connectors can also be built to cater to other provisioning needs for any custom applications.
**Oracle Identity Analytics** provides an enterprise role lifecycle management solution that can act as the authoritative source for the relationships between business users, organizations, and entitlements, thus enabling automation of role-based provisioning and access control across the IT infrastructure. In addition, it provides an identity and access governance solution with features such as identity audit, access certification, dashboards and reporting support – much needed by today's enterprises to comply with government and corporate compliance regulations. The combined solution of Oracle Identity Manager and Oracle Identity Analytics marries identity governance and identity administration, creating a powerful combination in driving the automation and compliance needs of any application in an enterprise environment.
At the platform level, **Oracle Enterprise Manager Fusion Middleware Control** provides the necessary means to manage OPSS policies without directly modifying the policy store, which would be LDAP or RDBMS in many production environments. A new graphical user interface separate from FMWControl called Authorization Policy Manager (APM) is also on the roadmap - designed for applications with advanced authorization policies.
At the system management level, Oracle Enterprise Manager provides system management and performance monitoring capability across the board for system and application administrators. **Oracle Management Pack for Identity Management** streamlines the management and monitoring for Oracle Identity Management to improve service levels and ensure high availability. It provides a single console to manage systems spanning directories, firewalls, applications servers, business applications for Oracle and many non-Oracle systems – providing automated configuration management, fault isolation and diagnosis, and one-step discovery of systems.
The two products also share similar functionality in areas such as user and role management, approval workflows, notification, etc. One of the goals is to centralize such shared services not only across OIM and ORM, but across the entire Oracle Identity Management suite, to improve the integration and administrative experience.
**Governance, Risk Management and Compliance (GRC)**
GRC is an important area in user/role provisioning as well as in the overall authorization policy management where permissions/privileges and role hierarchy/inheritance can be modified. Oracle Identity Manager integrates with **Oracle Application Access Control Governor**, part of the **Oracle Governance, Risk, and Compliance Suite**, to provide preventative Segregation of Duty support during access provisioning for Oracle E-Business Suite and PeopleSoft. OIM can also integrate with other 3rd party SoD engines that support other ERPs, such as SAP.
**Standards**
One key aspect in the application lifecycle is the ability to smoothly carry security information from one stage to the other. The problem is neither component-specific nor vendor-specific. Yet standards-based formats are lacking in many of these areas where standards should be defined.
On the authorization front, eXtensible Access Control Markup Language (XACML) is a starting point in capturing the authorization model. It provides a policy language to define access control. Much work is being done to figure out how XACML can be extended to other areas of the application lifecycle – for example, how to represent XACML policies during development and deployment; how to efficiently provide access to XACML policies during runtime.
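As a concrete taste of the policy language, a minimal XACML 2.0-style rule might look like the following. The match functions and attribute identifiers are taken from the XACML 2.0 specification and its RBAC profile; the policy id, role value and action value are hypothetical.

```xml
<!-- Minimal illustrative XACML 2.0-style policy: permit the hypothetical
     "approve-requisition" action to subjects holding the "approver" role. -->
<Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
        PolicyId="requisition-approval"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:deny-overrides">
  <Target/>
  <Rule RuleId="permit-approvers" Effect="Permit">
    <Target>
      <Subjects>
        <Subject>
          <SubjectMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">approver</AttributeValue>
            <SubjectAttributeDesignator
                AttributeId="urn:oasis:names:tc:xacml:2.0:subject:role"
                DataType="http://www.w3.org/2001/XMLSchema#string"/>
          </SubjectMatch>
        </Subject>
      </Subjects>
      <Actions>
        <Action>
          <ActionMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">approve-requisition</AttributeValue>
            <ActionAttributeDesignator
                AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                DataType="http://www.w3.org/2001/XMLSchema#string"/>
          </ActionMatch>
        </Action>
      </Actions>
    </Target>
  </Rule>
</Policy>
```

Requests that match no rule evaluate to NotApplicable; the open questions in the text - how such policies are represented at development and deployment time, and accessed efficiently at runtime - sit outside what the core schema defines.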
On the identity front, Oracle’s Identity Governance Framework is leading the way in defining the standards.
From the application end, Client Attribute Requirements Markup Language (CARML) presents an interesting option in dealing with application user attribute mapping. It provides a declarative way for designers and developers to communicate their "identity requirements" to deployment administrators – paving the way for identity virtualization. In addition, a secondary goal with CARML is to support expression of privacy constraints for "identity data".
On the identity provider end, Attribute Authority Policy Markup Language (AAPML) allows identity sources to specify constraints on how information can be used by applications.
Together, the two standards define the handshake between applications and the identity providers, providing a way to govern and protect the flow of user data and identity information.
Built upon Oracle Fusion Middleware, the next-generation Oracle Fusion Applications promises to demonstrate the Service-Oriented Security model by leveraging the very set of Oracle technology mentioned here. It mirrors every step of the application lifecycle as described above and leverages the various identity services provided by Oracle Fusion Middleware.
**CONCLUSION**
Service-Oriented Security presents a unique set of challenges for many aspects of the identity management space. The introduction of Identity Services brings real value in addressing many of the shortcomings in today's solutions. From an application-centric perspective, the ultimate goal is to arrive at a standards-based application lifecycle that addresses all the identity management needs by providing a framework for development, deployment and runtime support – a model that is flexible enough to support heterogeneity and other existing or emerging industry standards – a model that is embraced not only by Oracle, but by other software vendors – and, more importantly, by any customer wishing to develop applications in an application-centric identity management environment.
*For more information on Oracle’s Identity Management, Go to [http://www.oracle.com/identity](http://www.oracle.com/identity)*
|
olmocr_science_pdfs
|
2024-12-12
|
2024-12-12
|
bcfcdbbf07aa02a651d70d60a70c64e2eed4cd54
|
SOFTWARE MAINTENANCE & EVOLUTION
LINGI2252 – PROF. KIM MENS
CODE REFACTORING
Refactoring: Improving the Design of Existing Code
One of the best references on software refactoring, with illustrative examples in Java:
*Refactoring: Improving the Design of Existing Code.*
See also [www.refactoring.com](http://www.refactoring.com)
Overview of this presentation
A. Refactoring basics
B. Categories of refactorings
C. Refactoring tools
D. Words of warning
A. REFACTORING BASICS
What is refactoring?
A **refactoring** is a software transformation that
**preserves the external behaviour** of the software;
**improves the internal structure** of the software.
It is a disciplined way to clean up code that minimises the chances of introducing bugs.
Definition of Refactoring [Fowler 2000]
[noun] “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behaviour”
[verb] “to restructure software by applying a series of refactorings without changing its observable behaviour”
typically with the purpose of making the software easier to understand and modify
Why should you refactor?
**THE LIFE OF A SOFTWARE ENGINEER.**
Clean slate. Solid foundations. This time I will build things the right way.
**MUCH LATER...**
Oh my. I’ve done it again, haven’t I?
Why should you refactor?
To improve the design of software
To counter code decay (software ageing)
refactoring helps code to remain in shape
To increase software comprehensibility
To find bugs and write more robust code
To increase productivity (program faster)
on a long term basis, not on a short term basis
Why should you refactor?
To reduce costs of software maintenance
To reduce testing
automatic refactorings are guaranteed to be behaviour preserving
To prepare for / facilitate future customisations
To turn an OO application into a framework
To introduce design patterns in a behaviourally preserving way
When should you refactor?
Whenever you see the need for it
Do it all the time in little bursts
Not on a pre-set periodical basis
Apply the rule of three
1st time: implement from scratch
2nd time: implement something similar by code duplication
3rd time: do not implement similar things again, but refactor
When should you refactor?
Refactor when adding new features or functions
Especially if feature is difficult to integrate with the existing code
Refactor during bug fixing
If a bug is very hard to trace, refactor first to make the code more understandable, so that you can understand better where the bug is located
Refactor during code reviews
When should you refactor?
Refactoring also fits naturally in the *agile methods* philosophy.
Is needed to address the principle “Maintain simplicity”
Wherever possible, actively work to eliminate complexity from the system.
By refactoring the code.
What do you tell the manager?
When (s)he’s technically aware (s)he’ll understand why refactoring is important.
When (s)he’s interested in quality, (s)he’ll understand that refactoring will improve software quality.
When (s)he’s only interested in the schedule, don’t tell that you’re doing refactoring, just do it anyway.
In the end refactoring will make you more productive.
When shouldn’t you refactor?
When the existing code is such a mess that although you could refactor it, it would be easier to rewrite everything from scratch instead.
When you are too close to a deadline.
The productivity gain would appear after the deadline and thus be too late.
However, when you are not close to a deadline you should never put off refactoring because you don’t have the time.
Not having enough time usually is a sign that refactoring is needed.
B. CATEGORIES OF REFACTORINGS
### Categories of refactorings
<table>
<thead>
<tr>
<th>Small refactorings</th>
<th>Big refactorings</th>
</tr>
</thead>
<tbody>
<tr>
<td>(de)composing methods</td>
<td>Tease apart inheritance</td>
</tr>
<tr>
<td>moving features between objects</td>
<td>Extract hierarchy</td>
</tr>
<tr>
<td>organising data</td>
<td>Convert procedural design to objects</td>
</tr>
<tr>
<td>simplifying conditional expressions</td>
<td>Separate domain from presentation</td>
</tr>
<tr>
<td>dealing with generalisation</td>
<td></td>
</tr>
<tr>
<td>simplifying method calls</td>
<td></td>
</tr>
</tbody>
</table>
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: (de)composing methods
1. Extract Method
2. Inline Method
3. Inline Temp
4. Replace Temp With Query
5. Introduce Explaining Variable
6. Split Temporary Variable
7. Remove Assignments to Parameter
8. Replace Method With Method Object
9. Substitute Algorithm
Legend:
= we will zoom in on these
= home reading
(De)composing methods: 1. Extract Method
What? When you have a fragment of code that can be grouped together, turn it into a method with a name that explains the purpose of the method
Why? improves clarity, removes redundancy
Example:
```java
public void accept(Packet p) {
if ((p.getAddressee() == this) &&
(this.isASCII(p.getContents())))
this.print(p);
else
super.accept(p); }
```
```java
public void accept(Packet p) {
    if (this.isDestFor(p)) this.print(p);
    else super.accept(p); }
```
```java
public boolean isDestFor(Packet p) {
return ((p.getAddressee() == this) &&
(this.isASCII(p.getContents()))); }
```
Beware of local variables!
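The warning about local variables can be made concrete with a small sketch (the `PriceReport` class and its method names are invented for illustration): a local variable that the extracted fragment reads must become a parameter of the new method.

```java
class PriceReport {
    String report(int quantity, int unitPrice) {
        int total = quantity * unitPrice;   // local variable read by the fragment
        return format(total);               // after extraction: passed as a parameter
    }

    // Extracted method: the local 'total' becomes an explicit parameter.
    private String format(int total) {
        return "Total: " + total;
    }
}
```

A local variable *assigned* inside the fragment would instead have to become the extracted method's return value.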
(De)composing methods: 2. Inline Method
(OPPOSITE OF EXTRACT METHOD)
**What?** When a method’s body is just as clear as its name, put the method’s body into the body of its caller and remove the method.
**Why?** To remove too much indirection and delegation.
**Example:**
```java
boolean getRating() {
    return moreThanFiveLateDeliveries();
}
boolean moreThanFiveLateDeliveries() {
    return _numberOfLateDeliveries > 5;
}
```
```java
boolean getRating() {
    return (_numberOfLateDeliveries > 5);
}
```
(De)composing methods: 3. Inline Temp
What? When you have a temp that is assigned once with a simple expression, and the temp is getting in the way of refactorings, replace all references to that temp with the expression.
Why? (Part of **Replace Temp with Query** refactoring)
Example:
double basePrice = anOrder.basePrice();
return (basePrice > 100)
return (anOrder.basePrice() > 100)
(De)composing methods: 4. Replace Temp with Query
What? When you use a temporary variable to hold the result of an expression, extract the expression into a method and replace all references to the temp with a method call.
Why? Cleaner code
Example:
```java
double basePrice = _quantity * _itemPrice;
if (basePrice > 1000)
return basePrice * 0.95;
else
return basePrice * 0.98;
```
```java
double basePrice()
{
return _quantity * _itemPrice;
}
if (basePrice() > 1000)
return basePrice() * 0.95;
else
return basePrice() * 0.98;
...
```
(De)composing methods:
5. Introduce Explaining Variable
**What?** When you have a complex expression, put the result of the (parts of the) expression in a temporary variable with a name that explains the purpose.
**Why?** Breaking down complex expressions for clarity.
**Example:**
```java
if ((platform.toUpperCase().indexOf("MAC") > -1) &&
(browser.toUpperCase().indexOf("IE") > -1) &&
wasInitialized() && resize > 0 )
{
//ACTION
}
```
```java
final boolean isMacOs = platform.toUpperCase().indexOf("MAC") > -1;
final boolean isIEBrowser = browser.toUpperCase().indexOf("IE") > -1;
final boolean wasResized = resize > 0;
```
```java
if (isMacOs && isIEBrowser && wasInitialized() && wasResized){
//ACTION
}
```
**What?** When you assign a temporary variable more than once, but it is not a loop variable nor a collecting temporary variable, make a separate temporary variable for each assignment.
**Why?** Using temps more than once is confusing.
**Example:**
```java
double temp = 2 * (_height + _width);
System.out.println (temp);
temp = _height * _width;
System.out.println (temp);
```
```java
final double perimeter =
2 * (_height + _width);
System.out.println (perimeter);
final double area = _height * _width;
System.out.println (area);
```
(De)composing methods:
7. Remove Assignments To Parameters
What? When the code assigns to a parameter, use a temporary variable instead.
Why? Lack of clarity and confusion between “pass by value” and “pass by reference”
Example:
```java
int discount (int inputVal, int quantity, int yearToDate){
if (inputVal > 50) inputVal -= 2;
... MORE CODE HERE ...
}
```
```java
int discount (int inputVal, int quantity, int yearToDate){
int result = inputVal;
if (inputVal > 50) result -= 2;
... MORE CODE HERE ...
}
```
(De)composing methods:
8. Replace Method with Method Object
**What?** When you have local variables but cannot use **extract method**, turn the method into its own object, with the local variables as its fields.
**Why?** Extracting pieces out of large methods makes things more comprehensible.
**Example:**
Before: `Order` has a `price()` method containing a long computation with local variables `primaryBasePrice` and `secondaryBasePrice`.

After: those locals become fields of a new `PriceCalculator` class with a `compute()` method, and `price()` reduces to:

```java
return new PriceCalculator(this).compute();
```
(De)composing methods: 9. Substitute Algorithm
**What?** When you want to replace an algorithm with a clearer alternative, replace the body of the method with the new algorithm.
**Why?** To replace complicated algorithms with clearer ones.
**Example:**
```java
String foundPerson(String[] people) {
    for (int i = 0; i < people.length; i++) {
        if (people[i].equals("John")) return "John";
        if (people[i].equals("Jack")) return "Jack";
    }
    return "";
}
```
```java
String foundPerson(String[] people) {
    List<String> candidates = Arrays.asList("John", "Jack");
    for (int i = 0; i < people.length; i++)
        if (candidates.contains(people[i])) return people[i];
    return "";
}
```
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: moving features between objects
1. Move Method
2. Move Field
3. Extract Class
4. Inline Class
5. Hide Delegate
6. Remove Middle Man
7. Introduce Foreign Method
8. Introduce Local Extension
Legend:
- = we will zoom in on these
- = home reading
Moving features between objects:
1,2. Move Method / Field
**What?** When a method (resp. field) is used by or uses more features of another class than its own, create a similar method (resp. field) in the other class; remove or delegate original method (resp. field) and redirect all references to it.
**Why?** Essence of refactoring
**Example:**
```
Before:  aMethod() lives in Class 1, but mainly uses features of Class 2.
After:   aMethod() is moved to Class 2; Class 1 delegates to it (or drops it).
```
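In Java this looks roughly as follows (a sketch based on Fowler's well-known `Account`/`AccountType` example; the charge formula here is invented):

```java
class AccountType {
    private final boolean premium;
    AccountType(boolean premium) { this.premium = premium; }

    // Moved here: the computation uses AccountType data more than Account data.
    double overdraftCharge(int daysOverdrawn) {
        if (premium) return 10 + Math.max(daysOverdrawn - 7, 0) * 0.85;
        return daysOverdrawn * 1.75;
    }
}

class Account {
    private final AccountType type;
    private final int daysOverdrawn;
    Account(AccountType type, int daysOverdrawn) {
        this.type = type;
        this.daysOverdrawn = daysOverdrawn;
    }

    // The original method now simply delegates to the moved one.
    double overdraftCharge() { return type.overdraftCharge(daysOverdrawn); }
}
```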
What? When you have a class doing work that should be done by two, create a new class and move the relevant fields and methods to the new class.
Why? Large classes are hard to understand.
Example:
<table>
<thead>
<tr>
<th>Before: Person</th>
<th>After: Person</th>
<th>After: PhoneNumber</th>
</tr>
</thead>
<tbody>
<tr><td>name</td><td>name</td><td>areaCode</td></tr>
<tr><td>officeAreaCode</td><td>getOfficePhone()</td><td>number</td></tr>
<tr><td>officeNumber</td><td>getHomePhone()</td><td>getPhoneNumber()</td></tr>
<tr><td>homeAreaCode</td><td></td><td></td></tr>
<tr><td>homeNumber</td><td></td><td></td></tr>
<tr><td>getOfficePhone()</td><td></td><td></td></tr>
<tr><td>getHomePhone()</td><td></td><td></td></tr>
</tbody>
</table>
After extraction, Person holds two PhoneNumber objects (one for the office number, one for the home number).
Moving features between objects:
4. Inline Class
**What?** When you have a class that does not do very much, move all its features into another class and delete it.
**Why?** To remove useless classes (as a result of other refactorings).
**Example:**
<table>
<thead>
<tr>
<th>Before: Person</th>
<th>Before: PhoneNumber</th>
<th>After: Person</th>
</tr>
</thead>
<tbody>
<tr><td>name</td><td>areaCode</td><td>name</td></tr>
<tr><td>getPhoneNumber()</td><td>number</td><td>officeAreaCode</td></tr>
<tr><td></td><td>getPhoneNumber()</td><td>officeNumber</td></tr>
<tr><td></td><td></td><td>getPhoneNumber()</td></tr>
</tbody>
</table>
In the example, the features of PhoneNumber (areaCode, number, getPhoneNumber()) are moved into Person, and the PhoneNumber class is deleted.
Moving features between objects:
5. Hide Delegate
**What?** When you have a client calling a delegate class of an object, create methods on the server to hide the delegate.
**Why?** Increase encapsulation.
**Example:**
```
Before:  Client --> Person.getDepartment() --> Department.getManager()
After:   Client --> Person.getManager()     (Person hides its Department)
```
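A minimal Java sketch of Hide Delegate (class bodies invented for illustration): the client asks the Person directly, so it no longer needs to know about Department at all.

```java
class Department {
    private final String manager;
    Department(String manager) { this.manager = manager; }
    String getManager() { return manager; }
}

class Person {
    private final Department department;
    Person(Department department) { this.department = department; }

    // Hide Delegate: instead of person.getDepartment().getManager(),
    // clients call person.getManager(); Department stays encapsulated.
    String getManager() { return department.getManager(); }
}
```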
Moving features between objects:
6. Remove Middle Man
**What?** When a class is doing too much simple delegation, get the client to call the delegate directly.
**Why?** To remove too much indirection (as a result of other refactorings).
**Example:**
Before: the Client calls Person.getManager(), which merely delegates to Department.getManager().
After: the Client obtains the Department from Person and calls Department.getManager() directly.
Moving features between objects: 7. Introduce Foreign Method
**What?** When a server class needs an additional method, but you cannot modify the class, create a method in the client class with an instance of the server class as its first argument.
**Why?** To introduce one additional service.
**Example:**
```java
Date newStart = new Date (previousEnd.getYear(),
previousEnd.getMonth(), previousEnd.getDate() + 1);
```
```java
Date newStart = nextDay(previousEnd);
```
```java
private static Date nextDay(Date arg) {
return new Date (arg.getYear(),
arg.getMonth(), arg.getDate() + 1);
}
```
What? When a server class needs several additional methods but you cannot modify the class, create a new class containing the extra methods; make the extension class a subclass or wrapper.
Why? To introduce several additional services.
Example:
<table>
<thead>
<tr>
<th>Client Class</th>
</tr>
</thead>
<tbody>
<tr>
<td>nextDayDate(Date): Date</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Date</th>
</tr>
</thead>
</table>
<table>
<thead>
<tr>
<th>MfDate</th>
</tr>
</thead>
<tbody>
<tr>
<td>nextDay(): Date</td>
</tr>
</tbody>
</table>
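The MfDate idea can be sketched in Java. Rather than the deprecated `Date(int,int,int)` constructor used above, this version works in milliseconds; `nextDay()` is the extra service the client needed (cf. Introduce Foreign Method):

```java
// Local extension: java.util.Date cannot be modified, so a subclass adds
// the additional services the client needs.
class MfDate extends java.util.Date {
    MfDate(long millis) { super(millis); }

    // Hypothetical extra service: the same date, one day later.
    MfDate nextDay() { return new MfDate(getTime() + 24L * 60 * 60 * 1000); }
}
```

A wrapper holding a `Date` field works equally well when subclassing is impossible (e.g. for a final class).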
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: organising data
1. Encapsulate field
2. Replace data value with object
3. Change value to reference
4. Change reference to value
5. Replace array with object
6. Duplicate observed data
7. Change unidirectional association to bidirectional
8. Change bidirectional association to unidirectional
9. Replace magic number with symbolic constant
10. Encapsulate collection
11. Replace record with data class
12. Replace subclass with fields
13-16. Replace type code with class / subclass / state / strategy
Organising Data:
1. Encapsulate Field
**What?** There is a public field. Make it private and provide accessors.
**Why?** Encapsulating state increases modularity, and facilitates code reuse and maintenance.
When the state of an object is represented as a collection of private variables, the internal representation can be changed without modifying the external interface.
**Example:**
```java
public String name;
```
```java
private String name;
public String getName() {
    return this.name;
}
public void setName(String s) {
    this.name = s;
}
```
```java
private String contents;
public String getContents() {
    return this.contents;
}
public void setContents(String s) {
    this.contents = s;
}
```
```java
private Document doc;
public String getContents() {
    return this.doc.getContents();
}
public void setContents(String s) {
    this.doc.setContents(s);
}

public class Document {
    private String contents;
    public String getContents() {
        return this.contents;
    }
    public void setContents(String s) {
        this.contents = s;
    }
}
```
Organising Data:
13. Replace Type Code with Subclass
**PROBLEM**
YOU HAVE A CODED TYPE FIELD WHOSE VALUES DIRECTLY TRIGGER DIFFERENT BEHAVIOUR IN CONDITIONALS.
**What?** An immutable type code affects the behaviour of a class
**Example:**
<table>
<thead>
<tr>
<th>Employee</th>
</tr>
</thead>
<tbody>
<tr>
<td>const Engineer=0</td>
</tr>
<tr>
<td>const Salesman=1</td>
</tr>
<tr>
<td>const Manager=2</td>
</tr>
<tr>
<td>type:Int</td>
</tr>
</tbody>
</table>
**SOLUTION**
CREATE SUBCLASSES FOR EACH VALUE OF THE CODED TYPE. EXTRACT RELEVANT BEHAVIORS FROM THE ORIGINAL CLASS TO THESE SUBCLASSES. REPLACE THE CONTROL FLOW CODE WITH POLYMORPHISM.
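The solution can be sketched in Java (the method and pay amounts are invented for illustration): each value of the coded type becomes a subclass, and the conditional on the type code becomes a polymorphic method.

```java
abstract class Employee {
    abstract int payAmount();   // was: switch (type) { case ENGINEER: ...; case MANAGER: ... }
}

class Engineer extends Employee {
    int payAmount() { return 100; }   // the ENGINEER branch of the old conditional
}

class Manager extends Employee {
    int payAmount() { return 150; }   // the MANAGER branch of the old conditional
}
```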
When? (Replace Type Code with State/Strategy) If subclassing cannot be used, e.g. because the type changes dynamically during the object's lifetime (e.g. promotion of employees)
Example:
<table>
<thead>
<tr>
<th>Employee</th>
<th>EmployeeType</th>
</tr>
</thead>
<tbody>
<tr>
<td>const Engineer=0</td>
<td>Engineer</td>
</tr>
<tr>
<td>const Salesman=1</td>
<td>Salesman</td>
</tr>
<tr>
<td>const Manager=2</td>
<td>Manager</td>
</tr>
</tbody>
</table>
Organising Data: 15,16. Replace Type Code with State/Strategy
Makes use of state pattern or strategy design pattern
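A minimal Java sketch of the state/strategy variant (class names and amounts invented): the type lives in a replaceable state object, so it can change at runtime without subclassing the owner.

```java
interface StaffType { int payAmount(); }

class EngineerType implements StaffType {
    public int payAmount() { return 100; }
}

class ManagerType implements StaffType {
    public int payAmount() { return 150; }
}

class StaffMember {
    private StaffType type;
    StaffMember(StaffType type) { this.type = type; }
    void promoteTo(StaffType type) { this.type = type; }  // dynamic type change
    int payAmount() { return type.payAmount(); }
}
```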
Organising Data:
12. Replace Subclass with Fields
**What?** Subclasses vary only in methods that return constant data
**Solution:** Change methods to superclass fields and eliminate subclasses
**Example:**
```
Before:  Person <|-- Male, Female   (subclasses)
After:   Person { sex: 'M' | 'F' }  (field)
```
Similar to **replace inheritance with aggregation**
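In Java this might look as follows (a sketch; the factory methods are one common way to keep construction convenient once the subclasses are gone):

```java
// Male/Female varied only in methods returning constants, so they
// collapse into a field on Person.
class Person {
    private final char sex;          // was: subclasses Male and Female
    private Person(char sex) { this.sex = sex; }
    static Person male()   { return new Person('M'); }
    static Person female() { return new Person('F'); }
    boolean isMale() { return sex == 'M'; }
}
```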
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: simplifying conditional expressions
1. Decompose conditional
2. Consolidate conditional expression
3. Consolidate duplicate conditional fragments
4. Remove control flag
5. Replace nested conditional with guard clauses
6. Replace conditional with polymorphism
7. Introduce null objects
8. Introduce assertion
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: dealing with generalisation
1. Push down method / field
2. Pull up method / field / constructor body
3. Extract subclass / superclass / interface
4. Collapse hierarchy
5. Form template method
6. Replace inheritance with delegation (and vice versa)
Dealing with Generalisation:
1. Push Down Method
When behaviour on a superclass is relevant only for some of its subclasses, move it to those subclasses.
Dealing with Generalisation:
2. Pull Up Method
Simple variant: look for methods with same name in subclasses that do not appear in superclass
More complex variant: do not look at the name but at the behaviour of the method
If the method that is being pulled up already exists in the superclass as an abstract method, make it concrete with the common behaviour
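The simple variant can be sketched as follows (the `Party` hierarchy is Fowler's usual example; bodies here are invented): both subclasses defined an identical method, which is pulled up into the superclass.

```java
abstract class Party {
    private final String name;
    Party(String name) { this.name = name; }
    String name() { return name; }   // pulled up from Customer and Supplier
}

class Customer extends Party { Customer(String name) { super(name); } }
class Supplier extends Party { Supplier(String name) { super(name); } }
```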
Dealing with Generalisation: 3. Extract Superclass
When you have 2 classes with similar features
Small refactorings
(de)composing methods [9 refactorings]
moving features between objects [8 refactorings]
organising data [16 refactorings]
simplifying conditional expressions [8 refactorings]
dealing with generalisation [12 refactorings]
simplifying method calls [15 refactorings]
Small Refactorings: simplifying method calls
1. Rename method
2. Add parameter
3. Remove parameter
4. Separate query from modifier
5. Parameterise method
6. Replace parameter with method
7. Replace parameter with explicit methods
8. Preserve whole object
9. Introduce parameter object
10. Remove setting method
11.Hide method
12. Replace constructor with factory method
13. Encapsulate downcast
14. Replace error code with exception
15. Replace exception with test
### Simplifying method calls:
**9. Introduce Parameter Object**
**Problem**
Your methods contain a repeating group of parameters.
<table>
<thead>
<tr>
<th>Before: Customer</th>
<th>After: Customer</th>
</tr>
</thead>
<tbody>
<tr><td>amountInvoicedIn(from:Date, to:Date)</td><td>amountInvoicedIn(r:DateRange)</td></tr>
<tr><td>amountReceivedIn(from:Date, to:Date)</td><td>amountReceivedIn(r:DateRange)</td></tr>
<tr><td>amountOverdueIn(from:Date, to:Date)</td><td>amountOverdueIn(r:DateRange)</td></tr>
</tbody>
</table>
**DateRange**
<table>
<thead>
<tr><th>from : Date</th></tr>
</thead>
<tbody>
<tr><td>to : Date</td></tr>
</tbody>
</table>
**Solution**
Replace these parameters with an object and use that object as parameter instead.
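A sketch of one of the methods after the refactoring (the invoice bookkeeping is invented for illustration, and dates are epoch milliseconds for brevity):

```java
import java.util.*;

class DateRange {
    final long from, to;                 // the (from, to) pair that travelled together
    DateRange(long from, long to) { this.from = from; this.to = to; }
    boolean includes(long t) { return t >= from && t <= to; }
}

class Invoice {
    final long date; final int amount;
    Invoice(long date, int amount) { this.date = date; this.amount = amount; }
}

class Customer {
    private final List<Invoice> invoices = new ArrayList<>();
    void add(Invoice i) { invoices.add(i); }

    // was: amountInvoicedIn(long from, long to)
    int amountInvoicedIn(DateRange r) {
        int sum = 0;
        for (Invoice i : invoices) if (r.includes(i.date)) sum += i.amount;
        return sum;
    }
}
```

A bonus of the new object is that behaviour such as `includes` can migrate onto it.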
Simplifying method calls:
14. Replace Error Code with Exception
What? When a method returns a special code to indicate an error, throw an exception instead.
Why? Clearly separate normal processing from error processing.
Example:
```java
int withdraw(int amount) {
    if (amount > balance)
        return -1;
    else {
        balance -= amount;
        return 0;
    }
}
```
```java
void withdraw(int amount) throws BalanceException {
if (amount > balance) throw new BalanceException();
balance -= amount;
}
```
Categories of refactorings (according to [Fowler2000])
Small refactorings
(de)composing methods [9]
moving features between objects [8]
organising data [16]
simplifying conditional expressions [8]
dealing with generalisation [12]
simplifying method calls [15]
Big refactorings
Tease apart inheritance
Extract hierarchy
Convert procedural design to objects
Separate domain from presentation
Big refactorings
Require a large amount of time (> 1 month)
Require a degree of agreement among the development team
No instant satisfaction, no visible progress
Big Refactorings
1. Tease apart inheritance
2. Extract hierarchy
3. Convert procedural design to objects
4. Separate domain from presentation
Big refactorings:
1. Tease apart inheritance
**Problem**
A tangled inheritance hierarchy that is doing 2 jobs at once
**Solution**
Create 2 separate hierarchies and use delegation to invoke one from the other
Big refactorings:
1. Tease apart inheritance
**Approach**
Identify the different jobs done by the hierarchy.
Extract least important job into a separate hierarchy.
Use *extract class* to create common parent of new hierarchy.
Create appropriate subclasses.
Use *move method* to move part of the behaviour from the old hierarchy to the new one.
Big refactoring:
1. Tease apart inheritance
Big refactorings:
1. Tease apart inheritance
Related design patterns
Bridge
decouples an abstraction from its implementation so that the two can vary independently
Strategy / Visitor / Iterator / State
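The end result can be sketched in Java (based on Fowler's `Deal`/`PresentationStyle` example; the bodies are invented): the domain hierarchy keeps one job and delegates the other to the new hierarchy.

```java
// The tangled hierarchy did "deal" and "presentation" jobs at once;
// the presentation job is extracted and invoked through delegation.
interface PresentationStyle { String render(String data); }

class TabularStyle implements PresentationStyle {
    public String render(String data) { return "[table] " + data; }
}

class Deal {
    private final PresentationStyle style;  // was: TabularActiveDeal, TabularPassiveDeal, ...
    Deal(PresentationStyle style) { this.style = style; }
    String present(String data) { return style.render(data); }
}
```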
Big refactorings:
2. Extract hierarchy
Problem
An overly-complex class that is doing too much work, at least in part through many conditional statements.
Solution
Turn class into a hierarchy where each subclass represents a special case.
Big refactorings:
2. Extract hierarchy
Approach
Create a subclass for each special case.
Use one of the following refactorings to return the appropriate subclass for each variation:
- replace constructor with factory method
- replace type code with subclasses
- replace type code with state/strategy
Take methods with conditional logic and apply:
- replace conditional with polymorphism
Calculating electricity bills.
Lots of conditional logic needed to cover many different cases:
- different charges for summer/winter
- different tax rates
- different billing plans for personal / business / government / …
- reduced rates for persons with disabilities or social security
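For the billing example, the extracted hierarchy might look as follows (a sketch with invented rates, using integer cents to keep the arithmetic exact):

```java
// One subclass per special case replaces the nested conditionals of
// the original overly-complex bill computation.
abstract class BillingScheme {
    abstract int centsPerKwh();                        // each subclass: one case
    int billCents(int kwh) { return kwh * centsPerKwh(); }
}

class PersonalBilling extends BillingScheme { int centsPerKwh() { return 20; } }
class BusinessBilling extends BillingScheme { int centsPerKwh() { return 15; } }
```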
Big refactorings:
3. Convert procedural design into objects
Problem
You have code written in a procedural style.
Solution
Turn the data records into objects, break up the behaviour, and move the behaviour to the objects.
Smaller refactorings used
extract method, move method, …
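A tiny sketch of the conversion (the `Circle` example is invented): a data record plus a free procedure become one class with behaviour.

```java
// Before: a plain data record { double radius } and a separate
// procedure computeArea(record). After: one object owns both.
class Circle {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }  // behaviour moved onto the object
}
```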
4. Separate domain from presentation
Goal
Change a two-tier design (user interface/database) into a three-tier one (UI/business logic/database).
Solution
Separate domain logic into separate domain classes.
Smaller refactorings used
extract method, move method/field, duplicate observed data, …
C. REFACTORING TOOLS
AUTOMATED CODE REFACTORING TOOLS
Available for all major programming languages
(and OO programming languages in particular)
Java : IntelliJ IDEA, Eclipse, NetBeans, JDeveloper, ...
JavaScript : WebStorm, Eclipse, ...
C++ : VisualStudio, Eclipse, ...
ObjectiveC and SWIFT : XCode
.NET : VisualStudio
Smalltalk, PHP, Ruby, Python, C#, Delphi, ...
LIMITATIONS OF MOST REFACTORING TOOLS
Only support for primitive refactorings
class refactorings
add (sub)class to hierarchy, rename class, remove class
method refactorings
add to class, rename, remove, push down, pull up, add parameter, move to component, extract code
variable refactorings
add to class, rename, remove, push down, pull up, create accessors, abstract variable
Often no support for higher-level refactorings
REFACTORING IN ECLIPSE
The refactoring tool in Eclipse supports a number of transformations described in Martin Fowler's book Refactoring.
Refactoring can be accessed via the Refactor menu.
Refactoring commands are also available from the context menus in many views or appear as quick assists.
SUPPORTED REFACTORING ACTIONS IN ECLIPSE (2016)
- Rename, Move, Change Method Signature
- Extract Method, Extract Local Variable, Extract Constant
- Inline, Move Type to New File, Use Supertype Where Possible
- Convert Anonymous Class to Nested, Convert Local Variable to Field
- Extract Superclass, Extract Interface, Extract Class
- Push Down, Pull Up, Encapsulate Field
- Introduce Parameter Object, Introduce Indirection
- Introduce Factory, Introduce Parameter
- Generalize Declared Type, Infer Generic Type Arguments
(and more)
CODE REFACTORING – REFACTORING TOOLS
Changes to be performed
- Node.java – LANwithTests
- Workstation.java – LANwithTests
- NodeTest.java – LANwithTests
- PacketTest.java – LANwithTests
Java Source Compare
Original Source
```java
public class Node {
public String name;
public Node nextNode;
public Node(String s) {
name = s;
}
public Node(String s, Node n) {
this(s); // calls the constructor Node()
nextNode = n;
}
}
```
Refactored Source
```java
public class Node {
private String name;
public Node nextNode;
public Node(String s) {
setName(s);
}
public Node(String s, Node n) {
this(s); // calls the constructor Node()
nextNode = n;
}
}
```
[Image of code refactoring interface]
D. WORDS OF WARNING
A WORD OF WARNING (1)
Know what you are doing
If not applied well, refactoring may decrease quality rather than improve it
"Bad smells" are symptoms that something is wrong
Refactoring are supposed to remove “bad smells”
A WORD OF WARNING (1)
Refactoring should not introduce new smells
```plaintext
Before:  Person { name, gender, getOfficePhone, getHomePhone }

EXTRACT SUPERCLASS (moving gender up):

HumanBeing { gender }      <-- SMELLS LIKE A TOO ABSTRACT CLASS
    ^
Person { name, getOfficePhone, getHomePhone }
```
Bad code smells
indicate that your code is ripe for refactoring
Refactoring is about
*how* to change code
Bad smells are about
*when* to modify it
A WORD OF WARNING (2)
Independently applied refactorings can introduce subtle merge conflicts.
REFACTORING CONFLICT:
In the new version, Safe should not be handled by Bank, but by Agency.
Learning objectives:
- Definition and difference between maintenance, evolution, reuse
- Different types of maintenance
- Causes of maintenance and changes
- Technical differences of evolution
- Reuse
25. Give a definition of **refactoring** in your own words and illustrate it with a concrete example of a refactoring.
26. Explain **why** it is important to refactor.
27. Explain **when** (= at what moment) refactoring should (or should not) be performed.
28. Like refactoring, **performance optimisation** does not usually change the behaviour of code (other than its speed); it only alters the internal structure. So how does it differ from refactoring?
29. Explain and illustrate one of the following refactorings in detail:
- Extract Method, Move Method, Extract Class, Replace Type Code with Subclass, Replace Subclass with Fields, Pull Up Method, Introduce Parameter Object
30. Give a concrete example of how a refactoring could accidentally **reduce quality**.
31. Give a concrete example of how two independently applied refactorings could accidentally introduce a subtle **merge conflict**.
CLASS . . . IS . . . DISMISSED.
"google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29416, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29416, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29416, null]], "pdf_page_numbers": [[0, 61, 1], [61, 228, 2], [228, 642, 3], [642, 785, 4], [785, 1058, 5], [1058, 1452, 6], [1452, 1651, 7], [1651, 1967, 8], [1967, 2278, 9], [2278, 2615, 10], [2615, 2970, 11], [2970, 3223, 12], [3223, 3603, 13], [3603, 4074, 14], [4074, 4212, 15], [4212, 4956, 16], [4956, 5245, 17], [5245, 5573, 18], [5573, 6273, 19], [6273, 6781, 20], [6781, 7134, 21], [7134, 7693, 22], [7693, 8428, 23], [8428, 8968, 24], [8968, 9503, 25], [9503, 10163, 26], [10163, 10923, 27], [10923, 11212, 28], [11212, 11481, 29], [11481, 11900, 30], [11900, 12383, 31], [12383, 13249, 32], [13249, 13610, 33], [13610, 13946, 34], [13946, 14556, 35], [14556, 15054, 36], [15054, 15343, 37], [15343, 15865, 38], [15865, 16412, 39], [16412, 16913, 40], [16913, 17460, 41], [17460, 17887, 42], [17887, 18203, 43], [18203, 18487, 44], [18487, 18816, 45], [18816, 19105, 46], [19105, 19374, 47], [19374, 19531, 48], [19531, 19894, 49], [19894, 19992, 50], [19992, 20281, 51], [20281, 20747, 52], [20747, 21440, 53], [21440, 21959, 54], [21959, 22361, 55], [22361, 22526, 56], [22526, 22669, 57], [22669, 22882, 58], [22882, 23232, 59], [23232, 23276, 60], [23276, 23482, 61], [23482, 23724, 62], [23724, 24117, 63], [24117, 24406, 64], [24406, 24690, 65], [24690, 24990, 66], [24990, 25012, 67], [25012, 25364, 68], [25364, 25807, 69], [25807, 26105, 70], [26105, 26642, 71], [26642, 26642, 72], [26642, 26642, 73], [26642, 27431, 74], [27431, 27451, 75], [27451, 27576, 76], [27576, 27675, 77], [27675, 27931, 78], [27931, 28083, 79], [28083, 28274, 80], [28274, 28475, 81], [28475, 29385, 82], [29385, 29416, 83]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29416, 0.06844]]}
|
olmocr_science_pdfs
|
2024-12-04
|
2024-12-04
|
6382930bd2a82bf8a7ba224a4ccd19a5e2e30295
|
Mining Sequential Patterns
Jilles Vreeken
How can we discover the key patterns from an event sequence?
abc, da + noise
(Tatti & Vreeken, KDD 2012)
First things first
What’s my signature?
data analysis ↔ communication
transfer the data to the analyst in as few bits as possible
‘induction by compression’
What does that mean?
defining well-founded objective functions for **exploratory** tasks
using **information theory**
for measuring how many bits of information a result gives
MDL, Kolmogorov Complexity, Kullback-Leibler, Maximum Entropy, (cumulative) entropy
and now to business...
Event sequences
Alphabet $\Omega \quad \{ a, b, c, d, ... \}$
discrete events,
e.g., words, alarms, etc.
Data $D$
\[
\begin{array}{cccccccccccc}
& a & b & d & c & a & d & b & a & a & b & c \\
\end{array}
\]
one, or multiple sequences
\[
\{ a b d c a d b a a b c , \\
a b d c a d b , \\
a b d c a d b a a , ... \}
\]
Pattern Language
serial episodes
subsequences allowing for gaps
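A minimal sketch (not from the slides) of this match semantics: a serial episode occurs wherever its events appear in the right order, with arbitrary gaps in between.

```python
def occurs(pattern, sequence):
    """True iff `pattern` occurs in `sequence` as a serial episode,
    i.e. as a subsequence whose events may be separated by gaps."""
    it = iter(sequence)
    # `event in it` advances the iterator, so order is enforced
    return all(event in it for event in pattern)

print(occurs("abc", "abdcadbaabc"))  # True: a..b..(gap d)..c
print(occurs("ba", "abc"))           # False: events out of order
```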
Summarising Event Sequences
The **ideal** outcome of pattern mining
- patterns that show the structure of the data
- preferably a small set, without redundancy or noise
Frequent pattern mining does **not** achieve this
- pattern explosion → overly many, overly redundant results
We pursue the ideal for serial episodes
- we want a group of patterns that summarise the data well
- we take a **pattern set mining** approach
Summarising Event Sequences
We want to find good summaries.
Three important questions
1. how do we score a pattern-based summary?
2. how do we describe a sequence given a pattern set?
3. how do we find good pattern sets?
Scoring a Summary
We want models that generalise the data and hence, we need a score that
- **rewards** models that identify real structure, and
- **punishes** redundancy and noise
No off-the-shelf score available for serial episodes
- e.g. no well-founded priors
- we can, however, make these goals concrete by **MDL**
MDL
The Minimum Description Length (MDL) principle
given a set of models $\mathcal{M}$, the best model $M \in \mathcal{M}$ is the one that minimises
$$L(M) + L(D|M)$$
in which
$L(M)$ is the length, in bits, of the description of $M$
$L(D|M)$ is the length, in bits, of the description of the data when encoded using $M$
(see, e.g., Rissanen 1978, Grünwald, 2007)
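A toy illustration of the two-part score (the model-cost figures here are made up for the example): a richer model only wins when its better fit to the data saves more bits than the model itself costs to describe.

```python
from math import log2

def L_data(data, probs):
    # L(D|M): bits needed for the data under the model's symbol distribution
    return -sum(log2(probs[x]) for x in data)

def mdl_score(data, probs, model_bits):
    # two-part MDL score: L(M) + L(D|M)
    return model_bits + L_data(data, probs)

data = "aaaaaaab"
uniform = {"a": 0.5, "b": 0.5}   # trivial model; assume 1 bit to describe
fitted  = {"a": 7/8, "b": 1/8}   # fitted model; assume 8 bits to describe
print(mdl_score(data, uniform, 1.0))  # 9.0
print(mdl_score(data, fitted, 8.0))   # ~12.35: the extra model cost doesn't pay off
```

On this tiny string the fitted model loses: MDL automatically punishes model complexity that the data cannot justify.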
MDL for Event Sequences
By MDL we define
*the optimal set of serial episodes as the set that describes the data most succinctly*
To use MDL, we need
- a lossless encoding for our models,
- a lossless encoding for the data given a model
(for itemsets, see Vreeken et al 2011)
Models
As models we use **code tables**
- dictionaries of patterns & codes
- always contains all singletons
We use optimal prefix codes
- easy to compute,
- behave predictably,
- good results
<table>
<thead>
<tr>
<th>pattern</th>
<th>code</th>
<th>gap</th>
<th>non-gap</th>
</tr>
</thead>
<tbody>
<tr>
<td>abc</td>
<td>p</td>
<td>?</td>
<td>!</td>
</tr>
<tr>
<td>da</td>
<td>q</td>
<td>?</td>
<td>!</td>
</tr>
<tr>
<td>a</td>
<td>a</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>b</td>
<td>b</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>c</td>
<td>c</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>d</td>
<td>d</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
Encoding Event Sequences
Data $D$: \[ a \ b \ d \ c \ a \ d \ b \ a \ a \ b \ c \]
Encoding 1: using only singletons
$C_p$: \[ a \ b \ d \ c \ a \ d \ b \ a \ a \ b \ c \]
$CT_1$: \[ a \ a \ b \ b \ c \ c \ d \ d \]
The length of the code $X$ for pattern $X$
\[ L(X) = -\log(p(X)) = -\log\left(\frac{usg(X)}{\sum_{Y \in CT} usg(Y)}\right) \]
The length of the code stream
\[ L(C_p) = \sum_{X \in CT} usg(X)L(X) \]
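These two formulas are directly computable; a sketch, using the singleton usages of the example sequence:

```python
from math import log2

def code_lengths(usages):
    """Optimal prefix-code lengths: L(X) = -log2(usg(X) / sum_Y usg(Y))."""
    total = sum(usages.values())
    return {x: -log2(u / total) for x, u in usages.items() if u > 0}

def stream_length(usages):
    """L(Cp) = sum over the code table of usg(X) * L(X)."""
    lengths = code_lengths(usages)
    return sum(usages[x] * lengths[x] for x in lengths)

# singleton usages for D = a b d c a d b a a b c (Encoding 1)
usages = {"a": 4, "b": 3, "c": 2, "d": 2}
print(round(stream_length(usages), 1))  # 21.3 bits for the code stream
```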
Encoding Event Sequences
Data $D$: \[ a \ b \ d \ c \ a \ d \ b \ a \ a \ b \ c \]
Encoding 2: using patterns
$C_p$: \[ p \ d \ a \ q \ b \ p \]
$C_g$: \[ ! \ ? \ ! \ ? \ ! \ ! \]
Alignment: $a$ $b$ $d$ $c$ $a$ $d$ $b$ $a$ $a$ $b$ $c$
$CT_2$: the patterns $abc$ (code $p$) and $da$ (code $q$), each with gap code $?$ and non-gap code $!$, plus the singletons $a$, $b$, $c$, $d$
The length of a gap code $?$ for pattern $X$
$$L(? \mid X) = -\log\big(p(? \mid X)\big)$$
and analogously for non-gap codes $!$
Encoding Event Sequences
By which, the encoded size of $D$ given $CT$ and $C$ is
$$L(D \mid CT) = L(C_p \mid CT) + L(C_g \mid CT)$$
...skipping the details of $L(CT \mid C)$...
Then, our goal is to minimise
$$L(CT, D) = L(CT \mid C) + L(D \mid CT)$$
How to Cover your String
There are many valid C’s that describe a sequence given a set of patterns. We are after the optimum.
[Slide figure: four alternative covers of the same sequence under $CT$ = { $abc$:$p$, $da$:$q$, singletons }]
How to Cover your String
There are many valid C’s that describe a sequence given a set of patterns. We are after a **good** one.
1. if we fix the **cover**, we can obtain the optimal code lengths
2. if we fix the **code lengths**, we can obtain the optimal cover by dynamic programming
We alternate these steps until **convergence**
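A much-simplified sketch of step 2: restricted to contiguous (gap-free) matches, the optimal cover under fixed code lengths is a short dynamic program over positions. The real SQS cover additionally handles gapped and interleaved matches.

```python
from math import inf

def best_cover(seq, lengths):
    """Cheapest cover of `seq` by the keys of `lengths` (singletons and
    contiguous patterns), given fixed per-code bit costs.  Simplified:
    the actual SQS cover also allows gapped matches."""
    n = len(seq)
    cost = [inf] * (n + 1)    # cost[i] = cheapest encoding of seq[:i]
    back = [None] * (n + 1)   # which code produced the optimum at i
    cost[0] = 0.0
    for i in range(n):
        if cost[i] == inf:
            continue
        for pat, bits in lengths.items():
            j = i + len(pat)
            if seq[i:j] == pat and cost[i] + bits < cost[j]:
                cost[j] = cost[i] + bits
                back[j] = pat
    # reconstruct the cover from the backpointers
    cover, i = [], n
    while i > 0:
        cover.append(back[i])
        i -= len(back[i])
    return cost[n], cover[::-1]

lengths = {"a": 2.0, "b": 2.0, "c": 2.0, "d": 2.0, "ab": 1.5, "cd": 1.5}
print(best_cover("ababcd", lengths))  # (4.5, ['ab', 'ab', 'cd'])
```

With the code lengths fixed, this step is exact; alternating it with re-deriving lengths from the new usages converges, since each step can only decrease the total encoded size.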
Mining Code Tables
There are very many possible pattern sets. We are after the **optimum**
However, the search space is huge, complex, and does **not** exhibit trivial structure
We propose two algorithms for mining code tables
- **SQS-CANDS** filters ordered lists of pre-mined candidates
- **SQS-SEARCH** mines good code tables directly from data
SQS-CANDS
[Flowchart: Database + many, many pre-mined patterns → select pattern → compress database → MDL accept/reject → add to code table → Code table]
SQS-SEARCH
[Flowchart: Database → generate candidates → select pattern → compress database → MDL accept/reject → add to code table → Code table]
The Basic Idea
Given a code table and cover, how can we refine it?
- by checking if there are **patterns** in how the codes are used
Patterns in the code stream imply **unmodeled structure**!
\[ C_p \mid CT_0 : \quad a \ b \ c \ d \ a \ d \ b \ a \ a \ b \ c \ d \ \cdots \]
\[ a \rightarrow b \quad \text{happens a lot, let’s add it to } CT \]
\[ C_p \mid CT_1 : \quad p \ c \ d \ p \ d \ a \ p \ c \ d \ \cdots \quad p : \ a \rightarrow b \]
\[ C_p \mid CT_2 : \quad p \ q \ p \ d \ a \ p \ q \ \cdots \quad q : \ c \rightarrow d \]
Given a code stream, generate all code pairs
- consider these as candidates, in order of estimated gain
- when the total encoded size decreases, or the batch is empty, re-generate and re-rank
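The candidate generation step can be sketched as counting adjacent code pairs in the code stream (a simplification; SQS ranks candidates by their estimated gain in bits, not raw frequency):

```python
from collections import Counter

def candidate_pairs(code_stream):
    """Count adjacent code pairs in the code stream; a frequent pair
    such as a->b hints at unmodeled structure and becomes a candidate."""
    return Counter(zip(code_stream, code_stream[1:])).most_common()

# code stream emitted under CT_0 (singletons only)
stream = list("abcdadbaabcd")
top_pair, count = candidate_pairs(stream)[0]
print(top_pair, count)
```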
Both strategies show good convergence. **SQS-Search** dips due to batch-wise search.
## Experiments
- **synthetic data**
  - random: ✓ no structure found
  - HMM: ✓ structure recovered
- **real data**
  - various; text for interpretation
| | $|\Omega|$ | $|D|$ | $\#\text{freq ep.}$ | $|P|$ | $\Delta L$ |
|----------------|-----------|------|---------------------|------|-----------|
| Pres. Addresses| 5 295 | 62 066 | 15 506 | 155 | |
| JMLR | 3 846 | 75 646 | 40 879 | 580 | |
| Moby Dick | 10 277 | 105 719 | 22 559 | 231 | |
**SQS-Search**
Results of SQS
**JMLR**
- support vector machine
- machine learning
- state [of the] art
- data set
- Bayesian network
**PRES. ADDRESSES**
- unit[ed] state[s]
- take oath
- army navy
- under circumst.
- econ. public expenditur
(top-5 from 563) (selection from top-25)
That was back in 2012
now back to 2015
Though nice, SQS is quite limited.
With SQUEEZE we aim to push the envelope.
1) richer pattern language
- serial episodes: $a \rightarrow b \rightarrow c$
- parallel episodes: $a \rightarrow b$ and $d \rightarrow c$
- ‘choice’ episodes: $a \rightarrow b$ or $d \rightarrow c$
- ‘stopisodes’
2) better covers
\[ a \ b \ c \ d \ a \ d \ b \ a \ a \ b \ c \ a \ d \ a \ b \ a \ b \ c \]
SQS: non-overlapping, non-nested, non-interleaving
SQUEEZE: non-overlapping, non-nesting, non-interleaving
(work in progress, with Bhattacharyya)
Though nice, SQS is quite limited
With Ditto we push the envelope to **multivariate** data & patterns
<table>
<thead>
<tr>
<th>Categorical</th>
<th>X</th>
<th>Y</th>
</tr>
</thead>
<tbody>
<tr>
<td>$S^0$: a b c a ... b c a b a a</td>
<td></td>
<td>a</td>
</tr>
<tr>
<td>$S^1$: d e f d ... d f e f e d</td>
<td>d e</td>
<td>f d</td>
</tr>
<tr>
<td>$S^2$: g h i g ... g i h h i g</td>
<td>h i</td>
<td>g</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Itemset</th>
<th>Z</th>
</tr>
</thead>
<tbody>
<tr>
<td>$S^0$: a a a a a ... a a a a a a a</td>
<td>a a a a a</td>
</tr>
<tr>
<td>$S^1$: b b ... b b b b b</td>
<td>b</td>
</tr>
<tr>
<td>$S^2$: c c c ... c c c c c</td>
<td>c</td>
</tr>
</tbody>
</table>
(Bertens, Vreeken & Siebes, under submission)
We ran DITTO on translations of the same EU document, after stemming, removing stop words, and aligning per sentence. For a minimal support of 10, the top-ranked results include:
Pattern 1:
- French: relev
- German: stellt fest dass
- English: note
Pattern 3:
- German: million eur
- English: eur million
Pattern 7:
- French: parl
- German: parlament
- English: parliament
So, patterns, that is all?
No.
MDL scores can be seen as a **likelihood** score
- and... with such a score we can do all sorts of cool things
What I’ve been doing before
- classification
- missing value estimation
- clustering
- ...etc...
What I’m currently exploring
- measure `structuredness`
- noise reduction
- budgeted description
Conclusions
Mining informative *sets of patterns*
- important aspect of exploratory data mining
**SQS** approximates the ideal for serial episodes
- complex problem, fast heuristics
- **SQS** extracts good models directly from data
**Ongoing work** includes
- more complex data and pattern types
- applying **SQS** and friends in real-world settings
(implementations available)
Thank you!
WEAK MEASUREMENT THEORY AND MODIFIED COGNITIVE COMPLEXITY MEASURE
Sanjay Misra, Hürevren Kılıç
Department of Computer Engineering, Atılım University,
Kızıldağ Köyü, İncek, Gölbasta, 06836, Ankara, Turkey
{smisra, hurevren}@atilim.edu.tr
Keywords: Validation criteria, weak measurement theory, software complexity measure, scale of a measure
Abstract: Measurement is one of the open problems in the area of software engineering. Since traditional measurement theory has a major problem in defining empirical observations on software entities in terms of their measured quantities, Morasca tried to solve this problem by proposing weak measurement theory. In this paper, we evaluate the applicability of weak measurement theory by applying it to a newly proposed Modified Cognitive Complexity Measure (MCCM). We also investigate the applicability of the weak extensive structure for deciding on the type of scale for MCCM. It is observed that MCCM is on a weak ratio scale.
1 INTRODUCTION
The key element of any engineering process is measurement. Engineers use measures to better understand and assess the quality of the engineered products or systems they build. However, absolute measures are uncommon in software engineering. Instead, software engineers attempt to derive a set of indirect measures that provide an indication of the quality of some representation of software. Software engineers plan ‘how’ an information system should be developed in order to achieve its quality objectives. The quality objectives may be listed as performance, reliability, availability and maintainability, and are closely related to software complexity. A number of researchers have proposed a variety of software complexity measures (Halstead, 1997), (Kushwaha and Misra, 2006), (Woodward and Hennel, 1979) and (Wang, 2003). Out of the numerous proposed measures, selecting a particular complexity measure is again a problem, as every measure has its own advantages and disadvantages. There is an ongoing effort to find a comprehensive complexity measure that addresses most of the parameters of software.
Elements of measurement theory have been proposed and extensively discussed in the literature (Briand et.al., 1996), (Basili, 2007), (Fenton, 1994), (_, 1998), (Weyuker, 1998), (Zuse, 1991) and (Zuse, 1992) as a means to evaluate software complexity measures. However, the formal approach of measurement theory faces a problem: how can we recognize and describe the attribute in the empirical observation domain in order to relate its values to the proposed metric? (Misra and Kilic, 2006). Naturally, representational measurement theory does not care about the practical difficulty of making empirical observations on the attribute and of identifying it. These are the underlying reasons for the proposal of weak measurement theory by (Morasca, 2003). He argued that the representation condition is very demanding for the state of the art of software engineering measurement. Therefore, he proposed weakening the representation condition and developed the concept of weak measurement theory. We discuss it in detail in Section 3.
In this paper, an effort has been made to validate MCCM (Misra, 2006) against weak measurement theory. This theory for metric evaluation is more practical and useful, and
encompasses all the factors which are important for the evaluation of any proposed measure. We know that a newly proposed complexity measure is acceptable only when its usefulness has been proved by a validation process. It must be validated and evaluated both formally and practically. The purpose of validation is to prove the usefulness of the software attribute measured by the proposed metric. Validation in the narrow sense is the process through which one can test whether the measure’s design purpose is achieved and whether the intended dimension of software is represented by the measure. According to (Fenton, 1991), narrow-sense validation verifies the theoretical soundness of the measure. Using a set of narrow-sense validated measures, one can show the authentication of a whole prediction system, which is called validation in the wide sense. From this perspective, the effort in this paper is to validate MCCM in the narrow sense, as our aim is to verify its theoretical soundness. A detailed discussion of the importance of validating software measures can be found in (Neal, 1997).
A brief introduction to MCCM is given in Section 2. We validate MCCM from the perspective of weak measurement theory in Section 3. In Section 4, we examine the scale of MCCM through the weak extensive structure concept. The conclusions are drawn in Section 5.
### 2 MODIFIED COGNITIVE COMPLEXITY MEASURE
Complexity measures based on cognitive informatics are still in a developing phase. Wang’s cognitive functional size measure (Wang, 2003) depends upon the internal architecture of the software, its inputs and its outputs. In MCCM (Misra, 2006), occurrences of operators and operands are taken into account in place of inputs and outputs. (Wang and Shao, 2003) claim that basic control structures are used for building the logical software architecture, but operators and operands are equally important and part of the design information. Once operators and operands have been considered, the numbers of inputs and outputs are automatically included. Further, the occurrences of operators and operands directly affect the architecture as well as the cognitive complexity of software, which was not taken into consideration in the cognitive functional size approach. Based on this, cognitive complexity should depend on the total occurrences of operators and operands and on the cognitive weights of the basic control structures. Accordingly, MCCM is defined as:
\[ \text{MCCM} = S_{oo} \times W_c \]
(1)
where, \( S_{oo} \) is the total occurrences of operators and operands and given by,
\[ S_{oo} = N_{o1} + N_{o2} \]
(2)
where, \( N_{o1} \): The total occurrences of operators.
\( N_{o2} \): The total occurrences of operands.
\( S_{oo} \): Total occurrences of operators and operands.
\( W_c \) is the total cognitive weight of basic control structures. Basic Control Structures (BCS), sequence, branch and iteration (Wang and Shao, 2002), (Wang and Shao, 2003), (Wang, 2004), are the basic logic building blocks of any software. The cognitive weight of a BCS is the extent of difficulty, or relative time and effort, required for comprehending given software modelled by a number of BCS's. There are two different architectures for calculating \( W_c \): either all the BCS's are in a linear layout or some BCS's are embedded in others. In the former case, the weights of all \( n \) BCS's are added; in the latter, the cognitive weights of inner BCS's are multiplied with the weights of the external BCS's. The total cognitive weight of a software component, \( W_c \), is defined as the sum of the cognitive weights of its \( q \) linear blocks composed of individual BCS's. Since each block may consist of \( m \) layers of nested BCS's, and each layer of \( n \) linear BCS's, the total cognitive weight \( W_c \) can be calculated by:
\[
W_c = \sum_{j=1}^{q} \prod_{k=1}^{m} \sum_{i=1}^{n} W_c (j, k, i)
\]
(3)
In fact, cognitive weights correspond to the number of executed instructions. For example, in a simple program without any loop, the weight assigned to the code is one. Basic control structures are the basic building blocks of software, and the standard cognitive weights for the different control structures are given in (Wang and Shao, 2003).
In Equation (1), the \( S_{oo} \) value is multiplied by the \( W_c \) value because of the possible higher structure value. For a simple program having only the basic control structure "sequence," \( W_c \) makes no additional contribution to the complexity; for such programs the complexity is due only to \( S_{oo} \). The measure is illustrated with the help of the following example:
Example 1. An algorithm to calculate the factorial of a number, to illustrate the application of MCCM
```c
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
int main()
{
    long int fact = 1;
    int n;
    clrscr();
    printf("input the number");
    scanf("%d", &n);
    if (n == 0)
        fact = 1;
    else
        for (int i = n; i > 1; i--) fact = fact * i;
    printf("factorial(%d) = %ld", n, fact);
    getch();
}
```
We illustrate the MCCM to calculate the complexity of this program as under:
- Total number of operands \( N_{o2} \) = 15.
- Total number of operators \( N_{o1} \) = 24.
- \( S_{oo} \) = 24 + 15 = 39.
- BCS (sequence) \( W_1 \) = 1.
- BCS (branch) \( W_2 \) = 2.
- BCS (iteration) \( W_3 \) = 3.
- \( W_c = W_1 + W_2 + W_3 = 1 + 2 + 3 = 6 \).
- MCCM = \( S_{oo} \times W_c \) = 39 × 6 = 234 CCU.
Thus, the cognitive complexity measure value of the algorithm is 234 CCU.
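Equations (1)-(3) and this example can be cross-checked with a short script. The sketch below is ours (Python rather than the paper's C; names like `total_cognitive_weight` are our own, not part of the measure's definition):

```python
from math import prod

def total_cognitive_weight(blocks):
    # Equation (3): W_c = sum over the q linear blocks of the product
    # over each block's m nesting layers of the sum of the n linear
    # BCS weights in that layer.
    return sum(prod(sum(layer) for layer in layers) for layers in blocks)

def mccm(n_operators, n_operands, w_c):
    # Equations (1) and (2): MCCM = S_oo * W_c with S_oo = N_o1 + N_o2.
    return (n_operators + n_operands) * w_c

# Factorial example: sequence (1), branch (2) and iteration (3) laid
# out linearly form a single block with a single layer.
w_c = total_cognitive_weight([[[1, 2, 3]]])
print(mccm(24, 15, w_c))  # 234 CCU
```

A nested layout, e.g. a branch embedded inside a loop, would instead be encoded as `[[[3], [2]]]`, so the inner weight multiplies the outer one.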
### 3 VALIDATING MCCM BY WEAK MEASUREMENT
Measurement is simply the process of converting qualities to quantities. Such conversion process requires a formal description of the systems worked on. The components of the qualified system are (1) Entities whose attributes are wanted to be quantified; (2) Empirical binary relations showing the intuitive knowledge about the attributes and (3) Binary operations describing the production of new entities from the existing ones. Entities can either be physical objects or abstract artifacts that can be characterized or defined by a set of basic characteristics known as attribute (Wang, 2003). In the following paragraphs we describe the basic definition of measurement theory and check the validity of MCCM against it. We have also shown the problem related with the empirical observations in empirical relation system.
Definition 1: (Empirical Relational System-ERS) (Zuse, 1991). For a given attribute, an Empirical Relational System is an ordered tuple
\[ ERS = \langle E, R_1, \ldots, R_n, o_1, \ldots, o_m \rangle \]
where
- \( E \) : the set of entities,
- \( R_1, \ldots, R_n \) denote \( n \) empirical relations such that each \( R_i \) has an arity \( n_i \), and \( R_i \subseteq E^{n_i} \)
- \( o_1, \ldots, o_m \) denote \( m \) empirical binary operations on the entities that produce new entities from the existing ones, so \( o_j : E \times E \rightarrow E \), and the operations are represented with an infix notation, for example, \( e_i \; o_j \; e_k \).
The components of the quantification system are the values representing the decided quantities, the binary relations showing the dependencies among them, and the binary operations describing the production of new values from the existing ones. In MCCM, the entities are the program bodies. The only empirical relation is assumed to be more_or_equal_complex and the only empirical binary operation is the concatenation of program bodies. However, from a practical point of view there is a major problem with the identification, and possibly the existence, of such empirical observations. We can explain it with a concrete example. Assume that we are given a program body \( P \) and we obtain a new program body \( Q \) by simply duplicating \( P \). Also, assume that we are given another program body \( R \) for which there is no direct, clear relation with \( P \). One may easily establish the relation more_or_equal_complex between \( Q \) and \( P \); however, it may not be easy to make such an empirical observation between \( P \) and \( R \). This is because we may not reach a consensus on how to order \( P \) and \( R \) based on their complexity.
Definition 2: (Numerical Relational System-NRS). A Numerical Relational System is an ordered tuple
\[ NRS = \langle V, S_1, \ldots, S_n, p_1, \ldots, p_m \rangle \]
where
- \( V \) : the set of values,
- \( S_1, \ldots, S_n \) denote \( n \) relations such that the arity of \( S_i \) is equal to the arity of \( R_i \), and \( S_i \subseteq V^{n_i} \)
- \( p_1, \ldots, p_m \) denote \( m \) numerical binary operations on the values that produce new values from the existing ones, so \( p_j : V \times V \rightarrow V \), and the operations are represented with an infix notation, for example, \( v_i \; p_j \; v_k \).
For MCCM, \( V \) is the set of positive integers, the binary relation is assumed to be \( \geq \) and the numerical binary operation is the addition (i.e. +) of two positive integers.
Definition 3: Measure \( m \) is a mapping of entities to the values i.e. \( m : E \rightarrow V \).
The measure for MCCM is defined by Equation (1). Note that the measure by itself does not provide any mapping between empirical and numerical knowledge.
Definition 4: A measure must satisfy the following two conditions known as Representation Condition.
\[ \forall i \in 1..n \;\; \forall \langle e_1, \ldots, e_{n_i} \rangle \in E^{n_i} \]
\[ (\langle e_1, \ldots, e_{n_i} \rangle \in R_i \iff \langle m(e_1), \ldots, m(e_{n_i}) \rangle \in S_i) \]
(Part 1)
\[ \forall j \in 1..m \;\; \forall \langle e_1, e_2 \rangle \in E \times E \]
\[ (m(e_1 \; o_j \; e_2) = m(e_1) \; p_j \; m(e_2)) \]
(Part 2)
The first part of the Representation Condition says that for a given empirically observed relation between entities, there must exist a numerical relation between corresponding measured values and vice versa. In other words, any empirical observation should be measurable and any measurement result should be empirically observable. The second part says a measured value of an entity which is obtained by the application of an empirical binary operation on two entities should be equal to the value obtained by corresponding numerical binary operation executed over individually measured values of entities. In other words, complexity of the whole should be definable in terms of complexities of its parts and their higher order relations.
For MCCM, the representation condition requires that (1) if any two program bodies \( e_1 \) and \( e_2 \) are in the more_or_equal_complex relation (i.e. \(<e_1, e_2> \in \text{more_or_equal_complex}\)) then the measured complexity value of \( e_1 \) should be greater than or equal to that of \( e_2 \) (i.e. \( m(e_1) \geq m(e_2) \)) and vice versa. When we reconsider the program bodies \( P \) and \( Q \), where \( Q \) is the double of \( P \), we can say that since MCCM is based on the counting of operators, operands and cognitive weights of basic control structures, those counts also become double. Consequently, for part (1) of the condition we can say that the empirically observed more_or_equal_complex relation between two program bodies leads to the numerical binary relation \( \geq \) among those entities, or vice versa. However, part (1) is only satisfied if there are such clearly observable empirical relations between program bodies, as for \( P \) and \( Q \). On the other hand, in the case of \( P \) and \( R \), since we do not have any clear empirical relation between them, the requirement
\[ \forall i \in 1..n \;\; \forall \langle e_1, \ldots, e_{n_i} \rangle \in E^{n_i} \]
\[ (\langle m(e_1), \ldots, m(e_{n_i}) \rangle \in S_i \Rightarrow \langle e_1, \ldots, e_{n_i} \rangle \in R_i) \]
implied by part (1) may not be required anymore.
The formal approach describing such relaxation is proposed by (Morasca, 2003). He has argued that the original definition of Representation Condition is very demanding for state of art of software engineering measurement. Therefore, he suggested weakening (only) the first part of the condition two way link \( \iff \), to a one way link, \( \Rightarrow \) as follows:
Definition 5: The Weak Representation Condition is defined as follows (Morasca, 2003).
\[ \forall i \in 1..n \;\; \forall \langle e_1, \ldots, e_{n_i} \rangle \in E^{n_i} \]
\[ (\langle e_1, \ldots, e_{n_i} \rangle \in R_i \Rightarrow \langle m(e_1), \ldots, m(e_{n_i}) \rangle \in S_i) \]
(Part 1)
\[ \forall j \in 1..m \;\; \forall \langle e_1, e_2 \rangle \in E \times E \]
\[ (m(e_1 \; o_j \; e_2) = m(e_1) \; p_j \; m(e_2)) \]
(Part 2)
When we consider the above example again, although we can calculate the MCCM values for \( P \) and \( R \), this does not imply the existence of corresponding empirical relations between \( P \) and \( R \). If we take a given more_or_equal_complex relation between \( P \) and \( Q \) that can be empirically observable one can always find corresponding metric values satisfying the Weak Representation Condition.
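The weakened condition only constrains pairs for which an empirical observation exists. A minimal sketch (our names and illustrative values, not data from the paper):

```python
def weak_representation_part1(observed, m):
    # Weakened Part 1: every empirically observed
    # more_or_equal_complex pair (e1, e2) must satisfy m(e1) >= m(e2).
    # Pairs with no empirical observation (such as P and R) impose no
    # constraint in either direction.
    return all(m[e1] >= m[e2] for e1, e2 in observed)

# Q duplicates P; R is unrelated, so (Q, P) is the only observed pair.
m = {"P": 234, "Q": 936, "R": 500}                 # illustrative values
print(weak_representation_part1([("Q", "P")], m))  # True
```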
For part two of the Representation Condition, we can say that the complexity value of a program body which is obtained by concatenation (i.e. the empirical binary operation) of \( e_1 \) and \( e_2 \) is equal to the sum (i.e. the numerical binary operation) of their calculated complexity values. Therefore, MCCM satisfies the second part of the Representation Condition. Finally, we can say that MCCM satisfies the Weak Representation condition.
Showing the MCCM satisfies the Weak Representation Condition, we can investigate the type of the scale for our proposal. In order to be able to decide on the scale type we need to define the Weak Scale and Weak Meaningful Statement concepts (Morasca, 2003).
Definition 6: A weak scale is a triple \(<ERS, NRS, m>\), where \( ERS \) is an Empirical Relational System, \( NRS \) is a Numerical Relational System, and \( m \) is a measure that satisfies the Weak Representation Condition.
Definition 7: A statement is called Weak Meaningful Statement if its truth value does not change if a weak scale is replaced by another weak scale. Formally, if \( S(m) \) is based on measure \( m \) and \( S(m') \) is the same statement obtained by replacing \( m \) with \( m' \), we have \( S(m) \iff S(m') \).
Based on the notion of weak meaningful statement we can talk about four different types of weak scales:
Weak nominal scale: The meaningful statements of this class of scales are of the form \( m(e_1) = m(e_2) \) for at least one pair of entities \( e_1 \) and \( e_2 \). If for one scale, \( m(e_1) = m(e_2) \) is satisfied for a pair of entities \( e_1 \) and \( e_2 \) then we must have \( m'(e_1) = m'(e_2) \) for all other scales \( m' \).
Weak ordinal scale: \(<E, NRS, m>\) is a weak ordinal scale if \( m(e_1) > m(e_2) \) or \( m(e_1) = m(e_2) \) be weak meaningful statements for at least one pair of entities \( e_1, e_2 \). It is not required that \( m(e_1) > m(e_2) \) or \( m(e_1) = m(e_2) \) be weak meaningful statements for all pairs of entities \( e_1, e_2 \).
Weak interval scale: \(<E, NRS, m>\) is a weak interval scale if \( (m(e_1) - m(e_2)) / (m(e_3) - m(e_4)) = k \) is a weak meaningful statement for at least one four-tuple of entities \( e_1, e_2, e_3, e_4 \), i.e., \( k \) is a constant value for all scales. It is not required that this statement is meaningful for all four-tuples of entities.
Weak ratio scale: \(<E, NRS, m>\) is a weak ratio scale if \( m(e_1) / m(e_2) = k \) is a weak meaningful statement for at least one pair of entities \( e_1, e_2 \), i.e., \( k \) is a constant value for all scales defined by the corresponding meaningful statement. Reconsider the two program bodies \( P \) and \( Q \) above as entities \( e_1 \) and \( e_2 \), where we calculate \( k \) as 2. Then, the statement \( m(Q) / m(P) = 2 \) is also a Weak Meaningful Statement for the LOC or Control Complexity metrics.
Therefore, we can informally say that MCCM is defined on a weak ratio scale. To establish this formally, the ERS of MCCM must be shown to be a Weak Extensive Structure, which (Morasca, 2003) requires, for \( \langle E, R, o \rangle \):
- \( \langle E, R \rangle \) is a hierarchy (axiom of hierarchy);
- \( \forall e_1, e_2, e_3 \in E \; \neg R(e_1 \, o \, (e_2 \, o \, e_3), (e_1 \, o \, e_2) \, o \, e_3) \) (axiom of weak associativity);
- \( \forall e_1, e_2, e_3 \in E \; (R(e_1, e_2) \Rightarrow \neg R(e_2 \, o \, e_3, e_1 \, o \, e_3)) \) (axiom of weak monotonicity);
- \( \forall e_1, e_2, e_3, e_4 \in E \; (R(e_1, e_2) \Rightarrow \exists n \in N \; \neg R(n e_2 \, o \, e_4, n e_1 \, o \, e_3)) \), where \( n e \) is recursively defined for any \( e \in E \) as \( 1e = e \) and \( \forall n > 1 \; (n e = (n-1)e \, o \, e) \) (Archimedean axiom).
### 4 WEAK EXTENSIVE STRUCTURE
Definition 8: A hierarchy is a pair \(<E, R>\) where \( R \subseteq E \times E \) is a binary relation on \( E \) such that it does not contain any cycle, i.e. any sequence of pairs \( \{<e_1, e_2>, <e_2, e_3>, ..., <e_{n-1}, e_n>, ..., <e_1, e_{n+1}>\} \) of any length \( n \) with \( \forall \, i \in 1..n \ R(e_i, e_{i+1}) \) such that \( e_1 \neq e_{n+1} \).
A1: For any program bodies \( X \), \( Y \) and \( Z \) with \( X \) being more or equal complex than \( Y \) and \( Y \) being more or equal complex than \( Z \), \( Z \) can never be more or equal complex than \( X \). Therefore, we can say that \(<E, \text{more_or_equal_complex}>\) is a hierarchy.
A2: For the program bodies \( P \), \( Q \) and \( R \), since we do not have any knowledge of the relation between \( R \) and the other two, we cannot say that \( P \) concatenated with \( (Q \) concatenated with \( R) \) is more or equal complex than \( (P \) concatenated with \( Q) \) concatenated with \( R \), or vice versa. Therefore, the concatenation operator of MCCM satisfies the weak associativity property.
A3: When we consider the example program bodies \( P \), \( Q \) and \( R \) again, although \( P \) is more or equal complex than \( Q \), we cannot say that \( Q \) concatenated with \( R \) is more or equal complex than \( P \) concatenated with \( R \), because we have no knowledge of any empirical relation between \( R \) and the others. Thus, the weak monotonicity property is also satisfied.
A4: If entity \( e_1 \) is more or equal complex than \( e_2 \), then for any \( e_3, e_4 \) we cannot establish a new more_or_equal_complex relation by any number of concatenations, say \( n \) times, of \( e_1 \) and \( e_2 \) to themselves followed by concatenation of \( e_3 \) and \( e_4 \) with them, respectively. This is because we may not have any knowledge of the relation between the results of \( n e_2 \) concatenated with \( e_4 \) and \( n e_1 \) concatenated with \( e_3 \), due to the unknown relation between each of \( e_1 \) and \( e_2 \) and the other two. Consequently, the Archimedean axiom is also satisfied.
As a result, the ERS description of the proposed MCCM is a Weak Extensive Structure. Based on the theorems "Existence of an Additive Scale for a Weak Extensive Structure" and "Weak Additive Scale and Weak Ratio Scales" given in (Morasca, 2003), we can say that MCCM is defined on a Weak Ratio Scale. Note that among the scales defined above, the ratio scale is the highest in level. Therefore, it may convey more information than the other scales.
### 5 CONCLUSIONS
MCCM is a newly proposed complexity measure based on cognitive aspects of software development. Any proposed complexity measure should be validated and evaluated against the mathematical tools of measurement theory, which is extensively used in the literature as a means to evaluate software engineering metrics. However, it is known that in classical measurement theory there is a problem in defining empirical observations on software entities in terms of their measured quantities. Consequently, weak measurement theory is thought to be a useful alternative for validating and evaluating MCCM. We showed that MCCM satisfies most of the requirements of weak measurement theory, and it was also found that the proposed measure is on a weak ratio scale.
In the light of this experience, we propose that future work include the following:
1. Further research on weak measurement theory is required; weak measurement theory is only a partial solution to the problem of defining a measure based on measurement theory.
2. To the best of our knowledge, complexity measures based on cognitive aspects have not been tested by practitioners. This is also a task for future work.
### REFERENCES
Predictive Parsing
The Front End
- Perform a membership test: code ∈ source language?
- Is the program well-formed (semantically)?
- Build an IR version of the code for the rest of the compiler
The front end is not monolithic
The Front End - Scanner
**Scanner**
- Maps stream of characters into words
- Basic unit of syntax
- \( x = x + y ; \) becomes <id,x> <eq> <id,x> <plus> <id,y> <semi>
- Characters that form a word are its *lexeme*
- Its *part of speech* (or *syntactic category*) is called its *token type*
- Scanner discards white space & (often) comments
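A minimal scanner for the slide's example can be sketched with regular expressions (a sketch: the token-type names follow the slide, while the patterns and the `scan` helper are our own assumptions; production scanners are typically generated from such specifications):

```python
import re

# Hypothetical token specification for the example "x = x + y ;".
TOKEN_SPEC = [
    ("id",   r"[A-Za-z_]\w*"),
    ("eq",   r"="),
    ("plus", r"\+"),
    ("semi", r";"),
    ("ws",   r"\s+"),            # white space is discarded
]
PATTERN = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def scan(src):
    # Map the character stream into (token type, lexeme) pairs,
    # discarding white space.
    return [(m.lastgroup, m.group())
            for m in PATTERN.finditer(src)
            if m.lastgroup != "ws"]

print(scan("x = x + y ;"))
# [('id', 'x'), ('eq', '='), ('id', 'x'), ('plus', '+'), ('id', 'y'), ('semi', ';')]
```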
**The Front End - Parser**
**Parser**
- Checks the stream of words and their parts of speech (produced by the scanner) for grammatical correctness
- Determines if the input is syntactically well formed
- Guides checking at deeper levels than syntax
- Builds an IR representation of the code
Roadmap (Where are we?)
In CMSC 330 we studied scanners & parsers
- Specifying tokens
- Regular expressions
- Specifying syntax
- Context-free grammars
Now we’ll look at more advanced parsers
- Predictive top-down parsing
- FIRST, FOLLOW, FIRST+
- The LL(1) condition
- Table-driven LL(1) parsers
- Bottom-up shift-reduce parsers
Parsing Techniques
Top-down parsers (LL(1), recursive descent)
- Start at the root of the parse tree and grow toward leaves
- Pick a production & try to match the input
- Bad “pick” ⇒ may need to backtrack
- Some grammars are backtrack-free (predictive parsing)
Bottom-up parsers (LR(1), operator precedence)
- Start at the leaves and grow toward root
- As input is consumed, encode possibilities in an internal state
- Start in a state valid for legal first tokens
- Bottom-up parsers handle a large class of grammars
Parsing Techniques: Top-down parsers
**LL(1), recursive descent**
1 input symbol lookahead
construct leftmost derivation (forwards)
input: read left-to-right
\[ S \Rightarrow^*_{lm} A \beta \Rightarrow^*_{lm} \delta \beta \Rightarrow^*_{lm} y \]
**LR(1), operator precedence**
1 input symbol lookahead
construct rightmost derivation (backwards)
input: read left-to-right
\[ S \Rightarrow^*_{rm} B \gamma \Rightarrow^*_{rm} \alpha \gamma \Rightarrow^*_{rm} y \]
CS430 Lecture 4 7
Top-down Parsing
A top-down parser starts with the root of the parse tree
The root node is labeled with the goal symbol of the grammar
Top-down parsing algorithm:
- Construct the root node of the parse tree
- Repeat until the fringe of the parse tree matches the input string
1. At a node labeled A, select a production with A on its lhs and, for each symbol on its rhs, construct the appropriate child
2. When a terminal symbol is added to the fringe and it doesn’t match the input, backtrack
3. Find the next node to be expanded (label ∈ NT)
- The key is picking the right production in step 1
→ That choice should be guided by the input string
Picking the “Right” Production
If it picks the wrong production, a top-down parser may backtrack
Alternative is to look ahead in input & use context to pick correctly
How much lookahead is needed?
- In general, an arbitrarily large amount
- Use the Cocke-Younger-Kasami algorithm or Earley’s algorithm
Fortunately,
- Large subclasses of CFGs can be parsed with limited lookahead
- Most programming language constructs fall in those subclasses
Among the interesting subclasses are LL(1) and LR(1) grammars
Predictive Parsing
Basic idea
Given $A \rightarrow \alpha | \beta$, the parser should be able to choose between $\alpha$ & $\beta$
We can try to predict the correct choice by calculating
FIRST$(\alpha)$ sets
The set of tokens that appear as the first symbol in some string that derives from $\alpha$
That is, $a \in$ FIRST$(\alpha)$ iff $\alpha \Rightarrow^* a \gamma$, for some $\gamma$
FOLLOW$(A)$ sets
The set of tokens that appear immediately to the right of $A$ in some sentential form
The LL(1) Property
If $A \rightarrow \alpha$ and $A \rightarrow \beta$ both appear in the grammar, we would like
FIRST$(\alpha) \cap$ FIRST$(\beta) = \emptyset$
This would allow the parser to make a correct choice with a lookahead of exactly one symbol!
The FIRST Set
\[ a \in \text{FIRST}(\alpha) \iff \alpha \Rightarrow^* a \gamma, \text{ for some } \gamma \]
To build FIRST(X) for all grammar symbols X:
1. if X is a terminal (token), FIRST(X) := \{ X \}
2. if X ::= \varepsilon, then \varepsilon \in \text{FIRST}(X)
3. iterate until no more terminals or \varepsilon can be added to any FIRST(X):
if X ::= Y_1 Y_2 \ldots Y_k then
a \in \text{FIRST}(X) if a \in \text{FIRST}(Y_i) and
\varepsilon \in \text{FIRST}(Y_j) for all 1 \leq j < i
\varepsilon \in \text{FIRST}(X) if \varepsilon \in \text{FIRST}(Y_i) for all 1 \leq i \leq k
end iterate
Note: if \varepsilon \notin \text{FIRST}(Y_1), then \text{FIRST}(Y_i) is irrelevant, for 1 < i
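The fixpoint construction above can be sketched in Python (our encoding: a grammar maps each nonterminal to its right-hand sides, an empty rhs standing for an ε-production); the expression grammar used in these slides serves as the test case:

```python
def first_sets(grammar):
    # grammar: nonterminal -> list of right-hand sides (lists of
    # symbols); symbols not in the grammar are terminals.
    nts = set(grammar)
    first = {nt: set() for nt in grammar}

    def first_of(sym):
        return first[sym] if sym in nts else {sym}  # FIRST(t) = {t}

    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                new, all_eps = set(), True
                for sym in prod:
                    f = first_of(sym)
                    new |= f - {"ε"}    # a in FIRST(Y_i), ε in all Y_j, j < i
                    if "ε" not in f:
                        all_eps = False
                        break
                if all_eps:             # every Y_i derives ε (or rhs is ε)
                    new.add("ε")
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first

grammar = {
    "Goal":   [["Expr"]],
    "Expr":   [["Term", "Expr'"]],
    "Expr'":  [["+", "Expr"], ["-", "Expr"], []],
    "Term":   [["Factor", "Term'"]],
    "Term'":  [["*", "Term"], ["/", "Term"], []],
    "Factor": [["num"], ["id"]],
}
first = first_sets(grammar)
print(first["Expr'"], first["Term'"])
```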
---
The FIRST Set
\[ a \in \text{FIRST}(\alpha) \iff \alpha \Rightarrow^* a \gamma, \text{ for some } \gamma \]
To build FIRST(\alpha) for \alpha = X_1 X_2 \ldots X_n:
1. a \in \text{FIRST}(\alpha) if a \in \text{FIRST}(X_i) and
\varepsilon \in \text{FIRST}(X_j) for all 1 \leq j < i
2. \varepsilon \in \text{FIRST}(\alpha) if \varepsilon \in \text{FIRST}(X_i) for all 1 \leq i \leq n
## LL(1) Example - First Sets
<table>
<thead>
<tr>
<th>Grammar</th>
<th>FIRST Sets</th>
<th>Nonterminals</th>
<th>FIRST Sets</th>
</tr>
</thead>
<tbody>
<tr><td>Goal → Expr</td><td>{ num, id }</td><td>Goal</td><td>{ num, id }</td></tr>
<tr><td>Expr → Term Expr’</td><td>{ num, id }</td><td>Expr</td><td>{ num, id }</td></tr>
<tr><td>Expr’ → + Expr</td><td>{ + }</td><td>Expr’</td><td>{ +, -, ε }</td></tr>
<tr><td>| - Expr</td><td>{ - }</td><td>Term</td><td>{ num, id }</td></tr>
<tr><td>| ε</td><td>{ ε }</td><td>Term’</td><td>{ *, /, ε }</td></tr>
<tr><td>Term → Factor Term’</td><td>{ num, id }</td><td>Factor</td><td>{ num, id }</td></tr>
<tr><td>Term’ → * Term</td><td>{ * }</td><td></td><td></td></tr>
<tr><td>| / Term</td><td>{ / }</td><td></td><td></td></tr>
<tr><td>| ε</td><td>{ ε }</td><td></td><td></td></tr>
<tr><td>Factor → num</td><td>{ num }</td><td></td><td></td></tr>
<tr><td>| id</td><td>{ id }</td><td></td><td></td></tr>
</tbody>
</table>
### The FOLLOW Set
For a non-terminal A, define FOLLOW(A) as:
\[
\text{FOLLOW}(A) := \text{the set of terminals that can appear immediately to the right of } A \text{ in some sentential form.}
\]
Thus, a non-terminal's FOLLOW set specifies the tokens that can legally appear after it; a terminal has no FOLLOW set.
**The FOLLOW Set**
To build FOLLOW(X) for all non-terminals X:
1. Place $ in FOLLOW(\( \langle goal \rangle \)) // $ = EOF
iterate until no more terminals or $ can be added to any FOLLOW(X):
2. If \( A ::= \alpha B \beta \) then
put \( \{\text{FIRST}(\beta)\} - \{\varepsilon\} \) in FOLLOW(B)
3. If \( A ::= \alpha B \) then
put FOLLOW(A) in FOLLOW(B)
4. If \( A ::= \alpha B \beta \) and \( \varepsilon \in \text{FIRST}(\beta) \) then
put FOLLOW(A) in FOLLOW(B)
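Rules 1-4 can be sketched the same way (our encoding; the FIRST sets are hard-coded from the FIRST-set slide rather than recomputed):

```python
def follow_sets(grammar, first, start):
    nts = set(grammar)
    follow = {nt: set() for nt in grammar}
    follow[start].add("$")              # rule 1: $ into FOLLOW(<goal>)

    def seq_first(seq):
        # FIRST of a symbol sequence; also report whether it derives ε.
        out = set()
        for s in seq:
            f = first[s] if s in nts else {s}
            out |= f - {"ε"}
            if "ε" not in f:
                return out, False
        return out, True

    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for a, prods in grammar.items():
            for prod in prods:
                for i, s in enumerate(prod):
                    if s not in nts:
                        continue
                    rest, eps = seq_first(prod[i + 1:])
                    # rule 2: FIRST(β) - ε; rules 3-4: FOLLOW(A) if β =>* ε
                    new = rest | (follow[a] if eps else set())
                    if not new <= follow[s]:
                        follow[s] |= new
                        changed = True
    return follow

grammar = {
    "Goal":   [["Expr"]],
    "Expr":   [["Term", "Expr'"]],
    "Expr'":  [["+", "Expr"], ["-", "Expr"], []],
    "Term":   [["Factor", "Term'"]],
    "Term'":  [["*", "Term"], ["/", "Term"], []],
    "Factor": [["num"], ["id"]],
}
first = {"Goal": {"num", "id"}, "Expr": {"num", "id"},
         "Expr'": {"+", "-", "ε"}, "Term": {"num", "id"},
         "Term'": {"*", "/", "ε"}, "Factor": {"num", "id"}}
fol = follow_sets(grammar, first, "Goal")
print(fol["Term"], fol["Factor"])
```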
---
**LL(1) Example - Follow Sets**
<table>
<thead>
<tr>
<th>Grammar</th>
<th>FIRST Sets</th>
<th>FOLLOW Sets</th>
</tr>
</thead>
<tbody>
<tr><td>Goal</td><td>{ num, id }</td><td>{ $ }</td></tr>
<tr><td>Expr</td><td>{ num, id }</td><td>{ $ }</td></tr>
<tr><td>Expr’</td><td>{ +, -, ε }</td><td>{ $ }</td></tr>
<tr><td>Term</td><td>{ num, id }</td><td>{ +, -, $ }</td></tr>
<tr><td>Term’</td><td>{ *, /, ε }</td><td>{ +, -, $ }</td></tr>
<tr><td>Factor</td><td>{ num, id }</td><td>{ +, -, *, /, $ }</td></tr>
</tbody>
</table>
1. Place $ in FOLLOW(\( \langle goal \rangle \))
2. If \( A ::= \alpha B \beta \) then
put \( \{\text{FIRST}(\beta)\} - \{\varepsilon\} \) in FOLLOW(B)
3. If \( A ::= \alpha B \) then
put FOLLOW(A) in FOLLOW(B)
4. If \( A ::= \alpha B \beta \) and \( \varepsilon \in \text{FIRST}(\beta) \) then
put FOLLOW(A) in FOLLOW(B)
Predictive Parsing
If $A \to \alpha$ and $A \to \beta$ and $\epsilon \in \text{FIRST}(\alpha)$, then we need to ensure that $\text{FIRST}(\beta)$ is disjoint from $\text{FOLLOW}(A)$, too.
**Define $\text{FIRST}^+(\delta)$ for rule $A \to \delta$ as**
- $\text{FIRST}(\delta) \cup \text{FOLLOW}(A)$, if $\epsilon \in \text{FIRST}(\delta)$
- $\text{FIRST}(\delta)$, otherwise
---
**Predictive Parsing**
**The LL(1) Property**
A grammar is LL(1) iff $A \to \alpha$ and $A \to \beta$ implies $\text{FIRST}^+(\alpha) \cap \text{FIRST}^+(\beta) = \emptyset$
This would allow the parser to make a correct choice with a lookahead of exactly one symbol!
**Question:** Can there be two rules $A \to \alpha$ and $A \to \beta$ in a LL(1) grammar such that $\epsilon \in \text{FIRST}(\alpha)$ and $\epsilon \in \text{FIRST}(\beta)$?
Predictive Parsing
Given a grammar that has the $LL(1)$ property
- Problem: NT $A$ needs to be replaced in next derivation step
- Assume $A \rightarrow \beta_1 | \beta_2 | \beta_3$, with
$\text{FIRST}^+(\beta_i) \cap \text{FIRST}^+(\beta_j) = \emptyset$ for all $i \neq j$
/* find rule for $A$ */
if (current token $\in$ FIRST$^+(\beta_1)$)
select $A \rightarrow \beta_1$
else if (current token $\in$ FIRST$^+(\beta_2)$)
select $A \rightarrow \beta_2$
else if (current token $\in$ FIRST$^+(\beta_3)$)
select $A \rightarrow \beta_3$
else
report an error and return false
Grammars with the $LL(1)$ property are called **predictive grammars** because the parser can "predict" the correct expansion at each point in the parse.
Parsers that capitalize on the $LL(1)$ property are called **predictive parsers**.
One kind of predictive parser is the **recursive descent parser**; the other is the **table-driven parser**.
$LL(1)$ Parser Example
Is the following grammar $LL(1)$?
$S ::= a \ S \ b | \epsilon$
First($aSb$) = \{a\}
First($\epsilon$) = \{\epsilon\}
First$^+(aSb)$ = \{a\}
First$^+(\epsilon)$ = (First ($\epsilon$) - \{\epsilon\}) $\cup$ Follow ($S$) = \{$\$, b\}
$LL(1)$? YES, since \{a\} $\cap$ \{$\$, b\} = \emptyset
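The same check can be done mechanically (a sketch; `first_plus` is our helper, and FOLLOW(S) = { $, b } is taken from the slide):

```python
def first_plus(first_rhs, follow_lhs):
    # FIRST+ of a rhs δ for rule A -> δ:
    # FIRST(δ) ∪ FOLLOW(A) if ε ∈ FIRST(δ), otherwise just FIRST(δ).
    if "ε" in first_rhs:
        return (first_rhs - {"ε"}) | follow_lhs
    return set(first_rhs)

follow_S = {"$", "b"}                    # FOLLOW(S) from the slide
fp_aSb = first_plus({"a"}, follow_S)     # FIRST+(aSb)
fp_eps = first_plus({"ε"}, follow_S)     # FIRST+(ε)
print(fp_aSb, fp_eps, fp_aSb.isdisjoint(fp_eps))  # disjoint: LL(1) holds
```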
**LL(1) Parser Example**
Table-driven LL(1) parser
- **current input symbol**
- **rules for non-terminal**
- **non-terminal on top of the stack**
---
**Building Table-driven Top Down Parsers**
Building the complete table
- Need a row for every NT & a column for every T
- Need an algorithm to build the table
Filling in TABLE[X,y], X ∈ NT, y ∈ T
- entry is the rule X ::= β, if y ∈ FIRST+(β)
- entry is error otherwise (can treat empty entry as implicit error)
If any entry is defined multiple times, G is not LL(1)
This is the LL(1) table construction algorithm
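The construction can be sketched as follows (our encoding; a doubly-defined entry is reported as a conflict, signalling that G is not LL(1)):

```python
def build_ll1_table(rules, first_plus_sets):
    # rules[i] = (A, rhs); first_plus_sets[i] = FIRST+ of that rhs.
    # TABLE[A, y] = rhs whenever y ∈ FIRST+(rhs); any doubly-defined
    # entry is recorded as a conflict.
    table, conflicts = {}, []
    for (a, rhs), fp in zip(rules, first_plus_sets):
        for y in fp:
            if (a, y) in table:
                conflicts.append((a, y))
            table[(a, y)] = rhs
    return table, conflicts

rules = [("S", ["a", "S", "b"]), ("S", [])]          # S ::= aSb | ε
table, conflicts = build_ll1_table(rules, [{"a"}, {"b", "$"}])
print(conflicts)  # [] -- no entry defined twice, so the grammar is LL(1)
```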
**LL(1) Skeleton Parser**
```
token ← next_token()
push $ onto Stack // $ used to mark EOF
push the start symbol, S, onto Stack
TOS ← top of Stack
loop forever
if TOS = $ and token = $ then
break & report success (accept)
else if TOS is a terminal then
if TOS matches token then
pop Stack // recognized TOS
token ← next_token()
else report error looking for TOS
else // TOS is a non-terminal
if TABLE[TOS, token] is A → B₁B₂...Bₖ then
pop Stack // get rid of A
push Bₖ, Bₖ₋₁, ... B₁ // in that order
else report error expanding TOS
TOS ← top of Stack
```
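A Python rendering of the skeleton (a sketch; the error reports are reduced to returning False), tried on the grammar S ::= aSb | ε from the earlier example:

```python
def ll1_parse(table, nonterminals, start, tokens):
    # tokens must end with the EOF marker "$"
    stack = ["$", start]
    i = 0
    while True:
        tos, tok = stack[-1], tokens[i]
        if tos == "$" and tok == "$":
            return True                  # accept
        if tos not in nonterminals:      # terminal on top of stack
            if tos != tok:
                return False             # error looking for TOS
            stack.pop()                  # recognized TOS
            i += 1                       # token <- next_token()
        else:
            rhs = table.get((tos, tok))
            if rhs is None:
                return False             # error expanding TOS
            stack.pop()                  # get rid of A
            stack.extend(reversed(rhs))  # push B_k, ..., B_1

table = {("S", "a"): ["a", "S", "b"],
         ("S", "b"): [], ("S", "$"): []}
print(ll1_parse(table, {"S"}, "S", list("aabb") + ["$"]))  # True
print(ll1_parse(table, {"S"}, "S", list("aab") + ["$"]))   # False
```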
Table-driven $\text{LL}(1)$ Parser Example
<table>
<thead>
<tr>
<th></th>
<th>a</th>
<th>b</th>
<th>$</th>
<th>other</th>
</tr>
</thead>
<tbody>
<tr>
<td>S</td>
<td>S $\Rightarrow aSb$</td>
<td>S $\Rightarrow \varepsilon$</td>
<td>S $\Rightarrow \varepsilon$</td>
<td>error</td>
</tr>
</tbody>
</table>
Stack | Remaining Input | Action
--- | --- | ---
[$, S] | aaabbb$ | S $\Rightarrow aSb$
[$, b, S, a] | aaabbb$ | next input + pop
[$, b, S] | aabbb$ | S $\Rightarrow aSb$
[$, b, b, S, a] | aabbb$ | next input + pop
[$, b, b, S] | abbb$ | S $\Rightarrow aSb$
[$, b, b, b, S, a] | abbb$ | next input + pop
[$, b, b, b, S] | bbb$ | S $\Rightarrow \varepsilon$
[$, b, b, b] | bbb$ | next input + pop
[$, b, b] | bb$ | next input + pop
[$, b] | b$ | next input + pop
[$] | $ | accept
LL(1) Example - LL(1) Table
<table>
<thead>
<tr>
<th>Grammar</th>
<th>FIRST Sets</th>
<th>FOLLOW Sets</th>
</tr>
</thead>
<tbody>
<tr>
<td>Goal $\Rightarrow$ Expr</td>
<td>FIRST(Goal) = {num, id}</td>
<td>FOLLOW(Goal) = {$}</td>
</tr>
<tr>
<td>Expr $\Rightarrow$ Term Expr'</td>
<td>FIRST(Expr) = {num, id}</td>
<td>FOLLOW(Expr) = {$}</td>
</tr>
<tr>
<td>Expr' $\Rightarrow$ + Expr | - Expr | $\varepsilon$</td>
<td>FIRST(Expr') = {+, -, $\varepsilon$}</td>
<td>FOLLOW(Expr') = {$}</td>
</tr>
<tr>
<td>Term $\Rightarrow$ Factor Term'</td>
<td>FIRST(Term) = {num, id}</td>
<td>FOLLOW(Term) = {+, -, $}</td>
</tr>
<tr>
<td>Term' $\Rightarrow$ * Term | / Term | $\varepsilon$</td>
<td>FIRST(Term') = {*, /, $\varepsilon$}</td>
<td>FOLLOW(Term') = {+, -, $}</td>
</tr>
<tr>
<td>Factor $\Rightarrow$ num | id</td>
<td>FIRST(Factor) = {num, id}</td>
<td>FOLLOW(Factor) = {*, /, +, -, $}</td>
</tr>
</tbody>
</table>

| | num | id | + | - | * | / | $ |
|---|---|---|---|---|---|---|---|
| Goal | Goal $\Rightarrow$ Expr | Goal $\Rightarrow$ Expr | | | | | |
| Expr | Expr $\Rightarrow$ Term Expr' | Expr $\Rightarrow$ Term Expr' | | | | | |
| Expr' | | | Expr' $\Rightarrow$ + Expr | Expr' $\Rightarrow$ - Expr | | | Expr' $\Rightarrow$ $\varepsilon$ |
| Term | Term $\Rightarrow$ Factor Term' | Term $\Rightarrow$ Factor Term' | | | | | |
| Term' | | | Term' $\Rightarrow$ $\varepsilon$ | Term' $\Rightarrow$ $\varepsilon$ | Term' $\Rightarrow$ * Term | Term' $\Rightarrow$ / Term | Term' $\Rightarrow$ $\varepsilon$ |
| Factor | Factor $\Rightarrow$ num | Factor $\Rightarrow$ id | | | | | |

Blank entries are errors.
LL(1) Languages
Question
By eliminating left recursion and left factoring, can we transform an arbitrary CFG to a form where it meets the LL(1) condition? (and can be parsed predictively with a single token lookahead?)
Answer
Given a CFG that doesn’t meet the LL(1) condition, it is undecidable whether or not an equivalent LL(1) grammar exists.
Example
\{a^n 0 b^n | n \geq 1\} $\cup$ \{a^n 1 b^{2n} | n \geq 1\} has no LL(1) grammar
Language that Cannot Be LL(1)
Example
\{a^n 0 b^n | n \geq 1\} $\cup$ \{a^n 1 b^{2n} | n \geq 1\} has no LL(1) grammar
\begin{align*}
G & \rightarrow aAb \\
  & \mid aBbb \\
A & \rightarrow aAb \\
  & \mid 0 \\
B & \rightarrow aBbb \\
  & \mid 1
\end{align*}
Problem: need an unbounded number of a characters before you can determine whether you are in the A group or the B group.
---
ON THE DISTRIBUTION OF SOURCE CODE FILE SIZES
Israel Herraiz
Technical University of Madrid, Madrid, Spain
Daniel M. German
University of Victoria, Victoria, Canada
Ahmed E. Hassan
Queen’s University, Kingston, Canada
Keywords: Mining software repositories, Software size estimation, Open source.
Abstract: Source code size is an estimator of software effort. Size is also often used to calibrate models and equations to estimate the cost of software. The distribution of source code file sizes has been shown in the literature to be a lognormal distribution. In this paper, we measure the size of a large collection of software (the Debian GNU/Linux distribution version 5.0.2), and we find that the statistical distribution of its source code file sizes follows a double Pareto distribution. This means that large files are to be found more often than predicted by the lognormal distribution, therefore the previously proposed models underestimate the cost of software.
1 INTRODUCTION
Source code size is a simple, yet powerful metric for software maintenance and management. Over the years, much research has been devoted to the quest for metrics that could help optimize the allocation of resources in software projects, both at the development and maintenance stages. Two examples of these are McCabe’s cyclomatic complexity (McCabe, 1976) and Halstead’s software science metrics (Halstead, 1977). In spite of all the theoretical considerations that back up these metrics, previous research shows that simple size metrics are highly correlated with them (Herraiz et al., 2007), or that these metrics are not better defect predictors than just lines of code (Graves et al., 2000).
Perhaps due to these facts, software size, rather than many other more sophisticated metrics, has been used for effort estimation. Standard models like COCOMO are now widely used in industry to estimate the effort needed to develop a particular piece of software, or to determine the number of billable hours when building software (Boehm, 1981).
In recent years, besides the previously mentioned works, the statistical properties of software size have attracted some attention in research. Recent research shows that the statistical distribution of source code file sizes is a lognormal distribution (Concas et al., 2007), and some software size estimation techniques built on that finding (Zhang et al., 2009). Some other preliminary research conflicts with this finding for the distribution of size, proposing that the statistical distribution of source code file sizes follows a double Pareto distribution (Herraiz et al., 2007; Herraiz, 2009).
All the mentioned works use publicly available software, so they can be repeated and verified by third parties. These are crucial aspects to determine without further doubt which is the statistical distribution of size. However, some of these studies (Zhang et al., 2009; Concas et al., 2007) are based on a few case studies, with the consequent risk of a lack of generality, opening the door to possible future conflicting studies. To overcome these drawbacks, we report here the results for a very large number of software projects, whose source code has been obtained from the Debian GNU/Linux distribution, release 5.0.2. Our sample contains nearly one and a half million files, obtained from more than 11,000 source packages.
The main contributions of this paper are:
---
Herraiz I., German D., and Hassan A.
ON THE DISTRIBUTION OF SOURCE CODE FILE SIZES.
DOI: 10.5220/0003426200050014
In Proceedings of the 6th International Conference on Software and Database Technologies (ICSOFT-2011), pages 5-14
Copyright © 2011 SCITEPRESS (Science and Technology Publications, Lda.)
- **Software size's distribution is a double Pareto**
This implies that the distribution of software size is a particular case of the distribution of the size of filesystems (Mitzenmacher, 2004b), and it also confirms previous results based on other case studies (Herráiz et al., 2007; Herráiz, 2009).
- **Estimation techniques based on the lognormal distribution underestimate the potential size of software**
And therefore they underestimate its cost. We calculate the bias of lognormal models compared to the size estimated using a double Pareto model.
The rest of the paper is organized as follows. Section 2 gives an overview of the related work. Section 3 describes the data sources and the methodology used in our study. Section 4 shows our approach to determine the shape of the statistical distribution of software size. Section 5 compares the lognormal and double Pareto distributions for software estimation, also showing how the lognormal distribution always underestimates the size of large files. For clarity purposes, all the results are briefly summarized in section 6. Section 7 discusses some possible threats to the validity of our results. Section 8 discusses some possible lines of further work. And finally, section 9 concludes this paper.
## 2 RELATED WORK
In the mathematics and computer science communities, the distribution of file sizes has been an object of intense debate (Mitzenmacher, 2004a). Some researchers claim that this distribution is a lognormal, and some others claim that it is a power law. In some cases, lognormal distributions fit better some empirical data, and in some other cases power law distributions fit better. However, the generative processes that give birth to those distributions, and the possible models that can be derived based on those processes, are fundamentally different (Mitzenmacher, 2004b).
Power-law research was already a popular topic in the software research community. Clark and Green (Clark and Green, 1977) found that the pointers to atoms in Lisp programs followed Zipf's law, a form of power law. More recent studies have found power laws in some properties of Java programs, although other properties (some of them related to size) do not have a power law distribution (Baxter et al., 2006). But it is only in the most recent years that some authors have started to report that this distribution might be lognormal, restarting the old debate previously found for file sizes in general. Concas et al. (Concas et al., 2007) studied an object-oriented system written in Smalltalk, finding evidence of both power law and lognormal distributions in its properties. Zhang et al. (Zhang et al., 2009) confirmed some of those findings, and they proposed that software size distribution is lognormal. They also derived some estimation techniques based on that finding, aimed at determining the size of software. Louridas et al. (Louridas et al., 2008) pointed out that power laws might not be the only distribution found in the properties of software systems.
For the more general case of sizes of files of any type, Mitzenmacher proposed that the distribution is a double Pareto (Mitzenmacher, 2004b). This result reconciles the two sides of the debate. But more interestingly, the generative process of double Pareto distributions mimics the actual work-flow and life cycle of files. He also shows a model for the case of file sizes, and some simulation results. The same distribution was found in the case of software (Herráiz et al., 2007; Herráiz, 2009), for a large sample of software, although the results were only for the C programming language.
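The flavor of such a generative process can be conveyed with a toy simulation (this is a deliberately simplified caricature, not Mitzenmacher's actual model, and every parameter below is made up for illustration): a file grows multiplicatively at each edit, and files are observed after a geometrically distributed number of edits. Mixing lognormal growth over geometric lifetimes produces the heavy power-law tails characteristic of a double Pareto shape.

```python
import math
import random

def simulate_file_size(p_stop=0.1, mu=0.2, sigma=0.5, start=10.0):
    """One file: multiplicative edits, geometric lifetime (toy model)."""
    size = start
    while random.random() > p_stop:               # keep editing w.p. 1 - p_stop
        size *= math.exp(random.gauss(mu, sigma)) # multiplicative growth step
    return size

random.seed(0)
sizes = sorted(simulate_file_size() for _ in range(50_000))

# The hallmark of a heavy right tail: the mean far exceeds the median.
median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)
print(median, mean)
```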
The Debian GNU/Linux distribution has been the object of research in previous studies (Robles et al., 2005; Robles et al., 2009). It is one of the largest distributions of free and open source software.
In the spirit of the pioneering study by Knuth in 1971 (Knuth, 1971), where he used a survey approach of FORTRAN programs to determine the most common case for compiler optimizations, we use the Debian GNU/Linux distribution with the goal of enlightening this debate about the distribution of software size, extending previous research to a large amount of software, written in several programming languages, and coming from a broad set of application domains.
## 3 DATA SOURCE AND METHODOLOGY
We retrieved the source code of all the source packages of release 5.0.2 of the Debian GNU/Linux distribution. We used both the main and contrib sections of the distribution, for a total of 11,571 source code packages, written in 30 different programming languages, with a total size of more than 313 MLOC, and more than 1,300,000 files. Figure 1 shows the relative importance of the top seven programming languages in this collection; they account for more than 90% of the files.
We measured every file in Debian using the SLOCCount tool by David A. Wheeler (available at http://www.dwheeler.com/sloccount). This tool measures the size of files in SLOC, which is the number of source code lines, excluding blanks and comments. Table 1 shows a summary of the statistical properties of this sample of files, for the overall sample and for the top seven programming languages. Approximately half of Debian is written in C, and almost three quarters of it is written in either C or C++. The large number of shell scripts is mainly due to scripts used for build and installation purposes; shell scripting is present in about half of the packages in Debian.
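The SLOC metric can be sketched as follows. This naive counter is our illustration of the metric's definition (non-blank, non-comment lines), handles only C-style comments, and is a simplification of what SLOCCount actually does.

```python
def sloc(source: str) -> int:
    """Count lines that are neither blank nor comments (C-style, naive)."""
    count, in_block = 0, False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block:                                   # inside /* ... */
            if "*/" in stripped:
                in_block = False
                stripped = stripped.split("*/", 1)[1].strip()
            else:
                continue
        if stripped.startswith("//") or not stripped:  # line comment or blank
            continue
        if "/*" in stripped and "*/" not in stripped:  # block comment opens
            in_block = True
            stripped = stripped.split("/*", 1)[0].strip()
        if stripped:
            count += 1
    return count

code = """
int main(void) {
    /* a block
       comment */
    return 0;   // a trailing comment still counts as code
}
"""
print(sloc(code))   # 3
```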
We divided the collection of files into 30 groups, one for each programming language, and another one for the overall sample, and estimated the shape of the statistical distribution of size. For this estimation, we plotted the Complementary Cumulative Distribution Function (CCDF) for the top seven programming languages. The Cumulative Distribution Function (CDF) is the integral of the density function, and its range goes from zero to one. The CCDF is the complementary of the CDF. All three functions (density function, CDF and CCDF) show the same information, although their properties are different. For instance, in logarithmic scale, a power law distribution appears as a straight line in a CCDF, while a lognormal appears as a curve. So the CCDF can be used to distinguish between power laws and other kinds of distributions.
In a CCDF, the double Pareto distribution appears as a curve with two straight segments, one at the low values side, and another one at the high values side. The difference between a lognormal and a power law at very low values is negligible, and therefore imperceptible in a plot. This means that in a CCDF plot the main difference between a lognormal and a double Pareto can only be spotted at high values. In any case, for our purposes, it is more important to focus on the high values side. A difference for very small files (e.g. < 10 SLOC) is harmless. However, a difference for large files (e.g. > 1000 SLOC) may have a great impact on the estimations.
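Computing an empirical CCDF is straightforward; this small sketch (our names) returns, for each sorted value, the fraction of the sample strictly above it, which is what one would plot on log-log axes to look for the straight power-law segment.

```python
def ccdf(samples):
    """Return (value, P(X > value)) pairs for the empirical CCDF.
    With ties, the last occurrence of a value carries the correct P(X > value)."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (n - i - 1) / n) for i, x in enumerate(xs)]

points = ccdf([1, 2, 2, 3, 10])
print(points)   # [(1, 0.8), (2, 0.6), (2, 0.4), (3, 0.2), (10, 0.0)]
```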
To estimate the shape of the distribution, we use the method proposed by Clauset et al. (Clauset et al., 2007); in particular, as implemented in the GNU R statistical software (R Development Core Team, 2009). They argue that power law data are often fitted using standard techniques like least squares regression, which are very sensitive to observations corresponding to very high values. For instance, a new observation at a very high value may greatly shift the scaling factor of a power law. The result is that the level of confidence for the parameters of the distribution obtained using those methods is very low.
Clauset et al. propose a different technique, based on maximum-likelihood, that allows for a goodness-of-fit test of the results. Furthermore, their technique can deal with data that deviate from the power law behavior for values lower than a certain threshold, providing the minimum value of the empirical data that belongs to a power law distribution. For double Pareto distributions, that value can be used to calculate the point where the data changes from lognormal to power law. That shifting value can be used to determine at what point the lognormal estimation model starts to deviate from the actual data, and to quantify the amount of that deviation.
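The core of that technique is a closed-form maximum-likelihood estimator. For a continuous power law with known $x_{\min}$, $\hat{\alpha} = 1 + n / \sum_i \ln(x_i / x_{\min})$. The sketch below (our names) implements just this estimator and checks it on synthetic power-law data; the full Clauset et al. method, which also scans $x_{\min}$ and runs a Kolmogorov-Smirnov goodness-of-fit test, is omitted.

```python
import math
import random

def alpha_mle(samples, x_min):
    """MLE of the power-law exponent for the tail x >= x_min."""
    tail = [x for x in samples if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic check: inverse-transform sampling from a pure power law
# with CCDF (x/x_min)^-(alpha-1) should recover roughly alpha.
random.seed(1)
alpha, x_min = 2.5, 1.0
data = [x_min * (1 - random.random()) ** (-1 / (alpha - 1))
        for _ in range(100_000)]
print(round(alpha_mle(data, x_min), 2))
```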
4 DETERMINING THE SHAPE OF THE SIZE DISTRIBUTION
As Table 1 shows, evidenced by the difference between the median and the average values, our data is highly right skewed. This is typical of lognormal or power law-like distributions. There exist many different methods to empirically determine the distribution of a data set. Here we use a combination of different statistical techniques, to show that in our case, the studied size distribution is a double Pareto one. We first show some results for the global sample, and later we will split our results by programming language.
Histograms are a simple tool that can help to find the distribution behind some data. When the width of the bars is decreased till nearly zero, we have a density function, that is a curve that resembles the shape of the histogram. Although a density function is only defined for continuous data, we can estimate it for our discrete data, and use it to determine the shape of the distribution. For the case of our sample, that function is shown in Figure 2. Note that the horizontal axis shows SLOC using a logarithmic scale.
Because our data are integers values and discrete, the estimation of the density function tries to interpolate the missing values, showing some “jumps” for
### Table 1: Properties of the sample of files. Values in SLOC.
<table>
<thead>
<tr>
<th>Lang</th>
<th>Num. of files</th>
<th>Max.</th>
<th>Avg.</th>
<th>Median</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>Overall</td>
<td>1,355,752</td>
<td>765,108</td>
<td>231</td>
<td>63</td>
<td>313,774,217</td>
</tr>
<tr>
<td>C</td>
<td>498,484</td>
<td>765,108</td>
<td>306</td>
<td>85</td>
<td>152,368,424</td>
</tr>
<tr>
<td>C++</td>
<td>332,652</td>
<td>172,487</td>
<td>193</td>
<td>58</td>
<td>64,267,501</td>
</tr>
<tr>
<td>Shell</td>
<td>66,107</td>
<td>46,497</td>
<td>409</td>
<td>62</td>
<td>27,038,314</td>
</tr>
<tr>
<td>Java</td>
<td>158,414</td>
<td>28,784</td>
<td>109</td>
<td>43</td>
<td>17,334,539</td>
</tr>
<tr>
<td>Python</td>
<td>63,590</td>
<td>65,538</td>
<td>156</td>
<td>59</td>
<td>9,888,159</td>
</tr>
<tr>
<td>Perl</td>
<td>48,055</td>
<td>58,164</td>
<td>188</td>
<td>69</td>
<td>9,037,066</td>
</tr>
<tr>
<td>Lisp</td>
<td>21,101</td>
<td>105,390</td>
<td>373</td>
<td>132</td>
<td>7,870,134</td>
</tr>
</tbody>
</table>
---
very low values. The most important feature is that it shows that the logarithm of size is symmetric, and that the shape of the curve somehow resembles a bell-shaped normal distribution, meaning that the data could belong to a lognormal distribution.
To determine whether the data is lognormal or not, we can compare its quantiles against the quantiles of a theoretical reference normal distribution. Such a comparison is done using a quantile-quantile plot. An example of such a plot is shown in Figure 3.
In that plot, if the points fall over a straight line, they belong to a lognormal distribution. If they do not, then the distribution must be of another kind. The shape shown in Figure 3 is similar to the profile of a double Pareto distribution. The main body of the data is lognormal, and so it appears as a straight line in the plot (the points fall over the dashed line that is shown as a reference). Very low and very high values deviate from linearity though. However, with only that plot, we cannot say whether the tails are power law or any other distribution.
Power laws appear as straight lines in a logarithmic-scale plot of the cumulative distribution function (or its complementary). Therefore, combining the previous plots with this new plot, we can fully characterize the shape of the distribution. Figure 4 shows the logarithmic-scale plot of the complementary cumulative distribution function for the overall sample. The main lognormal body clearly appears as a curve in the plot. The low values hypothetical power law cannot be observed, because at very low values the difference between a power law and a lognormal is negligible. The high values power law does not clearly appear either. It seems that the high values segment is straight, but at a first glance it cannot be distinguished from other shapes.
Using the methodology proposed by Clauset et al. (Clauset et al., 2007), we estimate the parameters of the power law distribution that better fit the
---
Figure 2: Density probability function for the overall sample. Horizontal axis shows SLOC in logarithmic scale.
Figure 3: Quantile-quantile plot of the overall sample. Logarithmic scale.
Figure 4: Complementary cumulative distribution function of the overall sample.
Table 2: Parameters of the power law tails for the top seven programming languages and the overall sample.
<table>
<thead>
<tr>
<th>Lang.</th>
<th>$\alpha$</th>
<th>$x_{\text{min}}$</th>
<th>$D$</th>
<th>$p$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Overall</td>
<td>2.73</td>
<td>1,072</td>
<td>0.01770</td>
<td>0.1726</td>
</tr>
<tr>
<td>C</td>
<td>2.87</td>
<td>1,820</td>
<td>0.01694</td>
<td>0.5349</td>
</tr>
<tr>
<td>C++</td>
<td>2.80</td>
<td>1,258</td>
<td>0.01202</td>
<td>0.5837</td>
</tr>
<tr>
<td>Shell</td>
<td>1.71</td>
<td>133</td>
<td>0.13721</td>
<td>$\sim 10^{-14}$</td>
</tr>
<tr>
<td>Java</td>
<td>3.17</td>
<td>846</td>
<td>0.01132</td>
<td>0.6260</td>
</tr>
<tr>
<td>Python</td>
<td>2.90</td>
<td>826</td>
<td>0.01752</td>
<td>0.2675</td>
</tr>
<tr>
<td>Perl</td>
<td>2.23</td>
<td>137</td>
<td>0.02750</td>
<td>$\sim 10^{-5}$</td>
</tr>
<tr>
<td>Lisp</td>
<td>2.73</td>
<td>1,270</td>
<td>0.01996</td>
<td>0.5229</td>
</tr>
</tbody>
</table>
Table 3: Parameters of the lognormal body for the top seven programming languages and the overall sample.
<table>
<thead>
<tr>
<th>Lang.</th>
<th>$\hat{\mu}$</th>
<th>$\hat{\sigma}$</th>
<th>$D$</th>
<th>$p$</th>
</tr>
</thead>
<tbody>
<tr>
<td>Overall</td>
<td>4.1262</td>
<td>1.4857</td>
<td>0.0436</td>
<td>0.0444</td>
</tr>
<tr>
<td>C</td>
<td>4.3421</td>
<td>1.5651</td>
<td>0.0448</td>
<td>0.0365</td>
</tr>
<tr>
<td>C++</td>
<td>4.1182</td>
<td>1.2598</td>
<td>0.0363</td>
<td>0.1447</td>
</tr>
<tr>
<td>Shell</td>
<td>2.8480</td>
<td>1.3118</td>
<td>0.0656</td>
<td>0.0005</td>
</tr>
<tr>
<td>Java</td>
<td>3.7477</td>
<td>1.2627</td>
<td>0.0411</td>
<td>0.0697</td>
</tr>
<tr>
<td>Python</td>
<td>3.9543</td>
<td>1.3972</td>
<td>0.0416</td>
<td>0.0340</td>
</tr>
<tr>
<td>Perl</td>
<td>3.5272</td>
<td>0.9533</td>
<td>0.0700</td>
<td>$\sim 10^{-16}$</td>
</tr>
<tr>
<td>Lisp</td>
<td>4.6485</td>
<td>1.3834</td>
<td>0.0381</td>
<td>0.1265</td>
</tr>
</tbody>
</table>
5 SIZE ESTIMATION USING THE LOGNORMAL AND DOUBLE PARETO DISTRIBUTIONS
Software size can be estimated using the shape of the distribution of source code file sizes, and knowing the number of files that are going to be part of the system. Size estimation can also be used for software effort estimation. Analytical formulas for the case of Java have even been proposed in the literature (Zhang et al., 2009). Those formulas are based on the fact that program size distribution is a lognormal, and use the CCDF for the estimations.
For small files, the difference between a lognormal or a double Pareto distribution is negligible. However, for large files, this difference may be high. This means that the proposed estimation models and formulas can be very biased for large files.
Figure 5 compares the CCDF of the files written in Lisp, with the double Pareto and lognormal estimations of the CCDF. The threshold value is shown with a vertical dashed line. For values higher than the threshold, the lognormal model underestimates the size of the system, and this bias grows with file size: the bigger the file, the larger the bias. Figure 6 shows the relative error of the lognormal and double Pareto models, for the case of the Lisp language, when predicting the size of large files. The CCDF of the fitted models were compared against the CCDF of the actual sizes of the files. The lognormal model always underestimates the size of files (negative relative error). The difference for large files is so large that it cannot even be calculated because of overflow errors.
The same pattern appears for the rest of the programming languages, as shown in Figure 7. The lognormal model has a permanent bias that underestimates the size of large files. The reason is that files above a certain threshold do not belong to that kind of distribution, but to the power law tail of a double Pareto distribution.
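The gap in the tail can be illustrated numerically. The sketch below compares the two CCDFs analytically, plugging in the paper's overall fits ($\hat{\mu} = 4.1262$, $\hat{\sigma} = 1.4857$ from Table 3; $\alpha = 2.73$, $x_{\min} = 1072$ from Table 2); this is only an illustration of the shape difference, not a reproduction of the paper's per-language figures.

```python
import math

def lognormal_ccdf(x, mu, sigma):
    """P(X > x) for a lognormal with parameters mu, sigma."""
    z = (math.log(x) - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

def pareto_ccdf(x, alpha, x_min):
    """P(X > x) for the power-law tail, x >= x_min."""
    return (x / x_min) ** (1 - alpha)

mu, sigma = 4.1262, 1.4857    # overall lognormal body fit (Table 3)
alpha, x_min = 2.73, 1072     # overall power-law tail fit (Table 2)

# Beyond the threshold, the lognormal falls off far faster: it predicts
# orders of magnitude fewer very large files than the power-law tail does.
for x in (2_000, 20_000, 200_000):
    print(x, lognormal_ccdf(x, mu, sigma), pareto_ccdf(x, alpha, x_min))
```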
Although large files are not as numerous (see Figure 8), their contribution to the overall size is as important as small files’ contribution. Figure 9 shows the relative importance of small and large files for all the programming languages that have been identified as double Pareto. The bottom part (dark) of every column is the contribution of small files, and the top part (clear) the contribution by large files. Small files are those files with a size lower than the threshold value shown in Table 2.
We do not include the Shell and Perl languages because the distribution is not a double Pareto, and therefore the threshold values do not make sense in those cases. The plot clearly shows that the relative contribution of large files is quite notable, even if large files are only a minority. In other words, if we compare Figures 8 and 9, even if the proportion of large files is small, their relative contribution to the overall size is much higher than that proportion.
6 SUMMARY OF RESULTS
After measuring the size of almost 1.4 million files, with more than 300 million SLOC in total, we find that the statistical distribution of source code file sizes is a double Pareto. Our findings hold for five of the top seven programming languages in our sample: C, C++, Java, Python and Lisp. The two cases that do not exhibit this distribution are Shell and Perl.
This finding is in conflict with previous studies (Zhang et al., 2009), which found that software size’s distribution is a lognormal, and which proposed software estimation models based on that finding. We show how lognormal-based models dangerously underestimate the size of large files.
Although the proportion of large files is very small (e.g. less than 3% of the files in the case of C), their relative contribution to the overall size of the system is much higher (in the case of C, large files account for more than 30% of the SLOC). Therefore, lognormal-based estimation models are underestimating the size of files that have the most impact in the overall size of the system.
7 THREATS TO VALIDITY
The main threat to the validity of the results and conclusions of this paper is the metric used for the study. SLOC is defined as lines of text, removing blanks and
comments. It is a measure of physical size, not logical size. For instance, if a function call in C is spawned over several lines, it will be counted as several SLOC; with a logical size metric, it would count only as one.
Measuring logical size is not straightforward. It depends on the programming language. Also, there might not be consensus about how to count some structures like variables declaration. Should variable declarations be counted as a logical line? And if there is an assignation together with the declaration, should be counted as one or two?
Coding style influences SLOC measuring. If the coding style of a developer is to write function calls in one line, and other developer spawns them over several lines, the second developer would appear as more productive for the same code.
The sample under study includes code originating from many different software projects. Therefore the different coding styles are balanced, and the net result is representative of the actual size of the files under study. However, when comparing different languages, such balance may not occur. Think for instance of Lisp. Lisp syntax is based on lists, that are represented by parentheses. Everything in Lisp is a list: function definitions, function calls, control structures, etc. This provokes an accumulation of parentheses at the end of code blocks. Sometimes, for clar-
However, this practice will make Lisp to appear with larger files than other languages. In general, the coding style can over represent the size of some programming languages, and this may affect the shape of the distribution.
Another threat to the validity of the results is the sample itself. We have exclusively selected open source, coming from only one distribution. Although the sample is very broad and large, and the only requirement for a project to be included is to be open source, the distribution practices when adding software to the collection may suppose a bias in the sample. This threat to the validity can be solved by studying other samples coming from different sources. It can easily be tested whether the distribution with these other sources is still a double Pareto, and whether the parameters of the distributions for the different programming languages are different to the values here. We have not distinguished between different domains of application either, this is to say, we include software that belongs to the Linux kernel, libraries, desktop applications, etc. under the same sample. There might be differences in the typical size of a file for different domains of applications. However, we believe that the size distribution remains regardless the domain of application. This threat to the validity can be addressed extending this study splitting the sample by domain of application.
Finally, some packages may contain automatically generated code. We have not tried to remove generated code in this study. In a similar study (Herraiz et al., 2007), the authors showed that the influence of very large generated files in the overall size and in the shape of the distribution was negligible for a similar sample, so we believe that in this case it does not affect the validity of the results.
8 FURTHER WORK
The size of the sample under study makes it possible to estimate some statistical properties of the population from where it was extracted. In particular, the parameters of the distribution appear to be related to the properties of the programming language. Those parameters could be used for software management purposes. In this section, we discuss and speculate about some of the possible implications of those parameters, which is clearly a line of work that deserves further research.
The first interesting point about the distribution of file sizes is the difference between the two regions, lognormal and power-law, within that distribution. One of the parameters of the distribution, $x_{\text{min}}$, divides the files among small and large files. But this is much more than a label: small files belong to a lognormal distribution and large files to a power law distribution. In other words, that threshold value separates files that are of a different nature, small and large files probably will exhibit different behavior in the maintenance and development processes. One explanation could be that large files are not manageable by developers, so they either are split or abandoned. If they are split, the original file will appear as one or more small files. If they are abandoned, then instead of an active maintenance process, they are probably subject of only corrective maintenance.
This transition from the small to large file is unconscious, developers do not split files on purpose when they get large. However, this unconscious process is reflected as a statistical property of the system. This means that the value of this transition point can be used as a warning threshold for software maintenance. If a file gets larger than the threshold, it is likely that it will need splitting or it will become unmanageable.
To verify these claims we need to obtain historical data about the life of files. We must obtain the size of files after every change during their life, following possible renames. If we assume that files start empty (or with very small sizes), with that historical information we can find out how files grow over the threshold size and change their nature. We can also observe how the parameters of the distribution of size changes over the history of the project. For instance, the double Pareto distribution might be a characteristic of only mature projects. Another point that de-
serves further work is why shell and Perl do not exhibit this behavior.
The parameters of the distribution seem to be related to the programming language. Table 4 summarizes the median, threshold sizes and scaling parameters for the five languages with a double Pareto distribution. We use medians and not average values because the distribution of size is highly right skewed, and very large files may easily distort the average value; the median value is more robust to very large files.
The higher thresholds correspond to C, C++, and Lisp. These languages also have the highest median. This is probably due to the expressiveness of the language (in terms of number of required lines of code per unit of functionality). C and Lisp are the least expressive languages. In C for instance, complex data structures are not available by default in the language, and they have to be implemented by the developer, or reused from a library. In Lisp, because of its simple syntax, it probably requires more lines of code to perform the same tasks that in other languages. The median size of Lisp files seem to support this argument.
Java and Python have the lowest thresholds. The case of Java is interesting because the median size of a file in Java is much lower than in C++, in spite of the similarity between the two programming languages. This difference is also present in the threshold values. Again, the reason may be in the rich standard libraries that accompany Java and that are not present in C++. The same can be said if we compare C++ and Python. These results are similar to the tables comparing function points and lines of code, that were firstly reported by Jones (Jones, 1995): higher level (and more expressive) languages have lower number of lines of code per function point.
In short, these threshold values can be understood as a measurement of the maximum amount of information that can be comprehended by a developer. Above that threshold, programmers decide to split the file, or just abandon it because it turns unmanageable. The values are different for different programming languages because the same task will require more or less lines depending on the language, but they represent the same quantity or limit value. There are of course other factors that can influence program comprehension (Woodfield et al., 1981), but all other factors being the same, we believe that these parameters can be related to comprehension effort for different programming languages.
The scaling parameter, $\alpha$, is also related to the expressiveness. Its value is related to the slope of CCDF in the large files side. Lower values of $\alpha$ will lead to higher file sizes in that section. If we sort by its value all the programming languages (see Table 4), the most expressive language is Java, closely followed by Python. The least expressive language is Lisp. Lisp is a simple language in terms of syntax, and it probably requires to write more lines of code than in other languages to perform similar tasks. Therefore, the scaling parameter can also be understood as a measure of the expressiveness of the programming language.
In short, the plan that we plan to explore as further work are the following:
- Analysis of the evolution of files over time, to find out how the threshold value is related to the evolution of files.
- Extend the study to large samples of other programming languages, and divide the analysis by domain of application, to determine whether the features of the language are related to the values of the parameters of the double Pareto distribution, and whether different domains exhibit different behaviors.
- Why some languages do not show a double Pareto distribution?. How the evolution of files of systems written in these languages differ from double Pareto languages?
9 CONCLUSIONS
The distribution of software source code size follows a double Pareto. We found the double pareto characteristic to hold in five of the top seven programming languages of Debian. The languages whose size follows a double Pareto are C, C++, Java, Python, and Lisp. However, Shell and Perl behave differently.
Shell and Perl are scripting languages. In the Debian GNU/Linux distribution, shell and Perl are popular languages for package maintenance. The package maintenance tasks are quite repetitive, and they are probably the same for a broad range of different packages. So it is probably not difficult to find scripts as part of the packages to make the packaging process easier. Scripts are different to other kind of pro-
grams: they are probably less complex and smaller. If the difference is due to this cause, it would mean that double Pareto distributions are the signature of the programming process, and that different programming activities (scripting, complex programs coding) can be identified by different statistical distributions of software size.
In any case, the double Pareto distribution already has important practical implications for software estimation. Previously proposed models (Zhang et al., 2009) are based on the lognormal distribution, that consistently and dangerously underestimate the size of large files. It is true that large files are only a minority in software projects, the so-called small class/file phenomenon, however they account for a proportion of the size as important as in the case of small files. Therefore, using the lognormal assumption leads to an underestimation of the size of large files. This underestimation will have a great impact on the accuracy of the estimation of the size of the overall system.
REFERENCES
|
{"Source-Url": "http://www.scitepress.org/Papers/2011/34262/34262.pdf", "len_cl100k_base": 7664, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 32768, "total-output-tokens": 9048, "length": "2e12", "weborganizer": {"__label__adult": 0.0003216266632080078, "__label__art_design": 0.00024056434631347656, "__label__crime_law": 0.0002617835998535156, "__label__education_jobs": 0.0006313323974609375, "__label__entertainment": 5.6743621826171875e-05, "__label__fashion_beauty": 0.0001233816146850586, "__label__finance_business": 0.00027942657470703125, "__label__food_dining": 0.00028824806213378906, "__label__games": 0.00044345855712890625, "__label__hardware": 0.0006031990051269531, "__label__health": 0.0003998279571533203, "__label__history": 0.00020897388458251953, "__label__home_hobbies": 6.473064422607422e-05, "__label__industrial": 0.00023233890533447263, "__label__literature": 0.00028824806213378906, "__label__politics": 0.0001856088638305664, "__label__religion": 0.0002989768981933594, "__label__science_tech": 0.01377105712890625, "__label__social_life": 8.374452590942383e-05, "__label__software": 0.00730133056640625, "__label__software_dev": 0.97314453125, "__label__sports_fitness": 0.00020992755889892575, "__label__transportation": 0.00029206275939941406, "__label__travel": 0.00015544891357421875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36310, 0.08314]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36310, 0.53892]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36310, 0.89813]], "google_gemma-3-12b-it_contains_pii": [[0, 3748, false], [3748, 8708, null], [8708, 13281, null], [13281, 16145, null], [16145, 18573, null], [18573, 21701, null], [21701, 23082, null], [23082, 27356, null], [27356, 31922, null], [31922, 36310, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 3748, true], [3748, 8708, null], [8708, 13281, null], [13281, 16145, null], [16145, 18573, null], [18573, 21701, null], [21701, 23082, null], [23082, 27356, null], [27356, 31922, null], [31922, 36310, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36310, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36310, null]], "pdf_page_numbers": [[0, 3748, 1], [3748, 8708, 2], [8708, 13281, 3], [13281, 16145, 4], [16145, 18573, 5], [18573, 21701, 6], [21701, 23082, 7], [23082, 27356, 8], [27356, 31922, 9], [31922, 36310, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36310, 0.19737]]}
|
olmocr_science_pdfs
|
2024-11-27
|
2024-11-27
|
626e6c4983fb4d4534e2249720771c5b1c33d2c7
|
Getting Started with React Native
Web developers who want to develop native mobile applications face a high barrier to entry because they are forced to learn platform-specific languages and frameworks. Numerous hybrid technologies have tried to simplify this process, but have failed to achieve the performance and appearance that users expect.
This book will show you all the advantages of true native development that React Native has without the steep learning curve, leveraging the knowledge you already have. We do this by getting you up and running quickly with a sample application. We'll introduce you to the fundamentals of creating components and explain how React Native works behind the scenes. Once you have established a solid foundation, you will dive headfirst into developing a real-world application from start to finish. Along the way, we will demonstrate how to create multiple screens and navigate between them, use layout and style native UI components, and access native APIs, such as local storage and geolocation. Finally, we tackle the advanced topic of Native modules, which demonstrates that there are truly no limits to what you can do with React Native.
Who this book is written for
This book is for web developers who want to learn to build fast, good-looking, native mobile applications using the skills they already have. If you already have some JavaScript knowledge or are using React on the web, then you will be able to quickly get up and running with React Native for iOS and Android.
What you will learn from this book
- Set up the React Native environment on both devices and emulators
- Gain an in-depth understanding of how React Native works behind the scenes
- Write your own custom native UI components
- Learn the ins and outs of screen navigation
- Master the art of layout and styles
- Work with device-exclusive data such as geolocation
- Integrate native modules in Objective-C and Java that interact with JavaScript
- Test and deploy your application for a production-ready environment
Ethan Holmes
Tom Bray
Getting Started with React Native
Learn to build modern native iOS and Android applications using JavaScript and the incredible power of React
In this package, you will find:
- The author’s biography
- A preview chapter from the book, Chapter 3 *Beginning with the Example Application*
- A synopsis of the book’s content
- More information on *Getting Started with React Native*
About the Authors
**Ethan Holmes** is a Software Engineer from Vancouver, BC, Canada. He obtained a B.Sc. in computer science from Simon Fraser University. He has primarily been a full-stack web developer working and creating applications for start-ups in the Silicon Beach area. Currently, he is stationed at Cargomatic, disrupting the freight industry. After learning React for the web, learning React Native complemented the skills he obtained as a web developer and allowed him to quickly make the transition to mobile development.
You can follow him on Twitter at @sherclockholmes.
Tom Bray has been developing for the web since the browser wars of the late 90s when DHTML was the buzzword of the day. Creating great user experiences using the cutting edge technologies of the day has always been his passion, from Flash to Flex to Adobe AIR to React, and React Native.
He has created sophisticated software that has been used by large companies, such as Adobe, MySpace, Cisco, Informatica, and Dell; it has been a key contributor to numerous start-ups where he has worn many hats and gained a broad skill set. He currently serves as the Principal Software Architect with Cargomatic where he has designed a system to orchestrate the movement of ocean freight to and from America's ports—a solution that leveraged React Native to assign work to truck drivers and track their progress.
You can follow him on Twitter at @tombray.
Preface
Why are there so many alternatives to using native languages to write mobile apps? And, more importantly, why does the world need yet another approach? Obviously, there must be a problem that hasn't been solved.
Developers want to use just one language to develop for both iOS and Android. Web developers want to reuse their existing JavaScript knowledge and leverage the web frameworks they already know and love. This is why Apache Cordova (PhoneGap) exists. By wrapping a web browser in a native app, developers can package their HTML, CSS, and JavaScript applications in a native shell, but why aren't all mobile applications based on Cordova?
Users expect native performance, with a native user experience. Hybrid apps don't solve the user's problems, they solve the developer's problems. We need a technology that can do both!
React Native changes the game with applications that are truly native. It doesn't use a WebView or transpile JavaScript to native languages. Think of it as native UI components being controlled by a JavaScript brain. The result is a user experience that is indistinguishable from any other native app, and a developer experience that leverages the amazing productivity benefits of JavaScript and the React framework.
Armed with React Native, you'll finally be able to leverage your web development skills in the mobile world without sacrificing quality or performance. It's the Holy Grail, and we're excited to show you what React Native can do and to see what amazing apps you create with it!
What this book covers
Chapter 1, Exploring the Sample Application, is a step-by-step guide to running the sample iOS Application.
Chapter 2, Understanding React Native Fundamentals, covers the basics of React Native and gives brief insight into how the Virtual DOM improves performance. Then there is an introduction to props and state by creating your first component.
Chapter 3, Beginning with the Example Application, begins with generating the project files for iOS and Android. Then it continues with creating the first screens and adding navigation to the application.
Chapter 4, Working with Styles and Layout, covers the ins and outs of laying out and styling content in React Native. Learn how to apply React CSS and Flexbox to your components.
Chapter 5, Displaying and Saving Data, uses ListViews to display data and save notes using the AsyncStorage API.
Chapter 6, Working with Geolocation and Maps, discusses the geolocation API and Map Component.
Chapter 7, Integrating Native Modules, focuses on integrating third party native modules from the React Native community into your applications.
Chapter 8, Releasing the Application, goes through the release process for iOS and Android so you are ready to submit an application to the AppStore or the Google Play Store.
Now that you have an idea about how React Native works and how to create components, let's create your first React Native application. Throughout this book, we will be developing a note-taking application which we'll call ReactNotes. By the end of the book, you'll have a fully featured application that allows you to create notes, save them to a device, view the list of the notes you've saved, take pictures with the device and attach them to your notes, and much more.
In this chapter, we'll build the skeleton of the application, create a HomeScreen and NoteScreen. We'll also add navigation that allows you to switch between the screens, and along the way you'll learn about creating your own components and handling events.
The topics that we will cover in this chapter are:
- How to generate iOS and Android project files
- Examining the React Native starter template
- Creating the first component, SimpleButton
- Debugging with Chrome Developer Tools
- Exploring navigation and transitioning between screens
- Developing the UI to create notes
Generating the projects
To start building our note taking application for iOS, we are going to need a couple of command-line tools.
- React Native 0.14.2 requires Node.js v4+, we are going to use v5.0.0; visit https://nodejs.org for more information (we recommend managing different node versions with NVM https://github.com/creationix/nvm)
- Install the latest version of NPM from https://www.npmjs.com/
Great, now that we have these tools we can install the react-native-cli. The react-native-cli exposes an interface that does all the work of setting up a new React Native project for us:
1. To install react-native-cli, use the npm command:
```bash
npm install -g react-native-cli
```
2. Next, we are going to generate a new React Native project called ReactNotes using the cli and the react-native init command. The output of the command looks similar to the following:
```bash
$ react-native init ReactNotes
```
This will walk you through the creation of a new React Native project in/Users/ethanholmes/ReactNotes.
3. Set up a new React Native app in /Users/ethanholmes/ReactNotes:
```bash
create .flowconfig
create .gitignore
create .watchmanconfig
create index.ios.js
create index.android.js
create ios/main.jsbundle
create ios/ReactNotes/AppDelegate.h
create ios/ReactNotes/AppDelegate.m
create ios/ReactNotes/Base.lproj/LaunchScreen.xib
create ios/ReactNotes/Images.xcassets/AppIcon.
```
appiconset/Contents json
create ios/ReactNotes/Info.plist
create ios/ReactNotes/main.m
create ios/ReactNotesTests/ReactNotesTests.m
create ios/ReactNotesTests/Info.plist
create ios/ReactNotes.xcodeproj/project.pbxproj
create ios/ReactNotes.xcodeproj/xcshareddata/xcschemes/
ReactNotes.xcscheme
To run your app on iOS:
Open /Users/ethanholmes/ReactNotes/ios/ReactNotes.xcodeproj in Xcode
Hit Run button
To run your app on Android:
Have an Android emulator running, or a device connected
cd /Users/ethanholmes/ReactNotes
react-native run-android
The root directory of the Xcode project is generated in the ReactNotes folder, with the same name as we gave react-native-cli when we ran the command. Follow the steps at the end of the React Native set up to see what it produces.
Xcode and the iOS simulator
We are going to start by running the starter template in the iOS simulator through Xcode:
1. In Xcode, select File | Open and navigate to the ReactNotes folder.
2. Open the ReactNotes.xcworkspace file, as shown in the following figure:
3. Click on Run (or Cmd + R) to run the application in the iOS simulator, the following screenshot will be shown:
Chapter 3
Just like that, we already have the React Native template up and running on the iOS simulator!
Welcome to React Native!
To get started, edit index.ios.js
Press Cmd+R to reload,
Cmd+D or shake for dev menu
Just like that, we already have the React Native template up and running on the iOS simulator!
The Android SDK and emulator
Facebook has a detailed step by step guide set up on Android SDK and emulator. You can access the React Native Docs at https://facebook.github.io/react-native/docs/android-setup.html. In this section, we will only cover the basics of running the application on the Android emulator.
When running the project in the iOS simulator, we can run it from the Xcode IDE. Android, on the other hand, doesn't require any particular IDE and can be launched directly from the command line.
To install the Android apk to the emulator, use the following command:
```bash
$ react-native run-android
```
The following screenshot will be generated:
Let's start by modifying the contents of the starter template and display a different message.

Modifying the React Native starter template
Open `index.ios.js`, located in the root directory, in the text editor of your choice. Here is the code that `react-native-cli` generated:
```javascript
/**
* Sample React Native App
* https://github.com/facebook/react-native
*/
'use strict';
var React = require('react-native');
var {
AppRegistry,
StyleSheet,
Text,
View,
} = React;
var ReactNotes = React.createClass({
render: function() {
return (
<View style={styles.container}>
<Text style={styles.welcome}>
Welcome to React Native!
</Text>
<Text style={styles.instructions}>
To get started, edit `index.ios.js`
</Text>
<Text style={styles.instructions}>
Press Cmd+R to reload,
Cmd+D or shake for dev menu
</Text>
</View>
);
}
});
var styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
backgroundColor: '#F5FCFF',
},
});
```
### Beginning with the Example Application
```javascript
welcome: {
fontSize: 20,
textAlign: 'center',
margin: 10,
},
instructions: {
textAlign: 'center',
color: '#333333',
marginBottom: 5,
},
});
AppRegistry.registerComponent('ReactNotes', () => ReactNotes);
```
Although `react-native-cli` generates the starter template using the ES5 `createClass`, we will be creating our components using ES6 classes.
A lot of things are included in here, but bear with us as we break it down for you. If we take a closer look at the render method, we can see the familiar `View` and `Text` components that we encountered in the previous chapter. Note how the `index.js` file is a component itself (`ReactNotes`). Change the value in line 30 to `Welcome to React Notes!`. Save it and then press `Cmd + R` from the simulator or, in the top menu, navigate to **Hardware | Shake Gesture** and select **Reload** from the pop-up action sheet. The text on screen re-renders to show the text value we just modified! We are no longer constrained to wait for the Xcode to recompile in order to see our changes as we can reload straight from the simulator. Continue making changes and reload it in the simulator to get a feel for the work flow.
### Structuring the application
It's time to add a little interactivity to our application. You can begin by adding a simple button component to the screen that is touchable. In the root directory, create a folder called `App` and another folder inside the `App` folder called `Components`. In the `Components` directory, add a file named `SimpleButton.js`. This will be the directory in which we store and reference the components we create.
Note that the React Native code created in this chapter will work for both iOS and Android. Simply replace `index.ios.js` with `index.android.js` if you are interested in Android only. The screenshots and instructions will be mainly for the iOS simulator.
Creating the SimpleButton component
Let's start by rendering some text to the screen and importing it into our index.ios.js file. In SimpleButton.js, add:
```javascript
import React, {
Text,
View
} from 'react-native';
export default class SimpleButton extends React.Component {
render () {
return (<View>
<Text>Simple Button</Text>
</View>);
}
}
```
ES6 de-structuring assignment var [a, b] = [1, 2]; is used to extract Text and View from the React Native module.
We are going to include our newly created component in index.ios.js and simplify it to ES6 syntax:
```javascript
import React, {
AppRegistry,
StyleSheet,
View
} from 'react-native';
import SimpleButton from './App/Components/SimpleButton';
class ReactNotes extends React.Component {
render () {
return (<View style={styles.container}>
<SimpleButton />
</View>);
}
}
```
Beginning with the Example Application
```javascript
var styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
}
});
AppRegistry.registerComponent('ReactNotes', () => ReactNotes);
```
The output for the preceding code is:
We're off to a good start; it's time to add some interactivity to our button. In SimpleButton.js, add the TouchableOpacity component to the destructuring assignment. TouchableHighlight, TouchableOpacity, and TouchableWithoutFeedback are similar components that respond to touches, and it takes an onPress prop for a function to react to the touch. Wrap the existing code in the render function with the TouchableOpacity component:
```javascript
import React, {
Text,
TouchableOpacity,
View
} from 'react-native';
export default class SimpleButton extends React.Component {
render () {
return (
<TouchableOpacity onPress={() => console.log('Pressed!')}>
<View>
<Text>Simple Button</Text>
</View>
</TouchableOpacity>
);
}
}
```
Beginning with the Example Application
Go ahead and try tapping (or clicking) on the text now, you should be able to see that the opacity of the text decreases as you press it. But where has our `console.log(...)` output gone? Open the Developer menu (Hardware | Shake Gesture) and select Debug in Chrome. This opens a Chrome Window at localhost:8081/debugger-ui for debugging, as shown in the following screenshot:
Lo and behold, here is the console log that we specified in our `SimpleButton` component. Behind the scenes, the JavaScript code is being run from inside the Chrome tab and loaded onto the mobile device on startup or reload. From here, you have access to all the Chrome Developer Tools you will normally use, including the addition of break points.
Navigation
Now, it's time to make our application more actionable. Let's begin by transforming our `SimpleButton` into a Create Note button. When the user clicks on the Create Note button, it transitions them to another screen to create notes. To do this, we need our button to be able to accept a function via props from `index.ios.js` to activate the transition. We will add some custom text as well for extra flair:
```javascript
import React, {
Text,
TouchableOpacity,
View
} from 'react-native';
export default class SimpleButton extends React.Component {
render () {
return (
<TouchableOpacity onPress={this.props.onPress}>
<View>
<Text>{this.props.customText || 'Simple Button'}</Text>
</View>
</TouchableOpacity>
);
}
}
SimpleButton.propTypes = {
onPress: React.PropTypes.func.isRequired,
customText: React.PropTypes.string
};
```
Now, we have extended our SimpleButton component to be reusable with minimal changes. We can always pass different functions through the onPress prop and add custom text if we choose. This is all that we need to modify in SimpleButton; now to include the transition functionality in our index.ios.js file.
The following image shows the validating props revisited page:
Remember propTypes from the previous chapter? If we forget to pass the onPress prop, the console will log a warning reminding us to pass it. Note that there is no warning for customText since it was not set to isRequired.
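Conceptually, propTypes validation just walks the declared prop spec and warns about any required entry that is missing. A hand-rolled sketch of the idea (this is not React's actual implementation; the `propSpec` object and `validateProps` helper are names invented here):

```javascript
// Minimal sketch of propTypes-style checking: each entry declares a type
// and whether it is required, mirroring React.PropTypes.func.isRequired etc.
var propSpec = {
  onPress: { type: 'function', required: true },
  customText: { type: 'string', required: false }
};

function validateProps(spec, props) {
  var warnings = [];
  Object.keys(spec).forEach(function (name) {
    var rule = spec[name];
    if (props[name] === undefined) {
      if (rule.required) {
        warnings.push('Warning: required prop `' + name + '` was not specified.');
      }
      return; // optional and absent: nothing to check
    }
    if (typeof props[name] !== rule.type) {
      warnings.push('Warning: prop `' + name + '` should be a ' + rule.type + '.');
    }
  });
  return warnings;
}

// Forgetting onPress produces a warning; omitting customText does not.
console.log(validateProps(propSpec, { customText: 'Create Note' }));
console.log(validateProps(propSpec, { onPress: function () {}, customText: 'Create Note' }));
```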
The Navigator component
The Navigator component is React Native's reimplementation of UINavigationController, used to manage various screens. Similar to a stack, you can push, pop, and replace routes onto the Navigator. It is fully customizable on both iOS and Android, which we will cover in the next chapter. Import the Navigator into index.ios.js and replace the contents of the render method with:
```javascript
import React, {
AppRegistry,
Navigator,
StyleSheet,
View
} from 'react-native';
render () {
return (
<Navigator
initialRoute={{name: 'home'}}
renderScene={this.renderScene}
/>
);
}
```
Navigator receives a prop called `initialRoute` that accepts an object to be the first route to be put on the stack. The route object can contain any attribute that you need to pass to the screen components. All we need for now is the name of the screen we want to transition to. Next, we need to create the function to pass to the `renderScene` prop. In the ReactNotes component, we are going to create a function that takes `route` and `navigator` as parameters, as shown:
```javascript
class ReactNotes extends React.Component {
renderScene (route, navigator) {
...
}
render () {
...
}
}
```
When we first load our application, the parameter route will be the object we pass into initialRoute. Using a switch statement and looking at the values of route.name allows us to choose the component we want to render:
```javascript
renderScene (route, navigator) {
switch (route.name) {
case 'home':
return (
<View style={styles.container}>
<SimpleButton
onPress={() => console.log('Pressed!')}
customText='Create Note'
/>
</View>
);
case 'createNote':
}
}
```
Here, under the home case, you can see our slightly modified code from the original render method in ReactNotes; we have included the onPress and customText props we created earlier. You can add another component, App/Components/NoteScreen.js; this screen will contain the functionality to create a new note:
```javascript
import React, { StyleSheet, Text, View } from 'react-native';
export default class NoteScreen extends React.Component {
render () {
return (
<View style={styles.container}>
<Text>Create Note Screen!</Text>
</View>
);
}
}
var styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
}
});
```
For now, we are only going to use this screen when we press the Create Note button. In the onPress prop arrow function, we are going to push a new route onto the stack using navigator.push:
```javascript
import NoteScreen from './App/Components/NoteScreen';
class ReactNotes extends React.Component {
renderScene (route, navigator) {
switch (route.name) {
case 'home':
return (
<View style={styles.container}>
<SimpleButton
onPress={() => {
navigator.push({
name: 'createNote'
});
}}
customText='Create Note'
/>
</View>
);
case 'createNote':
return (
<NoteScreen />
);
}
}
}
```
Note that push also takes a regular JavaScript object, so we need to include the name attribute for our NoteScreen; reload the application in the simulator and press on the Create Note button. A smooth animated transition between the two screens will occur without adding any extra code.
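Under the hood, the Navigator's route handling is plain stack semantics: push adds a route and shows it, pop removes the top route and reveals the one underneath. A plain-JavaScript sketch of just that behavior (this models the stack only, not the animated transitions; `RouteStack` is a name invented here):

```javascript
// A tiny model of Navigator's route stack: the visible screen is whatever
// route object sits on top of the stack.
function RouteStack(initialRoute) {
  this.routes = [initialRoute];
}
RouteStack.prototype.push = function (route) {
  this.routes.push(route);
};
RouteStack.prototype.pop = function () {
  if (this.routes.length > 1) {
    this.routes.pop(); // never pop the initial route
  }
};
RouteStack.prototype.current = function () {
  return this.routes[this.routes.length - 1];
};

var nav = new RouteStack({ name: 'home' });
nav.push({ name: 'createNote' });
console.log(nav.current().name); // createNote
nav.pop();
console.log(nav.current().name); // home
```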
**Navigator.NavigationBar**
At this point, you must be thinking: a button is OK, but is there a better, more native way to do navigation? Of course; as a part of the Navigator component, you can pass a navigationBar prop to add a persistent top navigation bar across every screen. The Navigator.NavigationBar is a subcomponent that accepts an object that defines the left and right buttons, a title, and styles (although we are going to leave it unstyled until the next chapter). Modify the ReactNotes render function to include the navigationBar, as shown:
```javascript
render () {
return (
<Navigator
initialRoute={{name: 'home'}}
renderScene={this.renderScene}
navigationBar={
<Navigator.NavigationBar
routeMapper={NavigationBarRouteMapper}
/>
}
/>
);
}
```
The **routeMapper** prop accepts an object containing functions for the **LeftButton**, **RightButton**, and **Title** attributes. Let’s insert this object after the imports at the top of index.ios.js:
```javascript
var NavigationBarRouteMapper = {
LeftButton: function(route, navigator, index, navState) {
...
},
RightButton: function(route, navigator, index, navState) {
...
},
Title: function(route, navigator, index, navState) {
...
}
};
```
Advancing the flow of our application to the **CreateNote** screen will require displaying a right-hand button in the navigator bar. Luckily, we already have our simple button set up to push the state onto the navigator. In the **RightButton** function, add:
```javascript
var NavigationBarRouteMapper = {
...
RightButton: function(route, navigator, index, navState) {
switch (route.name) {
case 'home':
return (
<SimpleButton
onPress={() => {
navigator.push({
name: 'createNote'
});
}}
customText='Create Note'/>
);
default:
return null;
}
}
};
```
Similar to our previous renderScene method, we can switch on the value of route.name. The default expression in the switch statement is there to ensure that different screens do not return a button unless we include them. Let's also go ahead and add a LeftButton to the NavigationBar when it's on the NoteScreen to return to the home screen:

```javascript
var NavigationBarRouteMapper = {
LeftButton: function(route, navigator, index, navState) {
switch (route.name) {
case 'createNote':
return (<SimpleButton
onPress={() => navigator.pop()}
customText='Back'/>
);
default:
return null;
}
},
...,
};
```

The navigator.pop() will remove the route on the top of the stack, thus returning us to our original view. Finally, to add a title, we do the exact same thing in the Title attribute's function:

```javascript
var NavigationBarRouteMapper = {
...,
Title: function(route, navigator, index, navState) {
switch (route.name) {
case 'home':
return (
<Text>React Notes</Text>
);
case 'createNote':
return (
<Text>Create Note</Text>
);
}
}
};
```
Now, let's update the original renderScene function to get rid of the button and include the home screen as a component. Create a new component called HomeScreen; the contents of this screen won't matter much, as we will come back to it later:
```javascript
import React, {
StyleSheet,
Text,
View
} from 'react-native';
export default class HomeScreen extends React.Component {
render () {
return (
<View style={styles.container}>
<Text>Home</Text>
</View>
);
}
}
var styles = StyleSheet.create(
{
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
}
});
```

Then import it into index.ios.js or index.android.js:

```javascript
import HomeScreen from './App/Components/HomeScreen';
...

class ReactNotes extends React.Component {
renderScene (route, navigator) {
switch (route.name) {
case 'home':
return (<HomeScreen />);
case 'createNote':
return (<NoteScreen />);
}
...
}
}
```
That's it! Reload and take a look at how the static navigation bar persists across each route:
[Screenshot: the iOS simulator showing the navigation bar with the React Notes title, the Create Note button on the right, and the Home screen content below]
For a more detailed guide on Navigator, check out the React Native documentation at [https://facebook.github.io/react-native/docs/navigator.html](https://facebook.github.io/react-native/docs/navigator.html). We now have the proper infrastructure to go ahead and start adding the create note functionality to our application.
The NoteScreen – first pass
Now that we have a NoteScreen and can navigate to it, let's start making it useful. We'll need to add some TextInput components, one for the title of the note and one to capture the body. We'll want to automatically set focus on the TextInput for the title, so the user can start typing right away. We'll need to listen to events on the TextInput components, so we can keep a track of what the user has typed by updating the state. We'd also like to know when the user has finished editing the title of the note, so that we can automatically set focus on the TextInput for the body.
First, let's add the TextInput component to our list of dependencies and remove the Text component since we no longer need it:
```javascript
import React, {
StyleSheet,
TextInput,
View
} from 'react-native';
```
Before we add the TextInput components to the View, let's get a few style updates out of the way:
```javascript
var styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
marginTop: 64
},
title: {
height: 40
},
body: {
flex: 1
}
});
```
Note that we've added a marginTop: 64 to the container. This is important because we want to make sure that the NavigationBar doesn't accidentally intercept the onPress events we want our TextInput to receive. We've also added styles for each of the TextInput we're about to add. We'll talk more about styles in detail in Chapter 4, Working with Styles and Layout.
Now, in our render function, let's replace the `Text` component with two `TextInput` components, such as:
```javascript
render () {
return (
<View style={styles.container}>
<TextInput placeholder="Untitled"
style={styles.title}/>
<TextInput multiline={true} placeholder="Start typing" style={styles.body}/>
</View>
)
}
```
Before we try this out, notice that the `TextInput` component has a placeholder property that allows us to tell the user what the `TextInput` is for without having to take up additional screen real estate by labeling our form fields. I've also specified `multiline={true}` on the second `TextInput` so the user can add as much text as they want.
Now let's refresh the application in the simulator and you should see something like this:
You should be able to click into TextInput and start typing. If you'd like to use the on-screen keyboard available in the simulator, you can press CMD+K / CTRL+K.
Let's improve the user experience a little bit by making the title TextInput focus automatically and show the keyboard when the user navigates to the NoteScreen:
```jsx
<TextInput
ref="title"
autoFocus={true}
placeholder="Untitled"
style={styles.title}
/>
```
To be even more user friendly, let's listen for the event that tells us the user has finished editing the title and automatically set focus on the body TextInput. To do that we'll need to make a slight change to the body TextInput so that we can refer to it in our event handler:
```jsx
<TextInput
ref="body"
multiline={true}
placeholder="Start typing"
style={styles.body}
/>
```
Notice the `ref="body"`. Any React component can be given a `ref` so that it can be referenced in your JavaScript code. Now, in the title TextInput, we can add an onEndEditing event handler that sets focus on the body TextInput:
```jsx
<TextInput
autoFocus={true}
placeholder="Untitled"
style={styles.title}
onEndEditing={(text) => {this.refs.body.focus()}}
/>
```
Avoid using refs to set and get values on your components! That's what `state` is for and we'll learn all about state in Chapter 5, Displaying and Saving Data.
Now when you refresh the application in the simulator and navigate to the NoteScreen, you will see that the title TextInput has focus and you should be able to type something. Press Enter and see the focus automatically switch to the body and start typing there as well. If you're not seeing the on-screen keyboard when you try this, press CMD + K / CTRL + K and try again.
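The focus hand-off described above is a small state machine: the title starts focused (autoFocus), and ending its edit moves focus to the body. A plain-JavaScript sketch of just that logic (the `Field` object and `endEditingTitle` handler are names invented here; the real work is done by the TextInput refs):

```javascript
// A minimal model of the focus hand-off between the two TextInputs:
// each field records whether it has focus.
function Field(name) {
  this.name = name;
  this.focused = false;
}
Field.prototype.focus = function () { this.focused = true; };
Field.prototype.blur = function () { this.focused = false; };

var title = new Field('title');
var body = new Field('body');

// Equivalent of autoFocus={true} on the title input.
title.focus();

// Equivalent of the onEndEditing handler: blur the title, focus the body.
function endEditingTitle() {
  title.blur();
  body.focus();
}

endEditingTitle();
console.log(body.focused); // true
```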
Summary
In this chapter, we have created the skeleton of our ReactNotes application, walked you through how to create a new project, created Views and custom components, navigated between the HomeScreen and NoteScreen, and debugged your application.
You now have a solid foundation for all of the topics we'll introduce throughout the rest of the book. However, there are two big problems with this application: it's not pretty and it doesn't do anything! In the next two chapters, we'll solve both of those problems and you'll be well on your way to mastering React Native!
Where to buy this book
You can buy Getting Started with React Native from the Packt Publishing website.
Alternatively, you can buy the book from Amazon, BN.com, Computer Manuals and most internet book retailers.
SLA Management in Federated Environments
P. Bhoj, S. Singhal
Hewlett Packard Laboratories, 1501 Page Mill Road
Palo Alto, CA 94304
USA
{preeti, sharad}@hpl.hp.com
S. Chutani *
Oracle Corporation
600 Oracle Parkway
Redwood Shores, CA 94065
USA
schutani@us.oracle.com
Abstract
Increasingly, services such as E-commerce, web hosting, application hosting, etc., are being deployed over an infrastructure that spans multiple control domains. These end-to-end services require cooperation and internetworking between multiple organizations, systems and entities. Currently, there are no standard mechanisms to share selective management information between the various service providers or between service providers and their customers. Such mechanisms are necessary for end-to-end service management and diagnosis as well as for ensuring the service level obligations between a service provider and its customers or partners.
In this paper we describe an architecture that uses contracts based on service level agreements (SLAs) to share selective management information across administrative boundaries. We also describe the design of a prototype implementation of this architecture that has been used by us for automatically measuring, monitoring, and verifying service level agreements for Internet services.
Keywords
Federated management, service level agreements, system monitoring
1 Introduction
Increasingly, services such as E-commerce, web hosting, application hosting, etc., are being deployed over an infrastructure that spans multiple control domains. These end-to-end services require cooperation and internetworking between multiple organizations, systems and entities. As shown in Figure 1, even within a single enterprise, multiple organizations (or geographically distributed sites) can maintain independent management systems that need to share management information. Cross-domain management is especially critical when outsourcing Information Technology (IT) services or when extending business applications to systems in other enterprises in order to form extended enterprises.
Managing large-scale applications that cross such administrative boundaries is a problem because current management solutions either allow partners access to all management information (e.g., by providing remote consoles) or deny access to this information. Inter-domain management in competitive environments places two fundamental requirements on the management system:

* Work performed while author was at Hewlett Packard Laboratories.
1. Service management and diagnosis require the knowledge and view of the end-to-end service. This means that management information has to flow across administrative domain boundaries to provide an end-to-end view.
2. Business requirements restrict information sharing across domains because the details of the service implementation and much of the customer information is considered proprietary by each business. Thus business policies restrict the sharing of details about components and infrastructure used in delivering the service. These policies are particularly stringent if customer data needs to be shared across domains.
Meeting these objectives requires service management systems to
- Selectively share information about the components of the overall service only to the extent necessary to ensure overall operation of the service.
- Hide the details of the system components by abstracting information.
- Provide mechanisms to ensure that service level obligations provided by the administrative domain to its customers and partners are being met.
Over the years, a number of solutions have been proposed for inter-domain management. The telecom industry and the ATM/POS network providers have coordinated and shared information between multiple entities. Unlike the Internet, their environments are regulated and are typically designed to offer a single service. Differences between these services and Internet-based services are discussed in [1].
Recently, work has also been done on inter-domain information sharing on the Internet. In [2], a trouble-shooting methodology for coordinating network problem diagnosis among peer administrative domains and untrusted observers is presented. Mechanisms to manage security in a heterogeneous multi-provider environment are discussed in [3].
Figure 1. Internet Business Environment
(c) 1999 IFIP
Inter-domain communication is discussed in TMN. There are a number of publications in the area of using the TMN model to manage Virtual Private Networks (VPNs). The design of a management service for a VPN that addresses multiple domains and heterogeneous systems is discussed in [4]. In [5], approaches in Internet services management are compared to those taken by the telecommunication industry (TMN). It is pointed out that more effort is required to achieve a standard Internet service management strategy to manage all types of Internet services as well as non-Internet-based services such as voice services.
When describing service level management, one of the most commonly used service metrics is availability. [6] describes methods for testing the availability of distributed applications by constructing a service graph for the description of functional dependencies and applying calculation rules on an instantiated graph to determine the availability of applications.
Most published work in this area describes approaches to managing and trouble-shooting network services, and managing security, in an inter-domain environment. A few products (e.g., InfoVista systems [7], Netcool [8], Vital Analysis [9], Network Health Reporter [10], etc.) allow customers to monitor the quality of service offered by providers.
While the research and products mentioned above offer a good start towards Internet service management, there still remain unresolved problems:
- None of the research addresses how to selectively share management information across administrative domain boundaries in a secure way. This capability is particularly important with the introduction of extended enterprises where a service is composed of components from several service providers.
- There are no tools available to derive measurable aspects from Service Level Agreements (SLAs). It is unclear how a legal service level agreement document is translated into a measurable specification that can be automatically monitored for compliance.
- There are no recommendations and policies to define metrics (what they are and how their values are computed) and their bounds (thresholds, baselines, etc.) for service compliance.
Work is underway in the IPPM (Internet Protocol Performance Metrics) working group of the IETF, the XIWT (Cross Industry Working Team) and the ANSI T1A1 committee to identify Internet service related metrics and measurement methodologies. We have focussed our work on developing mechanisms to share selective management information across federated domain boundaries, and on measuring, monitoring, verifying, and managing service level agreements for Internet services. We expect our solution to complement and work side by side with traditional management solutions, such as HP OpenView, CA Unicenter, and Tivoli TME. We assume that the measurements collected by the management and measurement systems can be combined into the service level metrics used in the service level agreements.
In section 2, we discuss our overall architecture for sharing information in federated systems. Section 3 describes contracts, which are used in our architecture for encapsulating measurable aspects of service level agreements for the purposes of management. The design of Conformance, a prototype system to monitor SLAs for compliance and provide selective sharing of management information, is discussed in section 4. Finally, we describe an example where we have used Conformance to monitor Internet services in section 5 and finish with conclusions in section 6.
2 Federation Architecture
A federated system is defined to be a system composed of components within different administrative entities cooperating to provide a service. A service is an application with a well-defined interface and functionality. Federated service management is the management of services that span multiple heterogeneous control domains, and which rely on correct functioning of components across those domains. A control domain is defined to be an administrative domain that is managed by a single administrative entity, typically a business. [1] elaborates on these concepts in greater detail.
Service providers are increasingly using SLAs to define agreements for sharing resources with partners, as well as for offering service quality guarantees to customers. These SLAs contain (along with other legal obligations) details of information that is shared and service level guarantees that are offered by the provider. Management systems should contain information about these SLAs, and should use this information both to control access to system resources as well as to monitor the system for compliance with the SLA. Ultimately, the management system should control resources to actively manage the services with the objective of meeting these agreements.
We have developed an architecture to allow SLA monitoring and sharing of selective management information across administrative domain boundaries in a secure way. In our architecture we assume that all interactions between federated domains are based on bilateral agreements that can be implemented using verifiable and consistent contracts. A contract is a specification (derived from the SLA) of the service attributes that are meaningful and automatically measurable for correct service behavior. The contract specification contains both the attributes and the bounds within which the attributes must stay in order for the service to behave in a desired manner. Attributes have to be both quantifiable and measurable to be included in a contract. Since different domains are likely to contain heterogeneous systems, it is important that there be agreement on how contracts will be invoked and how the systems will communicate. We describe contracts in further detail in section 3.
Figure 2 shows a high level overview of our architecture. In this section we explain the diagram with general descriptions of each of the components.
The ovals represent administrative domains. All external interactions to a given domain happen through one or more Contract Verification Interfaces provided by the domain. Service level agreements offered by the domain define the nature of information provided at the interface and the security parameters needed for the interaction. The smaller oval at the bottom represents another service domain, which might either be self-contained within a single business entity, or might be federated. The architecture provides a recursive and hierarchical model for communication in a federation, thus providing a scalable solution. It is assumed in the architecture that each domain controls only the service aspects that are provided by it.
Inside the domain, the service manager directs the collection of management information using the service-specific data contained in the service model. The data collection is guided by the contracts in the contract repository.
![Diagram of Federation Architecture]
**Figure 2. Federation Architecture**
The *service model* includes a description of the service that this administration is managing. It
- Identifies the various components that enable a service. For example, if the top level service being managed is electronic mail, the service model would list the components as the email server host, the networks connecting the email host to the internet, the email application itself, the name server used to resolve hostnames to internet addresses, etc.
- Expresses interdependencies that exist among the different elements of the service. From the above email example, all components identified in the service model should function properly for the email service to work. The interdependencies capture the cause and effect between the components of a service.
- Identifies the measurements that are available from each component. Thus, the email server could identify the number of email transactions, and active measurements could be used to get an estimate of the response time seen by email clients.
The *contract repository* contains a set of contracts, which this domain has with its providers and customers. It contains information on how to validate an incoming contract verification request and places constraints on what data may be accessed from outside the domain as well as how that data is computed. As mentioned earlier, contracts consist of a specification of attributes and bounds within which the attributes must stay in order for the service to behave correctly.
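To make this concrete, a contract's measurable side can be modeled as a set of attributes with bounds, and compliance checking reduces to verifying that every measured value stays inside its bound. The following is an illustrative sketch only, not the paper's Conformance implementation; the attribute names and the `checkCompliance` helper are invented here:

```javascript
// A contract as a set of measurable attributes with bounds: the service is
// compliant when every measurement satisfies its declared bound.
var contract = {
  attributes: {
    availabilityPercent: { min: 99.5 }, // must stay at or above
    responseTimeMs:      { max: 800 }   // must stay at or below
  }
};

function checkCompliance(contract, measurements) {
  var violations = [];
  Object.keys(contract.attributes).forEach(function (name) {
    var bound = contract.attributes[name];
    var value = measurements[name];
    if (bound.min !== undefined && value < bound.min) {
      violations.push(name + '=' + value + ' below min ' + bound.min);
    }
    if (bound.max !== undefined && value > bound.max) {
      violations.push(name + '=' + value + ' above max ' + bound.max);
    }
  });
  return { compliant: violations.length === 0, violations: violations };
}

console.log(checkCompliance(contract, { availabilityPercent: 99.9, responseTimeMs: 420 }));
console.log(checkCompliance(contract, { availabilityPercent: 98.7, responseTimeMs: 950 }));
```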
The *service manager* is the engine responsible for directing the verification task. It has the knowledge of how to evaluate an incoming contract verification request. This would involve interfacing with the contract repository to get the details of the contract, and interfacing with the local resources and local system and service management modules to collect information needed to verify the contract. If evaluation of a contract has dependencies on other external contracts, then it uses the contract verification interfaces provided by those external domains to collect the data.
Figure 3 shows how the local resources and management systems are coupled to the contract verification interface to expose selected information. The local measurement and management systems collect data from the local infrastructure and applications. Customizable plug-ins allow the service manager to communicate with a variety of systems to extract information about the domain. The plug-ins provide an abstract view of the system to the service manager that is independent of the underlying implementation. The contract evaluation and notification handlers use this abstract view to compare the system behavior to pre-set thresholds and conditions specified in the contracts to monitor the contracts for compliance.
Figure 3. Contract Verification Framework
3 Contracts
Contracts govern the details of which data can be exposed and to whom. In this section, we describe the functionality offered by contracts in more detail.
3.1 Contract Definition
Formally, a contract $C$ is defined by the triple $(P, M, A)$, where $P$ is the set of properties associated with the contract, $A$ is the set of assertions agreed upon by the parties, and $M$ is the set of methods (or operations) available on the contract. Properties define information (needed for contract verification) that does not necessarily relate directly to the specific service that is the subject of the contract. Examples of items in $P$ are
$P = \{ \text{authentication mechanism, access control, invocation methodology, ...} \}$
Assertions contain service-related agreements or guarantees. The assertion set $A$ consists of
$A = \{ \mathcal{B}(\mathbf{v}) \}$
where $\mathbf{v}$ is a vector of variables that reflect some aspect of the service and $\mathcal{B}(\mathbf{v})$ is a relationship that constrains those variables according to contract agreements. At any given time, $\mathcal{B}(\mathbf{v})$ takes on the values TRUE or FALSE depending on whether the constraint described in the relationship is being met or not. Examples of assertions in $A$ may be
\[ A = \{ availability > 99.9\%, \text{ packet loss} < 2\% \land \text{ round trip delay} < 150 \text{ ms}, \ldots \} \]
Methods describe the operations available on the contract at the contract verification interface. They permit the invoker to query the truthfulness of the assertions in the contract, identify assertions that are FALSE, and retrieve the values of associated variables in the assertions. For example, \( M \) may consist of
\[ M = \{ \text{verify contract, query variable values, register constraint, notify event, \ldots} \} \]
Contracts are described by a Contract Definition Language (CDL) which gives a formal declaration of the assertions in the contract, and allows the service manager to associate the contract with a number of handlers, which hide the details of the service and the means of verification. Contract structure and verification is discussed further in sections 4 and 5 using examples.
### 3.2 Contract Content
From an operational point of view the set of assertions in the contract defines its content. Each assertion is an atomic group of statements that is agreed upon by the parties defining the contract. At any given time, an assertion may be TRUE or FALSE depending on whether the party is meeting the obligations stated in the assertion or not. Statements in an assertion are made up of logical predicates whose value can be uniquely determined. The logical predicates are composed using variables as well as logical operators such as \( \land, \lor, \neg \), quantifiers \( \forall, \exists \), set operations \( \in, \cup, \cap \) and constraints such as \( \leq, \geq, \neq, = \) on those variables. Variables may be simple variables (e.g., current network load), statistical variables (e.g., averages or variances), or trends (time dependent variables such as growth rates). They reflect measures that are meaningful for the operation of the service.
A contract is said to be in compliance if all assertions within it are TRUE. This requires that all assertions in the contract should be verifiable and consistent. We call an assertion verifiable if means exist to programmatically compute whether it is true or not at any given time. For a set of assertions to be consistent, we require that no dependencies exist within the set such that compliance of one assertion forces non-compliance in another.
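The compliance rule above (a contract is compliant iff every assertion is TRUE) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class, method, and assertion names are assumptions, and each assertion is modeled as a predicate over a dictionary of measured variables.

```python
# Sketch of a contract C = (P, M, A): P holds contract-level properties,
# A maps assertion names to predicates, and the methods below play the
# role of M (verify the contract, identify FALSE assertions).
class Contract:
    def __init__(self, properties, assertions):
        self.properties = properties          # P: contract-level metadata
        self.assertions = assertions          # A: name -> predicate(measurements)

    def verify(self, measurements):
        """Compliance: True iff all assertions hold for the measurements."""
        return all(pred(measurements) for pred in self.assertions.values())

    def failing_assertions(self, measurements):
        """Identify assertions that are currently FALSE."""
        return [name for name, pred in self.assertions.items()
                if not pred(measurements)]

contract = Contract(
    properties={"authentication": "public-key", "invocation": "HTTP"},
    assertions={
        "availability": lambda m: m["availability"] > 99.9,
        "network":      lambda m: m["packet_loss"] < 2.0 and m["rtt_ms"] < 150,
    },
)

ok  = contract.verify({"availability": 99.95, "packet_loss": 0.5, "rtt_ms": 40})
bad = contract.failing_assertions({"availability": 99.5, "packet_loss": 3.0,
                                   "rtt_ms": 40})
```

Each assertion is atomic and independently computable, which matches the verifiability requirement: compliance is simply the logical AND of the assertion values.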
### 3.3 Contract Verification Interface
Contracts are verified using a Contract Verification Interface that describes the set of operations that may be invoked across the domain boundary during system operation. Since contracts govern both the behavior of the interaction between domains as well as the nature of information that is exchanged between domains, the verification interface can potentially be very complex. However, the problem of specifying the interface becomes simpler if we note that not all information in the contract is dynamic. Most of the contract content mentioned in the earlier section is agreed upon ahead of time, and remains static (and known to both domains) during operation. Typically this information includes:
* What are the service quality metrics and service guarantees and what thresholds will be met?
* What specific information will be shared across domains, i.e., what parameters will be passed and what results will be returned?
* What are the methods for communications between systems and what authentication and access control methods are acceptable?
* What are the arbitration policies and what information will be checked for audits?
Thus, this information does not need to be communicated across the domain boundary at run-time but can be defined using an administration interface. A description of the requirements and desired features for such administrative tools is beyond the scope of this paper.
At a high level, we believe that only two capabilities need to be supported at the contract verification interface: contract verification and event notification. The interface is described in terms of an API consisting of parameters and operations that are invoked across the domain boundary during the operation of the system.
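The two capabilities can be sketched as a small interface; the names (`verify`, `subscribe`, `publish`) are illustrative assumptions, not the paper's actual API.

```python
# Sketch of the contract verification interface: synchronous contract
# verification plus asynchronous event notification to subscribers.
class VerificationInterface:
    def __init__(self, evaluate):
        self.evaluate = evaluate              # callback: contract_id -> bool
        self.subscribers = {}                 # event name -> list of callbacks

    def verify(self, contract_id):
        """Synchronous verification invoked across the domain boundary."""
        return self.evaluate(contract_id)

    def subscribe(self, event, callback):
        """Register interest in an event (e.g. a contract violation)."""
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        """Asynchronous notification to all registered subscribers."""
        for cb in self.subscribers.get(event, []):
            cb(payload)

iface = VerificationInterface(evaluate=lambda cid: cid == "email-sla")
received = []
iface.subscribe("contract_violated", received.append)
iface.publish("contract_violated",
              {"contract": "email-sla", "assertion": "availability"})
compliant = iface.verify("email-sla")
```

Everything else (thresholds, metrics, access control) is agreed ahead of time and configured through an administration interface, so the run-time surface stays this small.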
4 Conformance Prototype
In the previous sections we described an architecture for sharing information in federated environments based on service level agreements. We also defined a contract-based mechanism to specify the information needed to monitor the SLAs for compliance. Contract interfaces can also be used for sharing other information between administrative domains. The architecture leaves open the exact communication and security mechanisms to be used, as well as methods for specifying contracts and coupling them to local management systems.
In this section, we describe Conformance, a prototype implementation of this architecture. Conformance is specifically targeted to allow automatic verification of quality of service guarantees as described in a contract. The implementation of Conformance is web-based, i.e., inter-domain communication uses the HTTP protocol. This allows the use of existing facilities provided by web servers and clients for data encryption using secure socket layers (SSL) and authentication using public key certificates. Conformance is written in Java, and uses a Java-enabled web server as its front-end. The verification requests are authenticated at the web server, and the request parameters are passed back to Conformance through a Java API. Results from Conformance can be passed directly back to the client, or in the form of HTML reports. We have tested our implementation on both NT and Unix platforms.
4.1 Conformance Process Flow
Figure 4 shows the process used to create the data necessary for contract verification using Conformance.
The service model describes the service implemented by the domain as well as the dependencies between service components. The System Dictionary, which is a part of the service manager, contains the abstract view of the service model in terms of high-level service attributes that are offered to customers. The System Dictionary is used to identify the attributes as well as meta-information about the attributes such as which measurement plug-in is used to obtain the attribute value, how often the measurement is made, what parameters are necessary to measure it and so on. Thus, for example, for a packet loss measurement, the attributes could specify that it has a TTL (Time-To-Live) of 15 minutes, that it requires a network segment identification as a parameter, and that it should be obtained using the Network_Measures plug-in.
This way the System Dictionary isolates the Conformance Engine from the details of the underlying system implementation.
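A System Dictionary entry of the kind described above might look like the following sketch; the attribute and plug-in names beyond the paper's packet-loss example are hypothetical.

```python
# Sketch of System Dictionary entries: each attribute records which
# measurement plug-in obtains it, how long its value stays fresh, and
# which parameters the measurement needs.
SYSTEM_DICTIONARY = {
    "packet_loss": {                       # the paper's example attribute
        "plugin": "Network_Measures",      # plug-in responsible for the value
        "ttl_minutes": 15,                 # Time-To-Live of a cached value
        "parameters": ["network_segment_id"],
    },
    "mail_availability": {                 # hypothetical additional entry
        "plugin": "Mail_Measures",
        "ttl_minutes": 5,
        "parameters": ["mail_server"],
    },
}

def plugin_for(attribute):
    """Resolve which plug-in is responsible for an attribute."""
    return SYSTEM_DICTIONARY[attribute]["plugin"]

p = plugin_for("packet_loss")
```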
For each customer SLA, information about the guarantees (thresholds and bounds on the attributes) as well as information about the service components which impact the service offered to a given customer are placed in the Customer SLA Database. An example of the former could be \textit{guaranteed availability} = 99.9\% while an example of the latter would be \textit{premises router address} = 15.25.0.0.
A Contract Definition Language (see section 4.3) is used to specify the assertions in the form of a template that uses the attributes and thresholds as parameters. The contract is compiled into a graph structure that is then added to the Contract Repository.
When a contract is to be verified, the contract template is retrieved from the contract repository, and the customer specific information is retrieved from the customer SLA database. This information is then passed to Conformance. Conformance uses the information in the System Dictionary to make (or retrieve) the appropriate system measures and computes the compliance with the contract.
4.2 Design of Conformance
The overall structure of the Conformance Engine is shown in Figure 5. Objects that contain contract templates form the core of the engine. Because the same contract could be offered to multiple customers, customer-specific thresholds, bounds, and system parameters are filled in the template at contract evaluation time. This allows the templates to be shared across multiple customers.
System measurements are accessed through system specific measurement plug-ins. Because the plug-ins hide the details of the underlying system and the measurement protocols from Conformance, new measurements are easy to add to the system. Conformance dynamically loads measurement plug-ins as needed.
To minimize measurement traffic, the state of the system is cached in the system variable cache. The cache uses a logical database view of the system, i.e., every system attribute is treated as an entry in a logical database, with the attribute name being used as the key to retrieve its current value. Properties associated with the attribute define how frequently the value is updated in the cache, which parameters are needed to obtain the measurement value, and which measurement plug-in is responsible for obtaining the attribute value. The cache can also store measurement history if needed to compute aggregate values (e.g., time averages) if they are not directly available from the underlying measurement system.
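The caching behavior can be sketched as follows; this is an illustration under stated assumptions (an injected clock for determinism, a single plug-in callback), not Conformance's code.

```python
# Minimal TTL cache sketch: an attribute value is fetched through a
# plug-in callback and reused until its time-to-live expires.
class VariableCache:
    def __init__(self, fetch, ttl_seconds, clock):
        self.fetch = fetch                # plug-in call: name -> value
        self.ttl = ttl_seconds
        self.clock = clock                # returns current time in seconds
        self.entries = {}                 # name -> (value, timestamp)

    def get(self, name):
        now = self.clock()
        if name in self.entries:
            value, stamp = self.entries[name]
            if now - stamp < self.ttl:    # cached value still fresh
                return value
        value = self.fetch(name)          # stale or missing: re-measure
        self.entries[name] = (value, now)
        return value

calls = []
now = [0]
cache = VariableCache(fetch=lambda n: calls.append(n) or 42,
                      ttl_seconds=900, clock=lambda: now[0])
first = cache.get("packet_loss")     # miss: plug-in invoked
second = cache.get("packet_loss")    # hit: served from cache
now[0] = 1000                        # advance past the 15-minute TTL
third = cache.get("packet_loss")     # stale: plug-in invoked again
```

Real code would use a monotonic clock; the injected clock here just makes the staleness transition easy to see.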

When a customer request is received at the verification interface, the following steps occur:

1. The customer specific data is retrieved from the customer SLA database and inserted in the contract templates.
2. The actual system attribute values are obtained from the system variable cache. If the cached values are stale, measurement plug-ins are used to update the cache before the values are returned.
3. The contract is evaluated by computing the values of the assertions defined in the contract using the attribute values and customer-specific parameters, thresholds and bounds.
4. The compliance results are logged and communicated either programmatically by the verification interface, or through reports generated by the report generator. The reports are typically customized for each customer. The report generator uses the Visualization Templates to create reports.
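The four steps above can be sketched as a single function. The SLA database, cache, and log are reduced to plain Python objects here, and all names are illustrative assumptions rather than Conformance's actual API.

```python
# Sketch of the verification flow: instantiate the template with customer
# data (1), read attribute values through the cache (2), evaluate the
# assertions (3), and log the compliance result (4).
def verify_request(customer, sla_db, template, cache, log):
    params = sla_db[customer]                       # step 1
    assertions = template(params)
    measurements = {name: cache(name)               # step 2
                    for name in ("availability", "rtt_ms")}
    compliant = all(pred(measurements)              # step 3: AND of assertions
                    for pred in assertions)
    log.append((customer, compliant))               # step 4
    return compliant

sla_db = {"acme": {"min_availability": 99.0, "max_rtt": 150}}
template = lambda p: [
    lambda m: m["availability"] > p["min_availability"],
    lambda m: m["rtt_ms"] < p["max_rtt"],
]
cache = {"availability": 99.5, "rtt_ms": 120}.__getitem__
log = []
result = verify_request("acme", sla_db, template, cache, log)
```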
4.3 Contract Definition Language
Systems can be defined using a formal language such as UML [11]. Other system modeling languages used in CIM [12] and WBEM [13] can also be used to describe and collect management data using multiple heterogeneous sources of data such as SNMP, CMIP, DMI, etc. In our implementation, we assume that these data collection mechanisms can be captured in one or more measurement plug-ins and the system attributes can be derived from those measurements.\textsuperscript{1}

---

\textsuperscript{1} We use system attributes and system measurements interchangeably. System attributes are abstract or derived measurements computed from element level measurements. Examples are service availability, response time, thruput, utilization, etc.
Since we did not need the complexity enabled by UML, we used a declarative language with syntax similar to the C language. A partial description of the grammar used is shown below:
<table>
<thead>
<tr>
<th>Nonterminal</th>
<th>Production</th>
</tr>
</thead>
<tbody>
<tr>
<td>Contract</td>
<td>DeclarationList AssertionList [ FilterList ]</td>
</tr>
<tr>
<td>Declaration</td>
<td>ContractName</td>
</tr>
<tr>
<td>Assertion</td>
<td>PredicateList</td>
</tr>
<tr>
<td>Predicate</td>
<td>Assertion</td>
</tr>
<tr>
<td>expression</td>
<td>unary and binary C expressions; constants; and identifiers</td>
</tr>
<tr>
<td>identifier</td>
<td>system attribute name, user variable, constant, AssertionLabel</td>
</tr>
<tr>
<td>Filter</td>
<td>Event EventName FilterDescription EventValue</td>
</tr>
<tr>
<td>Status</td>
<td>StatusVariable FilterDescription</td>
</tr>
<tr>
<td>FilterDescription</td>
<td>expression</td>
</tr>
<tr>
<td>EventValue</td>
<td>expression</td>
</tr>
</tbody>
</table>
Declarations contain meta-information about the contract as well as type specifications for variables used in the contract. The value of the contract is the logical AND of all assertion values. Assertions may optionally be given a label. If an assertion is labeled, it may be verified (computed) independently of the contract within which it resides. In addition, other assertions may refer to its value using its label. Assertions consist of predicates, which are logical expressions formed using system attributes, customer dependent variables, constants, and arithmetic and logical operators. System attributes and customer dependent variables are set by the measurement plug-ins and the verification interface respectively, and are thus treated as read-only within the language. Filters can be associated with a contract. Filters may be used to compute various kinds of status information (e.g., expected time to repair, trouble ticket information, etc.) about the contract and/or generate events (e.g., notify operators when a contract is not met) when certain conditions are met. Filters and events are defined using a syntax similar to assertions, and can use any of the variables defined as part of the contract.
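The evaluation semantics described above (contract value = logical AND of all assertions; labeled assertions referenceable from other assertions) can be sketched as a small evaluator. This is an illustrative model, not the CDL implementation; labels and predicates here are hypothetical.

```python
# Sketch of CDL evaluation: assertions are computed in order, each result
# is stored under its label, and later assertions may refer to earlier
# results by label. The contract value is the AND of all assertion values.
def evaluate(assertions, env):
    """assertions: ordered mapping label -> predicate(env, results)."""
    results = {}
    for label, pred in assertions.items():
        results[label] = pred(env, results)
    return all(results.values()), results

assertions = {
    "pop_ok":  lambda env, r: env["pop_availability"] > 99.0,
    "mail_ok": lambda env, r: env["mail_rtt_ms"] < 2500,
    # This assertion refers to the labeled assertions above by label.
    "service": lambda env, r: r["pop_ok"] and r["mail_ok"],
}
compliant, results = evaluate(assertions, {"pop_availability": 99.9,
                                           "mail_rtt_ms": 1800})
```

Because each labeled result is stored independently, a labeled assertion can also be verified on its own, as the text describes.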
5 Experience using Conformance
We have tested the Conformance prototype using live measurement data we are collecting from the various XIWT member sites and a large ISP.
Figure 6 shows the experimental setup. The ISP (a large national ISP) monitors its network, the various servers that compose its Email service, and the POP (Point of Presence) sites using active and passive measurements. The measurement data is pushed by the ISP to a measurement station outside the HP firewall. Available measurements include availability and response time measurements from DNS, POP sites, mail servers, and NFS. In addition, measurement stations at HP and several partner sites make periodic network delay and packet-loss measurements by pinging one another. Measurement data is pulled through the HP firewall as shown by measurement-specific plug-ins. In our example scenario, we assumed that the following metrics are specified in the service level agreement of interest:
- **Availability** – Email service is expected to be available 99% of the time as measured over any day. The network is available 99.9% of the time between 8:00 AM and 5:00 PM.
- **Performance** – Email performance is characterized in terms of response time < 2.5 seconds when an employee retrieves mail. Network performance is measured by a) round-trip delay < 150 ms and b) packet loss rate < 5% when averaged over daily intervals.
- **Utilization** – The ISP is expected to reserve sufficient capacity at its POP (Point of Presence) so that employees are not denied access to the email service. The ISP is expected to create daily, weekly, and monthly reports on the overall service compliance and the individual service level metric values.

We now discuss the details of how the email service is modeled and how the contract for email service is monitored. We have constructed similar contracts for the network access services.
Figure 7 shows a part of the service model for the email service provided by the ISP. The model is represented as a dependency graph with measurements (indicated by arrows) associated with each node in the graph. The measurements may be made directly (using active tests or passive monitoring) or may be derived from measurements made at other nodes. For example, mail service availability is derived using availability measurements from the POP\textsubscript{m} Dialup Service and the MailFEP\textsubscript{i} Service. These in turn depend on the availability of the Authentication Service, the Terminal Server Service, the MailFEP, and so on\textsuperscript{2}.
The email service contract comprises two service components: the access component (describing the POP and its associated components) and the mail subsystem (describing the mail server and its associated components). A partial description of the contract written in CDL is shown below:
```cdl
/* Email Hosting contract template */
Contract Email_System;
Service Email_Service;
%
/* POP metrics in SLA- availability, delay, thruput, utilization */
ISP_Access: {
%popAvailability($popName, ...) > $minPopAvailability;
%popAvgDelay($popName, ...) < $maxPopDelay;
%popThruput($popName, ...) > $minPopThruput;
%popUtilization($dialinServer, ...) < $maxPopUtilization;
}
/* Test Mail subsystem for availability and response times*/
Mail_System: {
%mailAvailability($mailServer, ...) > $minMailAvailability;
%mailResponseTime($mailServer, ...) < $maxMailResponseTime;
}
```
In the contract, \texttt{\%name} is used as a notation to identify system attributes and \texttt{\$name} is used to identify customer-dependent parameters to be filled in from the SLA database at the time of the evaluation. The customer parameters define both thresholds (e.g., \texttt{\$minPopThruput}) and parameters necessary for the measurement system (e.g., \texttt{\$mailServer}). Thus, the ISP can check if the email service is meeting SLAs for different customers by filling in the customer specific thresholds (e.g., \texttt{\$minMailAvailability} = 99\%) and system parameters (e.g., \texttt{\$popName} = “Atlanta”).

\textsuperscript{2} Note that only a small part of the service model is shown in the figure. The leaf nodes on the graph are other services, which have their own service models and associated measurements.
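The template instantiation just described can be sketched with ordinary string substitution; the clause text and parameter values below are illustrative, not taken from an actual SLA.

```python
# Sketch: $-prefixed names in a contract clause are customer parameters
# filled in from the SLA database at evaluation time; the resulting
# threshold is then compared against the measured system attribute.
from string import Template

clause = Template("popAvailability($popName) > $minPopAvailability")
customer_params = {"popName": "Atlanta", "minPopAvailability": "99"}
instantiated = clause.substitute(customer_params)

# Evaluating the instantiated clause against a (hypothetical) measurement:
measured = {"Atlanta": 99.5}
compliant = measured["Atlanta"] > float(customer_params["minPopAvailability"])
```

This is how one template can serve many customers: only the parameter dictionary changes between evaluations.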
Figure 8: Example of a contract compliance report
Figure 8 shows a sample report generated by Conformance for the email service contract on a day when problems occurred in the service. This gives the status for the last 24 hours. The aggregate view pie chart shows the percentage of time the contract was compliant vs. non-compliant over the last day. An hourly behavior of the contract is shown in the bar chart, and the compliance percentage for each service component is shown in the ISP Access, and Mail System pie charts. Following other history and detail links gives a historical view of the service, and how the service components are behaving over time.
The report contents and frequencies can be customized depending on what has been agreed upon in the SLA between the provider and the customer. We have generated similar reports for network access using measurements of both inbound and outbound traffic.
In our experience, Conformance scales well. The hierarchical architecture allows domains to be loosely coupled. In addition, because inter-domain interaction takes place using the web, it is easy to replicate Conformance over multiple web servers and share measurement and compliance data using existing servers in the different domains. Finally, because the details of the measurements are hidden from Conformance by the measurement plug-ins, it is easy to add new measurements and couple to different management and measurement systems.
We have found our current implementation of the CDL to be sufficient for most SLAs we have encountered. Although the CDL only provides Boolean values for contract compliance, filters defined in contracts can provide multi-valued status information (e.g., warning, critical, etc.) prior to the contract being violated. Our current CDL implementation does not support arrays. This causes the system specification in the System Dictionary to become complex for large environments where multiple instances of the same component exist. We are extending our CDL to support arrays to simplify system specification. For large scale systems, we believe that storage of service models will require database support.
6 Summary
Internet services such as e-commerce, web hosting, application hosting, etc., require cooperation and internetworking between multiple organizations, systems and entities while maintaining the confidentiality and privacy of management data considered proprietary by each organization. Currently there are no standard mechanisms to share selective management information between the various service providers. These mechanisms are needed to aid in management and diagnosis of end-to-end services. In particular, service providers are using service level agreements as a means of specifying service level attributes that are offered to their partners and customers. This implies that it is necessary to develop tools and techniques to monitor whether providers are meeting their service level obligations, and to enable providers to manage their infrastructure to those agreements.
In this paper we describe an architecture to share selective management information across multiple business entities. The architecture can be used for automatically measuring, monitoring, verifying, and managing service level agreements for Internet services. The architecture allows specification of attributes that are quantifiable and measurable in a service contract. This allows a service provider to offer verifiable and meaningful service behavior to their customers. Providers can offer customers the capability to automatically verify the current service behavior against the guarantees, by exposing the values of service parameters as agreed upon in the contract.
We also described the design and implementation of Conformance, a prototype implementation of this architecture. Conformance is web-based, i.e., uses the standard HTTP protocol, to allow easy inter-domain communication. It isolates the abstractions used in the service level agreements from the details of the service implementation, thus allowing management information to be shared across domain boundaries while hiding the system implementation details.
We have used our implementation to demonstrate how service providers can offer SLA monitoring capabilities to their customers for a number of services including email and network access services.
7 References
2. Thaler, D., Ravishankar, C. An Architecture for Inter-Domain Troubleshooting. *ICCCN '97*.
Regular Paper
Verification of Concurrent Programs
Using the Coq Proof Assistant: A Case Study (Preprint)
REYNALD AFFELDT,† NAOKI KOBAYASHI †† and AKINORI YONEZAWA†
We show how to model and verify a concurrent program using the Coq proof assistant. The program in question is an existing mail server written in Java. The approach we take is to use an original library that provides a language for modeling, a logic, and lemmas for verification of concurrent programs. First, we report on the modeling of the mail server. Using the language provided by the library, we build a model by (1) translating the original program and (2) building appropriate abstractions to model its environment. Second, we report on the verification of a property of the mail server. We compare this library-based approach with an alternative approach that directly appeals to the Coq language and logic for modeling and specification. We show that the library-based approach has many advantages. In particular, non-functional aspects (communications, non-determinism, multi-threading) are handled directly by the library and therefore do not require complicated modeling. Also, the model can be directly run using existing compilers or virtual machines, thus providing us with a certified implementation of the mail server.
1. Introduction
Mechanical verification of programs is important to guarantee their correctness. Among the tools for such verifications, proof assistants are particularly attractive, because they combine inductive reasoning with automation, which makes them more widely applicable compared to fully automated tools such as model checkers.
Proof assistants cannot in general be used directly for program verification. The main reason is that programs may use programming constructs the proof assistant is unaware of. For instance, verification of concurrent programs in a proof assistant based on the $\lambda$-calculus requires an additional machinery to handle typical concurrency concepts such as non-determinism.
We have been developing a library to enable mechanical verification of concurrent programs in the Coq proof assistant. This library (called appl$\pi$, which stands for “applied $\pi$-calculus”) provides a modeling language, a specification language, and lemmas for verification of realistic concurrent programs.
This paper reports on the modeling and verification of a concurrent program using the appl$\pi$ library. More precisely, we model an existing mail server and we verify that it correctly handles requests from clients. In fact, we have already performed this verification, but using a different approach that directly appeals to the Coq language and logic for modeling and verification. Our main contribution is to show the advantages of using the appl$\pi$ library for the verification of concurrent programs. In particular, we illustrate that modeling is simplified because typical concurrency concepts (communications, non-determinism, multi-threading) are handled by the library and do not require complicated modeling. In addition, it is possible to run the model using existing compilers or virtual machines, thus providing us with a certified implementation of the mail server. We believe that these advantages reinforce confidence in the verification.
This paper is organized as follows. In Section 2, we give an overview of the Coq proof assistant and the appl$\pi$ library. In Section 3, we describe our case study. In Section 4, we explain how we model the original program into a concurrent model written with the appl$\pi$ library. In Section 5, we explain how we model the environment of the program. In Section 6, we report on the mechanical verification in itself. In Section 7, we conclude and list related and future work.
† Department of Computer Science, the University of Tokyo
†† Department of Computer Science, Tokyo Institute of Technology
2. Preliminaries
2.1 The Coq Proof Assistant
The Coq proof assistant is the implementation of a typed λ-calculus (namely the Calculus of Inductive Constructions) that can be used to represent datatypes, functions, and predicates, and therefore encode, among others, computer languages and proof systems. In this paper, we use Coq for both implementation and notation. In this section, we give an overview of the Coq proof assistant. Intuitively, it can be thought of as an ML-like language with a rich type system.
In Coq, datatypes are represented by inductive types. For example, natural numbers are defined as follows:
\[ \text{Inductive nat : Set := O : nat | S : nat -> nat.} \]
This definition introduces the type of natural numbers \( \text{nat} \), which is itself of type \( \text{Set} \), a Coq built-in type for datatypes. \( \text{O} \) and \( \text{S} \) are the constructors for natural numbers; observe that \( \text{S} \) is functional. The intent is that the constructor \( \text{O} \) represents the natural 0, \( (\text{S} \ \text{O}) \) represents the natural 1, etc.
Records are represented by a syntax similar to most programming languages. For example, two-dimensional points can be defined as follows:
\[ \text{Record point : Set := pt{ x: nat; y: nat}.} \]
\( \text{pt} \) is the constructor for two-dimensional points, and \( \text{x} \) and \( \text{y} \) are their projection functions.
Functions are represented by \( \lambda \)-abstractions. For example, the function that computes the predecessor of a natural number can be defined by case analysis as follows:
\[ \text{Definition pred [n:nat] := Cases n of} \]
\[ \text{O => O} \]
\[ \text{| (S m) => m} \]
\[ \text{end.} \]
where \( \text{[n:nat]} \) is the Coq syntax for \( \lambda n: \text{nat} \).
Predicates are represented by inductive types or functions whose resulting type is the Coq built-in type \( \text{Prop} \). For example, the following predicate defines even natural numbers:
\[ \text{Inductive even : nat -> Prop :=} \]
\[ \text{base : (even O)} \]
\[ \text{| step : (n:nat)(even n) -> (even (S (S n))).} \]
The constructor \( \text{base} \) represents the fact that 0 is an even number, and the constructor \( \text{step} \) lets us construct, from any even number \( \text{n} \), the proof that the natural \( (\text{S} \ (\text{S} \ \text{n})) \) is an even number. Intuitively, \( (\text{n:nat}) \) can also be thought of as universal quantification.
Proof goals are stated by the keyword \text{Lemma}. For example, the following proof goal states a property of the \( \text{pred} \) function:
\[ \text{Lemma pred_even : (n:nat)(even n) ->} \]
\[ \text{(even (pred (pred n)))}. \]
Once a proof goal is stated, Coq enters an internal loop where the user is prompted to prove the goal interactively by means of pre-existing lemmas. Upon completion of the proof, the lemma is saved in Coq and is reusable for further proof developments.
2.2 The applπ Library
applπ is a Coq library that enables verification of concurrent programs. It provides (1) a modeling language, (2) a specification language (or logic, for short), and (3) a collection of lemmas. Using this library, it is possible to verify concurrent programs using the following approach:
1. Write a model of the concurrent program using the applπ language.
2. Write the properties of the concurrent program using the applπ logic.
3. Prove that the properties hold of the model using the lemmas of the library.
In the following, we give an overview of the contents of the applπ library.
The applπ modeling language is a Coq encoding of a minimal concurrent language based on the π-calculus, extended with datatypes and functions like the Pict programming language. It defines two syntactic entities: channels and processes. Intuitively, processes interact with each other by exchanging values over channels.
Channels are implemented by the functional type \text{chan}. For some Coq datatype \( \text{T} \), the type \( (\text{chan } \text{T}) \) represents the type of channels that carry data of type \( \text{T} \). For instance, \( (\text{chan nat}) \) represents the type of channels that carry natural numbers.
The syntax of processes is implemented by the inductive type \text{proc}. It defines a set of constructors that each represents a basic process:
- \text{zeroP} represents a terminated process.
- \( \text{(inP c Q)} \), where \( \text{c} \) is a channel and \( \text{Q} \) is a function, represents a process that waits for some value \( v \) along the channel \( \text{c} \) and behaves as the process \( (\text{Q v}) \) after reception.
(outP c v P) | (inP c Q) | (inP c R)
→ P | (Q v) | (inP c R)   or   → P | (inP c Q) | (R v)
**Fig. 1** Example of communications between processes.
- \((\text{rinP } c\ Q)\) intuitively represents infinitely many \((\text{inP } c\ Q)\) processes in parallel. \(\text{rinP}\) stands for replicated input and corresponds to multi-threading.
- \((\text{outP } c\ v\ Q)\) represents the process that emits some value \(v\) along the channel \(c\) and then behaves as the process \(Q\).
- \((\text{parP } P\ Q)\) represents the parallel composition of processes \(P\) and \(Q\).
- \((\text{nuP } Q)\) represents a process that creates some private channel \(c\) and then behaves as \((Q\ c)\).
We define \(\text{InAtom}\) and \(\text{OutAtom}\) as abbreviations for input and output processes whose continuation is \(\text{zeroP}\).
The semantics of the appl\(\pi\) modeling language is a binary relation between processes that formalizes in particular the notion of communication between processes. In this paper, we informally represent communications using a diagram notation. For example, the possible communications inside the process \((\text{parP (outP } c\ v\ P)\ (\text{parP (inP } c\ Q)\ (\text{inP } c\ R)))\) are depicted in **Fig. 1** (the constructor \(\text{parP}\) is written \(|\) to save space).
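To make the reduction relation concrete, here is a minimal Python sketch (illustrative only; the class and function names mirror the applπ constructors but are not the library's API) that represents processes as a small datatype and performs one communication step, as in Fig. 1:

```python
from dataclasses import dataclass
from typing import Any, Callable

class Proc:
    """Base class for process terms (sketch of the proc datatype)."""

@dataclass
class ZeroP(Proc):          # terminated process
    pass

@dataclass
class InP(Proc):            # input: wait for a value, then run cont(value)
    chan: str
    cont: Callable[[Any], Proc]

@dataclass
class OutP(Proc):           # output: emit value on chan, then run cont
    chan: str
    value: Any
    cont: Proc

@dataclass
class ParP(Proc):           # parallel composition
    left: Proc
    right: Proc

def communicate(p: ParP) -> Proc:
    """One case of the semantics: an output on the left synchronizes
    with an input on the right of a parallel composition."""
    out, inp = p.left, p.right
    assert isinstance(out, OutP) and isinstance(inp, InP) and out.chan == inp.chan
    return ParP(out.cont, inp.cont(out.value))

# (outP c 41 zeroP) | (inP c Q) reduces to zeroP | (Q 41),
# where Q forwards v+1 on a (hypothetical) channel d.
Q = lambda v: OutP("d", v + 1, ZeroP())
step = communicate(ParP(OutP("c", 41, ZeroP()), InP("c", Q)))
```

The continuation-passing shape of `InP` is exactly what makes the received value available to the rest of the process.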
The second part of the appl\(\pi\) library is a logic for specification of appl\(\pi\) programs. This logic consists of:
- a set of state formulas including propositional and spatial formulas\(^4\),
- a satisfaction relation between state formulas and processes noted \(\text{sat}\),
- a set of temporal formulas similar to temporal logics\(^6\),
- another satisfaction relation between temporal formulas and processes noted \(\text{tsat}\)\(^5\).
The informal semantics of formulas is given in **Fig. 2**.
The last part of the appl\(\pi\) library is a collection of lemmas for verification. We defer overview of these lemmas to Section 6.2, where our case study will provide us with a concrete illustration.
---
\(^3\) The existence of two satisfaction relations is to prevent temporal formulas from being used inside spatial formulas. This is required to guarantee the soundness of some important lemmas\(^3\).
---
3. The SMTP Model
In this section, we explain the SMTP model on which the mail server is based. This model is defined in the RFC 821\(^1\).
The mail server consists of several parts, as depicted in **Fig. 3**. The SMTP receiver receives mails from other mail servers and mail clients using the SMTP protocol and stores received mails in a mail queue, implemented by a file system. The SMTP sender extracts mails from the mail queue and sends them to other mail servers or mail clients using the SMTP protocol. In this paper, we are interested in the SMTP receiver part.

**Fig. 3** SMTP model.
The SMTP protocol is depicted in **Fig. 4**. An SMTP protocol session consists of commands and some mail contents sent to the mail server, which sends back replies. The client starts a session by sending the HELO command; the server replies with its identity and creates an envelope. The client sends the MAIL command to set the return path of the envelope with the address of the mail sender. The client then sends one or more RCPT commands to add addresses to the list of recipients of the envelope. The client eventually sends the DATA command, followed by the mail contents and terminated by a “dot”. At any moment, it can reset the session using the RSET command, abort using the ABORT command, send a dummy NOOP command, or close the session with the QUIT command. For each command, the server answers with an appropriate message, possibly reporting an error.
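The command sequencing described above can be summarized as a small state machine. The following Python sketch only illustrates the protocol as paraphrased here (the states, reply codes, and handling of mail contents are simplifying assumptions, not the mail server's code):

```python
# Illustrative sketch of the SMTP command sequencing described above.
def smtp_session(commands):
    """Process a list of (command, argument) pairs; return the reply codes."""
    state = "start"                      # start -> helo -> mail -> rcpt
    envelope = {"from": None, "rcpt": []}
    replies = []
    for cmd, arg in commands:
        if cmd == "HELO" and state == "start":
            state = "helo"; replies.append(250)
        elif cmd == "MAIL" and state == "helo":
            envelope["from"] = arg; state = "mail"; replies.append(250)
        elif cmd == "RCPT" and state in ("mail", "rcpt"):
            envelope["rcpt"].append(arg); state = "rcpt"; replies.append(250)
        elif cmd == "DATA" and state == "rcpt":
            state = "helo"; replies.append(354)   # envelope done, mail accepted
        elif cmd == "RSET":
            envelope = {"from": None, "rcpt": []}
            state = "helo" if state != "start" else "start"
            replies.append(250)
        elif cmd == "NOOP":
            replies.append(250)
        elif cmd == "QUIT":
            replies.append(221); break
        else:
            replies.append(503)          # bad sequence of commands
    return replies
```

For example, a well-formed session `HELO, MAIL, RCPT, DATA, QUIT` is accepted, while a session starting with `MAIL` is rejected with a bad-sequence reply.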

**Fig. 4** SMTP commands.
4. Modeling of the Mail Server
In this section, we explain how we model the mail server written in Java using a concurrent program written in the applπ modeling language. Intuitively, modeling consists in translating Java datatypes and control structures into the applπ modeling language. This is facilitated by the fact that the applπ modeling language provides (1) Coq datatypes and control structures which are very close to their Java counterparts, and (2) concurrency primitives to handle directly communications, nondeterminism, and multi-threading (as seen in Section 2.2). In the following, we comment on the main aspects of this translation.
4.1 Datatypes
The mail server defines a number of datatypes that we directly translate into Coq inductive types. For example, the mail server defines constants to implement SMTP commands, like the HELO command:
```java
static final int cmd_helo = 0;
```
Other SMTP commands are similarly implemented by the constants `cmd_mail_from`, `cmd_rcpt_to`, `cmd_data`, `cmd_rset`, `cmd_abort`, `cmd_noop`, `cmd_quit`, and `cmd_unknown` (for unknown commands). We represent these constants in Coq using the following inductive type:
```
Inductive SMTP_cmd : Set :=
  cmd_helo: String -> SMTP_cmd
| cmd_mail_from: String -> SMTP_cmd
| cmd_rcpt_to: String -> SMTP_cmd
| cmd_data: String -> SMTP_cmd
| cmd_noop: SMTP_cmd
| cmd_rset: SMTP_cmd
| cmd_quit: SMTP_cmd
| cmd_abort: SMTP_cmd
| cmd_unknown: SMTP_cmd.
```
Similarly, we represent SMTP replies using an inductive type (we have elided one part of the definition to save space):
```
Inductive ReplyMsg : Set :=
rep_ok_helo: ReplyMsg
| ...
```
4.2 Communications
In the mail server, communications are implemented by means of the java.io package. For example, the input stream of SMTP commands is implemented by an instance of a subclass of the class java.io.InputStream:
```java
PushbackInputStream from_client;
```
Similarly, the output stream of SMTP replies is implemented by an instance of a subclass of the class java.io.OutputStream:
```java
PrintStream to_client;
```
Input and output operations in the mail server are implemented by calls to adequate methods. For example, the method that receives an SMTP command from the input stream is implemented by the `read` method:
```java
int get_cmd() throws IOException {
...
int b = from_client.read();
...
}
```
Similarly, the method that sends the SMTP reply following the HELO command is implemented by the `println` method:
```java
void reply_ok_helo() throws IOException {
    to_client.println("250 " + hostname + " hello");
}
```
Translation to applπ communication primitives is direct. We represent input and output streams by channels of the appropriate type:
```coq
Definition InStream := (chan SMTP_cmd).
Definition OutStream := (chan ReplyMsg).
```
and input and output operations by input and output processes:
```coq
Definition reply [r:ReplyMsg; c:OutStream; cont: proc] := (outP c r cont).
Definition reply_ok_helo := (reply rep_ok_helo).
```
We have seen above how to model communications between the mail server and a client. Similarly, communications between the mail server and the file system are modeled by a channel of type \((chan Mail)\) and corresponding input and output processes.
4.3 Server State
The mail server features a number of variables (fields of Java objects) that capture its state. To represent these variables, we introduce a representation of the state of the mail server using a record that is intended to be passed around following the flow of execution:
```coq
Record STATE : Set := smtp_state{
to_client: OutStream;
in_stream: InStream;
queue_dir: File;
buf: Buffer;
to_fs : (chan Mail);
server_name: String;
from_domain: String;
rev_path: Rfc821_path;
fwd_paths: Rfc821_pathlist;}
```
The fields `in_stream` and `to_client` contain the channels used for communication with the client. The fields `queue_dir` and `buf` represent respectively the directory and the file that implement the mail queue. The field `to_fs` contains the channel for communication with the file system. The field `server_name` contains an identifier that the mail server uses for SMTP replies. The fields `from_domain`, `rev_path`, and `fwd_paths` correspond to the envelope being built.
4.4 Control Structures
The mail server appeals to a wide variety of control structures. Basic control structures such as conditionals are easily translated into case analyses. However, non-terminating loops cannot be represented directly in Coq (they are rejected because they make proof checking undecidable). In the following, we explain how we translate the main loop of the mail server into the applπ modeling language. The idea is to implement it using communications with respect to replicated inputs.
The main loop of the mail server (Fig. 5, on the left) waits for incoming requests and, upon reception of an HELO command, it enters a loop in which subsequent commands are processed. This processing of subsequent SMTP commands is ensured by methods `get_helo`, `get_mail_from`, `get_rcpt_to`, and `get_data`.
We represent the main loop of the mail server by means of replicated inputs that exchange the state of the mail server through a set of private channels: `heloc`, `mailc`, and `rcptc` (Fig. 5, on the right). These replicated inputs are written in such a way that they emulate the flow of control of the original program. It should be observed that, even though we use replicated inputs, the resulting process represents a single thread of computation. (Multiple threads of computation will appear when composing the mail server with its environment, see the next section.)
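The encoding of a loop as replicated inputs can be mimicked with a dispatch loop: "sending the state on a channel" becomes delivering a message that schedules the corresponding handler. This Python sketch is illustrative only; the channel names mirror `heloc`, `mailc`, and `rcptc`, but the handlers are hypothetical:

```python
# Illustrative sketch (not the applπ encoding itself): each replicated
# input becomes a handler registered on a channel; emulated control flow
# is a chain of (channel, state) messages.
def run(handlers, start_chan, state):
    """Dispatch loop: deliver (channel, state) messages until termination."""
    msg = (start_chan, state)
    trace = []
    while msg is not None:
        chan, st = msg
        trace.append(chan)
        msg = handlers[chan](st)     # a handler returns the next message, or None
    return trace

# Hypothetical handlers mirroring the flow heloc -> mailc -> rcptc -> done.
handlers = {
    "heloc": lambda st: ("mailc", st),
    "mailc": lambda st: ("rcptc", st),
    "rcptc": lambda st: None,        # termination, i.e. an output on resultc
}
trace = run(handlers, "heloc", {})
```

Note that, exactly as in the paper's encoding, only one message is in flight at a time, so the handlers emulate a single thread of control.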
Each method called from the main loop of the mail server is modeled in a systematic way as a process. We illustrate this modeling using the example of the `get_helo` Java method.
The Java method `get_helo` (Fig. 6, on the left) tries to fetch incoming HELO commands, replies appropriately, and redirects the flow of control to other methods. It is essentially a switch statement.
We represent the Java method `get_helo` by means of the process `get_helo_def` (Fig. 6, on the right). The switch statement is translated into a case analysis. The control flow is emulated by means of communications: break statements are replaced by communications along the `heloc` channel, return statements are replaced by communications along the `mailc` channel, etc. Each method call in the original program is translated to a call to the corresponding function resulting from the translation. Finally, successful termination is modeled by the presence of a dummy value along a special channel `resultc`:
```coq
Variable resultc : (chan unit).
Definition succ := (OutAtom resultc tt).
```
(The statement Variable declares a global variable in Coq; tt is the only element of type unit.)
5. Modeling of the Environment
In order to verify that the mail server correctly handles client requests, we need to make some hypotheses on its environment. In particular, we assume that we perform the verification in presence of a correct client and a correct file system. The network communications and the host computer are also part of the environment of the mail server. In general, we cannot expect network communications and the host computer to be reliable.
Inductive val [s:InStream] : proc -> Prop :=
  say_helo: (P:proc)(c:SMTP_cmd)
    (valid_cmd_helo c) ->
    (val_after_helo s P) ->
    (val s (outP s c P))
| say_quit: (val s (OutAtom s cmd_quit))
| say_abort: (val s (OutAtom s cmd_abort))
| say_skip: (P:proc)(c:SMTP_cmd)
    ~(valid_cmd_helo c) ->
    ~(c=cmd_quit) -> ~(c=cmd_abort) ->
    (val s P) ->
    (val s (outP s c P))
| say_io_error: (P:proc)
    (val s P) ->
    (val s (outP IOexnc tt P))
with val_after_helo [s:InStream] : proc -> Prop :=
...
Fig. 7 Specification of the input stream.
In this section, we show how to model a correct mail client and a correct file system and the hypotheses of unreliable network communications and of an unreliable host computer.
5.1 Network Errors
We model network errors by the presence of a dummy value along a special channel IOexnc:
Variable IOexnc : (chan unit).
Network errors may occur during communications between the mail server and the mail client, and between the mail server and the file system. We therefore use the special channel IOexnc in the specification of the client and of the file system, as explained below.
5.2 Client Specification
A client is correct if it emits valid streams of SMTP commands, as specified by the RFC 821. This requirement amounts to specifying the set of valid streams of SMTP commands. We render this specification by means of the predicate val in Fig. 7.
Informally, the predicate val can be read as follows. The client emits valid streams of SMTP commands if:
- after emitting a valid HELO command, it still emits valid streams of SMTP commands, as defined by the predicate val_after_helo (constructor say_helo),
- it emits a QUIT or ABORT command (constructors say_quit and say_abort),
- it emits any other command such that the rest of the emission is still valid (constructor say_skip).
The constructor say_io_error does not correspond to the definition of validity of streams of SMTP commands; we add it to take into account the possibility of a network error.
Similarly, we take into account the possibility for a network error prior to emission of SMTP replies. This is rendered by the predicate ack in Fig. 8.
Inductive ack : OutStream -> proc -> Prop :=
  ack_rep : (y:OutStream)(P:ReplyMsg -> proc)
    ((x:ReplyMsg)(ack y (P x))) -> (ack y (inP y P))
| ack_rep_io_error : (y:OutStream)(P:proc)
    (ack y P) -> (ack y (outP IOexnc tt P)).
Fig. 8 Specification of the output stream.
The constructor ack_rep corresponds to the usual situation where the SMTP reply is sent to the client, and the constructor ack_rep_io_error corresponds to the exceptional situation where a network error prevents the emission of the SMTP reply.
We pack the above val and ack predicates into a single valid_client predicate that specifies correct clients. Similarly, we define a predicate valid_fs that specifies correct file systems.
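The shape of the val predicate can be mirrored by a recursive check over a finite list of commands. The Python sketch below is a simplification we introduce for illustration (in particular, val_after_helo is collapsed into "any valid tail", and the validity check on the HELO argument is elided):

```python
def valid_stream(cmds):
    """Recursive check mirroring the constructors of val (simplified)."""
    if not cmds:
        return False                     # a stream must end with QUIT or ABORT
    head, tail = cmds[0], cmds[1:]
    if head in ("QUIT", "ABORT"):        # say_quit / say_abort
        return True
    if head == "IO_ERROR":               # say_io_error: a network error may occur
        return valid_stream(tail)
    if head.startswith("HELO"):          # say_helo (argument validity elided)
        return valid_stream(tail)
    return valid_stream(tail)            # say_skip: other commands are skipped
```

As in the inductive definition, validity is decided constructor by constructor on the head of the stream.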
5.3 System Failures
An unreliable host computer is modeled by the non-deterministic possibility for a failure. A failure is modeled by the presence of a dummy value along a special channel failc:
Variable failc : (chan unit).
A failure may occur at any moment. We model this non-determinism by a process that non-deterministically outputs a dummy value along the above special channel:
Definition may_fail := (nuP [x:?]
 (parP (InAtom x)
 (parP (OutAtom x tt)
  (inP x [_:?](OutAtom failc tt))))).
(The question mark ? asks Coq to infer the corresponding type automatically.) The possible reductions of this non-deterministic failure generator are depicted in Fig. 9.
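The non-determinism of may_fail boils down to one output on the private channel x racing two inputs. A tiny Python sketch (illustrative only, not the applπ semantics) enumerates the possible synchronizations:

```python
def outcomes(output, inputs):
    """All one-step synchronizations: the output (chan, value) may react
    with any input listening on the same channel."""
    chan, _value = output
    return {label for (c, label) in inputs if c == chan}

# may_fail: (OutAtom x tt) in parallel with (InAtom x), which just
# discards the value, and (inP x [_:?](OutAtom failc tt)), which then
# signals a failure on failc.
result = outcomes(("x", "tt"), [("x", "no failure"), ("x", "output on failc")])
```

Both outcomes are reachable, which is precisely why the verification must account for a failure at any moment.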
6. Formal Verification
6.1 Goal Statement
We are interested in verifying that the mail server modeled in Section 4 executed under the environment modeled in Section 5 correctly handles incoming streams of SMTP commands. In other words, we want to verify that, for any possible execution, the process formed by the parallel composition of a correct client, a correct file system, the model of the mail server, and a non-deterministic failure generator results in successful termination, a network error, or a system failure. Observe that the resulting process has multiple threads of computation.
The property informally stated above can be formally written using the applπ logic as follows:
Definition reports_succ_or_error :=
 (MUSTEV (STAT
  (OR (OUTPUTS resultc tt ISANY)
  (OR (OUTPUTS IOexnc tt ISANY)
   (OUTPUTS failc tt ISANY))))).
(Special channels resultc, IOexnc, and failc are used to observe respectively successful termination, network errors, and system failures.)
Let us write P for the process formed by the parallel composition of a correct client, a correct file system, the mail server, and a non-deterministic failure generator. The proof goal can be stated as follows:
Lemma valid_protocol: ...
 (tsat reports_succ_or_error P).
The complete statement is given in Fig. 10.
Lemma valid_protocol:
 (client:InStream->OutStream->proc)
 ((i:?)(o:?)(valid_client (client i o))) ->
 (fs:(chan Mail)->proc)
 ((tofs:?)(valid_fs (fs tofs))) ->
 (is_set (resultc&(IOexnc&(failc&nilC)))) ->
 let P = (nuP [i:InStream]
  (nuP [o:OutStream]
   (nuP [tofs:(chan Mail)]
    (parP (client i o)
     (parP (work i o tofs)
      (parP (fs tofs)
       may_fail))))))
 in
 (tsat reports_succ_or_error P).
Fig. 10 Goal statement.
6.2 Formal Proof
We have formally proved using Coq the goal stated above. The proof is by induction on the predicate val (excerpt in Fig. 7). It requires 3927 commands using the applπ library (for a 400-line model and a 200-line specification).
An important aspect of the formal proof is how we deal with interleaving sequences of communications. Since the mail server runs in parallel with the failure generator, there are several possible execution paths that only differ by the moment when the failure generator is scheduled for execution. The proliferation of such different possible execution paths is harmful because they considerably augment the number of subgoals of the formal proof. This situation is an instance of the state-space explosion problem. The applπ library provides lemmas that enable partial order reduction to deal with the state-space explosion problem. We illustrate the basic idea of partial order reduction with the following example. Let us consider some process P in which two communications along channels c and d are enabled, such that both communications can be executed in either order to reach the same process Q.
To verify a formula of the form (MUSTEV f) in this situation, it is often sufficient to explore only one execution path. More generally, partial order reduction reduces the number of execution paths to be explored for the purpose of verification to a subset representative of all the possible orderings of communications. The applπ library provides lemmas that enable partial order reduction for the applπ language and its logic\(^{2,3}\).
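The commuting-communications argument can be checked on a toy state-transition sketch (Python, illustrative): executing two independent actions in either order reaches the same state, so exploring a single interleaving suffices for a MUSTEV-style eventuality:

```python
from itertools import permutations

def apply_action(state, action):
    """Record that the communication along the given channel happened."""
    new_state = dict(state)
    new_state[action] = True
    return new_state

def final_states(state, actions):
    """Final states reached over all interleavings of independent actions."""
    finals = []
    for order in permutations(actions):
        s = state
        for a in order:
            s = apply_action(s, a)
        finals.append(s)
    return finals

# Communications along c and d are independent: both interleavings
# reach the same process Q, so one path is representative.
finals = final_states({}, ["c", "d"])
```

Partial order reduction generalizes this observation: it prunes all but a representative subset of interleavings before exploring the state space.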
Lemmas from the applπ library that enable partial order reduction are particularly useful to verify the mail server. Let us consider the fragment of the state space of the whole system depicted in Fig. 11. The initial process is shortened to client | work | fs | may_fail to save space. From the initial process, it is possible to perform either (1) a communication along channel i through which the client sends a first SMTP command to the mail server, leading to the process client' | work | fs | may_fail, or (2) one of the two communications along channel x enabled by the non-deterministic failure generator. Lemmas from the applπ library tell us that we can safely ignore the execution paths starting with the failure generator (dotted lines in Fig. 11), and resume verification from the process client' | work | fs | may_fail.
6.3 Discussion
The size of the formal proof is large but can be substantially reduced. In fact, prior to the case study presented in this paper, we had already verified the same mail server using a different approach\(^1\)). The basic idea of this approach was to model the mail server using only functional constructs provided by the Coq language. The formal proof that the mail server correctly handles client requests was almost four times smaller (1059 commands). However, it appears that both approaches lead to essentially the same proof tree thanks to partial order reduction. Therefore, the overhead induced by the applπ library is not a fundamental issue and can be alleviated by improving automation.
Despite this overhead, the verification of concurrent programs using the applπ library is still a satisfactory approach because it handles multi-threading explicitly and because it is possible to extract a runnable concurrent program from the model. The latter was not possible with our previous model\(^1\)) because it was polluted with functional constructs whose purpose was only the modeling of non-determinism. In contrast, modeling of non-deterministic failures in the applπ model does not interfere with the modeling of the original program. Consequently, it is possible to execute the code of the model as it is.
To execute the code of the model, it is possible to write a virtual machine to interpret the applπ modeling language. However, the extraction facility of Coq provides a more effective solution. This extraction facility turns Coq programs into ML programs (OCaml, Haskell, etc.) by associating to each Coq inductive type a corresponding ML datatype, to each Coq function a corresponding ML function, etc. In particular, the concurrency primitives of the applπ modeling language are extracted in the form of the constructors of the following ML datatype (we use OCaml syntax for concreteness):
\begin{verbatim}
Coq < Extraction proc.
type proc =
| ZeroP
| InP of Obj.t * (Obj.t -> proc)
| RinP of Obj.t * (Obj.t -> proc)
| OutP of Obj.t * Obj.t * proc
| ParP of proc * proc
| NuP of (Obj.t -> proc)
\end{verbatim}
where the type Obj.t corresponds to channels (this is because we have only provided Coq with the type of channels, not their implementation). The idea to execute the extracted OCaml code is to pre-process it to replace each call to one of the constructors above by a call to an OCaml function that implements the appropriate semantics. In other words, given some (parameterized) type channel and a set of functions zeroP, inP, etc. that implement the semantics of the applπ modeling language, we have a complete mechanism to run models.
7. Conclusion
In this paper, we have explained how one can verify an existing concurrent program using a Coq library. More precisely, we gave an overview of the verification of an existing mail server written in Java using the applπ library. First, we introduced the Coq proof assistant and the applπ library. Second, we explained how to model the mail server as an applπ program. Third, we explained how to model the environment of the mail server, including modeling of system errors. Last, we reported on the formal proof that the mail server correctly handles client requests. We compared the results of verification with an alternative approach and observed that (1) the overhead induced by the applπ library can potentially be eliminated through better automation, and that (2) the applπ model is more satisfactory because in particular it can be run as it is. This case study shows that the applπ library provides us with a complete solution to write, verify, and run concurrent programs within Coq.
8. Related Work
The issue of verification of concurrent programs in proof assistants has been addressed through formalization of the UNITY formalism\(^{5,8,13}\)). This work also includes various verifications of non-trivial concurrent programs. The originality of our case study is that we verify an existing implementation and discuss for that purpose several reusable techniques for modeling.
There exist several encodings of the π-calculus in proof assistants with accompanying libraries\textsuperscript{(9),(10),(16)}. This work essentially focuses on verification of meta-properties of the pure π-calculus. In comparison, the applπ library is built above an applied version of the π-calculus and we focus on verification of properties of programs.
In this paper, we used a proof assistant to verify a concurrent program. Model checking is an alternative approach\textsuperscript{(7)} that could equally well have been applied to the verification of the mail server. The advantage of proof assistants is that they can handle directly infinite state-spaces thanks to induction, contrary to model checkers that are limited to finite state-spaces (unless one resorts to appropriate abstraction techniques). This is the reason why we investigate the usage of proof assistants to verify concurrent programs.
9. Future Work
We plan to tackle the issue of reducing the size of formal proofs by improving automation in the applπ library and by combining interactive proofs with model checking for the applπ modeling language and its logic.
Acknowledgments This work is partially supported by a research project funded by Japanese Ministry of Education and Science’s research program “e-Society.”
References
(Received July 2, 2004)
( Accepted September 21, 2004)
Akinori Yonezawa received his Ph.D. degree in Computer Science from the MIT in 1977. He is currently professor in the Department of Computer Science at the University of Tokyo. His current major research interests are in the areas of concurrent/parallel computation models, programming languages, distributed computing, and software security. He was a member of the Scientific Advisory Board of the German National Research Institute of Computer Science, served as an associate editor of the ACM Transactions of Programming Languages and Systems (TOPLAS), and was a member of the editorial boards of IEEE Computer and IEEE Concurrency. He also acted as the president of the Japanese Society of Software Science and Technology. In 2000, he was appointed by the Prime Minister to be a member of the Reformation and Deregulation Committee and the chairman of its Education Subcommittee for three years. He is a fellow of the ACM as well as the Japanese Society of Software Science and Technology.
Query processing
• Understanding query processing helps produce better applications
• SQL is a declarative language: it describes the query result, but not how to get it.
• Query processing:
– Query analysis → logical query plan
– Query transformation
– Physical plan generation and optimization
– Query execution
Physical db design
• A query optimizer uses all available indexes, materialized views, etc. in order to better execute the query
– Data Base Administrator (DBA) is expected to set up a good physical design
– Good DBAs understand query optimizers very well
– Good DBAs are hard to find
Query execution steps: analysis
SQL COMMAND
CATALOG
ANALYSIS AND SIMPLIFICATION
LOGICAL OPERATOR TREE
QUERY TRANSFORMATION
SELECT Name
FROM Students S, Exams E
WHERE S.StudCode = E.Candidate AND City = 'PI' AND Grade > 25
Check the command, rewrite Boolean conditions, and produce the logical tree:

```
π_{Name}
 └─ σ_{City = 'PI' and Grade > 25}
     └─ ⋈_{S.StudCode = E.Candidate}
         ├─ Students S
         └─ Exams E
```
Query execution steps: transformation
Transform a logical query plan using equivalence rules to get a faster plan.
```
π_{Name}
 └─ σ_{City = 'PI' and Grade > 25}
     └─ ⋈_{S.StudCode = E.Candidate}
         ├─ Students S
         └─ Exams E
```

is transformed into:

```
π_{Name}
 └─ ⋈_{S.StudCode = E.Candidate}
     ├─ σ_{City = 'PI'}
     │   └─ Students S
     └─ σ_{Grade > 25}
         └─ Exams E
```
Select an algorithm for each logical operation.
**Ideally**: Want to find **best** physical plan.
**In practice**: Avoid **worst** physical plans!
---
**Query ex. steps: physical plan generation**

LOGICAL OPERATOR TREE → PHYSICAL PLAN GENERATION → PHYSICAL PLAN (PHYSICAL OPERATORS TREE) → PLAN EXECUTION → RESULT
---
```
Project ({Name})
 └─ NestedLoop (S.StudCode = E.Candidate)
     ├─ IndexFilter (Students, IdxP, City = 'PI')
     └─ Filter (Grade > 25)
         └─ TableScan (Exams)
```
Physical plan execution
• Each operator is implemented as an iterator using a ‘pull’ interface: when an operator is ‘pulled’ for the next output tuples, it ‘pulls’ on its inputs and computes them.
• An operator interface provides the methods open, next, isDone, and close implemented using the Storage Engine interface.
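The pull interface above can be sketched in Python. This is an illustrative toy (in-memory lists stand in for the Storage Engine, and method names are adapted to snake_case), not an actual engine:

```python
# Toy 'pull' iterators: each operator exposes open / next / is_done / close.

class TableScan:
    """Leaf operator: iterates over an in-memory list of tuples."""
    def __init__(self, rows):
        self.rows, self.pos = rows, 0
    def open(self):
        self.pos = 0
    def next(self):
        row = self.rows[self.pos]
        self.pos += 1
        return row
    def is_done(self):
        return self.pos >= len(self.rows)
    def close(self):
        pass

class Filter:
    """Pulls from its input and returns only tuples satisfying pred."""
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
        self.buffered = None
    def open(self):
        self.child.open()
        self._advance()
    def _advance(self):
        # Pull on the input until a qualifying tuple (or exhaustion).
        self.buffered = None
        while not self.child.is_done():
            row = self.child.next()
            if self.pred(row):
                self.buffered = row
                return
    def next(self):
        row = self.buffered
        self._advance()
        return row
    def is_done(self):
        return self.buffered is None
    def close(self):
        self.child.close()

# Pull the whole result through a small operator tree:
plan = Filter(TableScan([("Mary", 30), ("John", 24), ("Ann", 28)]),
              lambda r: r[1] > 25)
plan.open()
result = []
while not plan.is_done():
    result.append(plan.next())
plan.close()
print(result)  # [('Mary', 30), ('Ann', 28)]
```

The root only ever asks for "the next tuple"; intermediate results are never fully materialized.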
Interesting transformations
• **DISTINCT** Elimination
• **GROUP BY** Elimination
• **WHERE**-Subquery Elimination
• **VIEW** Elimination (Merging)
• Many are based on functional dependencies
• Do you remember functional dependencies?
Functional dependencies
• For $R(T)$ and $X, Y \subseteq T$
• $X \rightarrow Y$ (X determines Y) iff:
- $\forall r$ valid instance of $R$.
$\forall t_1, t_2 \in r$. If $t_1[X] = t_2[X]$ then $t_1[Y] = t_2[Y]$
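The definition can be checked directly on a concrete instance. A hedged sketch (attribute names and sample rows are made up, loosely following the example table that comes next):

```python
# Check whether X -> Y holds on instance r, straight from the definition:
# any two tuples that agree on X must agree on Y.

def holds(r, X, Y):
    """r: list of dicts; X, Y: lists of attribute names."""
    seen = {}  # projection on X -> projection on Y
    for t in r:
        x = tuple(t[a] for a in X)
        y = tuple(t[a] for a in Y)
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

r = [
    {"StudCode": 1, "Name": "Mary", "Subject": "DB", "Grade": 30},
    {"StudCode": 1, "Name": "Mary", "Subject": "SE", "Grade": 28},
    {"StudCode": 2, "Name": "John", "Subject": "DB", "Grade": 30},
]
print(holds(r, ["StudCode"], ["Name"]))              # True
print(holds(r, ["StudCode"], ["Grade"]))             # False
print(holds(r, ["StudCode", "Subject"], ["Grade"]))  # True
```

Note this only tests one instance; a dependency is a constraint on *all* valid instances.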
## Example
<table>
<thead>
<tr>
<th>StudCode</th>
<th>Name</th>
<th>City</th>
<th>Region</th>
<th>BirthYear</th>
<th>Subject</th>
<th>Grade</th>
<th>Univ</th>
</tr>
</thead>
<tbody>
<tr>
<td>1234567</td>
<td>Mary</td>
<td>Pisa</td>
<td>Tuscany</td>
<td>1995</td>
<td>DB</td>
<td>30</td>
<td>Pisa</td>
</tr>
<tr>
<td>1234567</td>
<td>Mary</td>
<td>Pisa</td>
<td>Tuscany</td>
<td>1995</td>
<td>SE</td>
<td>28</td>
<td>Pisa</td>
</tr>
<tr>
<td>1234568</td>
<td>John</td>
<td>Lucca</td>
<td>Tuscany</td>
<td>1994</td>
<td>DB</td>
<td>30</td>
<td>Pisa</td>
</tr>
<tr>
<td>1234568</td>
<td>John</td>
<td>Lucca</td>
<td>Tuscany</td>
<td>1994</td>
<td>SE</td>
<td>28</td>
<td>Pisa</td>
</tr>
</tbody>
</table>
- StudCode $\rightarrow$ Name, City, Region, BirthYear
- City $\rightarrow$ Region
- StudCode, Subject $\rightarrow$ Grade
- $\emptyset$ $\rightarrow$ Univ
- StudCode, Name $\rightarrow$ City, Univ, Name
Functional dependencies
- Trivial dependencies: $XY \rightarrow X$
- Atomic dependency: $X \rightarrow A$ (A attribute)
- Union rule:
- $X \rightarrow A_1...A_n$ iff $X \rightarrow A_1$ ... $X \rightarrow A_n$
- What about the lhs:
- Does $A_1...A_n \rightarrow X$ imply $A_1 \rightarrow X$ ... $A_n \rightarrow X$?
- Does $A_1 \rightarrow X$ imply $A_1..A_n \rightarrow X$?
- What does $\emptyset \rightarrow X$ mean?
Functional dependencies and keys
• Canonical dependencies:
– $X \rightarrow A$ but not $X' \rightarrow A$, for any $X' \subseteq X$
• Every non-trivial dependency ‘contains’ one or more canonical dependencies – just remove extraneous attributes
• Key: set $K$ such that $K \rightarrow T$ holds and is canonical
• In a well designed relation, only one kind of non-trivial canonical dependencies (BCNF):
– Key $\rightarrow A$ (key dependencies)
Deriving dependencies
• Given a set F of FDs, \( X \rightarrow Y \) is derivable from F (\( F \vdash X \rightarrow Y \)) iff \( X \rightarrow Y \) can be derived from F using the following rules:
– If \( Y \subseteq X \), then \( X \rightarrow Y \) (Reflexivity R)
– If \( X \rightarrow Y \) and \( Z \subseteq T \), then \( XZ \rightarrow YZ \) (Augmentation A)
– If \( X \rightarrow Y \) and \( Y \rightarrow Z \), then \( X \rightarrow Z \) (Transitivity T)
• Soundness:
– when \( r \models F \) and \( F \vdash X \rightarrow Y \), then \( r \models X \rightarrow Y \)
Closure of an attribute set
• **Definition** Given \( R<T, F> \) and \( X \subseteq T \), the *closure* of \( X \) wrt \( F \), denoted by \( X_F^+ \) (or just \( X^+ \) when \( F \) is clear), is:
\[ X_F^+ = \{ A_i \in T \mid F \vdash X \rightarrow A_i \} \]
• **Theorem:** \( F \vdash X \rightarrow Y \iff Y \subseteq X_F^+ \)
Example
• StudCode → Name, City, BirthYear
• City → Region
• StudCode, Subject → Grade
• Ø → Univ
• StudCode⁺ = {StudCode, Name, City, BirthYear, Region, Univ}
• (StudCode, Name)⁺ = {
• (Name,City)⁺ = {Name, City, Region, Univ}
• (StudCode, Subject)⁺
• Ø⁺
Dependencies in a SQL query
• Consider a query on a set of tables \( R_1(T_1), \ldots, R_n(T_n) \) such that no attribute name appears in two tables.
• After joins and select, assuming that the WHERE condition \( C \) is in CNF, these dependencies hold on the result:
– The initial dependencies: \( K_{ij} \rightarrow T_i \) for any key \( K_{ij} \) of the table \( R_i \)
– Constant dependencies \( \emptyset \rightarrow A \) for any factor \( A=c \) in \( C \)
– Join dependencies \( A_i \rightarrow A_j \) and \( A_j \rightarrow A_i \) for any factor \( A_i=A_j \)
Computing the closure of X
• Assume a product-select-project expression with CNF condition
• Let $X^+ = X$
• Add to $X^+$ all attributes $A_i$ such that $A_i = c$ is in C
• Repeat until $X^+$ stops changing:
– Add to $X^+$ all $A_j$ such that $A_k$ is in $X^+$ and $A_j = A_k$ or $A_k = A_j$ is in C
– Add to $X^+$ all attributes of $R_i$ if one key of $R_i$ is included in $X^+$
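The loop above can be sketched for plain FDs, folding the condition-derived dependencies into extra FDs. Data comes from the earlier closure example:

```python
# Attribute-closure algorithm: repeatedly fire FDs whose lhs is covered.

def closure(X, fds):
    """X: set of attributes; fds: list of (lhs, rhs) pairs of attribute sets."""
    result = set(X)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [
    ({"StudCode"}, {"Name", "City", "BirthYear"}),
    ({"City"}, {"Region"}),
    ({"StudCode", "Subject"}, {"Grade"}),
    (set(), {"Univ"}),   # the constant dependency Ø -> Univ
]
print(closure({"StudCode"}, fds))
# {'StudCode', 'Name', 'City', 'BirthYear', 'Region', 'Univ'}
print(closure({"Name", "City"}, fds))
# {'Name', 'City', 'Region', 'Univ'}
```

Each iteration either adds an attribute or stops, so the loop terminates after at most |T| passes.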
DISTINCT elimination
• Consider a SELECT DISTINCT query
– Duplicate elimination is very expensive, and DISTINCT is often redundant
• SELECT Name FROM Students
• SELECT StudId FROM Students
• SELECT StudId FROM Students NATURAL JOIN Exams
DISTINCT elimination
• Consider E returning a set of tuples of type \{T\}. If $A \rightarrow T$, then $\pi^b_A(E)$ creates no duplicates: if two lines coincide on $A$ they are the same line
• SELECT DISTINCT A
FROM R1(T1),...,Rn(Tn)
WHERE C:
– DISTINCT is redundant when $A^+$ is $T1 \cup ... \cup Tn$ (or $A^+$ includes a key for every relation in the join), assuming that all input tables are sets (have a key)
– $A^+$ can be computed as in the previous slide
Distinct elimination: example
Products(PkProduct, ProductName, UnitPrice)
Invoices(PkInvoiceNo, Customer, Date, TotalPrice)
InvoiceLines(FkInvoiceNo, LineNo, FkProduct, Qty, Price)
SELECT DISTINCT FkInvoiceNo, TotalPrice
FROM InvoiceLines, Invoices
WHERE FkInvoiceNo = PkInvoiceNo;
SELECT DISTINCT FkInvoiceNo, TotalPrice
FROM InvoiceLines, Invoices
WHERE FkInvoiceNo = PkInvoiceNo AND LineNo = 1;
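A hedged sqlite3 illustration of the two queries above (sample data is made up): in the first, DISTINCT is not redundant, since an invoice with two lines duplicates the (FkInvoiceNo, TotalPrice) pair; adding LineNo = 1 makes {FkInvoiceNo, TotalPrice}⁺ cover the key of InvoiceLines too, so DISTINCT becomes redundant.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Invoices(PkInvoiceNo INTEGER PRIMARY KEY, Customer, TotalPrice);
CREATE TABLE InvoiceLines(FkInvoiceNo, LineNo, Qty,
                          PRIMARY KEY (FkInvoiceNo, LineNo));
INSERT INTO Invoices VALUES (1, 'ACME', 500), (2, 'Beta', 900);
INSERT INTO InvoiceLines VALUES (1, 1, 10), (1, 2, 3), (2, 1, 7);
""")

q1 = """SELECT FkInvoiceNo, TotalPrice FROM InvoiceLines, Invoices
        WHERE FkInvoiceNo = PkInvoiceNo"""
q2 = q1 + " AND LineNo = 1"

rows1 = con.execute(q1).fetchall()
rows2 = con.execute(q2).fetchall()
print(len(rows1), len(set(rows1)))  # 3 2 -> duplicates: DISTINCT is needed
print(len(rows2), len(set(rows2)))  # 2 2 -> no duplicates: DISTINCT redundant
```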
DISTINCT elimination with GROUP BY
• Consider a GROUP BY query:
– SELECT DISTINCT X, f
– FROM R1,...,Rn WHERE C1
– GROUP BY X,Y HAVING C2
• The set X,Y determines all other attributes in the output of the run-time \( {}_{X,Y}\gamma_{f,g} \) operation
• Hence, DISTINCT is redundant when \( XY \subseteq X^+ \)
• The \( X^+ \) computation has to use the keys of R1,...,Rn and the conditions C1 and C2
Distinct elimination: example
SELECT DISTINCT FkInvoiceNo, COUNT(*) AS N
FROM InvoiceLines, Invoices
WHERE FkInvoiceNo = PkInvoiceNo
GROUP BY FkInvoiceNo, Customer;
Group by elimination
Products(PkProduct, ProductName, UnitPrice)
Invoices(PkInvoiceNo, Customer, Date, TotalPrice)
InvoiceLines(FkInvoiceNo, LineNo, FkProduct, Qty, Price)
SELECT FkInvoiceNo, COUNT(*) AS N
FROM InvoiceLines, Invoices
WHERE FkInvoiceNo = PkInvoiceNo
AND TotalPrice > 10000 AND LineNo = 1
GROUP BY FkInvoiceNo, Customer;
Is the query producing the data to be grouped free of duplicates?
SELECT FkInvoiceNo, Customer
FROM InvoiceLines, Invoices
WHERE FkInvoiceNo = PkInvoiceNo
AND TotalPrice > 10000 AND LineNo = 1;
WHERE-subquery elimination
```
-- nested, correlated
select *
from students s
where exists (select * from exams e where e.sid = s.sid)
```
```
-- nested, not correlated
select *
from students s
where s.sid in (select e.sid from exams e)
```
```
-- unnested
select distinct s.*
from students s natural join exams e
```
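A hedged sqlite3 check that the three formulations (correlated EXISTS, uncorrelated IN, unnested join with DISTINCT) return the same students; the sample data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE students(sid INTEGER PRIMARY KEY, name);
CREATE TABLE exams(sid, subject);
INSERT INTO students VALUES (1, 'Mary'), (2, 'John'), (3, 'Ann');
INSERT INTO exams VALUES (1, 'DB'), (1, 'SE'), (3, 'DB');
""")

correlated = """SELECT * FROM students s
                WHERE EXISTS (SELECT * FROM exams e WHERE e.sid = s.sid)"""
uncorrelated = """SELECT * FROM students s
                  WHERE s.sid IN (SELECT e.sid FROM exams e)"""
unnested = """SELECT DISTINCT s.* FROM students s JOIN exams e USING (sid)"""

r1 = sorted(con.execute(correlated).fetchall())
r2 = sorted(con.execute(uncorrelated).fetchall())
r3 = sorted(con.execute(unnested).fetchall())
print(r1 == r2 == r3)  # True: students 1 and 3 have at least one exam
```

The DISTINCT in the unnested form is essential here: student 1 has two exams and would otherwise appear twice.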
WHERE-subquery elimination
• The most important transformation: very common and extremely relevant
• Very difficult problem: no general algorithm
• We only consider here the basic case:
– Subquery is EXISTS (do not consider NOT EXISTS)
– Correlated subquery
– Subquery with no GROUP BY
# Left outer join

**R:**
<table>
<thead>
<tr><th>A</th><th>B</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>a</td></tr>
<tr><td>2</td><td>b</td></tr>
<tr><td>3</td><td>c</td></tr>
</tbody>
</table>

**S:**
<table>
<thead>
<tr><th>A</th><th>C</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>x</td></tr>
<tr><td>3</td><td>y</td></tr>
</tbody>
</table>

**SQL:**
```
SELECT * FROM R
NATURAL JOIN S;
```
Also called: natural inner join

**Result:**
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>a</td><td>x</td></tr>
<tr><td>3</td><td>c</td><td>y</td></tr>
</tbody>
</table>

**SQL:**
```
SELECT * FROM R
NATURAL LEFT JOIN S;
```
Also called: natural left outer join

**Result:**
<table>
<thead>
<tr><th>A</th><th>B</th><th>C</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>a</td><td>x</td></tr>
<tr><td>2</td><td>b</td><td>NULL</td></tr>
<tr><td>3</td><td>c</td><td>y</td></tr>
</tbody>
</table>
Outer join: right, full
• SELECT * FROM R NATURAL RIGHT JOIN S;
Also called: natural right **outer** join
• SELECT * FROM R NATURAL FULL JOIN S;
Also called: natural full **outer** join
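The inner vs left outer join behaviour can be replayed in sqlite3 on the R/S example data (a hedged sketch; right and full outer joins are omitted because older SQLite versions lack them):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE R(A, B);
CREATE TABLE S(A, C);
INSERT INTO R VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO S VALUES (1, 'x'), (3, 'y');
""")
inner = con.execute("SELECT * FROM R NATURAL JOIN S ORDER BY A").fetchall()
left = con.execute("SELECT * FROM R NATURAL LEFT JOIN S ORDER BY A").fetchall()
print(inner)  # [(1, 'a', 'x'), (3, 'c', 'y')]
print(left)   # [(1, 'a', 'x'), (2, 'b', None), (3, 'c', 'y')]
```

The left outer join preserves the dangling tuple (2, b), padding the S attributes with NULL.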
WHERE unnesting
• Courses(CrsName, CrsYear, Teacher, Credits)
• Transcripts(StudId, CrsName*, Year, Date, Grade)
WHERE unnesting
SELECT *
FROM Courses C
WHERE CrsYear = 2012 AND EXISTS (SELECT * FROM Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear);
• The unnested equivalent query is
SELECT DISTINCT C.*
FROM Courses C, Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
AND CrsYear = 2012;
WHERE unnesting
SELECT DISTINCT C.Teacher
FROM Courses C
WHERE CrsYear = 2012 AND
EXISTS (SELECT * FROM Transcripts T
WHERE T.CrsName = C.CrsName
AND T.Year = CrsYear);
• The unnested equivalent query is
SELECT DISTINCT C.Teacher
FROM Courses C, Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
AND CrsYear = 2012;
WHERE unnesting
SELECT C.Teacher
FROM Courses C
WHERE CrsYear = 2012 AND
EXISTS (SELECT * FROM Transcripts T
WHERE T.CrsName = C.CrsName
AND T.Year = CrsYear);
• It is not equivalent to the following, with or without DISTINCT:
SELECT (DISTINCT) C.Teacher
FROM Courses C, Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
AND CrsYear = 2012;
WHERE unnesting
• SELECT C.CrsName, C.Teacher
FROM Courses C
WHERE CrsYear = 2012 AND
EXISTS ( SELECT count(*) FROM Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
HAVING 27 < AVG(Grade))
• The unnested equivalent query is
• SELECT C.CrsName, C.Teacher
FROM Courses C, Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
AND CrsYear = 2012
GROUP BY C.CrsName, C.Teacher
HAVING 27 < AVG(Grade);
WHERE unnesting
• SELECT C.CrsName, C.Teacher
FROM Courses C
WHERE C.CrsYear = 2012 AND
EXISTS ( SELECT count(*) FROM Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
HAVING 0 = Count(*) )
• The following is wrong (the count bug problem)
• SELECT C.CrsName, C.Teacher
FROM Courses C, Transcripts T
WHERE T.CrsName = C.CrsName AND T.Year = CrsYear
AND CrsYear = 2012
GROUP BY C.CrsName, C.Teacher
HAVING 0 = Count(*) ;
WHERE unnesting
- **SELECT** C.CrsName, C.Teacher
**FROM** Courses C
**WHERE** C.CrsYear = 2012 **AND**
**EXISTS** ( **SELECT** * FROM Transcripts T
**WHERE** T.CrsName = C.CrsName **AND** T.Year = CrsYear
**HAVING** 0 = Count(*))
- The following is ok:
- **SELECT** C.CrsName, C.Teacher
**FROM** Courses C **LEFT JOIN** Transcripts T
**ON** (T.CrsName = C.CrsName **AND** T.Year = CrsYear)
**WHERE** CrsYear = 2012
**GROUP BY** C.CrsName, C.Teacher
**HAVING** 0 = Count(T.Grade);
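The count bug can be demonstrated in sqlite3 (a hedged sketch with a simplified, made-up schema): the plain-join GROUP BY version can never return a course with zero transcripts, because that course produces no group at all, while the LEFT JOIN version counts non-null grades and can.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Courses(CrsName, Teacher);
CREATE TABLE Transcripts(CrsName, Grade);
INSERT INTO Courses VALUES ('DB', 'Smith'), ('SE', 'Jones');
INSERT INTO Transcripts VALUES ('DB', 30);   -- 'SE' has no transcripts
""")
wrong = con.execute("""
    SELECT C.CrsName FROM Courses C, Transcripts T
    WHERE T.CrsName = C.CrsName
    GROUP BY C.CrsName HAVING 0 = COUNT(*)""").fetchall()
ok = con.execute("""
    SELECT C.CrsName FROM Courses C LEFT JOIN Transcripts T
    ON T.CrsName = C.CrsName
    GROUP BY C.CrsName HAVING 0 = COUNT(T.Grade)""").fetchall()
print(wrong)  # []         -- 'SE' is lost by the inner join
print(ok)     # [('SE',)]  -- the course with no transcripts
```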
View merging
- CREATE VIEW TestView AS
SELECT Price, AName
FROM Order, Agent
WHERE FKAgent = PKAgent;
- SELECT Price, AName
FROM TestView
WHERE Price = 1000;
- Can the query be transformed to avoid the use of the view?
Temporary view
• Created by a SELECT in the FROM:
• SELECT ...
FROM (SELECT ... FROM ...) AS Q1,
(SELECT ... FROM ...) AS Q2,
WHERE ...
• Same as
• WITH Q1 AS (SELECT ... FROM ...)
, Q2 AS (SELECT ... FROM ...)
SELECT .... FROM Q1, Q2, WHERE ...
View merging

The approach:
(1) View Logical Plan
(2) Query Logical Plan
(3) Query plan with the View plan substituted in
(4) Query Transformed Logical Plan (SQL without the View)
View merging: an equivalence rule
• Let $X_R$ be attributes of $R$ with $f_k \in X_R$ a foreign key of $R$ referring to $p_k$ of $S$ with attributes $A(S)$, then
$$({}_{X_R}\gamma_F(R)) \bowtie_{f_k = p_k} S \equiv {}_{X_R \cup A(S)}\gamma_F(R \bowtie_{f_k = p_k} S)$$
CREATE VIEW TestView AS
SELECT Price, AName
FROM Order, Agent
WHERE FKAgent = PKAgent;
SELECT Price, AName
FROM TestView
WHERE Price = 1000;
TestView is merged in four steps:
(1) \( TestView = \pi_{Price, AName}(Order \bowtie_{FKAgent = PKAgent} Agent) \)
(2) \( \pi_{Price, AName}(\sigma_{Price = 1000}(TestView)) \)
(3) \( \pi_{Price, AName}(\sigma_{Price = 1000}(\pi_{Price, AName}(Order \bowtie_{FKAgent = PKAgent} Agent))) \)
(4) \( \pi_{Price, AName}(\sigma_{Price = 1000}(Order \bowtie_{FKAgent = PKAgent} Agent)) \)
CREATE VIEW FKAgent_GBY AS
SELECT FKAgent, COUNT(*) AS No
FROM Order
GROUP BY FKAgent;
SELECT AName, No
FROM FKAgent_GBY, Agent
WHERE FKAgent = PKAgent
AND ACity = 'Pisa';
After merging the view, the query becomes:

SELECT AName, COUNT(*) AS No
FROM Order, Agent
WHERE FKAgent = PKAgent
AND ACity = 'Pisa'
GROUP BY FKAgent, AName;
\[
({}_{X_R}\gamma_F(R)) \bowtie_{f_k=p_k} S \equiv {}_{X_R \cup A(S)}\gamma_F(R \bowtie_{f_k=p_k} S)
\]
Physical plan generation
• Main steps:
– Generate plans
– Evaluate their cost
• Plan generation:
– Needs to keep track of attributes and order of each intermediate result
• Cost evaluation:
– Evaluate the size of each intermediate result
– Evaluate the cost of each operator
Physical plan generation phase: statistics and catalog
- The Catalog contains the following statistics:
- \( N_{\text{reg}} \) and \( N_{\text{pag}} \) for each relation.
- \( N_{\text{key}} \) and \( N_{\text{leaf}} \) for each index.
- min/max values for each index key.
- ... Histograms
- The Catalog is updated with the command \textbf{UPDATE STATISTICS}
Single relation queries
• \( S(PkS, FkR, aS, bS, cS) \)
• SELECT \( bS \)
FROM \( S \)
WHERE \( FkR > 100 \) AND \( cS = 2000 \)
• The only question is which index or indexes to use
• If we have an index on \( (cS, FkR, bS) \), an IndexOnly plan can be used
Multiple relation queries
• Basic issue: join order
• Every permutation is a different plan
– AxBxCxD
– BxAxCxD
– BxCxAxD
– ...
• $n!$ permutations
Multiple relation queries
• Every permutation is many different plans
– $Ax(Bx(CxD))$
– $(AxB)x(CxD)$
– $(Ax(BxC))xD$
– $Ax((BxC)xD)$
– ...
• Many different choices of join operator
• Huge search space!
Full search
[Figure: search space for R Join S Join T. Plans are expanded over one relation (S1, S2, S3), then two relations (S4, S5, S10), then three relations (S6); the legend marks the minimum-cost physical plan.]
Optimization algorithm for a join
```
Initialize Plans with one tree for each restricted relation
repeat {
    extract from Plans the fastest plan P
    if P is complete, exit
    else, expand P:
        join P with all other plans P' on disjoint relations
        for each P join P', put the best tree in Plans
    remove P
}
```
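The search loop can be sketched as a toy greedy, left-deep variant. Everything here is made up for illustration: the relation sizes, the per-edge selectivities, and the deliberately naive cost model (cost of a join = product of input sizes):

```python
# Greedy left-deep join ordering with a toy cost model.

sizes = {"A": 1000, "B": 10, "C": 100, "D": 50}
sel = {frozenset("AB"): 0.01, frozenset("BC"): 0.05,
       frozenset("CD"): 0.02}             # missing pairs: cross product

def join_size(current_size, joined, new):
    """Estimated cardinality after joining the current result with `new`."""
    s = 1.0
    for r in joined:
        s *= sel.get(frozenset((r, new)), 1.0)
    return current_size * sizes[new] * s

def greedy_left_deep(relations):
    # Start from the smallest relation, then always pick the relation
    # whose join gives the smallest estimated intermediate result.
    best = min(relations, key=lambda r: sizes[r])
    order, joined, size, cost = [best], {best}, sizes[best], 0.0
    while len(joined) < len(relations):
        cand = min((r for r in relations if r not in joined),
                   key=lambda r: join_size(size, joined, r))
        cost += size * sizes[cand]        # toy per-join cost
        size = join_size(size, joined, cand)
        joined.add(cand)
        order.append(cand)
    return order, cost

order, cost = greedy_left_deep(["A", "B", "C", "D"])
print(order)  # ['B', 'C', 'D', 'A']
```

Real optimizers use dynamic programming over subsets plus pruning rather than this single greedy pass, but the expand-cheapest-plan flavour is the same.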
Optimization algorithm: heuristics
- Left deep: generate left-deep trees only
- Greedy: after a node is expanded, only expand its expansions
- Iterative full search: alternate full and greedy
- Interesting-order plans should also be considered
Example
R(N, D, T, C), with indexes on C and T
S(C, O, E), with indexes on C and E
SELECT S.C, S.O
FROM S, R
WHERE S.C = R.C AND E = 13 AND T = ‘AA’;
\[ \pi_{S.C, S.O}^{b}(\sigma_{E = 13 \land T = ‘AA’}(S \bowtie R)) \]
\[ \pi_{S.C, S.O}^{b}(\text{\underline{\sigma}_{E = 13}(S) \bowtie \sigma_{T = ‘AA’}(R)}) \]
Example
R(N, D, T, C), with indexes on C and T
S(C, O, E), with indexes on C and E
\[ \pi_{S \cdot C, S \cdot O}^{b} \left( \sigma_E = 13(S) \bowtie \sigma_T = 'AA'(R) \right) \]
[Figure: physical plans for each subexpression on the base relations; the minimum-cost plan for each is kept.]
Example
\[ \pi^b_{S.C, S.O}(\sigma_{E=13}(S) \bowtie \sigma_{T='AA'}(R)) \]

[Figure: candidate physical plans for the two join orders. With \( \sigma_{E=13}(S) \) as outer: IndexNestedLoop or NestedLoop on S.C = R.C over IndexFilter (S, IdxE, E=13), with IndexFilter (R, IdxT, T='AA'), Filter (T='AA'), or IndexFilter (R, IdxRC, C=S.C) for the inner. With \( \sigma_{T='AA'}(R) \) as outer: IndexNestedLoop or NestedLoop on R.C = S.C over IndexFilter (R, IdxT, T='AA'), with IndexFilter (S, IdxSC, C=R.C) or Filter (E=13) for the inner. The minimum-cost plan is marked.]
Example
\[ \pi_{S.C, S.O}^{b}(\sigma_{E = 13}(S) \bowtie \sigma_{T = 'AA'}(R)) \]
Final physical plan
Optimization of queries with grouping and aggregations
• The standard way to evaluate queries with group-by is to produce a plan for the join, and then add the group-by
• To produce cheaper physical plans the optimizer should consider doing the group-by before the join
Example
```
SELECT FKAgent, SUM(Qty) AS SQ
FROM Order, Agent
WHERE FKAgent = PKAgent AND ACity = 'Pisa'
GROUP BY FKAgent;
```
Pre-grouping

Standard Physical Plan:
```
HashGroupBy ({FKAgent}, {SUM(Qty) AS SQ})
 └─ NestedLoop (PKAgent = FKAgent)
     ├─ Filter (ACity = 'Pisa')
     │   └─ TableScan (Agent)
     └─ TableScan (Order)
```

Physical Plan with the Pre-Grouping:
```
Project ({FKAgent, SQ})
 └─ NestedLoop (FKAgent = PKAgent)
     ├─ HashGroupBy ({FKAgent}, {SUM(Qty) AS SQ})
     │   └─ TableScan (Order)
     └─ Filter (ACity = 'Pisa')
         └─ TableScan (Agent)
```
Assumptions
• The tables do not have null values, and primary and foreign keys have only one attribute
• The queries are a single SELECT with GROUP BY and HAVING but without subselect, DISTINCT and ORDER BY clauses
• In the SELECT there are all the grouping attributes
The pre-grouping problem
\[ {}_X\gamma_F(R \bowtie_{f_k=p_k} S) \]
When and how can the group-by be pushed through the join?
\[ {}_X\gamma_F(R \bowtie_{f_k=p_k} S) \equiv \ldots(({}_{X'}\gamma_{F'}(R)) \bowtie_{f_k=p_k} S) \]
Grouping equivalence rules: $\sigma$

When can a selection commute with grouping, $\sigma_{\varphi}({}_X\gamma_F(E)) \equiv {}_X\gamma_F(\sigma_{\varphi}(E))$? Two cases to consider for the selection:

1) $\sigma_{\varphi_X}({}_X\gamma_F(E)) \equiv {}_X\gamma_F(\sigma_{\varphi_X}(E))$ when $\varphi_X$ uses only the grouping attributes $X$ (in SQL: the HAVING condition can be moved into the WHERE)

2) $\sigma_{\varphi_F}({}_X\gamma_{\text{AGG}(A_1)\text{ AS }F_1, \ldots, \text{AGG}(A_n)\text{ AS }F_n}(E))$, where $\text{AGG} \in \{\text{COUNT}, \text{SUM}, \text{MIN}, \text{MAX}, \text{AVG}\}$

Bad news: two cases only:

$$\sigma_{Mb \geq v}({}_X\gamma_{\text{MAX}(b)\text{ AS }Mb}(E)) \equiv {}_X\gamma_{\text{MAX}(b)\text{ AS }Mb}(\sigma_{b \geq v}(E))$$

$$\sigma_{mb \leq v}({}_X\gamma_{\text{MIN}(b)\text{ AS }mb}(E)) \equiv {}_X\gamma_{\text{MIN}(b)\text{ AS }mb}(\sigma_{b \leq v}(E))$$
Grouping equivalence rules
Assume that $X \rightarrow Y$:
$${}_X\gamma_F(E) \equiv \pi^b_{X \cup F}({}_{X \cup Y}\gamma_F(E))$$
<table>
<thead>
<tr>
<th>PKOrder</th>
<th>FKAgent</th>
<th>...</th>
<th>PKAgent</th>
<th>AName</th>
<th>ACity</th>
<th>...</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>...</td>
<td>1</td>
<td>Rossi</td>
<td>Pisa</td>
<td>...</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>...</td>
<td>2</td>
<td>Verdi</td>
<td>Firenze</td>
<td>...</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>...</td>
<td>1</td>
<td>Rossi</td>
<td>Pisa</td>
<td>...</td>
</tr>
<tr>
<td>4</td>
<td>2</td>
<td>...</td>
<td>2</td>
<td>Verdi</td>
<td>Firenze</td>
<td>...</td>
</tr>
</tbody>
</table>
Grouping equivalence rules
• Let $F$ be decomposable into a local part $F_l$ and a global part $F_g$
\[ {}_X\gamma_F(E) \equiv {}_X\gamma_{F_g}({}_{X \cup Y}\gamma_{F_l}(E)) \]
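Decomposability is easy to see on plain numbers. A hedged illustration (the partition into sub-groups is arbitrary): SUM is the SUM of partial SUMs, COUNT is the SUM of partial COUNTs, and AVG is recovered globally from partial (sum, count) pairs.

```python
# Local phase F_l on each sub-group, then global phase F_g over the partials.

data = [3, 5, 8, 2, 9, 1]
chunks = [data[:3], data[3:]]               # any partition into sub-groups

partial = [(sum(c), len(c)) for c in chunks]   # local: (SUM, COUNT) per chunk
total_sum = sum(s for s, _ in partial)         # global SUM of partial SUMs
total_cnt = sum(n for _, n in partial)         # global SUM of partial COUNTs

print(total_sum == sum(data))                          # True
print(total_cnt == len(data))                          # True
print(total_sum / total_cnt == sum(data) / len(data))  # True: AVG decomposes
```

Note AVG is *not* the average of partial averages; it decomposes only via the (sum, count) pair.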
The pre-grouping problem
\[ {}_X\gamma_F(R \bowtie_{f_k=p_k} S) \]
When and how can the group-by be pushed through the join?
\[ {}_X\gamma_F(R \bowtie_{f_k=p_k} S) \equiv \ldots(({}_{X'}\gamma_{F'}(R)) \bowtie_{f_k=p_k} S) \]
Three cases
The invariant grouping rule
**Proposition 1.** $R$ has the **invariant grouping** property
\[
X \gamma_F(R \bowtie^C_j S) \equiv \pi^b_{X \cup F}((X \cup A(C_j) - A(S) \gamma_F(R)) \bowtie^C_j S)
\]
if the following conditions are true:
1. $C_j \models X \rightarrow A(S)$: in every group, only one line from $S$
- in practice: $C_j$ is $f_k = p_k$, with $f_k$ in $R$, $p_k$ key for $S$, $X \rightarrow f_k$
2. Each aggregate function in $F$ only uses attributes from $R$.
Example
```
SELECT PKAgent, SUM(Qty) AS SQ
FROM Order, Agent
WHERE FKAgent = PKAgent AND ACity = 'Pisa'
GROUP BY PKAgent;
```
Example
\[ {}_X\gamma_F(R \bowtie_{C_j} S) \equiv \pi^b_{X \cup F}(({}_{X \cup A(C_j) - A(S)}\gamma_F(R)) \bowtie_{C_j} S) \]

```
π^b_{PKAgent, SQty}
 └─ ⋈_{FKAgent = PKAgent}
     ├─ FKAgent γ_{SUM(Qty) AS SQty}
     │   └─ Order
     └─ σ_{ACity = 'Pisa'}
         └─ Agent
```
Tests
SELECT PKAgent, ACity, SUM(Qty) AS SQ
FROM Order, Agent
WHERE FKAgent = PKAgent
GROUP BY PKAgent, ACity;
SELECT ACity, SUM(Qty) AS SQ
FROM Order, Agent
WHERE FKAgent = PKAgent
GROUP BY ACity;
SELECT AName, SUM(Qty) AS SQ
FROM Order, Agent
WHERE FKAgent = PKAgent AND ACity = 'Pisa'
GROUP BY AName;
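The pre-grouping transformation can be checked on concrete data. A hedged sqlite3 sketch with made-up Order/Agent rows, where a subquery plays the role of the pre-grouping step (Order is quoted because it is a reserved word):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE "Order"(PkOrder INTEGER PRIMARY KEY, FkAgent, Qty);
CREATE TABLE Agent(PkAgent INTEGER PRIMARY KEY, AName, ACity);
INSERT INTO Agent VALUES (1, 'Rossi', 'Pisa'), (2, 'Verdi', 'Firenze');
INSERT INTO "Order" VALUES (1, 1, 10), (2, 2, 5), (3, 1, 7), (4, 2, 2);
""")
standard = con.execute("""
    SELECT PkAgent, SUM(Qty) AS SQ FROM "Order", Agent
    WHERE FkAgent = PkAgent AND ACity = 'Pisa'
    GROUP BY PkAgent""").fetchall()
pregrouped = con.execute("""
    SELECT PkAgent, SQ
    FROM (SELECT FkAgent, SUM(Qty) AS SQ FROM "Order" GROUP BY FkAgent) G,
         Agent
    WHERE G.FkAgent = PkAgent AND ACity = 'Pisa'""").fetchall()
print(standard == pregrouped == [(1, 17)])  # True
```

The grouping key FkAgent joins to the key PkAgent and the aggregate uses only Order attributes, so the invariant grouping conditions hold and the two plans agree.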
Summary
- Understand principles and methods of query processing in order to produce a good physical design and better applications
- Query rewriting
- Production of alternative plans and cost evaluation
Towards Assisted Remediation of Security Vulnerabilities
Gabriel Serme, Anderson Santana De Oliveira, Marco Guarnieri and Paul El Khoury
Abstract—Security vulnerabilities are still prevalent in systems despite the existence of their countermeasures for several decades. In order to detect the security vulnerabilities missed by developers, complex solutions are undertaken like static analysis, often after the development phase and with a loss of context. Although vulnerabilities are found, there is also an absence of systematic protection against them. In this paper, we introduce an integrated Eclipse plug-in to assist developers in the detection and mitigation of security vulnerabilities using Aspect-Oriented Programming early in the development life-cycle. The work is a combination of static analysis and protection code generation during the development phase. We leverage the developer interaction with the integrated tool to obtain more knowledge about the system, and to report back a better overview of the different security aspects already applied, then we discuss challenges for such code correction approach. The results are an in-depth solution to assist developers to provide software with higher security standards.
Keywords—Security, AOP, Software Engineering, Static Analysis, Vulnerability Remediation
I. INTRODUCTION
After a decade of existence, cross-site scripting (XSS), SQL injection, and other types of security vulnerabilities associated with input validation can cause severe damage once exploited. Scholte et al. conducted an empirical study showing that the number of reported vulnerabilities is not decreasing.
While computer security is primarily a matter of secure design and architecture, it is also known that even with the best designed architectures, security bugs will still show up due to poor implementation. Thus, fixing security vulnerabilities before shipment can no longer be considered optional. Most of the reported security vulnerabilities are leftovers forgotten by developers and thought to be benign code. Such mistakes can survive unaudited for years until they end up exploited by hackers.
The software development lifecycle introduces several steps to audit and test the code produced by developers in order to detect security bugs, ranging from code review tools for early detection of security bugs to penetration testing. These tools automate tasks normally handled manually or requiring complex processing and data manipulation. They are able to detect many kinds of errors and software defects, but developers have to face heterogeneous tools, each one with a different process to make it run correctly, and they have to analyze the results of all the tools, merge them and fix the source code accordingly. For instance, code scanner tools are usually designed to be independent from the developers' environment. They therefore gain in flexibility but lose comprehensiveness and the possibility to interact with the people who have experience with the application code. Thus, the tools produce results that are not directly linked to application defects; this is the case, for example, for code scanner tools triggering several false positives, which are not actual vulnerabilities.
The contributions of this paper are twofold. First, we focus on static code analysis, an automated approach to perform code review integrated in developer’s environment. This technique analyzes the source code and/or binary code without executing it and identifies anti-patterns that lead to security bugs. We focus on security vulnerabilities caused by missing input validation, the process of validating all the inputs to an application before using it. Although our tool handles other kinds of vulnerabilities, here we discuss on three main vulnerabilities caused by missing input validation, or mis-validation of the input: cross-site scripting (also called XSS), Directory Path Traversal and SQL Injection. Second, we provide an innovative assisted remediation process that
employs Aspect-Oriented Programming for semi-automatic vulnerability correction. The combination of these mechanisms improves the quality of the software with respect to security requirements.
The paper is structured as follows: Section II presents the overall agile approach to conduct code scanning and correct vulnerability during the development phase. Then, Section III presents the architecture we adopt to combine the static analysis with the code correction component. The Section IV describes the static analysis process with its integration in the developers’ environment. Then, we explain techniques for assisted remediation along with pros and cons in Section V. Finally, we discuss the advantages of our approach compared to related work in Section VI and we conclude in Section VII.
II. AN AGILE APPROACH
Agile approaches to software development require the code to be refactored, reviewed and tested at each iteration of the development lifecycle. While unit testing can be used to check functional requirements fulfillment during iterations, checking emerging properties of software such as security or safety is more difficult. We aim to provide each developer with a simple way to do daily security static analysis on his code. This is best achieved by providing a security code scanner integrated in the development environment (the Eclipse IDE in our case) and a decentralized architecture that allows the security experts to assist the developers with any of the findings. Typically, that includes verifying false positives and correspondingly adjusting the code scanner test cases, or assisting in reviewing the solutions for the fixes. This brings several advantages over the approach in which static analysis happens only at the end. The expertise about the context in which the code was developed lies in the development groups; therefore, the interaction between the development team and the security experts is faster, with less effort in finding and applying corrections to the security functionalities. The experts provide support on a case-by-case basis for a better tuning of false positive detection across teams, reducing the final costs of maintenance: solving security issues in the development phase reduces the number of issues that the security experts must analyze at the end.
Maintaining the separation of roles between the security experts performing the code scanning and the team members developing the application raises a critical complication from a time perspective, due to the human interaction between security experts and developers. If such an approach is to scale to what most agile approaches describe, the number of iterations between developers and experts must be reduced. This can be achieved by up-skilling the developers and reducing their interaction with the security experts for the analysis of the security scans of the project, which is simplified by the introduction of our tool.
Our incentive is to harvest the advantages acquired by using our approach in an agile and decentralized static analysis process early in the software development lifecycle. It raises security awareness for the developers at the development time and reduces maintenance costs. A tool covering the previous needs should fulfill several requirements:
- easy-to use for users non-experts in security
- domain specific with integration into developers’ daily environment, to maximize adoption and avoid additional steps to run the tool
- adjustable to maximize project knowledge and reduce false positives and negatives
- reflexive to adjust accuracy of the scan over time, with collaborative feedbacks for example
- supportive to assist developers in correcting and understanding issues
- educative to help developers understand errors, the steps to correct them, and techniques to prevent future vulnerabilities
We have developed an Eclipse plug-in, presented in [2], made of components leveraging a decentralized approach to static analysis. It gives direct access to detected flaws and a global overview of system vulnerabilities. The developer analyzes his code and reviews vulnerabilities when necessary.
Figure 1 presents the interaction between the two phases: the static analysis phase scans the code in order to identify and classify the different vulnerabilities found; it is described in detail in Section IV. The remediation is then performed directly by the developers, who decide what to remediate by undertaking actions, with support from our second component. The full remediation process is given in Section V.
III. ARCHITECTURE
Figure 2 represents the architecture of our prototype. First of all, we consider two main stakeholders involved in the configuration and usage of the prototype. Security experts and developers regroup different profiles whose goal is to provide and configure the knowledge database in order to avoid false positives and negatives, and to provide better accuracy during the analysis phase. They have two main tasks. First, they update the knowledge base, adding to it classes or methods that can be considered as trusted for one or more vulnerabilities. Second, the knowledge database receives feedback from the analysis on possible trusted objects for one or more security vulnerabilities; they must analyze these objects in more detail and, if they are really trusted, tag them as trusted in the knowledge base. We explain the different concepts and tasks further in Section IV.
The second role is the developer, who interacts directly with the static analysis engine to verify vulnerabilities in the application, code and libraries under his responsibility. The developer at this stage can be naive, with no grasp of the complexity of security flows. The knowledge base is shared among developers. It contains all the security knowledge about trust: objects that do not introduce security issues into the code. Security experts and developers with an understanding of security patterns maintain and keep under control the definitions used by all developers, in an easy way, through an admin web application or web services. In this way the code scanner testing rules are harmonized for the whole application or even on a per-project basis. The knowledge base allows developers to run static analysis that is perfectly adapted to the context of their project.
In industrial-scale projects, daily scans are recommended. In order to facilitate this task, we provide a plug-in for Eclipse that uses the abstract syntax tree (AST) generated by the JDT compiler (the compiler that Eclipse provides as part of the Java Development Tools platform) to simplify the static analysis process. The plug-in accesses the knowledge database via web services, making it possible for each developer to run the code scanner independently. We detail its components in the next section.
IV. STATIC ANALYSIS
Static analysis can report security bugs even when scanning small pieces of code. Another family of code scanners is based on dynamic analysis techniques that acquire information at runtime. Unlike static analysis, dynamic analysis requires a running executable. Static analysis scans all the source code, while dynamic analysis can only verify the use cases actually being executed. The major drawback of static analysis is that it can report both false positives and false negatives. The former detects a security vulnerability that is not truly a security vulnerability, while the latter means that it misses reporting certain security vulnerabilities. Having false negatives is highly dangerous, as it gives a false sense of protection while a vulnerability is present and can be exploited, whereas having false positives primarily slows down the static analysis process. Modern static analysis tools, similarly to compilers, build from the source code an abstract syntax tree that represents the abstract syntactic structure of the code, and analyze it.
A. Static Analysis Process
In a nutshell, our process allows developers to run a check on their code to uncover potential vulnerabilities by checking for inputs that have not been validated. It finds information flows connecting an entry point and an exit point that do not pass through a trusted object for the considered vulnerabilities. The algorithm uses an abstract syntax tree of the software in conjunction with the knowledge base to identify the vulnerable points. Figure 3 presents the different analysis steps performed from the moment the developer presses the analysis button to the display of results.

The static analysis works on the Document Object Model generated by the Eclipse JDT component, which is capable of handling all constructs described in the Java Language Specification [3].
The static analysis process is described as follows:
- The engine contacts the knowledge database in order to retrieve the up-to-date and most accurate configuration from the shared platform. If the developer cannot retrieve the configuration, he can still work independently with the latest local configuration.
- The process identifies all entry points of interest in the accessible source code and libraries. The analysis is based on the previously mentioned AST. We are gathering the different variables and fields used as well as the different methods. We apply a first filter with pattern-matching on the potential entry points: a method call or a new object instantiation might be tagged as returning trusted inputs.
- For each entry point the control flow is followed to create the connections between methods, variables and fields to discover all the exit points. For instance, the engine visits assignments, method invocations and construction of new objects with the variables and fields detected during the entry point gathering.
- Once the different exit points have been collected, we evaluate the risk of having security vulnerabilities in the code. We check for an absence of validation in the flow for the different kinds of vulnerabilities. For instance, if the flow from an entry point to an exit point passes through a method or a class, which is known to validate SQL input, the flow is tagged as trusted for this specific vulnerability. Of course, the tag runs from the moment where the method validates for the vulnerability to the moment of a novel composition with potential vulnerable code, or until an exit point.
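The last step above can be sketched as follows. This is a hypothetical, heavily simplified illustration (class and method names are ours, not the tool's actual API): a flow from an entry point to an exit point is tagged trusted for a vulnerability only if some call on the path is a known sanitizing (trusted) object.

```java
import java.util.*;

// Simplified sketch of the risk-evaluation step: a flow is the ordered
// list of calls from an entry point to an exit point; it is trusted for
// a vulnerability only if a known sanitizer appears somewhere on it.
public class FlowCheck {

    // knowledge base excerpt: vulnerability -> methods trusted to sanitize it
    static final Map<String, Set<String>> TRUSTED = Map.of(
        "SQL_INJECTION", Set.of("SqlEscaper.escape"),
        "XSS", Set.of("HtmlEncoder.encode"));

    static boolean isTrusted(List<String> flow, String vulnerability) {
        Set<String> sanitizers = TRUSTED.getOrDefault(vulnerability, Set.of());
        for (String step : flow)
            if (sanitizers.contains(step)) return true; // sanitized upstream
        return false;                                   // report as vulnerable
    }

    public static void main(String[] args) {
        List<String> raw  = List.of("req.getParameter", "writer.print");
        List<String> safe = List.of("req.getParameter", "HtmlEncoder.encode", "writer.print");
        System.out.println(isTrusted(raw, "XSS"));  // flagged: no sanitizer on path
        System.out.println(isTrusted(safe, "XSS")); // trusted: encoder on path
    }
}
```

In the real engine the flow is of course derived from the AST rather than from strings, but the trust decision has this shape.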
B. Multiple vulnerability analysis
In the previous section, we presented the global analysis process. In this section, we discuss more in depth the notion of trusted object and vulnerability propagation for the different vulnerabilities we address. Listing 1 presents some source code vulnerable to cross-site scripting.
The vulnerability propagates from the request parameter to the object query, which is then written in the response. The problem of identifying security vulnerabilities caused by errors in input validation can thus be translated into finding an information flow connecting an entry point and an exit point that does not pass through a trusted object for one or more security vulnerabilities.
Listing 1. Vulnerability propagation of a cross site scripting
```java
String query = req.getParameter("query");
writer.print("<html><body>\n");
writer.print(query + "\n");
writer.print("</body></html>\n");
writer.flush();
writer.close();
}
```
We define an input as a data flow from any external class, method or parameter into the code being programmed. We also define as entry point any point in the source code where an untrusted input enters the program being scanned, like the query input from Listing 1. In an analogous way, we define as output any data flow that goes from the code being programmed into external objects or method invocations. Our approach relies on our trusted object definition, which impacts the detection accuracy. A trusted object is a class or a method that can sanitize all the information flow from an entry point to an exit point for one or more security vulnerabilities. We implemented the trust definitions in the centralized knowledge base presented in the previous section. The knowledge database represents the definitions using a trust hierarchy that follows the package hierarchy.
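The trust hierarchy lookup can be sketched as follows (all qualified names here are assumptions for illustration, not entries of the actual knowledge base): tagging a package trusted for a vulnerability implicitly trusts every class and method beneath it, so the lookup climbs the dotted name until it finds a tagged prefix or runs out.

```java
import java.util.*;

// Sketch of a trust lookup that follows the package hierarchy.
public class TrustHierarchy {

    // qualified prefix -> vulnerabilities it is trusted for
    static final Map<String, Set<String>> TAGS = new HashMap<>();

    static void tagTrusted(String element, String vulnerability) {
        TAGS.computeIfAbsent(element, k -> new HashSet<>()).add(vulnerability);
    }

    // An element is trusted if it, or any enclosing class/package, is tagged.
    static boolean isTrusted(String qualifiedName, String vulnerability) {
        String name = qualifiedName;
        while (true) {
            Set<String> vulns = TAGS.get(name);
            if (vulns != null && vulns.contains(vulnerability)) return true;
            int dot = name.lastIndexOf('.');
            if (dot < 0) return false;
            name = name.substring(0, dot); // climb to the enclosing element
        }
    }

    public static void main(String[] args) {
        tagTrusted("org.example.security", "XSS"); // trust a whole package
        System.out.println(isTrusted("org.example.security.HtmlEncoder.encode", "XSS"));
        System.out.println(isTrusted("org.example.dao.Query.run", "XSS"));
    }
}
```

This mirrors the property stated in the text: trusting a package trusts all classes and methods in it, and trusting a class trusts all its fields and methods.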
Security experts can tag classes, packages or methods as trusted for one or more security vulnerabilities, according to their analysis, feedback from developers or static analysis results. Obviously, defining a trusted element in the trust hierarchy also adds all the elements below it: trusting a package trusts all the classes and methods in it, and trusting a class trusts all the fields and methods in it. A trusted object can sanitize one or more security vulnerabilities (e.g., a sanitization method can be valid for both SQL Injection and cross-site scripting). This approach enables developers and security experts to define strong trust policies with regard to the system they are securing.
Defining a trusted object is a strong assertion, as it marks a given flow as valid and free of a given vulnerability. The process of deciding to trust a class, a package or a method must be rigorous: it influences the risk evaluation accuracy, and the object must not introduce a specific vulnerability into the code. This is the reason why developers report feedback while security experts take the decision. The experts analyze, manage and update the base when a class, package or method is considered trusted. This phase allows tuning the system to a given organization and leads to fewer false positives while ensuring no false negatives.
The detected vulnerabilities (Figure 3 gives an example of an analysis result in the tool) are mainly caused by a lack of input validation, namely SQL Injection, Directory Path Traversal and Cross-Site Scripting. The engine also detects a more general Malformed Input vulnerability that represents any input that is not validated using a standard implementation.
The engine can easily be extended to support new kinds of vulnerabilities caused by missing input validation. One needs to add the definition of the new vulnerability to the centralized knowledge base (and, if they exist, the trusted objects that mitigate it), and to create a new class implementing an interface that performs the checks to be done on the result of the static analysis to detect the vulnerability.
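That extension point might look roughly like the following (interface and class names are hypothetical; the paper does not show the actual interface): a new check implements one interface and inspects the flows produced by the static analysis.

```java
import java.util.*;

// Hypothetical sketch of the extension interface: a check decides whether
// a flow (entry point -> ... -> exit point) is vulnerable, given the set
// of objects trusted for this specific vulnerability.
interface VulnerabilityCheck {
    String name();
    boolean isVulnerable(List<String> flow, Set<String> trustedForThisVuln);
}

// A new kind of missing-input-validation vulnerability added to the engine.
class LdapInjectionCheck implements VulnerabilityCheck {
    public String name() { return "LDAP_INJECTION"; }
    public boolean isVulnerable(List<String> flow, Set<String> trusted) {
        // vulnerable when no trusted sanitizer appears on the path
        for (String step : flow) if (trusted.contains(step)) return false;
        return true;
    }
}

public class EngineExtension {
    public static void main(String[] args) {
        List<VulnerabilityCheck> registry = new ArrayList<>();
        registry.add(new LdapInjectionCheck()); // register the new check
        List<String> flow = List.of("req.getParameter", "ldap.search");
        for (VulnerabilityCheck check : registry)
            System.out.println(check.name() + " vulnerable: "
                + check.isVulnerable(flow, Set.of("LdapEscaper.escape")));
    }
}
```

The design choice (one class per vulnerability behind a common interface) keeps the analysis engine itself untouched when a new vulnerability kind is added.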
V. ASSISTED REMEDIATION
Performing static analysis is already integrated into the quality processes of several companies. But the actual identification of vulnerabilities does not mean they are correctly mitigated. Given this problem, several approaches are possible: (i) refactoring the code, (ii) applying a proxy on inbound and outbound connections, and finally the solution we adopted, (iii) generating protection code linked to the application being analyzed.
Software refactoring involves the developer understanding the design of his application and the potential threats, to manually rewrite part of the code. Refactoring improves the design, performance and manageability of the code, but is difficult to carry out: it costs time and is error prone. Up to six distinct activities have been observed in [4], from identification to verification of the refactoring. The impacted code is generally scattered over the application, and parts can easily be left unchecked. This can lead to an inconsistent state where the application does not reflect the intended goal. In terms of vulnerability remediation, software refactoring is one of the most powerful approaches, due to its flexibility in terms of code rewriting and architecture evolution.
The proxy solution is equivalent to a gray-box approach, with no in-depth visibility of internal processes. It can be heavy to put in place, especially when the environment is under the control of a different entity than the development team. For instance, on cloud platforms, one can deploy an application but has limited management of other capabilities, making it impossible to apply filters in front of the application. The lack of flexibility and the absence of small adjustments make this solution complicated to adopt in the development phase.
In this work we provide protection inline with the application. This solution has several advantages, but also brings new limitations due to the technology we use: the Aspect-Oriented Programming (AOP) paradigm [5], which eases the programming of concerns that crosscut and pervade applications. In the next section, we describe our methodology and provide a comprehensive list of advantages and drawbacks.
A. Methodology
The approach comprises the automatic discovery of vulnerabilities and weaknesses in the code. In addition, we integrate a protection phase tied to the analysis process, which guides developers through the correct, semi-automatic correction of the vulnerabilities previously detected. It uses information from the static analysis engine to know which vulnerabilities have to be corrected. It then requires inputs from the developer to extract knowledge about the context, as in Figure 5. These steps gather the places in the code where security corrections should be injected. The security correction uses AOP, whose goal is to bring a proper separation of concerns for cross-cutting functionalities such as security: code related to a concern is maintained separately from the base application. The main advantage of this technology is the ability to intervene in the control flow of a program without interfering with the base program code.
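To make the protection phase concrete, the validation logic that a generated protection aspect could weave in before an exit point might look like the following. This is an illustrative sketch only, not the plug-in's actual generated code; in AspectJ terms, an around advice on the vulnerable output call would pass the tainted argument through such an escaper.

```java
// Sketch of XSS protection logic a generated aspect could apply: escape
// the characters that let an input break out of HTML content before it
// reaches the response writer.
public class XssProtection {

    static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String query = "<script>alert(1)</script>"; // attacker-controlled input
        System.out.println(escapeHtml(query));
        // -> &lt;script&gt;alert(1)&lt;/script&gt;
    }
}
```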
The vulnerabilities we principally cover are listed in Table I, which highlights their potential origins and some known remediation techniques. These vulnerabilities are well known and receive high attention; for instance, they have appeared in the OWASP Top Ten [6] for several years now, and also in the MITRE Top 25 Most Dangerous Software Errors [7]. Although several approaches exist to remediate these vulnerabilities, we mainly consider escaping and validation to consistently remediate the problems with the aspect-oriented technique.
By adopting this approach, we reduce the time to correct vulnerabilities by applying semi-automatic and pre-defined
<table>
<thead>
<tr>
<th>Vulnerability</th>
<th>Origin</th>
<th>Potential Remediation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cross-Site Scripting</td>
<td>Server does not validate input coming from external source</td>
<td>Validate input and filter or encode properly the output depending on the usage: the encoding differs from HTML content to Javascript content for example</td>
</tr>
<tr>
<td>SQL Injection</td>
<td>Server does not validate input and use it directly in a construct of a SQL Query</td>
<td>Use a parameterized query or a safe API. Escape special characters. Validate the input used in the construction of query</td>
</tr>
<tr>
<td>Directory Path Traversal</td>
<td>Application server is misconfigured, or the file-system policy contains weaknesses</td>
<td>Enclose the application with strict policies, that restrict access to the filesystem by default. Filter and validate the input prior to direct file access</td>
</tr>
<tr>
<td>Other malformed input</td>
<td>Misvalidation</td>
<td>Validate input, determine the origin and possible manipulation from externals</td>
</tr>
</tbody>
</table>
Table I
LIST OF DETECTED VULNERABILITIES WITH POTENTIAL ORIGIN AND POTENTIAL REMEDIATION.
mechanisms to mitigate them. We use the component to apply protection code which is mostly tangled and scattered over an application.
Correcting a security vulnerability is not trivial. Different refactorings are possible depending on the issue. For instance, secure programming guides advise SQL prepared statements to prevent SQL Injection. But developers might be constrained by their frameworks to forge SQL queries themselves. In that case, developers would try another approach, such as input validation and escaping of special characters.
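When a framework does force manually forged queries, a generated correction could fall back to escaping instead of a prepared statement. The sketch below is illustrative only (and deliberately minimal): doubling single quotes is the classic escaping for SQL string literals, though prepared statements remain the preferable remediation whenever they are available.

```java
// Illustrative fallback when prepared statements cannot be used: neutralize
// the single quote that would otherwise close the SQL string literal.
public class SqlEscape {

    static String escapeLiteral(String input) {
        return input.replace("'", "''"); // '' is a literal quote inside a SQL string
    }

    public static void main(String[] args) {
        String name = "O'Brien' OR '1'='1"; // injection attempt
        String query = "SELECT * FROM users WHERE name = '"
                + escapeLiteral(name) + "'";
        System.out.println(query); // quotes doubled: the literal cannot be escaped from
    }
}
```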
We assist developers by proposing automated solutions. For the previously mentioned correction, our integrated solution would propose to mitigate the vulnerability with an automatic detection of incoming, unsafe and unchecked variables. The developer does not need to be a security expert to correct vulnerabilities, as our approach provides interactive steps to generate the AOP protection code, as in Figure 6. Although semi-automation simplifies the process of introducing protection code, the technique can introduce several side effects if the developers do not follow closely what is generated. The plug-in gives the developer an overview of all corrected vulnerabilities, allowing him to visually manage and re-arrange them if needed. Currently, the prototype does not analyze interactions between the different pieces of generated protection code.
By adopting this approach, we give the user a better understanding of the different vulnerabilities affecting the system, and we guide the developer towards more compliance in his application. The protection code can be deployed by security expert teams and changed without refactoring.
B. Constraints from Aspect-Oriented Programming
The usage of AOP in the remediation of vulnerabilities brings more flexibility. One can evolve the techniques used to protect the application, switching the process used to resolve a problem and making the security solution independent from the application. But this approach also brings some limitations, which we discuss in this section.
Firstly, the language is designed to modify the application control flow. One of the limitations we have is related to the deep modification we need to perform in order to replace partially a behavior. For example, suppose a SQL query written manually in the application we would like to validate. We are able to weave validation and escaping code, but we can hardly modify the application to construct a parameterized query.
Secondly, the aspects cover the application in the whole. When more than one aspect is involved, the cross-cutting concerns can intersect. Therefore, we need to analyze aspect interaction and prevent an annihilation of the behavior we intended to address.
Thirdly, the evolution of the program leads to a different repartition of vulnerabilities. The vulnerabilities are detected after the static analysis phase, and we do not yet address the problem of maintaining the relation between the aspects and the application as it evolves. This differs from the fragile pointcut problem inherent to aspects using pointcut languages that refer to the syntax of the base language: here the evolution affects the application as a whole, by introducing new entry points and exit points that need to be considered, or introducing methods that validate a flow for a given vulnerability.
The fourth constraint is that aspects have no specific certification. The actual protection library is defined globally, but applied locally, with a late binding to the application. The protection code is the same everywhere, but we put strong trust in the protection library by assuming that aspects are behaving properly with the actual modification of the flow to mitigate the vulnerability.
Finally, the fifth constraint is user acceptance. Since the developers rely on cross cutting solution, the code itself does not reflect the exact state of the application. The point where the aspect interferes with the base application is not presented in the code. We address this limitation with the strong interaction with the developer’s environment. The Eclipse plugin provides a mean to display remediation code in place at a given time.
VI. RELATED WORK
Interest in the static analysis field has led to several approaches, ranging from simple techniques like pattern matching and string analysis [8]–[11] to more complex techniques like data flow analysis [12]–[14]. Commercial tools such as Fortify [15] or CodeProfiler [16] propose better integration into developers' environments but lack a decentralized approach and assistance in security management. Several tools are based on the Eclipse platform and detect vulnerabilities in web applications [17], flaws [18] and bugs [19], or propose testing and audit to verify the respect of organizational guidelines [20]. Compared to the aforementioned techniques, we advocate a better integration into the daily development lifecycle with our tool, and propose an integrated correction with good accuracy, as we leverage the developer's knowledge of the development context.
Hermosillo et al. [21] use AOP to protect against web vulnerabilities: XSS and SQL Injection. They use AspectJ, the mainstream AOP language, to intercept method calls in an application server and then perform validation on parameters. Viega et al. [22] present simple use cases of AOP for software security. Masuhara et al. [23] introduce an aspect primitive for dataflow, allowing the detection of vulnerabilities like XSS. Our approach reduces the overhead brought by the detection of vulnerability patterns at runtime and allows a wider range of vulnerability detection. Also, the aforementioned approaches do not rely on external tools to gather the security context, but rather on manual processing to understand the architecture and decide where to apply aspects. Our approach also brings more awareness to the developer, as he obtains a visual indicator of what is applied at which place in his application.
A combination of detection and protection is found in the approach of Deeprasertkul et al. [24] for detecting faults identified by pre-compiled patterns. Faults are corrected using a correction module. The difference from our approach lies in the detection of faults rather than security vulnerabilities; also, the correction module fixes the faults statically and prevents further modifications of the introduced code. A recent work by Yang et al. [25] uses static analysis to detect security points at which to deploy protection code with aspects, on distributed tuple space systems. These two approaches suffer from the same limitations as the ones presented in the previous paragraph: a lack of visual support from the tool and a loss of context. It is worth mentioning the work of Hafiz et al. [26], where the authors propose several techniques to correct data injection through program transformations. They list several cases along with steps describing transformations that realize security policies. Their work can benefit our overall methodology by proposing multiple corrections once a vulnerability has been identified.
VII. CONCLUSION AND FUTURE WORKS
We presented how to overcome several security vulnerabilities using a combination of a static analyzer that assists developers in reporting security vulnerabilities and a semi-automated correction of these findings with AOP. The usage of an integrated tool to provide support for security bug detection and mitigation has several advantages, and it benefits several stakeholders at the same time. First, security teams are able to distribute the maintenance of the code to the people writing it and let them mitigate security bugs whenever they are detected. They can interact closely to decide on the best solutions for a given situation, and apply security across development teams. Developers benefit from this approach, having an operational tool already configured for their development. They can focus on writing their functional code and, from time to time, verify the accuracy of their implementation. Security concerns often crosscut the application, which tends to have security checks spread around it. Using one central tool to get an overview is more efficient and productive, and gives the possibility to track all applied protection code. The automation allows a broader and more consistent application of security across applications. The usage of AOP eases the deployment and change of security protection code, in a single environment and during the development phase. The overall vision we would like to achieve in the future is the specification and maintenance of security concerns in one central place, with developers using these concerns by defining the places in the application where they should be active.
We have designed this plug-in to improve developers' awareness of security concerns. It is important to note that correcting vulnerabilities does not make the whole system secure; it only means the code tends to be free of security bugs. Other parts of the application, such as the authentication flow, authorization checks, etc., are not covered by our analysis. Besides, we encourage developers to look further into vulnerability descriptions, as the automated correction proposed might not be the best choice in all situations. We do not want developers to believe our solution is bullet-proof: that would lead to a false sense of security, which is the opposite of our goal.
Although we have listed several benefits of an integrated tool, we know that it suffers from limitations. For instance, by developing a tool such as an Eclipse plug-in, we are targeting one platform and one language, thus voluntarily restricting the scope of application. As for the tool itself, we have built a working prototype that we have validated on projects internally at SAP and compared to commercial software. In several cases, the agile approach leads to a reduction of false positives and an absence of false negatives. Also, the approach of providing support for correcting the vulnerability is novel, and we now focus on improving the accuracy of the protection code. In particular, we need to investigate the cost, in terms of complexity and maintainability, for the different stakeholders interacting with the system.
ACKNOWLEDGMENT
This work has been partially carried out in the CESSA project (project id.: 09-SEGI-002-01).
REFERENCES
Internet Nomenclator Project
Status of this Memo
This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (1998). All Rights Reserved.
Abstract
The goal of the Internet Nomenclator Project is to integrate the hundreds of publicly available CCSO servers from around the world. Each CCSO server has a database schema that is tailored to the needs of the organization that owns it. The project is integrating the different database schema into one query service. The Internet Nomenclator Project will provide fast cross-server searches for locating people on the Internet. It augments existing CCSO services by supplying schema integration, more extensive indexing, and two kinds of caching -- all this in a system that scales as the number of CCSO servers grows. One of the best things about the system is that administrators can incorporate their CCSO servers into Nomenclator without changing the servers. All Nomenclator needs is basic information about the server.
This document provides an overview of the Nomenclator system, describes how to register a CCSO server in the Internet Nomenclator Project, and how to use the Nomenclator search engine to find people on the Internet.
1. Introduction
Hundreds of organizations provide directory information through the CCSO name service protocol [3]. Although the organizations provide a wealth of information about people, finding any one person can be difficult because each organization’s server is independent. The different servers have different database schemas (attribute names and data formats). The 300+ CCSO servers have more than 900 different attributes to describe information about people. Very few common attributes exist. Only name and email occur in more than 90% of the servers [4]. No special support exists for cross-server searches, so searching can be slow and expensive.
The goal of the Internet Nomenclator Project is to provide fast, integrated access to the information in the CCSO servers. The project is the first large-scale use of the Nomenclator system. Nomenclator is a more general system than a white pages directory service. It is a scalable, extensible information system for the Internet.
Nomenclator answers descriptive (i.e. relational) queries. Users can locate information about people, organizations, hosts, services, publications, and other objects by describing their attributes. Nomenclator achieves fast descriptive query processing through an active catalog, and extensive meta-data and data caching. The active catalog constrains the search space for a query by returning a list of data repositories where the answer to the query is likely to be found. Meta-data and data caching keep frequently used query processing resources close to the user, thus reducing communication and processing costs.
Through the Internet Nomenclator Project, users can query any CCSO server, regardless of its attribute names or data formats, by specifying the query to Nomenclator (see Figure 1). Nomenclator provides a world view of the data in the different servers. Users express their queries in this world view. Nomenclator returns the answer immediately if it has been cached by a previous query. If not, Nomenclator uses its active catalog to constrain the query to the subset of relevant CCSO servers. The speed of the query is increased, because only relevant servers are contacted. Nomenclator translates the global query into local queries for each relevant CCSO server. It then translates the responses into the format of the world view.
Nomenclator translates queries to and from the language of the relevant CCSO servers.
The Internet Nomenclator Project makes it easier for users to find a particular CCSO server, but it does not send all queries to that server. When Nomenclator constrains the search for a query answer, it screens out irrelevant queries from ever reaching the server. When Nomenclator finds an answer in its cache, it screens out redundant queries from reaching the server. The server becomes easier to find and use without experiencing the high loads caused by exhaustive and redundant searches.
The Internet Nomenclator Project creates the foundation for a much broader heterogeneous directory service for the Internet. The current version of Nomenclator provides integrated access to CCSO and relational database services. The Nomenclator System Architecture supports fast, integrated searches of any collection of heterogeneous directories. The Internet Nomenclator Project can be enhanced to support additional name services, or to provide integrated query services for other application domains. The project is starting with CCSO services because they are widely available and successful.
Section 2 describes the Nomenclator system in more detail. Section 3 explains how to register a CCSO server as part of the project. Section 4 briefly describes how to use Nomenclator. Section 5 provides a summary.
2. Nomenclator System
Nomenclator is a scalable, extensible information system for the Internet. It supports descriptive (i.e. relational) queries. Users locate information about people, organizations, hosts, services, publications, and other objects by describing their attributes. Nomenclator achieves fast descriptive query processing through an active catalog, and extensive meta-data and data caching.
The active catalog constrains the search space for a query by returning a list of data repositories where the answer to the query is likely to be found. Components of the catalog are distributed indices that isolate queries to parts of the network, and smart algorithms for limiting the search space by using semantic, syntactic, or structural constraints. Meta-data caching improves performance by keeping frequently used characterizations of the search space close to the user, thus reducing active catalog communication and processing costs. When searching for query responses, these techniques improve query performance by contacting only the data repositories likely to have actual responses, resulting in acceptable search times.
Administrators make their data available in Nomenclator by supplying information about the location, format, contents, and protocols of their data repositories. Experience with Nomenclator shows that gathering a small amount of information from data owners can have a substantial positive impact on the ability of users to retrieve information. For example, each CCSO administrator provides a mapping from the local view of data (i.e. the local schema) at the CCSO server to Nomenclator’s world view. The administrator also supplies possible values for any attributes with small domains at the data repository (such as the "city" or "state_or_province" attributes). With this information, Nomenclator can isolate queries to a small percentage of the CCSO data repositories, and provide an integrated view of their data. Nomenclator provides tools that minimize the effort that administrators expend in characterizing their data repositories. Nomenclator does not require administrators to change the format of their data or the access protocol for their database.
2.1 Components of a Nomenclator System
A Nomenclator system is comprised of a distributed catalog service and a query resolver (see Figure 2). The distributed catalog service gathers meta-data about data repositories and makes it available to the query resolver. Meta-data includes constraints on attribute values at a data repository, known patterns of data distribution across several data repositories, search and navigation techniques, schema and protocol translation techniques, and the differing schemas at data repositories.
            World View             Meta Data
              Query                 Request
           ----------->          ----------->
    User                 Query                 Distributed
                        Resolver                 Catalog
           <----------- (caches) <-----------
            World View             Meta Data
             Response               Response

          Figure 2: Components of a Nomenclator System
Query resolvers at the user sites retrieve, use, cache, and re-use this meta-data in answering user queries. The catalog is "active" in two ways. First, some meta-data moves from the distributed catalog service to each query resolver during query processing. Second, the query resolver uses the initial meta-data, in particular the search and navigation techniques, to generate additional meta-data that guides query processing. Typically, one resolver process serves a few hundred users in an organization, so users can benefit from larger resolver caches.

Query resolvers cache techniques for constraining the search space and the results of previously constrained searches (meta-data), and past query answers (data) to speed future query processing. Meta-data and data caching tailor the query resolver to the specific needs of the users at the query site. They also increase the scale of a Nomenclator system by reducing the load from repeated searches or queries on the distributed catalog service, data repositories, and communications network.
The distributed catalog service is logically one network service, but it can be divided into pieces that are distributed and/or replicated. Query resolvers access this distributed, replicated service using the same techniques that work for multiple data repositories.
A Nomenclator system naturally includes many query resolvers. Resolvers are independent, but renewable, query agents that can be as powerful as the resources available at the user site. Caching decreases the dependence of the resolver on the distributed catalog service for frequently used meta-data, and on data repositories for frequently used data. Caching thus improves the number of users that can be supported and the local availability of the query service.
2.2 Meta-Data Techniques
The active catalog structures the information space into a collection of relations about people, hosts, organizations, services and other objects. It collects meta-data for each relation and structures it into "access functions" for locating and retrieving data. Access functions respond to the question: "Where is data to answer this query?" There are two types of responses corresponding to the two types of access functions. The first type of response is: "Look over there." "Catalog functions" return this response; they constrain the query search by limiting the data repositories contacted to those having data relevant to the query. Catalog functions return a referral to data access functions that will answer the query or to additional catalog functions to contact for more detailed information. The second response to "Where?" is: "Here it is!" "Data access functions" return this response; they understand how to obtain query answers from specific data repositories. They return tuples that answer the query. Nomenclator supplies access functions for common name services, such as the CCSO service, and organizations can write and supply access functions for data in their repositories.
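The division of labor above can be pictured as a small interface. The sketch below is purely illustrative: the class and method names are invented for this document and are not Nomenclator's actual interfaces.

```python
# Illustrative sketch of the two kinds of access functions described above;
# every name here is hypothetical, not Nomenclator's actual interface.

class Referral:
    """A template (conjunctive predicate) plus references to access functions."""
    def __init__(self, template, access_functions):
        self.template = template                  # e.g. {"country": "US"}
        self.access_functions = access_functions

class CatalogFunction:
    """Answers "Where?" with "Look over there": returns a narrower referral."""
    def constrain(self, query):
        raise NotImplementedError

class DataAccessFunction:
    """Answers "Where?" with "Here it is!": returns tuples from one repository."""
    def fetch(self, query):
        raise NotImplementedError
```

A concrete catalog function would implement constrain() to return a more specific Referral, while a concrete data access function would implement fetch() to query one CCSO server and translate its answers.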
Access functions are implemented as remote or local services. Remote access functions are services that are available through a standard remote procedure call interface. Local access functions are functions that are supplied with the query resolver. Local access functions can be applied to a variety of indexing and data retrieval tasks by loading them with meta-data stored in distributed catalog service. Remote access functions are preferred over local ones when the resources of the query resolver are inadequate to support the access function. The owners of data may also choose to supply remote access functions for privacy reasons if their access functions use proprietary information or algorithms. Local functions are preferred whenever possible, because they are highly replicated in resolver caches. They can reduce system and network load by bringing the resources of the active catalog directly to the users.
Remote access functions are simple to add to Nomenclator and local access functions are simple to apply to new data repositories, because the active catalog provides "referrals" that describe the conditions for using access functions. For simplicity, this document describes referral techniques for exact matching of query strings. Extensions to these techniques in Nomenclator support matching query strings that contain wildcards or word-based matching of query strings in the style of the CCSO services.
Each referral contains a template and a list of references to access functions. The template is a conjunctive selection predicate that describes the scope of the access functions. Conjunctive queries that are within the scope of the template can be answered with the referral. When a template contains a wildcard value ("*") for an attribute, the attribute must be present in any queries that are processed by the referral. The system follows the following rule:
Query Coverage Rule:
If the set of tuples satisfying the selection predicate in a query is covered by (is a subset of) the set of tuples satisfying the template, then the query can be answered by the access functions in the reference list of the referral.
For example, the query below:
select * from People where country = "US" and surname = "Ordille";
is covered by the following templates in Lines (1) through (3), but not by the templates in Lines (4) and (5):
(1) country = "US" and surname = "*"
(2) country = "US" and surname = "Ordille"
(3) country = "US"
(4) organization = "*"
(5) country = "US" and surname = "Elliott"
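For exact matching, the coverage test is mechanical. The sketch below is illustrative only (not Nomenclator code): it represents a conjunctive query and a template as attribute/value dictionaries, with "*" meaning the attribute must be present with any value, and reproduces the covered/not-covered outcomes of templates (1) through (5).

```python
def covers(template, query):
    """Return True if every tuple satisfying `query` also satisfies `template`
    (the Query Coverage Rule, exact matching only)."""
    for attr, tval in template.items():
        qval = query.get(attr)
        if qval is None:
            return False      # query doesn't constrain attr, so it is broader
        if tval != "*" and tval != qval:
            return False      # conflicting constant values
    return True

query = {"country": "US", "surname": "Ordille"}
assert covers({"country": "US", "surname": "*"}, query)             # template (1)
assert covers({"country": "US", "surname": "Ordille"}, query)       # template (2)
assert covers({"country": "US"}, query)                             # template (3)
assert not covers({"organization": "*"}, query)                     # template (4)
assert not covers({"country": "US", "surname": "Elliott"}, query)   # template (5)
```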
Referrals form a generalization/specialization graph for a relation called a "referral graph." Referral graphs are a conceptual tool that guides the integration of different catalog functions into our system and that supplies a basis for catalog function construction and query processing. A "referral graph" is a partial ordering of the referrals for a relation. It is constructed using the subset/superset relationship: "S is a subset of G." A referral S is a subset of referral G if the set of queries covered by the template of S is a subset of the set of queries covered by the template of G. S is considered a more specific referral than G; G is considered a more general referral than S. For example, the subset relationship exists between the pairs of referrals with the templates listed below:
(1) country = "US" and surname = "Ordille"
is a subset of
country = "US"
(2) country = "US" and surname = "Ordille"
is a subset of
country = "US" and surname = "*"
(3) country = "US" and surname = "*"
is a subset of
country = "US"
(4) country = "US"
is a subset of
"empty template"
but it does not exist between the pairs of referrals with the following templates:
(5) country = "US"
is not a subset of
department = "CS"
(6) country = "US" and name = "Ordille"
is not a subset of
country = "US" and name = "Elliott"
In Lines (1) and (2), the more general referral covers more queries, because it covers queries that list different values for surname. In Line (3), the more general referral covers more queries, because it covers queries that do not constrain surname to a value. In Line (4), the specific referral covers only those queries that constrain the country to "US", while the empty template covers all queries. During query processing, wildcards in a template are replaced with the value of the corresponding attribute in the query. For any query covered by two referrals S and G such that S is a subset of G, the set of tuples satisfying the template in S is covered by the set of tuples satisfying the template in G. S is used to process the query, because it provides the more constrained (and faster) search space. The referral S has a more constrained logical search space than G, because the set of tuples in the scope of S is no larger, and often smaller, than the set in the scope of G. Moreover, S has a more constrained physical search space than G, because the data repositories that must be contacted for answers to S must also be contacted for answers to G, but additional data repositories may need to be contacted to answer G.
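Under the same dictionary representation of templates (a sketch, not Nomenclator code), the subset test reduces to checking that G's template is at least as permissive as S's on every attribute that G constrains. The checks below reproduce examples (1) through (6).

```python
def is_subset(s_template, g_template):
    """True if every query covered by S's template is also covered by G's
    template, i.e. referral S is a subset of (more specific than) referral G."""
    for attr, gval in g_template.items():
        sval = s_template.get(attr)
        if sval is None:
            return False      # S covers queries that do not constrain attr
        if gval != "*" and sval != gval:
            return False      # G fixes a constant value that S does not match
    return True

us_ordille = {"country": "US", "surname": "Ordille"}
assert is_subset(us_ordille, {"country": "US"})                         # (1)
assert is_subset(us_ordille, {"country": "US", "surname": "*"})         # (2)
assert is_subset({"country": "US", "surname": "*"}, {"country": "US"})  # (3)
assert is_subset({"country": "US"}, {})                                 # (4)
assert not is_subset({"country": "US"}, {"department": "CS"})           # (5)
assert not is_subset({"country": "US", "name": "Ordille"},
                     {"country": "US", "name": "Elliott"})              # (6)
```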
In constraining a query, a catalog function always produces a referral that is more specific than the referral containing the catalog function. Wildcards ("*") in a template indicate which attribute values are used by the associated catalog function to generate a more specific referral. In other words, catalog functions always follow the rule:
Catalog Function Constrained Search Rule:
Given a referral R with a template t and a catalog function cf, and a query q covered by t, the result of using cf to process q, cf(q), is a referral R' with template t' such that q is covered by t' and R' is more specific than R.
Catalog functions make it possible to import a portion of the indices for the information space into the query resolver. Since they generate referrals, the resolver can cache the most useful referrals for a relation and call the catalog function as needed to generate new referrals.
The resolver query processing algorithm obtains an initial set of referrals from the distributed catalog service. It then navigates the referral graph, calling catalog functions as necessary to obtain additional referrals that narrow the search space. Sometimes, two referrals that cover the query have the relationship of general to specific to each other. The resolver eliminates unnecessary access function processing by using only the most specific referral along each path of the referral graph.
The search space for the query is initially set to all the data repositories in the relation. As the resolver obtains referrals to sets of relevant data repositories (and their associated data access functions) it forms the intersection of the referrals to constrain the search space further. The intersection of the referrals includes only those data repositories listed in all the referrals. Intersection combines independent paths through the referral graph to derive benefit from indices on different attributes.
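As a toy illustration of this intersection step (the repository names below are invented for the sketch and not part of the project), each referral carries the set of repositories it points to, and the resolver intersects them:

```python
# Hypothetical repositories for one relation; names are invented.
all_repos = {"rutgers", "lyon", "wisc", "bell-labs", "mit"}

# Referrals obtained along two independent paths of the referral graph,
# e.g. one from an index on country and one from an index on surname.
referral_by_country = {"rutgers", "wisc", "bell-labs", "mit"}
referral_by_surname = {"wisc", "bell-labs"}

search_space = set(all_repos)            # start with every repository
for repos in (referral_by_country, referral_by_surname):
    search_space &= repos                # keep only repositories in every referral

assert search_space == {"wisc", "bell-labs"}
```

Only the two repositories listed by both referrals are contacted, which is how independent indices on different attributes combine to shrink the search.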
2.3 Meta-Data and Data Caching
A Nomenclator query resolver caches the meta-data that result from calling catalog functions. It also caches the responses for queries. If the predicate of a new query is covered by the predicate of a previous query, Nomenclator calculates the response for the new query from the cached response of the old query. Nomenclator timestamps its cache entries to provide measures of the currentness of query responses and selective cache refresh. The timestamps are used to calculate a t-bound on query responses [5][1]. A t-bound is the time after which changes may have occurred to the data that are not reflected in the query response. It is the time of the oldest cache entry used to calculate the response. Nomenclator returns a t-bound with each query response. Users can request more current data by asking for responses that are more recent than this t-bound. Making such a request flushes older items from the cache if more recent items are available. Query resolvers calculate a minimum t-bound that is some refresh interval earlier than the current time. Resolvers keep themselves current by replacing items in the cache that are earlier than the minimum t-bound.
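The t-bound mechanics can be sketched with a small timestamped cache. This is an assumption-laden illustration, not Nomenclator's implementation: entry layout, method names, and the refresh policy are all invented for the example.

```python
import time

class TBoundCache:
    """Sketch of timestamped answer caching with a t-bound (illustrative only).
    An entry's timestamp is the time after which unseen changes may exist."""
    def __init__(self, refresh_interval=3600.0):
        self.entries = {}                      # query key -> (timestamp, answer)
        self.refresh_interval = refresh_interval

    def put(self, key, answer, now=None):
        self.entries[key] = (now if now is not None else time.time(), answer)

    def get(self, key, t_bound=None, now=None):
        """Return (answer, timestamp) if the entry is recent enough, else None.
        A user-supplied t_bound demands data newer than that time and
        flushes older entries, forcing a fresh query."""
        now = now if now is not None else time.time()
        minimum = max(t_bound or 0.0, now - self.refresh_interval)
        entry = self.entries.get(key)
        if entry is None:
            return None
        ts, answer = entry
        if ts < minimum:
            del self.entries[key]              # flush stale entry
            return None
        return answer, ts
```

A cached answer stored at time 100 is returned (with its t-bound) at time 200, but a request demanding data newer than time 150 flushes it and misses, mirroring the selective refresh described above.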
2.4 Scale and Performance
Three performance studies of active catalog and meta-data caching techniques are available [5]. The first study shows that the active catalog and meta-data caching can constrain the search effectively in a real environment, the X.500 name space. The second study examined the performance of an active catalog and meta-data caching for single users on a local area network. The experiments showed that the techniques to eliminate data repositories from the search space can dramatically improve response time. Response times improve, because latency is reduced. The reduction of latency in communications and processing is critical to large-scale descriptive query optimization. The experiments also showed that an active catalog is the most significant contributor to better response time in a system with low load, and that meta-data caching functions to reduce the load on the system. The third study used an analytical model to evaluate the performance and scaling of these techniques for a large Internet environment. It showed that meta-data caching plays an essential role in scaling the distributed catalog service to millions of users. It also showed that constraining the search space with an active catalog contributes significantly to scaling data repositories to millions of users. Replication and data caching also contribute to the scale of the system in a large Internet environment.
3. Registering a CCSO Server
The Internet Nomenclator Project supports the following home page:
http://cm.bell-labs.com/cs/what/nomenclator
The home page provides a variety of information and services.
Administrators can register their CCSO servers through services on this home page. The registration service collects CCSO server location information, contact information for the administrator of the CCSO server, implicit and explicit constraints on entries in the server’s database, and a mapping from the local schema of the CCSO server to the schema of the world view.
The implicit and explicit constraints on the server’s database are the fuel for Nomenclator’s catalog functions. The registration center currently collects constraints on organization name, department, city, state or province name, country, phone number, postal code, and email address. These constraints are automatically incorporated into Nomenclator’s distributed catalog service. They are used by catalog functions in query resolvers to constrain searches to relevant CCSO servers. For example, suppose a database contains information only about the computer science and electrical engineering departments at a French university. The department, organization, and country attributes are constrained accordingly, and Nomenclator uses these constraints to prevent queries about other departments, organizations, or countries from being sent to this CCSO server.
The mapping from the local schema of the CCSO server to the schema of the world view allows Nomenclator to translate queries and responses for the CCSO server. The registration center currently collects this mapping by requesting an example of how to translate a typical entry in the CCSO server into the world view schema and, optionally, an example of how to translate a canonical entry in the world view schema into the local schema of the CCSO server [4]. These examples are then used to generate a mapping program that is stored in the distributed catalog service. The CCSO data access function in the query resolver interprets these programs to translate queries and responses communicated with that CCSO server. We plan to release the mapping language to CCSO server administrators, so administrators can write and maintain the mapping for their servers. We have experimented with more than 20 mapping programs. They are seldom more than 50 lines, and are often shorter. It typically takes one or two lines to map an attribute.
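The mapping language itself is not given in this document. Purely as an illustration of what a short attribute mapping amounts to, the local attribute names, values, and helper below are all invented; they are not Nomenclator's mapping programs:

```python
# Hypothetical local CCSO entry with invented attribute names; the real
# mapping language and world-view schema are not shown in this document.
local_entry = {
    "nom": "Ordille",
    "prenom": "Joann",
    "courriel": "joann@example.fr",
}

# One line per attribute, matching the observation that mapping an
# attribute typically takes one or two lines.
attribute_map = {"nom": "surname", "prenom": "given_name", "courriel": "email"}

def to_world_view(entry, attribute_map):
    """Rename local attributes into world-view attributes; pass others through."""
    return {attribute_map.get(k, k): v for k, v in entry.items()}

world_view_entry = to_world_view(local_entry, attribute_map)
assert world_view_entry["surname"] == "Ordille"
```

In the real system such a mapping is generated from translation examples and stored in the distributed catalog service, where the CCSO data access function interprets it in both directions.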
4. Using Nomenclator
The Internet Nomenclator Project currently provides a centralized query service on the Internet. The project runs a Nomenclator query resolver that is accessible through its Web page (see the URL in Section 3) and the Simple Nomenclator Query Protocol (SNQP) [2].
The service answers queries that are a conjunction of string values for attributes. A variety of matching techniques are supported including exact string matching, matching with wildcards, and word-based matching in the style of the CCSO service. Our web interface uses the Simple Nomenclator Query Protocol (SNQP) [2]. Programmers can create their own interfaces by using this protocol to communicate with the Nomenclator query resolver. They will require the host name and port number for the query resolver which they can obtain from the Nomenclator home page. SNQP, and hence the web interface, are defined for US-ASCII. Support for other character sets will require further work.
Subsequent phases of the project will provide enhanced services, such as advice about the cost of queries, ways to constrain queries further to produce faster response times, and the ability to request more current data. We also plan to distribute query resolvers, so users can benefit from running query resolvers locally. Local query resolvers reduce latency for the user, and distribute query processing load throughout the network.
5. Summary
The Internet Nomenclator Project augments existing CCSO services by supplying schema integration and fast cross-server searches. The key to speed in descriptive query processing is an active catalog, and extensive meta-data and data caching. The Nomenclator system is the result of research in distributed systems [5][6][7][4]. It can be extended to incorporate other name servers, besides the CCSO servers, and to address distributed search and retrieval challenges in other application domains. In addition to providing a white pages service, the Internet Nomenclator Project will evaluate how an active catalog, meta-data caching and data caching perform in very large global information systems. The ultimate goal of the project is to refine these techniques to provide the best possible global information systems.
6. Security Considerations
In the Internet Nomenclator Project, the participants’ data are openly available and read-only. Since the risk of tampering with queries and responses is considered low, this version of Nomenclator does not define procedures for protecting the information in its queries and responses.
7. References
8. Author’s Address
Joann J. Ordille
Bell Labs, Lucent Technologies
Computing Sciences Research Center
700 Mountain Avenue, Rm 2C-301
Murray Hill, NJ 07974 USA
EMail: joann@bell-labs.com
9. Full Copyright Statement
Copyright (C) The Internet Society (1998). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.
The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Chapter 11
Locality Lower Bounds
In Chapter 1, we looked at distributed algorithms for coloring. In particular, we saw that rings and rooted trees can be colored with 3 colors in $\log^* n + O(1)$ rounds. In this chapter, we will reconsider the distributed coloring problem. We will look at a classic lower bound by Nathan Linial that shows that the result of Chapter 1 is tight: Coloring rings (and rooted trees) indeed requires $\Omega(\log^* n)$ rounds. In particular, we will prove a lower bound for coloring in the following setting:
- We consider deterministic, synchronous algorithms.
- Message size and local computations are unbounded.
- We assume that the network is a directed ring with $n$ nodes.
- Nodes have unique labels (identifiers) from 1 to $n$.
Remarks:
- A generalization of the lower bound to randomized algorithms is possible. Unfortunately, we will not have time to discuss it here.
- Except for restricting to deterministic algorithms, all the conditions above make a lower bound stronger. Any lower bound for synchronous algorithms certainly also holds for asynchronous ones. A lower bound that is true if message size and local computations are not restricted is clearly also valid if we require a bound on the maximal message size or the amount of local computations. Similarly, assuming that the ring is directed and that node labels are from 1 to $n$ (instead of choosing IDs from a more general domain) strengthens the lower bound.
- Instead of directly proving that 3-coloring a ring needs $\Omega(\log^* n)$ rounds, we will prove a slightly more general statement. We will consider deterministic algorithms with time complexity $r$ (for arbitrary $r$) and derive a lower bound on the number of colors that are needed if we want to properly color an $n$-node ring with an $r$-round algorithm. A 3-coloring lower bound can then be derived by taking the smallest $r$ for which an $r$-round algorithm needs 3 or fewer colors.
Algorithm 39. Synchronous Algorithm: Canonical Form
1. In \( r \) rounds: send complete initial state to nodes at distance at most \( r \)
2. // do all the communication first
3. Compute output based on complete information about \( r \)-neighborhood
4. // do all the computation in the end
11.1 Locality
Let us for a moment look at distributed algorithms more generally (i.e., not only at coloring and not only at rings). Assume that initially, all nodes only know their own label (identifier) and potentially some additional input. As information needs at least \( r \) rounds to travel \( r \) hops, after \( r \) rounds, a node \( v \) can only learn about other nodes at distance at most \( r \). If message size and local computations are not restricted, it is in fact not hard to see that in \( r \) rounds, a node \( v \) can learn exactly all the node labels and inputs up to distance \( r \). As shown by the following lemma, this allows us to transform every deterministic \( r \)-round synchronous algorithm into a simple canonical form.
**Lemma 11.1.** If message size and local computations are not bounded, every deterministic, synchronous \( r \)-round algorithm can be transformed into an algorithm of the form given by Algorithm 39 (i.e., it is possible to first communicate for \( r \) rounds and then do all the computations in the end).
**Proof.** Consider some \( r \)-round algorithm \( A \). We want to show that \( A \) can be brought to the canonical form given by Algorithm 39. First, we let the nodes communicate for \( r \) rounds. Assume that in every round, every node sends its complete state to all of its neighbors (remember that there is no restriction on the maximal message size). By induction, after \( i \) rounds, every node knows the initial states of all nodes at distance at most \( i \). Hence, after \( r \) rounds, a node \( v \) has the combined initial knowledge of all the nodes in its \( r \)-neighborhood. We want to show that this suffices to locally (at node \( v \)) simulate enough of Algorithm \( A \) to compute all the messages that \( v \) receives in the \( r \) communication rounds of a regular execution of Algorithm \( A \).
Concretely, we prove the following statement by induction on \( i \). For all nodes at distance at most \( r - i + 1 \) from \( v \), node \( v \) can compute all messages of the first \( i \) rounds of a regular execution of \( A \). Note that this implies that \( v \) can compute all the messages it receives from its neighbors during all \( r \) rounds. Because \( v \) knows the initial state of all nodes in the \( r \)-neighborhood, \( v \) can clearly compute all messages of the first round (i.e., the statement is true for \( i = 1 \)). Let us now consider the induction step from \( i \) to \( i + 1 \). By the induction hypothesis, \( v \) can compute the messages of the first \( i \) rounds of all nodes in its \((r - i + 1)\)-neighborhood. It can therefore compute all messages that are received by nodes in the \((r - i)\)-neighborhood in the first \( i \) rounds. This is of course exactly what is needed to compute the messages of round \( i + 1 \) of nodes in the \((r - i)\)-neighborhood.
\(\square\)
Remark:
- It is straightforward to generalize the canonical form to randomized algorithms: Every node first computes all the random bits it needs throughout the algorithm. The random bits are then part of the initial state of a node.
Definition 11.2 (r-hop view). We call the collection of the initial states of all nodes in the r-neighborhood of a node v, the r-hop view of v.
Remark:
- Assume that initially, every node knows its degree, its label (identifier) and potentially some additional input. The r-hop view of a node v then includes the complete topology of the r-neighborhood (excluding edges between nodes at distance r) and the labels and additional inputs of all nodes in the r-neighborhood.
Based on the definition of an r-hop view, we can state the following corollary of Lemma 11.1.
Corollary 11.3. A deterministic r-round algorithm A is a function that maps every possible r-hop view to the set of possible outputs.
Proof. By Lemma 11.1, we know that we can transform Algorithm A to the canonical form given by Algorithm 39. After r communication rounds, every node v knows exactly its r-hop view. This information suffices to compute the output of node v.
Remarks:
- Note that the above corollary implies that two nodes with equal r-hop views have to compute the same output in every r-round algorithm.
- For coloring algorithms, the only input of a node v is its label. The r-hop view of a node therefore is its labeled r-neighborhood.
- Since we only consider rings, r-hop neighborhoods are particularly simple. The labeled r-neighborhood of a node v (and hence its r-hop view) in a directed ring is simply a \((2r + 1)\)-tuple \((\ell_{-r}, \ell_{-r+1}, \ldots, \ell_0, \ldots, \ell_r)\) of distinct node labels where \(\ell_0\) is the label of v. Assume that for \(i > 0\), \(\ell_i\) is the label of the \(i\)th clockwise neighbor of v and \(\ell_{-i}\) is the label of the \(i\)th counterclockwise neighbor of v. A deterministic coloring algorithm for directed rings therefore is a function that maps \((2r + 1)\)-tuples of node labels to colors.
- Consider two r-hop views \(V_v = (\ell_{-r}, \ldots, \ell_r)\) and \(V'_v = (\ell'_{-r}, \ldots, \ell'_r)\). If \(\ell'_i = \ell_{i+1}\) for \(-r \leq i \leq r - 1\) and if \(\ell'_i \neq \ell_i\) for \(-r \leq i \leq r\), the r-hop view \(V'_v\) can be the r-hop view of a clockwise neighbor of a node with r-hop view \(V_v\). Therefore, every algorithm \(A\) that computes a valid coloring needs to assign different colors to \(V_v\) and \(V'_v\). Otherwise, there is a ring labeling for which \(A\) assigns the same color to two adjacent nodes.
11.2 The Neighborhood Graph
We will now make the above observations concerning colorings of rings a bit more formal. Instead of thinking of an \( r \)-round coloring algorithm as a function from all possible \( r \)-hop views to colors, we will use a slightly different perspective. Interestingly, the problem of understanding distributed coloring algorithms can itself be seen as a classical graph coloring problem.
**Definition 11.4 (Neighborhood Graph).** For a given family of network graphs \( \mathcal{G} \), the \( r \)-neighborhood graph \( N_r(\mathcal{G}) \) is defined as follows. The node set of \( N_r(\mathcal{G}) \) is the set of all possible labeled \( r \)-neighborhoods (i.e., all possible \( r \)-hop views). There is an edge between two labeled \( r \)-neighborhoods \( V_r \) and \( V'_r \) if \( V_r \) and \( V'_r \) can be the \( r \)-hop views of two adjacent nodes.
**Lemma 11.5.** For a given family of network graphs \( \mathcal{G} \), there is an \( r \)-round algorithm that colors graphs of \( \mathcal{G} \) with \( c \) colors iff the chromatic number of the neighborhood graph is \( \chi(N_r(\mathcal{G})) \leq c \).
**Proof.** We have seen that a coloring algorithm is a function that maps every possible \( r \)-hop view to a color. If two \( r \)-hop views \( V_r \) and \( V'_r \) can be the \( r \)-hop views of two adjacent nodes \( u \) and \( v \) (for some labeled graph in \( \mathcal{G} \)), every coloring algorithm must assign different colors to \( V_r \) and \( V'_r \). Hence, a coloring algorithm assigns a color to every node of the neighborhood graph \( N_r(\mathcal{G}) \). The algorithm must assign different colors to adjacent neighborhood graph nodes (i.e., if the corresponding \( r \)-hop views can be \( r \)-hop views of neighboring nodes).
Instead of directly defining the neighborhood graph for directed rings, we define directed graphs \( B_{k,n} \) that are closely connected to the neighborhood graph. The node set of \( B_{k,n} \) contains all \( k \)-tuples of increasing node labels from \( [n] = \{1, \ldots, n\} \):
\[
V[B_{k,n}] = \{ (\alpha_1, \ldots, \alpha_k) : \alpha_i \in [n] \text{ and } \alpha_i < \alpha_j \text{ for } i < j \} \quad (11.1)
\]
For \( \vec{\alpha} = (\alpha_1, \ldots, \alpha_k) \) and \( \vec{\beta} = (\beta_1, \ldots, \beta_k) \), there is a directed edge from \( \vec{\alpha} \) to \( \vec{\beta} \) iff
\[
\forall i \in \{1, \ldots, k-1\} : \beta_i = \alpha_{i+1}. \quad (11.2)
\]
**Lemma 11.6.** Viewed as an undirected graph, the graph \( B_{2r+1,n} \) is a subgraph of the \( r \)-neighborhood graph of directed \( n \)-node rings with node labels from \( [n] \).
**Proof.** The claim follows directly from the observations regarding \( r \)-hop views of nodes in a directed ring from Section 11.1. The set of \( k \)-tuples of increasing node labels is a subset of the set of \( k \)-tuples of distinct node labels. Two nodes of \( B_{2r+1,n} \) are connected by a directed edge if the two corresponding \( r \)-hop views are connected by a directed edge in the neighborhood graph. Note that if there is an edge between \( \vec{\alpha} \) and \( \vec{\beta} \) in \( B_{k,n} \), \( \alpha_1 \neq \beta_k \) because the node labels in \( \vec{\alpha} \) and \( \vec{\beta} \) are increasing.
To determine a lower bound on the number of colors an \( r \)-round algorithm needs for directed \( n \)-node rings, it therefore suffices to determine a lower bound on the chromatic number of \( B_{2r+1,n} \). To obtain such a lower bound, we need the following definition.
**Definition 11.7 (Diline Graph).** The directed line graph (diline graph) \( DL(G) \) of a directed graph \( G = (V, E) \) is defined as follows. The node set of \( DL(G) \) is the set \( E \) of directed edges of \( G \). There is a directed edge \(((u, x), (y, z))\) between \((u, x) \in E\) and \((y, z) \in E\) iff \(x = y\), i.e., if the first edge ends where the second one starts.

**Lemma 11.8.** If \( n > k \), then \( B_{k+1, n} = DL(B_{k,n}) \).
Proof. The edges of \(B_{k+1, n}\) are pairs of \(k\)-tuples \(\vec{\alpha} = (\alpha_1, \ldots, \alpha_k)\) and \(\vec{\beta} = (\beta_1, \ldots, \beta_k)\) that satisfy Conditions (11.1) and (11.2). Because the last \(k-1\) labels in \(\vec{\alpha}\) are equal to the first \(k-1\) labels in \(\vec{\beta}\), the pair \((\vec{\alpha}, \vec{\beta})\) can be represented by a \((k+1)\)-tuple \(\vec{\gamma} = (\gamma_1, \ldots, \gamma_{k+1})\) with \(\gamma_1 = \alpha_1\), \(\gamma_i = \beta_{i-1} = \alpha_i\) for \(2 \leq i \leq k\), and \(\gamma_{k+1} = \beta_k\). Because the labels in \(\vec{\alpha}\) and the labels in \(\vec{\beta}\) are increasing, the labels in \(\vec{\gamma}\) are increasing as well. The two graphs \(B_{k+1, n}\) and \(DL(B_{k,n})\) therefore have the same node sets. There is an edge between two nodes \((\vec{\alpha}_1, \vec{\beta}_1)\) and \((\vec{\alpha}_2, \vec{\beta}_2)\) of \(DL(B_{k,n})\) if \(\vec{\beta}_1 = \vec{\alpha}_2\). This is equivalent to requiring that the two corresponding \((k+1)\)-tuples \(\vec{\gamma}_1\) and \(\vec{\gamma}_2\) are neighbors in \(B_{k+1, n}\), i.e., that the last \(k\) labels of \(\vec{\gamma}_1\) are equal to the first \(k\) labels of \(\vec{\gamma}_2\).
The following lemma establishes a useful connection between the chromatic numbers of a directed graph \(G\) and its diline graph \(DL(G)\).
Lemma 11.9. For the chromatic numbers \(\chi(G)\) and \(\chi(DL(G))\) of a directed graph \(G\) and its diline graph, it holds that
\[
\chi(DL(G)) \geq \log_2 \left(\chi(G)\right).
\]
Proof. Given a \(c\)-coloring of \(DL(G)\), we show how to construct a \(2^c\) coloring of \(G\). The claim of the lemma then follows because this implies that \(\chi(G) \leq 2^{\chi(DL(G))}\).
Assume that we are given a \(c\)-coloring of \(DL(G)\). A \(c\)-coloring of the diline graph \(DL(G)\) can be seen as a coloring of the edges of \(G\) such that no two adjacent edges have the same color. For a node \(u\) of \(G\), let \(S_u\) be the set of colors of its outgoing edges. Let \(u\) and \(v\) be two nodes such that \(G\) contains a directed edge \((u, v)\) from \(u\) to \(v\) and let \(x\) be the color of \((u, v)\). Clearly, \(x \in S_u\) because \((u, v)\) is an outgoing edge of \(u\). Because adjacent edges have different colors, no outgoing edge \((v, w)\) of \(v\) can have color \(x\). Therefore \(x \notin S_v\). We can therefore use these color sets to obtain a vertex coloring of \(G\), i.e., the color of \(u\) is \(S_u\) and the color of \(v\) is \(S_v\). Because the number of possible subsets of \([c]\) is \(2^c\), this yields a \(2^c\)-coloring of \(G\).
Let \(\log^{(i)} x\) be the \(i\)-fold application of the base-2 logarithm to \(x\):
\[
\log^{(1)} x = \log_2 x, \quad \log^{(i+1)} x = \log_2(\log^{(i)} x).
\]
Remember from Chapter 1 that
\[
\log^* x = 1 \text{ if } x \leq 2, \quad \log^* x = \min\{i : \log^{(i)} x \leq 2\}.
\]
For the chromatic number of \(B_{k,n}\), we obtain
Lemma 11.10. For all \(n \geq 1\), \(\chi(B_{1,n}) = n\). Further, for \(n \geq k \geq 2\), \(\chi(B_{k,n}) \geq \log^{(k-1)} n\).
Proof. For $k = 1$, $B_{k,n}$ is the complete graph on $n$ nodes with a directed edge from node $i$ to node $j$ iff $i < j$. Therefore, $\chi(B_{1,n}) = n$. For $k \geq 2$, the claim follows by induction and Lemmas 11.8 and 11.9. □
This finally allows us to state a lower bound on the number of rounds needed to color a directed ring with 3 colors.
**Theorem 11.11.** Every deterministic, distributed algorithm to color a directed ring with 3 or fewer colors needs at least $\log^* n/2 - 1$ rounds.
Proof. Using the connection between $B_{k,n}$ and the neighborhood graph for directed rings, it suffices to show that $\chi(B_{2r+1,n}) > 3$ for all $r < \log^* n/2 - 1$. From Lemma 11.10, we know that $\chi(B_{2r+1,n}) \geq \log^{(2r)} n$. To obtain $\log^{(2r)} n \leq 2$, we need $r \geq \log^* n/2$. Because $\log_2 3 < 2$, we therefore have $\log^{(2r)} n > 3$ if $r < \log^* n/2 - 1$. □
**Corollary 11.12.** Every deterministic, distributed algorithm to compute an MIS of a directed ring needs at least $\log^* n/2 - O(1)$ rounds.
**Remarks:**
- It is straightforward to see that also for a constant $c > 3$, the number of rounds needed to color a ring with $c$ or fewer colors is $\log^* n/2 - O(1)$.
- Up to additive constants, there is a gap of a factor of 2 between the $\log^* n + O(1)$ upper bound of Chapter 1 and the $\log^* n/2 - O(1)$ lower bound of this chapter. It is possible to show that the lower bound is tight, even for undirected rings (for directed rings, this will be part of the exercises).
- The presented lower bound is due to Nathan Linial. The lower bound is also true for randomized algorithms. The generalization for randomized algorithms was done by Moni Naor.
- The neighborhood graph concept can be used more generally to study distributed graph coloring. It can for instance be used to show that with a single round (every node sends its identifier to all neighbors) it is possible to color a graph with $(1 + o(1))\Delta^2 \ln n$ colors and that every one-round algorithm needs at least $\Omega(\Delta^2 / \log^2 \Delta + \log \log n)$ colors.
- Using $r$-hop views and the fact that nodes with equal $r$-hop views have to make the same decisions is the basic principle behind almost all locality lower bounds (in fact, we are not aware of a locality lower bound that does not use this principle). Using this basic technique (but a completely different proof otherwise), it is for instance possible to show that computing an MIS in a general graph requires at least $\Omega(\sqrt{\log n / \log \log n})$ rounds.
Technical Note
Software Device Drivers for M28W640C Parallel NOR Flash Memory
Introduction
This technical note describes the library source code in C for M28W640CT and M28W640CB parallel NOR Flash memory devices, which are referred to as M28W640C throughout this document unless otherwise specified.
The source code is available from micron.com or your Micron distributor. The c1663.c and c1663.h files contain libraries for accessing M28W640C NOR Flash memory devices.
Also included in this technical note is an overview of the programming model for M28W640C devices. This overview outlines memory device operation and provides a basis for understanding and modifying the accompanying source code.
The source code is written to be as platform independent as possible and requires minimal changes by the user to compile and run. This technical note explains how to modify the source code for individual target hardware. The source code contains comments throughout that explain how it is used and why it has been written the way that it has.
This technical note does not replace the M28W640C data sheet. It refers to it throughout, and it is necessary to have a copy of the data sheet to follow some explanations. The software supplied with this documentation has been tested on a target platform and is usable in C and C++ environments. It is small in size and can be applied to any target hardware.
M28W640C Programming Model
M28W640C is a 64Mb (4Mb x 16) Flash memory that can be electrically erased at a block level and programmed in-system on a word-by-word basis through special coded command sequences on most standard microprocessor buses. The devices feature an asymmetrical block architecture. The M28W640C has an array of 135 blocks: 8 parameter blocks of 4 KWords each and 127 main blocks of 32 KWords each. M28W640CT memory devices have parameter blocks at the top of the memory address space, while M28W640CB memory devices locate the parameter blocks starting from the bottom. Each block can be erased separately. An ERASE can be suspended to either READ from or PROGRAM to another block, and then resumed. A PROGRAM operation can be suspended to read data in another block and then resumed. Each block can be programmed and erased over 100,000 cycles.
All blocks have three levels of protection. They can be locked and locked-down individually, preventing any accidental programming or erasure. The memory devices offer an additional hardware protection: when V_PP is lower than V_PPLK, all blocks are protected against an unwanted PROGRAM or ERASE. All blocks are locked at power-up. The devices include a protection register to increase the protection of a system's design.
PROGRAM and ERASE commands are written to the memory device's command interface. An on-chip PROGRAM/ERASE controller (P/E.C.) handles the timings necessary for PROGRAM and ERASE operations. The end of a PROGRAM or ERASE operation can be detected and any error conditions identified. The command set required to control the memory is consistent with JEDEC standards.
M28W640C devices offer two features to improve the programming throughput: the DOUBLE WORD PROGRAM command used to write a page of two adjacent words in parallel and the QUADRUPLE WORD PROGRAM command used to write a page of four adjacent words in parallel.
Note: Data with the current CPU data bus width is referred to as “elements” throughout the document unless otherwise specified. Due to the flexibility of the software driver, the size of an element depends on the current configuration (user change area).
Bus Operations and Commands
Most M28W640C functionality is available via the two standard bus operations: READ and WRITE. READ operations retrieve data or status information from the device. WRITE operations are interpreted by the device as commands that modify the data stored or the device's behavior. Only certain special WRITE operation sequences are recognized as commands by M28W640C devices. The various commands recognized by the devices are listed in the Commands Tables provided in the corresponding data sheets. The main commands are described in Table 1:
**Table 1: Bus Operations and Commands**
<table>
<thead>
<tr>
<th>Command</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>READ</td>
<td>This command returns the M28W640C to read mode where it behaves as a ROM. In this state, a READ operation outputs the data stored at the specified device address onto the data bus.</td>
</tr>
<tr>
<td>READ ELECTRONIC SIGNATURE</td>
<td>This command places the device in a mode that enables the user to read the electronic signature and block protection status. These are accessed by reading different addresses while the device is in read electronic signature mode.</td>
</tr>
<tr>
<td>ERASE</td>
<td>This is used to set all bits to 1 at every memory location in the selected block. The data previously stored in the erased block will be lost. The ERASE command takes longer to execute than other commands because an entire block is erased at once. Any attempts to ERASE or PROGRAM either a locked block or the memory while it is protected (for example, when V_PP is lower than V_PPLK) generate an error and leave the contents of the memory unchanged.</td>
</tr>
<tr>
<td>PROGRAM</td>
<td>This command is used to modify the data stored at the specified device address. Note that programming can only change bits from 1 to 0. If an attempt is made to change a bit from 0 to 1 using the PROGRAM command, the command will be executed and no error will be signaled, but the bit will remain unchanged. It may therefore be necessary to ERASE the block before programming to addresses within it. Programming modifies a single word at a time. Programming larger amounts of data must be done one word at a time by issuing a PROGRAM command, waiting for the command to complete, issuing the next PROGRAM command, and so forth.</td>
</tr>
<tr>
<td>PROGRAM/ERASE SUSPEND</td>
<td>Issuing the PROGRAM/ERASE SUSPEND command during a PROGRAM or ERASE operation temporarily places the M28W640C device in program/erase suspend mode. While an ERASE operation is being suspended, the blocks not being erased can be read or programmed as if in the reset state of the device. While a PROGRAM operation is being suspended, the rest of the device can be read. This enables the user to immediately access information stored in the M28W640C device without having to wait for the PROGRAM or ERASE operation to complete. The PROGRAM or ERASE operation is resumed when the device receives the PROGRAM/ERASE RESUME command.</td>
</tr>
<tr>
<td>READ COMMON FLASH INTERFACE QUERY</td>
<td>This command enables the user to identify the number of blocks in the Flash memory device and the block addresses. The interface also contains information relating to the typical and maximum PROGRAM and ERASE times. This enables the user to implement software timeouts and prevents waiting indefinitely for a defective Flash memory device to finish programming or erasing. For further information about the CFI, please refer to the CFI specification available at <a href="http://www.jedec.org">http://www.jedec.org</a> or from your Micron distributor.</td>
</tr>
<tr>
<td>BLOCK LOCK, BLOCK UNLOCK, and BLOCK LOCK-DOWN</td>
<td>Blocks can be protected against accidental PROGRAM and ERASE operations that could alter their contents. A block can be locked or locked-down. A locked block cannot be programmed or erased. A locked-down block cannot have its protection status changed when the WRITE PROTECT signal is LOW (VIL). When the WRITE PROTECT signal is HIGH (VIH), the LOCK-DOWN function is disabled and each block can be locked or unlocked independently of the others. All blocks are locked at power-up and reset.</td>
</tr>
</tbody>
</table>
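The one-way nature of PROGRAM described in Table 1 (bits can only change from 1 to 0, and only an ERASE restores them to 1) can be illustrated with a small host-side simulation. This is a sketch for illustration only; it does not touch real hardware:

```c
#include <stdint.h>

/* Simulated flash word. PROGRAM can only clear bits (1 -> 0); a bit
 * that is already 0 stays 0 until the whole block is erased, which
 * sets every bit in the block back to 1. */
#define ERASED_WORD 0xFFFFu

static uint16_t sim_program(uint16_t current, uint16_t data)
{
    /* AND models the cell behavior: 1 -> 0 is possible, 0 -> 1 is not */
    return (uint16_t)(current & data);
}

static uint16_t sim_erase(void)
{
    return ERASED_WORD; /* all bits set to 1 */
}
```

This is why a programmed address cannot be reliably reprogrammed until its block has been erased, as noted in the detailed example later in this note.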
**Status Register**
During PROGRAM or ERASE operations, a BUS READ operation outputs the contents of the status register. The status register, which can also be accessed by issuing the READ STATUS REGISTER command, provides valuable information about the latest PROGRAM or ERASE operation. The status register bits are described in the Status Register Bits tables in the M28W640C data sheet. They are primarily used to determine when programming or erasing is complete and whether the operation was successful.
The completion or suspension of the PROGRAM or ERASE operation is indicated by the PROGRAM/ERASE controller status bit (status register bit DQ7) going HIGH (VIH). Programming or erasing errors are indicated by one or more error bits (status register bits DQ1, DQ3, DQ4, and DQ5) going HIGH. In the case of a failure, a CLEAR STATUS REGISTER command must be issued to reset the status register error bits; otherwise, it will not be possible to determine whether subsequent operations are successful.
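The check described above can be sketched as a pure function over a sequence of BUS READ results, so it can be exercised on a host. The bit masks follow the DQ numbering given above (DQ7 = ready; DQ1, DQ3, DQ4, DQ5 = error bits); the function name and shape are illustrative, not part of the Micron driver:

```c
#include <stdint.h>

#define SR_READY  0x80u /* DQ7: PROGRAM/ERASE controller idle      */
#define SR_ERRORS 0x3Au /* DQ1 | DQ3 | DQ4 | DQ5 error bits        */

/* sr_samples holds successive BUS READ results (during PROGRAM or
 * ERASE, a BUS READ returns the status register). Returns 0 on
 * success, -1 on error or if the device never reports ready. */
static int check_program_result(const uint16_t *sr_samples, long n)
{
    long i;
    for (i = 0; i < n; i++) {
        if (sr_samples[i] & SR_READY)
            return (sr_samples[i] & SR_ERRORS) ? -1 : 0;
    }
    return -1; /* never became ready: treat as timeout/failure */
}
```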
A Detailed Example
The Command tables in the M28W640C data sheet describe the BUS WRITE sequences recognized as valid commands by the PROGRAM/ERASE controller.
As an example, consider programming the value 9465h to address 03E2h. The required C language sequence is:
```c
*(uword*)(0x0000) = 0x0040; /* 1st cycle (any block address) */
*(uword*)(0x03E2) = 0x9465; /* 2nd cycle: address and data */
```
where uword is defined as the following 16-bit value:
```c
typedef unsigned short uword;
```
The first of the two addresses (0000h) is arbitrary, but must be inside the Flash memory address space. The example assumes that address 0000h in the M28W640C device is mapped to address 0000h in the microprocessor address space. In practice, Flash devices are likely to have a base offset that must be added to the address.
While the device is programming to the specified address, READ operations will access the status register bits. Status register bit DQ7 will be 0 (LOW) during programming and switch to 1 (HIGH) upon completion. If any of the status register bits DQ1, DQ3, or DQ4 goes HIGH upon completion of the PROGRAM operation, it means that the operation has failed.
Once programmed, address 03E2h cannot be reprogrammed reliably until an ERASE operation is issued to erase the entire block.
Using the Software Driver
General Considerations
The software device drivers described in this technical note are intended to simplify the process of developing application code in C for M28W640C Flash devices.
Note: To meet compatibility requirements, the M28W640C software device driver numbers each block in a Flash memory device starting from 0 (block 0 always has address offset 0) up to the highest address block number in the device. Block numbers may be described differently in the data sheets. For example, in a Flash device containing 64 blocks, it will always refer to the block with address offset 0 as block number 0, and to the last block as block number 63.
With the software driver interface, users can focus on writing the high-level code required for their particular applications. The high-level code accesses the Flash memory device by calling the low-level code so that users do not have to consider the details of the special command sequences. The resulting source code is both simpler and easier to maintain.
Code developed using the provided drivers can be broken down into three layers:
- Hardware-specific bus operations
- Low-level code
- High-level code written by the user
The low-level code requires hardware-specific READ and WRITE bus operations in C to communicate with an M28W640C device. The implementation of these operations is hardware-platform dependent as it depends on the microprocessor on which the C code runs and on the location of the memory in the microprocessor's address space.
The user must write the C drivers that are suitable for the current hardware platform. The low-level code issues the correct WRITE operation sequence for each command and interprets the information received from the devices during programming and erasing.
The high-level code written by the user accesses the memory devices by calling the low-level code. In this way, the code used is simple and easier to maintain. Another consequence is that the user's high-level code is easier to apply to other Micron Flash memory devices.
When developing an application, it is recommended to:
1. Write a simple program to test the low-level code provided and verify that it operates as expected in the user's target hardware and software environments.
2. Write the high-level code for the desired application. The application accesses the Flash memory device by calling the low-level code.
3. Thoroughly test the complete source code of the application.
Porting the Drivers to the Target System (User Change Area)
All changes to the software driver that the user must consider can be found in the header file. A designated area called the “user change area” contains the following items required to port the software driver to new hardware:
Basic Data Types
Check whether the compiler to be used supports the following basic data types, as described in the source code, and change it where necessary.
```c
typedef unsigned char  ubyte;   /*  8 bits */
typedef char           byte;    /*  8 bits */
typedef unsigned short uword;   /* 16 bits */
typedef short          word;    /* 16 bits */
typedef unsigned int   udword;  /* 32 bits */
typedef int            dword;   /* 32 bits */
```
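Where a C11 compiler is available, the expected widths can also be enforced at build time instead of being checked by hand. This is a sketch under the assumption of a C11 toolchain; the driver itself only asks you to inspect the typedefs:

```c
#include <limits.h>

typedef unsigned char  ubyte;   /* expected:  8 bits */
typedef unsigned short uword;   /* expected: 16 bits */
typedef unsigned int   udword;  /* expected: 32 bits */

/* C11 compile-time checks: the build fails on any platform where
 * the basic types do not have the sizes the driver assumes. */
_Static_assert(sizeof(ubyte) * CHAR_BIT == 8,   "ubyte must be 8 bits");
_Static_assert(sizeof(uword) * CHAR_BIT == 16,  "uword must be 16 bits");
_Static_assert(sizeof(udword) * CHAR_BIT == 32, "udword must be 32 bits");
```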
Device Type
Use the appropriate define statement to choose the correct device:
```c
#define USE_M28W640CT
#define USE_M28W640CB
```
Flash Memory Location
BASE_ADDR is the start address of the Flash memory device. It must be set according to the target system to access the Flash memory device at the correct address. This value is used by the FlashRead() and FlashWrite() functions. The default value is set to 0, and must be adjusted appropriately:
```c
#define BASE_ADDR ((volatile uCPUBusType*)0x00000000)
```
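On a memory-mapped target, FlashRead() and FlashWrite() typically reduce to indexed accesses from BASE_ADDR. The sketch below substitutes a RAM array for the device so it can run on a host; the signatures and the uCPUBusType choice are illustrative assumptions, not the driver's exact prototypes:

```c
#include <stdint.h>

typedef uint16_t uCPUBusType; /* assumes the 16-bit bus configuration */
typedef uint32_t udword;

/* Host-side stand-in for the memory-mapped device. On the target,
 * BASE_ADDR would instead point at the Flash in the CPU map, e.g.
 *   #define BASE_ADDR ((volatile uCPUBusType*)0x00000000)          */
static uCPUBusType sim_flash[0x100];
#define BASE_ADDR (sim_flash)

static uCPUBusType FlashRead(udword off)
{
    return BASE_ADDR[off];
}

static void FlashWrite(udword off, uCPUBusType val)
{
    BASE_ADDR[off] = val;
}
```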
Flash Configuration
Choose the correct Flash memory configuration:
```c
#define USE_16BIT_CPU_ACCESSING_1_16BIT_FLASH
```
This define statement supports a board configuration containing a CPU with an external 16-bit memory bus with a single 16-bit Flash memory device connected to it.
```c
#define USE_32BIT_CPU_ACCESSING_2_16BIT_FLASH
```
This define statement supports a board configuration containing a CPU with an external 32-bit memory bus with two 16-bit Flash memory devices connected to it.
Timeout
Timeouts are implemented in the loops of code to provide an exit for operations that would otherwise never terminate. There are two possibilities:
1. The ANSI library functions declared in time.h exist. If the current compiler supports time.h, the define statement TIME_H_EXISTS should be activated. This prevents any change in timeout settings due to the performance of the current evaluation hardware.
```
#define TIME_H_EXISTS
```
2. The option COUNT_FOR_A_SECOND is used. If the current compiler does not support time.h, the define statement TIME_H_EXISTS cannot be used. In this case, the COUNT_FOR_A_SECOND value must be defined so as to create a one-second delay. For example, if 100,000 repetitions of a loop are needed to give a time delay of one second, then COUNT_FOR_A_SECOND should have the value 100000.
```
#define COUNT_FOR_A_SECOND (chosen value)
```
**Note:** This delay depends on hardware performance and should be updated each time the hardware is changed.
This driver has been tested with one particular configuration; other target platforms may have different performance characteristics. As a result, it may be necessary to change the COUNT_FOR_A_SECOND value. It is up to the user to choose a value that prevents the code from timing out too early while still allowing operations to complete correctly.
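The two timeout options can be sketched together. The clock()-based branch corresponds to TIME_H_EXISTS; the counting branch corresponds to COUNT_FOR_A_SECOND. All names below are ours, and the placeholder count must be calibrated on the real target as described above:

```c
#include <time.h>

#define TIME_H_EXISTS               /* comment out if time.h is absent  */
#define COUNT_FOR_A_SECOND 100000L  /* placeholder; calibrate on target */

typedef struct { clock_t start; long count; } fl_timeout;

static void timeout_begin(fl_timeout *t)
{
#ifdef TIME_H_EXISTS
    t->start = clock();
#endif
    t->count = 0;
}

/* Returns nonzero once roughly `seconds` have elapsed since begin.
 * The counting branch depends on hardware speed and is only as
 * accurate as the calibrated COUNT_FOR_A_SECOND value. */
static int timeout_expired(fl_timeout *t, long seconds)
{
#ifdef TIME_H_EXISTS
    return (clock() - t->start) >= (clock_t)seconds * CLOCKS_PER_SEC;
#else
    return ++t->count >= seconds * COUNT_FOR_A_SECOND;
#endif
}
```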
Additional Subroutines
In the software driver, the VERBOSE define statement is used to activate the FlashErrorStr() function, which generates a text string describing the return code from the Flash memory device.
```
#define VERBOSE
```
Additional Considerations
The access timing of the Flash memory device can sometimes be problematic. It may be necessary to change the FlashRead() and FlashWrite() functions if they are not compatible with the timings of the target hardware. Such timing problems can be diagnosed with a logic state analyzer.
The programmer must take extra care when the device is accessed during an interrupt service routine. When the device is in read mode, interrupts can freely read from the device. Interrupts that do not access the device may be used during all functions.
## C Library Functions Provided
The software library described in this technical note provides the source code for the functions described in Table 2:
### Table 2: C Library Functions
<table>
<thead>
<tr>
<th>Function</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Flash()</td>
<td>This is used to access all device functions and acts as the main Flash memory interface. This function is available on all software drivers written in the Flash device driver format and should be used exclusively. Any functionality unsupported by the Flash memory device can be detected and malfunctions can thus be avoided. <strong>Note:</strong> The other functions are listed to offer a second-level interface when enhanced performance is required. Within the Flash device driver, the functions are always used in the same way, which means that the function interface (names, return codes, parameters, and data types) remains unchanged regardless of the Flash memory device.</td>
</tr>
<tr>
<td>FlashBlockErase()</td>
<td>This is used to erase a block in the device. A block cannot be erased when it is locked or VPP is invalid (lower than VPPLK). Attempting to do so generates an error.</td>
</tr>
<tr>
<td>FlashBlockLockDown()</td>
<td>This is used to lock-down a block. Once locked-down, the block is locked, and when WP is LOW (VIL), the lock status of the block cannot be changed using software commands alone. The block reverts to the locked state when the device is reset or powered down.</td>
</tr>
<tr>
<td>FlashBlockProtect()</td>
<td>This is used to lock (protect) a block in the Flash memory device. Once locked (protected), the data in the block cannot be programmed or erased until the block is unlocked (unprotected).</td>
</tr>
<tr>
<td>FlashBlockUnprotect()</td>
<td>This is used to unlock (unprotect) a block in the Flash memory device. Once the block is unlocked, the data it contains can be erased or new data can be programmed to it.</td>
</tr>
<tr>
<td>FlashCheckBlockLockDownStatus()</td>
<td>This is used to check whether a block is locked-down.</td>
</tr>
<tr>
<td>FlashCheckBlockProtection()</td>
<td>This is used to check whether a block is locked.</td>
</tr>
<tr>
<td>FlashCheckCompatibility()</td>
<td>This is used to check the Flash memory device for compatibility.</td>
</tr>
<tr>
<td>FlashChipErase()</td>
<td>This is used to erase the entire device. Locked blocks will not be erased. The device cannot be erased when VPP is invalid (lower than VPPLK). Attempting to do so generates an error.</td>
</tr>
<tr>
<td>FlashChipUnprotect()</td>
<td>This is used to unlock all blocks in the Flash memory device. Once all the blocks are unlocked, the data contained in all the blocks can be entirely erased or new data can be programmed.</td>
</tr>
<tr>
<td>FlashClearStatusRegister()</td>
<td>This is used to clear the status register.</td>
</tr>
<tr>
<td>FlashDoubleProgram()</td>
<td>This is used to program the memory by issuing the DOUBLE WORD PROGRAM command.</td>
</tr>
<tr>
<td>FlashErrorStr()</td>
<td>This is used to generate a text string describing the detected error.</td>
</tr>
<tr>
<td>FlashProgram()</td>
<td>This is used to program data arrays to the Flash memory device. Only previously erased elements can be programmed reliably. Locked blocks cannot be programmed, and PROGRAM operations cannot be performed when Vpp is invalid.</td>
</tr>
<tr>
<td>FlashProtectionRegisterProgram()</td>
<td>This is used to program the protection register.</td>
</tr>
<tr>
<td>FlashQuadProgram()</td>
<td>Available only for M28W640C devices, this is used to program the memory by issuing the QUADRUPLE WORD PROGRAM command.</td>
</tr>
<tr>
<td>FlashReadCfi()</td>
<td>This is used to check if the common Flash interface (CFI) is supported and then read the CFI data at the specified offset.</td>
</tr>
<tr>
<td>FlashReadDeviceId()</td>
<td>This is used to read the device codes of the Flash memory device.</td>
</tr>
<tr>
<td>FlashReadManufacturerCode()</td>
<td>This is used to read the manufacturer codes of the Flash memory device.</td>
</tr>
<tr>
<td>FlashReadProtectionRegister()</td>
<td>This is used to read a location in the protection register.</td>
</tr>
<tr>
<td>FlashReadStatusRegister()</td>
<td>This is used to read the status register.</td>
</tr>
<tr>
<td>FlashReset()</td>
<td>This is used to reset the device to read array mode. <strong>Note:</strong> There should be no need to call this function under normal operation as all the other software library functions leave the device in this mode.</td>
</tr>
<tr>
<td>FlashResume()</td>
<td>This is used to resume the PROGRAM or ERASE operation being suspended.</td>
</tr>
<tr>
<td>FlashSingleProgram()</td>
<td>This is used to program a single element.</td>
</tr>
<tr>
<td>FlashSuspend()</td>
<td>This is used to suspend the PROGRAM or ERASE operation in progress. The functions provided in the software library rely on the user implementing the hardware-specific bus operations and on access timings to communicate properly with the Flash device. If changes in the software driver are necessary, the only two functions that need to be changed are FlashRead() and FlashWrite().</td>
</tr>
<tr>
<td>FlashRead()</td>
<td>This is used to read a value from the Flash memory device.</td>
</tr>
<tr>
<td>FlashWrite()</td>
<td>This is used to write a value to the Flash memory device.</td>
</tr>
</tbody>
</table>
Getting Started (Example Quicktest)
To test the source code in the target system, start by reading from the M28W640C device. If it is erased, only FFFFh data should be read. Then, read the manufacturer and device codes and verify that they are correct. If these functions work, it is likely that the other functions will also work. However, all functions should be tested thoroughly.
To start, write a function main() and include the C file as described in the following example. All Flash memory functions can be called and executed within the main function.
The following example shows a check of the device identifiers (device code, manufacturer code) and a simple BlockErase command.
```c
#include <stdio.h>
#include "c1663.c"

void main(void) {
    ParameterType fp;   /* Contains all Flash parameters */
    ReturnType rRetVal; /* Return type enum */

    Flash(ReadManufacturerCode, &fp);
    printf("Manufacturer Code: %08Xh\r\n",
           fp.ReadManufacturerCode.ucManufacturerCode);

    Flash(ReadDeviceId, &fp);
    printf("Device Code: %08Xh\r\n",
           fp.ReadDeviceId.ucDeviceId);

    fp.BlockErase.ublBlockNr = 10;    /* block 10 will be erased */
    rRetVal = Flash(BlockErase, &fp); /* function execution */
}
```
Software Limitations
The software described in this technical note does not implement the full set of M28W640C functionality. When an error occurs, the software simply returns the error message. It is up to the user to decide what to do. They can either try the command again or replace the device, if necessary.
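A common policy on error is to retry a bounded number of times before giving up (or replacing the device). The sketch below is generic: `op` stands for any driver call, for example a wrapper around Flash(BlockErase, &fp), and the op_result codes are placeholders, since the real ReturnType values are defined in the driver header:

```c
/* Hypothetical result codes; the real values come from the driver's
 * ReturnType enum in the header file. */
typedef enum { OP_SUCCESS = 0, OP_FAILED = 1 } op_result;

typedef op_result (*flash_op)(void *ctx);

/* Retry `op` up to max_tries times; report the last result. */
static op_result retry_op(flash_op op, void *ctx, int max_tries)
{
    op_result r = OP_FAILED;
    int i;
    for (i = 0; i < max_tries; i++) {
        r = op(ctx);
        if (r == OP_SUCCESS)
            break;
    }
    return r;
}

/* Demo operation: fails until the counter in ctx reaches zero. */
static op_result flaky_op(void *ctx)
{
    int *fails_left = (int *)ctx;
    if (*fails_left > 0) { (*fails_left)--; return OP_FAILED; }
    return OP_SUCCESS;
}
```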
Conclusion
M28W640C 3V supply, parallel NOR Flash memory devices are ideal for embedded and other computer systems. They can be easily interfaced to microprocessors and driven with simple software drivers written in the C language.
The M28W640C driver interface enables changeable Flash configurations, compiler-independent data types, and a unique access mode for a broad range of Flash devices.
In addition, applications supporting the software can implement any Flash device with the same interface, without any code change. A simple recompiling with a new software driver is all that is required to control a new device.
Revision History
Rev. C .................................................................................................................. 01/12
- Edited and formatted document
- Rebranded as a technical note
Rev. B .................................................................................................................. 05/11
- Minor changes
Rev. A .................................................................................................................. 02/03
- Initial release of document
GENERATING SIMULINK AND STATEFLOW MODELS FROM SOFTWARE SPECIFICATIONS
Keywords: Mechatronic Systems Engineering, Software Engineering, MechatronicUML
1. Introduction
Innovation in today's technical systems is largely driven by embedded software. Such systems are known as mechatronic systems. For example, it has been estimated that the current generation of upper class cars will contain about one gigabyte of software [Pretschner et al. 2007]. Mechatronic systems pose a challenge for software development as they are often employed in a safety-critical context and they operate under tight resource constraints.
While previously most of the embedded software consisted of single feedback controllers for controlling the dynamic behavior of the physical part of the system, these single controllers are increasingly connected to each other. The behavior of one controller then depends on the behavior of another. This requires discrete state-based software for the specification of asynchronous message exchange in addition to the control software. Consequently, this leads to complex hybrid embedded software. For example, in a modern car, the adaptive cruise control system connects the previously isolated feedback controllers for the engine and the braking subsystems.
Furthermore, single technical systems are not working in isolation anymore but rather form systems of systems where autonomous systems coordinate and communicate in an ad-hoc fashion using complex message-based communication protocols [Schäfer and Wehrheim 2007]. In this case, the network topology is not fixed at design time but rather adapts itself at run time. For example, several cars may coordinate their headlights to improve the illumination of the street to reduce the number of accidents.
Owing to these trends, the amount of software and its complexity rises to unprecedented heights. Thus, the key issue for the successful development of such systems is handling the size and the complexity by appropriate development methods and languages as well as supporting tools.
Model-driven development approaches enable to abstract from technical implementation details and, thus, are better suited to develop such systems. MATLAB\(^1\) with its toolboxes Simulink and Stateflow, Dymola\(^2\) based on the open source language Modelica as well as ASCET\(^3\) and SCADE\(^4\) are state of the art software tools for the model-driven development for software in embedded software. They all provide means to model feedback controllers using block diagrams and discrete state-based behavior using a variant of statecharts [Harel 1987].
All these software tools, in principle, support the development of software for the aforementioned use cases. However, they lack appropriate modeling support in the case of communication protocols using asynchronous message exchange and complex real-time constraints encompassing several states and transitions [Giese and Henkler 2006].
\(^1\) http://www.mathworks.com/products/matlab/index.html
\(^2\) http://www.3ds.com/products/catia/portfolio/dymola
\(^4\) http://www.esterel-technologies.com/products/scade-suite/
The engineer may use workarounds, e.g., a manual implementation using Embedded MATLAB Functions, to emulate the required functionality. This works for simulation and target code generation purposes. However, single simulation runs cannot guarantee freedom from bugs. In contrast, formal verification [Baier and Katoen 2008], as offered, e.g., by the Simulink Design Verifier and the SCADE Suite Design Verifier, guarantees freedom from bugs by considering all possible simulation runs. The aforementioned workarounds, though, quickly make formal verification infeasible: the number of simulation runs explodes because the semantics of asynchronous messages and rich time constraints is hidden in the manual implementation of the workarounds. Consequently, modeling languages are required that support the specification of asynchronous message exchange and rich time constraints as first-class modeling entities.
MechatronicUML [Becker et al. 2012] is a modeling language which targets the software embedded in mechatronic systems and specifically addresses the aforementioned case of systems of systems with complex communication protocols. MechatronicUML focuses on the discrete part of the system. It follows the component-based approach to software development. The behavior of discrete components is specified using Real-Time Statecharts, which are a combination of UML state machines [Object Management Group 2011] and timed automata [Alur and Dill 1994]. They support the specification of asynchronous messages and time constraints as first-class entities in the modeling language.
The formal verification of MechatronicUML models exploits a sophisticated interface definition between the discrete part, which contains the aforementioned complex message-based communication protocols, and the continuous part, which contains the control software. The interface decouples the discrete part from the continuous part and thus enables the efficient automatic formal verification of the discrete part. This formal verification exploits assumptions about the continuous behavior that are guaranteed by extensive manual simulations. After the successful formal verification of the system, the discrete part has to be integrated with the continuous part for holistic simulations and target code generation.
In this paper, we present how we employ MATLAB with the toolboxes Simulink and Stateflow for this step, because MATLAB is the de facto standard platform for automotive software. We do this by automatically generating Simulink and Stateflow models from MechatronicUML models. This generation enables us to exploit the complete MATLAB tool chain, e.g., for target code generation and simulation. However, we have to ensure that the generated Simulink and Stateflow models exhibit the same behavior as the original MechatronicUML models. Consequently, in our presentation of the generation we give arguments that it preserves the behavior, and thus the verification results, too.
We illustrate the generation with a running example using the miniature robot BeBot. The BeBot is a small mechatronic system, which has been developed as a test bed for development with a strong focus on ad-hoc communication and collaboration with other BeBots. The BeBot consists of a base module that includes the electrical drive, wheels, the power supply, and a small processing unit mostly for motor control. BeBots can be equipped with an extension module with powerful information processing and wireless communication devices. As such, they provide a sound test platform for the development of innovative car functionalities like the aforementioned communication and collaboration to improve the illumination of streets.
In the next section, we present MATLAB/Simulink and Stateflow in more detail. Section 3 contains a presentation of the main concepts of MechatronicUML. The main contribution, the generation of MATLAB/Simulink and Stateflow models from MechatronicUML models, is presented in Section 4. We show how the complex communication and the time constraints of MechatronicUML are represented in MATLAB/Simulink and Stateflow. We conclude and give a brief outlook in Section 5.
2. MATLAB/Simulink and Stateflow
MATLAB is an environment for technical computing. Simulink is a platform for multi-domain modeling and simulation. It supports causal block-oriented modeling and the analysis of discrete-time and continuous-time models. The Simulink simulation engine has several solvers for ordinary differential equations (ODEs).
Stateflow extends Simulink by an environment for event-based reactive behavior specification. It uses the finite state machine concept (FSM) which is similar to Harel's statechart formalism [Harel 1987]. It supports hierarchical and parallel states. Stateflow supports modeling of complex control flow by transitions between these states and allows using MATLAB functions as a complex action language. However, the Stateflow formalism has some drawbacks for modeling communication protocols with real-time requirements between distributed systems. Stateflow is an event-triggered modeling environment. However, it does not offer expressive features for specifying real-time behavior like in timed automata [Alur and Dill 1994] or Real-Time Statecharts. The modeler has to use helping elements from Simulink to count time-ticks in Stateflow. Additionally, Stateflow provides no support for asynchronous, message-based communication with buffers for sent and received messages.
Although Stateflow has output events which can be used to exchange information between different Stateflow blocks, these events are not buffered by the Stateflow receiver block. This means, if the receiver doesn’t directly use the received events, these events are lost and cannot be used to coordinate systems. The buffering of messages is very important for the coordination of distributed mechatronic systems, as they are often physically separated and arranged in different locations. Therefore, mechatronic systems often cannot coordinate via shared variables. It is possible to encode asynchronous message-based communication with a combination of several linked Simulink and Stateflow blocks, but this is tedious and hard to maintain manually.
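The buffering that the authors argue is missing can be made concrete with a small fixed-capacity FIFO, sketched here in C for illustration. The names and the overflow policy are ours; MechatronicUML specifies such message buffers at the model level:

```c
#include <stdint.h>

/* Fixed-capacity FIFO for asynchronous messages: a sender enqueues,
 * the receiver dequeues later, so no message is lost if the receiver
 * is not ready at the instant of sending. */
#define BUF_CAP 8

typedef struct {
    int32_t msgs[BUF_CAP];
    int head, count;
} msg_buffer;

static int mb_send(msg_buffer *b, int32_t msg)
{
    if (b->count == BUF_CAP)
        return -1; /* overflow policy here: reject the message */
    b->msgs[(b->head + b->count) % BUF_CAP] = msg;
    b->count++;
    return 0;
}

static int mb_receive(msg_buffer *b, int32_t *out)
{
    if (b->count == 0)
        return -1; /* nothing pending */
    *out = b->msgs[b->head];
    b->head = (b->head + 1) % BUF_CAP;
    b->count--;
    return 0;
}
```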
3. MechatronicUML
In this section, we explain MechatronicUML in more detail. The focus of MechatronicUML is the specification of distributed hybrid components that must fulfill real-time requirements such as deadlines. Formal verification techniques are applied to detect flaws of the discrete communication behavior during the design early. Combined with suitable structure and behavior specification languages, MechatronicUML is meant to reduce development time and cost.
The structure is specified by a component model that integrates the specification of the communication of discrete software components and continuous components such as feedback controllers. For the specification of the behavior, we use Real-Time Statecharts. This variant of statecharts provides a concept to specify real-time properties and defines semantics for the exchange of asynchronous messages.
In the following, we first introduce a running example (Section 3.1). Afterwards, we briefly describe the component model (Section 3.2). Next, we give an overview of Real-Time Statecharts (Section 3.3) with a focus on two main differences to MATLAB/Simulink and Stateflow: (1) a concept for communication with asynchronous messages (Section 3.4), and (2) a more comprehensive concept for the specification of real-time properties (Section 3.5).
3.1. Running example
We explain MechatronicUML by using a scenario of a sophisticated collision avoidance system for a driver assistance system. It uses car-to-car communication for enhancing standard sensor-based distance detection by the exchange of precise position data. As mentioned in Section 1, we use the miniature robot BeBot as a test platform for such systems. In our scenario, BeBots have to navigate in a plain area without colliding with each other. For better understandability of the example, we focus on the specification of only two BeBots in the remainder of this paper. However, the specification can be easily extended to support an arbitrary number of BeBots instead of only two [Becker et al. 2012].
The communication of the BeBots is crucial for the implementation of our scenario. Each BeBot has to exchange position data with the other BeBot and adapt its driving direction accordingly. For a coordinated communication, we assign different roles, the distributor role and the client role, to the BeBots. One BeBot, which we call distributor BeBot, is assigned the distributor role. The distributor BeBot collects the position data of both BeBots and sends the data to the other BeBot. The other BeBot, called client BeBot, is assigned the client role. The client BeBot receives the position data of both BeBots and sends only its own position to the distributor BeBot.
3.2. Component model
A MechatronicUML specification for the structure of the two BeBots in our scenario is shown in Figure 1. The structure of a single BeBot is specified by a component in MechatronicUML. Since we use two BeBots in our scenario, we specify two representatives of the BeBot component, called instances: bebot1 that is assigned the distributor role and bebot2 that is assigned the client role. The communication between the BeBots is specified by ports and connections between the ports. For example, bebot1 has a distributor port that is connected to the client port of bebot2. For the integration of continuous components, we utilize two kinds of ports. First, discrete ports are used for message-based communication. A discrete port has a well-defined interface that defines which messages and parameters can be exchanged. Second, continuous ports exchange time-varying quantities that have a value at all points in time. This definition is derived from the definition of signals in Simulink.

Each BeBot component embeds four subcomponents: the two controllers MotorCtrl and PositionSensor, and the two discrete software components Navigation and Observer. The MotorCtrl governs the two motors of the BeBot according to the continuous values that are received through the continuous ports left_in and right_in from the Navigation. The PositionSensor determines the current position of the BeBot and sends it via the continuous port pos_out to the position port of the Navigation.
Since motor control and wireless communication are placed on different modules (see Section 1), the functionality is separated into two discrete software components: the Observer that uses the wireless communication to exchange position data and the Navigation that is connected with the motor control to steer the BeBot through the environment. More precisely, the distance to and the direction of the other BeBot are encoded in a distance vector. This distance vector is received by the Navigation component from the Observer component through the receiver port. Then, the motor speed is calculated in accordance with the distance vector such that a collision between the BeBots is impossible. Based on the BeBot's own position data that the Navigation receives via the continuous port position, it sends its current position to the Observer through the sender port. This position is received by the Observer via the receiver port. In addition to the communication with the Navigation, the Observer implements the previously described behavior for the distributor and the client role. Next, we explain the advantages of Real-Time Statecharts by using the Observer's behavior as an example.
3.3. Discrete behavior model
In Real-Time Statecharts, messages are used to specify the communication between different statecharts. In particular, asynchronous messages are used to decouple the communication of statecharts between different components. For the specification of real-time properties, Real-Time Statecharts extend statecharts by clocks and corresponding time guards as used in timed automata. They describe the internal time interval in which a transition can be fired. Time invariants specify the duration the system is allowed to stay in a state. The formal semantics of Real-Time Statecharts is based on timed automata to enable applying formal verification techniques.
As an example, Figure 2 shows the Real-Time Statechart of the Observer component. It consists of one top-level state Observer_Main which has four orthogonal (parallel) regions: receiver, sender, client, and distributor. These regions are executed in parallel. Each region corresponds to a port of the
Observer component and contains an internal statechart that handles the communication for the corresponding port. If a port is not used in a specific instance of the Observer component, the corresponding region is not present in the statechart. For example, the region client does not exist for the Observer component instance in the distributor BeBot.
The BeBot's own position data and the other BeBot's position data are stored in two global variables myPos and otherPos that are defined in the upper right. The BeBot's own position is received in the receiver region from the Navigation. Each time the Navigation sends an ownPosition message with the current position, the position is stored in myPos. The other BeBot's position is received in the client or the distributor region depending on the BeBot's role. The distributor BeBot starts in the state Receiving and waits for the current position of the client BeBot. The client BeBot sends its position via an asynchronous message to the distributor BeBot. After the distributor BeBot receives the position message, it changes to the state Sending in the distributor region. Moreover, it stores the position of the client BeBot in the variable otherPos, and sends the internal message posData to inform the sender region that new position data is available. Next, the distributor BeBot's Observer changes to the state Receiving again and sends the position data of both BeBots by the message allPosition to the client BeBot.
Simultaneously, in the region sender the transition from Idle to Calculate is executed after the posData message is received. When entering the state Calculate, the Observer executes the operation calcDistance() to determine the distance vector between the BeBots. During the execution of the transition from Calculate to Idle, this distance is sent to the Navigation component.
3.4. Asynchronous messages
An asynchronous message is sent immediately. The receiver stores the message in a queue until it processes the message. During the execution of the receiving transition, the message is consumed and removed from the queue. Asynchronous messages do not get lost if they are not received right away.
As an example, consider again the exchange of the position data between the client BeBot and the distributor BeBot. The client BeBot sends the position message with its current position through the sender port of the component to the distributor BeBot as specified by the component model in Figure 1. This message is received during the execution of the transition from the state Receiving to
Sending in the distributor region. As long as this transition is not enabled, i.e., the state Receiving is not active or the time guard does not evaluate to true, the message is stored in a queue.
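The queueing behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the semantics only, not the actual Simulink realization; the class and method names are invented for this sketch:

```python
from collections import deque

class AsyncPort:
    """Sketch of MechatronicUML's asynchronous message semantics: sending
    never blocks, and messages are buffered at the receiver until a
    transition consumes them, so they are not lost."""

    def __init__(self):
        self.queue = deque()

    def send(self, message):
        # The sender returns immediately; the message is stored in the queue.
        self.queue.append(message)

    def try_consume(self, expected, transition_enabled):
        # A message is consumed only while the receiving transition is
        # enabled; otherwise it simply stays in the queue.
        if transition_enabled and self.queue and self.queue[0] == expected:
            return self.queue.popleft()
        return None

port = AsyncPort()
port.send("position")  # sent while the receiver is in another state
assert port.try_consume("position", transition_enabled=False) is None
assert len(port.queue) == 1  # the message is retained, not lost
assert port.try_consume("position", transition_enabled=True) == "position"
```

This mirrors the distributor example: the position message waits in the queue until the state Receiving is active and the time guard holds.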
3.5. Real-Time properties
Real-Time Statecharts use clocks to represent the internal time of the system. These clocks are initially set to 0 time units. Note that we use time units instead of SI units. However, it is possible to transform time units to SI units as we explain in Section 4.3. Clocks may be used in time guards of transitions and time invariants of states to restrict the system behavior. A transition may only be fired if the time guard evaluates to true for the current clock values; a state may only be active as long as the time invariant evaluates to true. In contrast to Stateflow, clocks are not automatically reset after the system changes to another state. Instead, the keyword reset is used to explicitly reset a clock on executing a transition, on entering a state, or on leaving a state. This clock semantics makes it easy to specify more complex real-time constraints.
For instance, in the region distributor the clock c4 is reset on entering the state Sending (entry / {reset: c4}). The system is allowed to stay in the state Sending until c4 has reached 5 time units. The effect is that the clock c4 is always 0 on entering the state Sending, but may be anywhere between 0 and 5 on entering the state Receiving. Therefore, the total amount of time that the system is allowed to stay in the Receiving state depends on the time the system stayed in the Sending state. If the system stayed 5 time units in the state Sending, it is only allowed to stay a maximum of 95 time units in the state Receiving. This also affects the time guard of the transition from the state Receiving to the state Sending. The time guard evaluates to true after c4 reaches 75 time units. Again, if the system stayed 5 time units in the Sending state and afterwards changes to the Receiving state, the time guard evaluates to true 70 time units after entering the Receiving state. If the client does not send its position data before the clock c4 exceeds 100 time units, the communication is considered erroneous and the system changes to the Error state.
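The arithmetic of this example can be condensed into a small helper. The function below is hypothetical and exists only to illustrate the numbers above; the invariant bound of 100 and the guard bound of 75 time units are taken from the example:

```python
def receiving_window(time_in_sending):
    """Since c4 is reset on entering Sending but *not* on entering
    Receiving, the budget in Receiving shrinks by the time already
    spent in Sending (assumed invariant c4 <= 100, time guard c4 >= 75)."""
    max_stay = 100 - time_in_sending    # remaining invariant budget
    guard_open = 75 - time_in_sending   # delay until the guard holds
    return max_stay, guard_open

assert receiving_window(5) == (95, 70)   # the values derived in the text
assert receiving_window(0) == (100, 75)  # if Sending was left immediately
```

Such cross-state constraints are exactly what Stateflow's state-local after() operator cannot express directly (cf. Section 4.3).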
4. Generating MATLAB/Simulink and Stateflow models from MechatronicUML
In this section, we show how MechatronicUML models are translated into Simulink and Stateflow models. The main challenge for the translation is to preserve the behavior of the MechatronicUML model which has been formally verified. We especially need to consider this for the translation of asynchronous communication and time constructs which are not directly supported by Simulink and Stateflow. In the following, we outline the generation of Simulink block diagrams and Stateflow charts from MechatronicUML models in Section 4.1. Then, we highlight the elements that are used to realize asynchronous communication in Simulink and Stateflow in Section 4.2. Finally, we show how the advanced clock concept of Real-Time Statecharts may be encoded in Stateflow in Section 4.3.
We have implemented the generation of Simulink and Stateflow models from a given MechatronicUML model using a Triple-Graph-Grammar-based model transformation approach [Schürr 1994]. The tool is available for download as free software on our website.
4.1. Generating Simulink block diagrams from component definitions
The component specification of MechatronicUML is translated into a Simulink block diagram. The basic generation is straightforward. For each component, we create one subsystem block with the same name. The ports of the component are translated to inputs and outputs of the respective block. In contrast to Simulink, MechatronicUML supports bidirectional discrete ports, i.e., such ports may send and receive messages. Such ports are translated into an input and an output. Figure 3 shows the result of translating bebot1 of Figure 1 to Simulink.
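The port translation rule can be sketched as follows. The `(name, direction)` encoding of port descriptors and the `_recv`/`_send` suffixes are assumptions made for this illustration (the suffixes are modeled on the signal names distributor_recv and distributor_send used later in Section 4.2):

```python
def translate_ports(ports):
    """Sketch of the rule above: each port becomes a Simulink input
    and/or output; a bidirectional discrete port becomes both.
    Ports are (name, direction) pairs with direction in
    {'in', 'out', 'inout'} - an assumed encoding for illustration."""
    inputs, outputs = [], []
    for name, direction in ports:
        if direction in ("in", "inout"):
            inputs.append(name + "_recv")
        if direction in ("out", "inout"):
            outputs.append(name + "_send")
    return inputs, outputs

ins, outs = translate_ports([("distributor", "inout"), ("position", "in")])
assert ins == ["distributor_recv", "position_recv"]
assert outs == ["distributor_send"]
```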
Simulink and Stateflow provide no support for asynchronous, message-based communication with queues for sent and received messages. These concepts are supported natively by MechatronicUML and, thus, need to be encoded manually into Simulink and Stateflow. The CommunicationSwitch is one of the elements necessary for providing asynchronous message-based communication. It implements a dispatching of messages on one level of hierarchy in the system (cf. Section 4.2).
5 http://www.cs.uni-paderborn.de/index.php?id=muml-simulation
The controllers ctr1 and pd1 of Figure 1 are connected via continuous ports to the Navigation component. Such ports are connected directly in Simulink.
In MechatronicUML, each non-hierarchical component contains a Real-Time Statechart. Each Real-Time Statechart is translated into a Stateflow chart. States, transitions, entry actions, and exit actions are translated to their counterparts in Stateflow. If a state of a Real-Time Statechart embeds parallel regions as, e.g., Observer_Main of Figure 2, we set the attribute Decomposition of that state to Parallel. For each region, we create one state in that top-level state with its attribute Decomposition set to Exclusive. That substate contains the Real-Time Statechart of the respective region. Figure 5 shows an excerpt of the Observer Statechart of Figure 2 where only the substate for the distributor region is modeled completely.
The priorities of regions and transitions in Real-Time Statecharts are encoded by a user defined execution order in Stateflow. In MechatronicUML, a high number indicates a high priority and, thus, the highest priority is translated to execution order 1. Lower priorities get increasing execution orders, respectively.
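The priority mapping can be stated concisely. The following function is an illustrative sketch, not part of the actual transformation tool:

```python
def execution_orders(priorities):
    """Sketch of the mapping above: MechatronicUML treats a higher
    number as a higher priority, while Stateflow executes order 1
    first, so the highest priority maps to execution order 1 and
    lower priorities get increasing execution orders."""
    ranked = sorted(set(priorities), reverse=True)
    return {p: ranked.index(p) + 1 for p in priorities}

assert execution_orders([3, 1, 2]) == {3: 1, 1: 3, 2: 2}
```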
4.2. Using asynchronous message-based communication in Simulink and Stateflow
We enable asynchronous communication in Simulink by adding two additional types of blocks to the block diagram: a CommunicationSwitch block which is contained in hierarchical components and a Link Layer block which is contained in non-hierarchical components. The CommunicationSwitch shown in Figure 3 dispatches asynchronous messages within a subsystem. Thus, all connections between message ports are replaced by a connection to and from the CommunicationSwitch. Thereby, the CommunicationSwitch decouples the implementation of a block from the message dispatching in the real system. The CommunicationSwitch is generated for simulation purposes and meant to be replaced by a network interface for the real system later.
A message is a six-tuple (package_id, sender_id, receiver_id, message_id, parameter, timestamp). The package_id is an integer assigning a sequential number to a message. This may be used to track lost messages. The sender_id is the network address of the sender; the receiver_id is the network address of the intended receiver of the message. The message itself is encoded by an unsigned integer, in contrast to the strings used in MechatronicUML, because Simulink does not support variable-sized strings. In our implementation, each message may only contain exactly one parameter of type double. Thus, messages with more than one parameter need to be split into several messages. The timestamp encodes the point in time at which the message was sent. The messages are encoded into a bus signal using the six fields described above. This bus is then connected to the CommunicationSwitch which dispatches the messages according to their intended receiver.
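The six-tuple can be written down as a record type. The Python dataclass below is only a restatement of the field list from the text, not the actual bus definition in Simulink:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """The six fields of the message bus signal described above."""
    package_id: int    # sequential number, used to track lost messages
    sender_id: int     # network address of the sender
    receiver_id: int   # network address of the intended receiver
    message_id: int    # message name encoded as an unsigned integer
    parameter: float   # exactly one parameter of type double
    timestamp: float   # point in time at which the message was sent

m = Message(package_id=1, sender_id=10, receiver_id=20,
            message_id=3, parameter=42.0, timestamp=0.25)
assert m.receiver_id == 20
```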
In our example, the component Observer is a non-hierarchical component, i.e., it contains a Real-Time Statechart, but no other component parts. We translate such components as shown for the Observer in Figure 4. The resulting Simulink block contains a Stateflow block and a Link Layer block for each message port of the MechatronicUML component. For presentation purposes, we omit the sender and receiver ports and only show the Link Layer block for the distributor port.
The Link Layer block takes the signals for the port (distributor_recv and distributor_send) as input and output signals as well as a uniquely identifying network address (net_address). The receiver_net_address is the network address of the receiver of messages which are sent via this Link Layer, i.e., the target port that this component is connected to. These addresses are then used by the CommunicationSwitch for realizing the message dispatching.
In addition, the Link Layer defines four queues that are used to buffer the messages. The in-queues (event_queue_in and event_queue_param_in) store the messages that are received via the input port and provide them to the Stateflow chart. Accordingly, the out-queues buffer the messages that are sent by the Stateflow chart until they are forwarded via port_out. The Stateflow chart only uses the in-queues and out-queues provided by the Link Layer block and does not directly access the message bus from the CommunicationSwitch. Thus, the implementation of the Stateflow chart does not depend on the concrete implementation of the asynchronous communication in Simulink.
For modeling asynchronous messages in Stateflow, we utilize the queue signals shown in Figure 6 that are used as inputs and outputs of the Stateflow chart. In MechatronicUML, we distinguish between trigger messages that are received and enable transitions and raised messages that are sent when firing a transition. We use three embedded MATLAB functions checkQueue, dequeue, and enqueue for implementing that behavior. The use of these functions is illustrated in Figure 5 which shows an excerpt of the Real-Time Statechart of the Observer of Figure 2 containing only the contents of the distributor region.
In Figure 2, the transition from Receiving to Sending in region distributor is enabled by the trigger message position. In Stateflow, we may only fire the transition from Receiving to Sending if the message position is contained in the queue for received messages. This preserves the semantics of MechatronicUML where a transition may only be fired if the message has been received. We check this by using the function checkQueue which returns true if and only if the message is contained in the queue. Since the message carries an array of two elements and our implementation only supports one
parameter per message, the message is split into two messages in Stateflow each carrying one value as a parameter. The message is consumed and thereby removed from the queue by calling the dequeue function which returns the changed queues as well as the received parameter value. The parameter value is then assigned to the variable otherPos. A raised message is translated to a call of the enqueue function which adds the message and the parameter to the respective out-queues. Then, the Link Layer block sends the respective message in the next simulation step. This preserves the semantics of MechatronicUML which requires the message to be sent right away. An example is given by the transition Sending to Receiving in Figure 5.
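The interplay of the three functions can be sketched in Python. The real checkQueue, dequeue, and enqueue are embedded MATLAB functions operating on the queue signals of the Link Layer; the list-based versions and the numeric message ids below are assumptions made for illustration:

```python
def check_queue(queue, message_id):
    """Mirror of checkQueue: true iff the message is in the in-queue."""
    return message_id in queue

def dequeue(queue, params, message_id):
    """Mirror of dequeue: consume the message and return the changed
    queues together with the received parameter value."""
    i = queue.index(message_id)
    return queue[:i] + queue[i+1:], params[:i] + params[i+1:], params[i]

def enqueue(queue, params, message_id, value):
    """Mirror of enqueue: append a raised message and its parameter
    to the out-queues; the Link Layer sends it in the next step."""
    return queue + [message_id], params + [value]

# Hypothetical ids: 1 and 2 stand for the two split position messages.
in_q, in_p = [1, 2], [3.0, 4.0]
assert check_queue(in_q, 1)
in_q, in_p, x = dequeue(in_q, in_p, 1)   # consume message 1, receive 3.0
assert x == 3.0 and in_q == [2]
out_q, out_p = enqueue([], [], 5, 9.9)   # raise message 5 with parameter
assert out_q == [5] and out_p == [9.9]
```

The functional style (returning the changed queues) reflects that embedded MATLAB functions in Stateflow pass data by value rather than mutating shared state.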
Figure 5 only shows a small excerpt of our example. It clearly shows that modeling asynchronous communication in Simulink and Stateflow manually is tedious and introduces a large amount of additional complexity. Such additional complexity increases the possibility of introducing errors into the system. In MechatronicUML, we are able to reduce the visual complexity because we handle asynchronous communication as a first-class entity of our modeling language. In addition, we are able to apply formal verification to show the correctness of interactions, which in Stateflow is also hindered by the manual implementation of asynchronous communication.
4.3. Using Real-Time Clocks in Stateflow
Real-Time Statecharts allow specifying complex timing constraints as described in Section 3.5. In contrast, Stateflow includes only simple temporal logic operators: its after() and before() operators can only be used to refer to the time elapsed since activation of the associated state. Especially, Stateflow does not provide clock variables that are independent from the associated state and can be used to refer to the time elapsed since their last reset. Therefore, the advanced timing concepts of MechatronicUML have to be mapped to helper constructs in Stateflow.
Each clock variable of MechatronicUML is mapped to a double variable in Stateflow. However, this variable is not accessed directly, but with the help of two embedded MATLAB functions: reset() to reset a clock variable and time() to retrieve a clock value. As shown in Figures 5 and 6, we use a Digital Clock block as an absolute time input (clockSignal) for the time calculation in a Stateflow chart. When resetting a clock using reset(), the current clockSignal is assigned to the clock variable. The time() function returns the difference between the current clockSignal and the value of the clock variable, i.e., the time since its last reset. Since MechatronicUML uses time units instead of SI units, the user must specify a mapping of time units to SI units. In our example, we assume that each time unit corresponds to 1ms. Then, time() automatically converts the clock value to the correct SI unit, allowing the use of the same constant values as in MechatronicUML. By using the clock values in guards, we restrict firing of transitions to the same time intervals which have been specified in MechatronicUML, which preserves the semantics of our model.
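The clock encoding can be sketched as follows. These Python functions only mirror the behavior of the embedded MATLAB functions reset() and time() described above; the dictionary-based state is an assumption made for the sketch:

```python
TIME_UNIT_MS = 1.0  # assumed mapping, as in the example: 1 time unit = 1 ms

def reset(clock_vars, name, clock_signal):
    """Mirror of reset(): store the current absolute time (clockSignal)
    in the clock variable."""
    clock_vars[name] = clock_signal

def time(clock_vars, name, clock_signal):
    """Mirror of time(): elapsed time units since the clock's last
    reset, converted via the time-unit mapping."""
    return (clock_signal - clock_vars[name]) / TIME_UNIT_MS

clocks = {}
reset(clocks, "c4", clock_signal=120.0)          # e.g. on entering Sending
assert time(clocks, "c4", clock_signal=125.0) == 5.0
# A guard such as c4 >= 75 then becomes: time(clocks, "c4", now) >= 75
```

Because the reset value is stored per clock rather than per state, a clock keeps running across state changes, which is exactly what Stateflow's state-local after() operator cannot express.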
5. Conclusion and Outlook
In this paper we presented MechatronicUML, a modeling language which specifically targets the software embedded in technical systems. Real-Time Statecharts are used to model the discrete communication behavior between different systems. In contrast to existing development environments like MATLAB/Simulink and Stateflow, MechatronicUML provides sophisticated support for asynchronous communication and clocks. Furthermore, it supports the verification of safety properties, helping to find bugs early in the development, which is crucial to reducing development time and costs. For these reasons, we consider MechatronicUML a model-driven development technique highly suitable to tackle the increasing complexity of modern mechatronic systems.
As the main contribution, we described how a software specification in MechatronicUML can be automatically translated to Simulink and Stateflow retaining the original MechatronicUML semantics and, thus, the verification results. In particular, we map asynchronous communication in MechatronicUML to helper constructs like switches and Link Layer blocks that encapsulate this asynchronous communication in Simulink and Stateflow. In addition, we use digital clock blocks in combination with helper functions to translate the clock variables and clock constraints of MechatronicUML.
Thereby, we have combined the modeling and verification strengths of MechatronicUML with the advanced simulation and code generation capabilities of Simulink and Stateflow. We have implemented the generation of Simulink and Stateflow models from a given MechatronicUML model using a model transformation approach.
Besides the generation of Simulink and Stateflow models, we are currently developing a transformation from MechatronicUML to Modelica [Fritzson 2004]. Modelica suffers from similar drawbacks as Simulink and Stateflow, mainly missing concepts for clocks and asynchronous communication. Thus, similar concepts like the ones described in this paper have to be employed to implement such a transformation.
Modern mechatronic systems often incorporate self-adaptation or, in the case of systems of systems, dynamic communication topologies. This is also supported by MechatronicUML. However, Simulink does not support the dynamic instantiation of components which is necessary to model such dynamic system structures. Thus, as future work, we plan to investigate how such changing system structures can be represented in Simulink.
Acknowledgement
We thank Jana Bröggelwirth, Andrey Pines, and Andreas Volk for implementing the tool support for the transformation. This work was partially developed in the course of the Special Research Initiative 614 - Self-optimizing Concepts and Structures in Mechanical Engineering - University of Paderborn, and was published on its behalf and funded by the Deutsche Forschungsgemeinschaft. This work was partially developed in the project ‘ENTIME: Entwurfstechnik Intelligente Mechatronik’ (Design Methods for Intelligent Mechatronic Systems). The project ENTIME is funded by the state of North Rhine-Westphalia (NRW), Germany and the EUROPEAN UNION, European Regional Development Fund, 'Investing in your future'. Christian Heinzemann and Jan Rieke are supported by the International Graduate School Dynamic Intelligent Systems funded by the state of NRW.
M.Sc. Christian Heinzemann, M.Sc. Uwe Pohlmann, Dipl.-Inform. Jan Rieke, Prof. Dr. Wilhelm Schaefer, Dipl.-Inform. Oliver Sudmann,
Software Engineering Group, Heinz Nixdorf Institute, University of Paderborn,
Zukunftsmeile 1, 33102 Paderborn, Germany
Telephone: +49-5251-60-2306 | -3323 | -3310 | -3313 | -3307
Email: c.heinzemann | upohl | jrieke | wilhelm | oliversu @upb.de
URL: http://www.cs.uni-paderborn.de/fachgebiete/fachgebiet-softwaretechnik.html
Dr. Matthias Tichy
Software Engineering Division, Chalmers University of Technology and University of Gothenburg, Sweden
Phone: +46-(0)31-772 6031
Email: tichy@chalmers.se
McAfee Product Security Practices
14 March 2022
# Table of Contents
- Importance of Security
- Software Development Lifecycle (SDLC) at McAfee
- Development Methodologies
- Security Development Lifecycle (SDL)
- SDL.O2 High-Level SDL
- SDL.T2.1 Security Architecture Review
- SDL.T2.2 Security Design Review
- SDL.T3 Threat Modeling
- SDL.T4 Privacy and Data Protection Review
- SDL.O7 Security Training
- SDL.O4 Software Security Architects
- SDL.T6 "Trust and Verify"
- SDL.O5 Complimentary Security Testing
- SDL.O6 McAfee Policies
- SDL.O5 Software Security Tools
- SDL.O9.1 Product Security Maturity Model
- SDL.O3 Vulnerability Response
- Disclaimer
- Points of Contact
- Glossary
- Revision History
Importance of Security
At McAfee, LLC we take product security very seriously. Our practices include designing for both security and privacy in product software, IT applications, and cloud services. We have rigorous software security policies and processes designed to proactively find and remove software security defects such as security vulnerabilities. We understand that our products, IT applications, and cloud services must not only fulfill their stated function of helping protect our customers; the McAfee software itself must also protect itself from vulnerabilities and attackers. McAfee strives to build software that demonstrates resilience against attacks.
We also understand that our customers may, from time to time, wish to review our software security practices so that they may make their own risk-based decisions on how best to use our products and to fulfill any due diligence responsibilities they may have.
Specific policies and practices can vary by product. The summary of practices described in this statement applies to all McAfee-branded products as well as customer-facing IT and web applications.
Software Development Lifecycle (SDLC) at McAfee
All of McAfee’s software is developed using the Agile or Continuous Integration / Continuous Delivery (CI/CD) methodology. These agile and CI/CD practices are referred to as the Agile Software Development Lifecycle (SDLC). The Waterfall methodology is no longer used within McAfee. At McAfee, the SDLC is referred to internally as the Product Lifecycle Framework (PLF) v2.
Development Methodologies
The chart below was developed for a traditional Waterfall SDLC. This chart has been adapted and redefined for McAfee’s Agile SDL, which includes CI/CD. Security and privacy tasks are integrated into McAfee’s SDL as a seamless, holistic process designed to produce software that has appropriate security and privacy built into it.
### SDLC Phases
<table>
<thead>
<tr>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Concept</td>
<td>Planning</td>
<td>Design & Development</td>
<td>Readiness</td>
<td>Release & Launch</td>
<td>Support & Sustain</td>
</tr>
</tbody>
</table>
### Security Assessment
- Product security team is looped in early (Product Security Group & Product Security Champions)
- Product security team hosts a discovery meeting
- Product security team creates an S-PLF project plan (states what further work will be done)
- Product team initiates a Privacy Impact Assessment (PIA)
### Architecture
- S1 Security Plan
- S-PLF policy assessment & scoping
- Threat modeling / architecture security analysis
- Privacy information gathering and analysis
### Design & Development
- S2 Security Plan
- Security test plan composition
- Static analysis
- Threat model updating
- Design security analysis & review
- Privacy implementation assessment
### Ship
- S3 Security Plan
- Security test case execution
- Static analysis
- Dynamic analysis
- Fuzz testing
- Manual code review
- Privacy validation and remediation
### Post-Release, Legacy, & M&A
- S4 Security Plan
- Final security review
- Vulnerability scan
- Penetration test
- Open source licensing review
- Final privacy review
- External vulnerability disclosure response (FSIRT)
- Reviews by service contractors
- Post-release certifications
- Internal review for new product combinations or cloud deployment
- Security architectural reviews & tool-based assessments of legacy and M&A products
While the following description may appear to apply only to Waterfall development, the same set of security tasks are performed across the iterations of Agile just as they are performed in discrete phases during Waterfall. For CI/CD, SDL activities are determined by certain triggers which are set by milestones, events, and time intervals. McAfee encourages full engagement by software security architects and engineers within Agile sprints to ensure that security and privacy are integral parts of the Agile process.
### Security Development Lifecycle (SDL)
In line with IT and application development industry standards such as ISO/IEC 27001, 27002, and 27034, BSIMM, and SAFECode, McAfee software development has processes designed to adhere to a Security Development Lifecycle (SDL).
McAfee’s SDL covers the technical, operational, and enterprise aspects of building secure software. The SDL technical activities defined for each product, IT application, or cloud services release are the focus of this document.
Technical SDL Activities (Engineering)
- SDL.T1 Security Definition of Done (DoD) (security To Do list before shipping)
- SDL.T2 Security Architecture & Design Reviews
- SDL.T3 Threat Modeling
- SDL.T4 Privacy & Data Protection Review
- SDL.T5 Secure Coding Standards (includes cryptography)
- SDL.T6 Manual Code Review
- SDL.T7 Open Source & 3rd Party Libraries
- SDL.T8 Vendor Management (includes software legal compliance)
- SDL.T9 Static Security Testing (SAST)
- SDL.T10 Interactive Security Testing (IAST)
- SDL.T11 Dynamic Security Testing (DAST) (includes Web Application scanning)
- SDL.T12 Fuzz Testing
- SDL.T13 Vulnerability Scan
- SDL.T14 Penetration Testing
- SDL.T15 Security Testing & Validation
- SDL.T16 Operating Environment (includes public cloud services)
Not all of the 16 technical SDL activities are mandatory for each product release; some are conditionally required. The SDL.T1 Security Definition of Done (DoD) lists which activities are required for each release and is owned by the SSAs. Several activities are mandatory for every release, such as the Security DoD (T1), Privacy Review (T4), Manual Code Review (T6), and SAST (T9).
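The DoD gating described above can be sketched as a simple set check. This is our illustration of the idea, not McAfee's internal tooling; the always-mandatory set (T1, T4, T6, T9) comes directly from the text, while the function and data shapes are assumptions.

```python
# Sketch of a Security Definition of Done (DoD) gate.
# Per the text, T1, T4, T6, and T9 are required for every release;
# other activities are conditionally required per release.

MANDATORY = {"SDL.T1", "SDL.T4", "SDL.T6", "SDL.T9"}

def dod_satisfied(required_for_release, completed):
    """A release passes its DoD when every required activity
    (the always-mandatory set plus any release-specific ones)
    has been completed."""
    required = MANDATORY | set(required_for_release)
    return required <= set(completed)

# Hypothetical example: a release that also requires fuzz testing (T12).
print(dod_satisfied({"SDL.T12"},
                    {"SDL.T1", "SDL.T4", "SDL.T6", "SDL.T9", "SDL.T12"}))  # True
```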
Operational SDL Activities (InfoSec)
- SDL.O1 Program
- SDL.O2 Security Development Lifecycle (SDL)
- SDL.O3 Vulnerability Response (PSIRT/ASIRT)
- SDL.O4 People & Resources
- SDL.O5 Tools & Services
- SDL.O6 Policy & Compliance
- SDL.O7 Security Training
- SDL.O8 Metrics & Reporting
- SDL.O9 Maturity Models
Enterprise SDL Activities (IT)
- SDL.E1 Vulnerability Management
- SDL.E2 Risk Management
- SDL.E3 Asset Management
- SDL.E4 Remediation Management
- SDL.E5 Exception Management
- SDL.E6 Security Monitoring
- SDL.E7 Certifications
The following paragraphs describe, at a high level, the McAfee SDL process.
SDL.O2 High-Level SDL
For a new product, the security process typically begins at project initiation. A seasoned security architect, McAfee Software Security Architect (SSA), or Software Security Engineer (SSE) assesses a proposal for its security implications. The output of this engagement is any additional security features that will be added to software self-protection so that the software can be deployed across the different security postures of McAfee’s customers.
SDL.T2.1 Security Architecture Review
Any project that involves a change to the architecture of the product is required to go through a security architecture and design review. The proposed architectural and design changes are analyzed for security requirements, as well as analyzed within the whole of the architecture of the software for each change’s security implications. An architecture review may be a discrete event, may be accomplished iteratively as the architecture progresses (Agile), or may be updated continuously (CI/CD).
SDL.T2.2 Security Design Review
The SDL requires that designs that contain security features or effects are reviewed to make sure that security requirements are built correctly. The SSA signs off when the design meets expectations. All functional items, including security design elements, are included in the thorough functional test plan. Like architectural reviews, a design review may be a discrete event or may be accomplished iteratively when design work occurs (Agile or CI/CD).
SDL.T3 Threat Modeling
A threat model is created or updated. The output of this analysis will typically be the security requirements that must be folded into the design that will be implemented.
SDL.T4 Privacy and Data Protection Review
In tandem with architecture and design reviews, privacy and data protection reviews are conducted. A Privacy Impact Assessment (PIA) is performed to determine if any additional privacy activities are required to protect personal data. Privacy reviews cover the whole lifecycle of personal data and often extend beyond the product collecting the data and include backend systems and infrastructure.
SDL.O7 Security Training
At McAfee, we foster industry-standard secure coding practices. To that end, McAfee University and our McAfee Learning Management System (LMS) contain many courses on building software securely. Some are home-grown by internal subject matter experts, while others are purchased from third-party vendors. Developers are expected to pursue ongoing developer education. Self-training is encouraged.
SDL.O4 Software Security Architects
Software Security Architects (SSA) and Software Security Engineers (SSE) are assigned to each product line and IT application. Our 120+ SSAs and SSEs perform the SDL activities and help to confirm that every part of the software security process is applied appropriately.
Software Security Architect/Engineer Qualifications
1. A minimum of 3-5 years software development experience
2. A passion for or background in software security
3. Approved by the BU Engineering VP/Sr. Director & SSA BU Lead
4. Dedicate a minimum of 20% of their time doing software security tasks
5. Time to be trained in software security, reviews, tools, and processes
6. Be collocated within each engineering team / BU
7. Must not only know how to develop (build) software but also know how to deconstruct it (take it apart) while “thinking like a hacker”
SDL.T6 “Trust and Verify”
Alongside each developer’s responsibility to produce secure code, McAfee has a “trust and verify” attitude. All new code must go through a manual code review. For non-sensitive and non-critical functions, this code may go through peer review alone. Critical and sensitive changes are also reviewed by staff with a sufficient level of expertise to assess such changes.
Making use of overlapping complementary approaches, we employ several tools and automation to find security defects that may slip through manual code review. All code must be statically analyzed (unless no static analyzer exists for the language or environment). All web code is expected to undergo a web vulnerability scan. Other forms of input are routinely fuzz tested. Medium, high, and critical severity issues must be fixed before release. Low severity issues are prioritized then usually fixed or mitigated in future patches and product releases.
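The gating rule in the paragraph above (medium, high, and critical findings must be fixed before release; low-severity findings may be deferred) can be sketched as follows. The function and the finding identifiers are hypothetical illustrations, not McAfee's actual release tooling.

```python
# Sketch of the pre-release severity gate described above:
# medium, high, and critical findings block the release;
# low-severity findings are deferred for later prioritization.

BLOCKING = {"medium", "high", "critical"}

def release_gate(findings):
    """findings: list of (finding_id, severity) pairs still unfixed.
    Returns (ok_to_release, deferred_low_severity_ids)."""
    blockers = [fid for fid, sev in findings if sev in BLOCKING]
    deferred = [fid for fid, sev in findings if sev == "low"]
    return (not blockers, deferred)

# Hypothetical finding IDs for illustration only.
ok, deferred = release_gate([("finding-1", "low"), ("finding-2", "high")])
print(ok, deferred)  # False ['finding-1']
```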
SDL.O5 Complementary Security Testing
Critical customer-premise releases may additionally be put through a third-party penetration analysis on a case-by-case basis before release. All hosted systems are routinely vulnerability scanned and penetration tested by either our Information Security (InfoSec) department or by a third-party engagement.
We believe that the preceding is a solid plan in line with industry standards and best practices. Since no computer system can be absolutely secure, McAfee does not claim that the SDL will prevent any particular issue or any collection of issues. McAfee reevaluates and updates its SDL policies and process on a regular basis.
SDL.O6 McAfee Policies
McAfee believes that customer relations are best served through open, transparent dialog. We encourage customer engagement, including requests about our software security process.
There are some limitations as to what we may share. For instance, we never share our source code outside of McAfee’s direct control. Also, we never make available the list of vulnerabilities that are found as a result of our own internal investigations or from any of our automated testing tools. After internally discovered vulnerabilities have been addressed in a hotfix, patch or new product release, all medium and high severity issues are documented in product release notes and in security advisories.
It is important to note that any scan of McAfee’s production systems will be considered an attack. Response to perceived attack will be rapid and decisive. Please coordinate your needs with your account manager. Availability of test systems is subject to customer need, customer cost, and timing.
SDL.O5 Software Security Tools
McAfee engineering teams apply an appropriate combination of tools depending upon the target programming language, architecture, and the execution run-time. These tools are a combination of internally developed, vendor purchased, and open source tools. We may provide a list of utilized tools upon request. For reference, we use many of the security tools listed in OWASP’s Security Testing Tools list.
SDL.O9.1 Product Security Maturity Model
The SDL describes the “what” of software security. McAfee’s Product Security Maturity Model describes the “how well” of software security.
For each SDL activity, the PSMM describes 5 different levels from 0-4. These levels are:
- Level 0: None
- Level 1: Minimal [Initial]
- Level 2: Good [Basic]
- Level 3: Better [Acceptable]
- Level 4: Best [Mature]
With 16 technical activities and 9 operational activities, a perfect score is 100. McAfee software development teams assess their products annually using the PSMM. This allows us to focus our efforts on what each particular product needs the most, while measuring the overall maturity of each product line, engineering BU, and the company as a whole.
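The scoring arithmetic above can be checked in a few lines: 16 technical plus 9 operational activities, each rated 0-4, gives a maximum of 25 × 4 = 100. This is a sketch of the arithmetic only, not McAfee's assessment tooling.

```python
# PSMM scoring sketch: 16 technical + 9 operational activities,
# each rated on the 0-4 maturity scale, so a perfect score is 100.

def psmm_score(levels):
    """levels: iterable of per-activity maturity levels (0-4)."""
    levels = list(levels)
    assert len(levels) == 16 + 9, "expected 25 SDL activities"
    assert all(0 <= lvl <= 4 for lvl in levels)
    return sum(levels)

print(psmm_score([4] * 25))  # a perfect assessment scores 100
```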

SDL.O3 Vulnerability Response
To handle vulnerabilities discovered in shipping McAfee products and live customer-facing applications, McAfee has a Vulnerability Response Team. This team consists of both PSIRT and ASIRT. The Product Security Incident Response Team (PSIRT) responds to product vulnerabilities in shipping products. They work with the discoverer and engineering to develop and deliver a patch and accompanying security bulletin. The vulnerability’s severity (CVSS base-score) and business risk factors determine our fix response time (SLA). Similar to PSIRT, the Application Security Incident Response Team (ASIRT) responds to IT application and cloud services vulnerabilities in both externally and internally facing IT applications.
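For reference, the standard CVSS v3.x qualitative ratings map base scores to severity bands as sketched below. The band boundaries follow the public FIRST.org specification; the SLA durations shown are purely illustrative placeholders, not McAfee's actual response times.

```python
# CVSS v3.x qualitative severity ratings (per the FIRST.org spec):
#   0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.

def cvss_severity(base_score):
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base score must be in [0, 10]")
    if base_score == 0.0:
        return "none"
    if base_score < 4.0:
        return "low"
    if base_score < 7.0:
        return "medium"
    if base_score < 9.0:
        return "high"
    return "critical"

# HYPOTHETICAL fix-time SLAs in days, for illustration only.
EXAMPLE_SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

print(cvss_severity(9.8))  # critical
```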
Disclaimer
No computer system can be absolutely secure. McAfee makes no warranty concerning any malfunctions or other errors in its hardware products or software products caused by viruses, infections, worms, or similar malicious code not developed or introduced by McAfee. McAfee makes no warranty that any hardware products or software products will protect against all possible security threats, including intentional misconduct by third parties. McAfee is not liable for any downtime or service interruption, for any lost or stolen data or systems, or for any other damages arising out of or relating to any such actions or intrusions.
Points of Contact
• Meredith Stickle, Director – Information Security, McAfee LLC
Glossary
<table>
<thead>
<tr>
<th>Acronym</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SDL</td>
<td>Security Development Lifecycle. A secure software development methodology that condenses the traditional waterfall delivery cycles into weeks instead of months. Used by all McAfee software development teams.</td>
</tr>
<tr>
<td>ASIRT</td>
<td>Application Security Incident Response Team. Part of the Vulnerability Response team within McAfee that responds to IT application and cloud services vulnerabilities in both externally and internally facing IT applications.</td>
</tr>
<tr>
<td>CI/CD</td>
<td>Continuous Integration / Continuous Delivery. An approach that releases software updates more frequently than Agile as more products become cloud-native.</td>
</tr>
<tr>
<td>DAST</td>
<td>Dynamic Analysis Security Testing. Run-time code review using automated tools.</td>
</tr>
<tr>
<td>GDPR</td>
<td>General Data Protection Regulation. The EU’s privacy regulation, effective 25 May 2018.</td>
</tr>
<tr>
<td>IAST</td>
<td>Interactive Analysis Security Testing. A form of application security testing that stems from a combination of dynamic application security testing (DAST) and runtime application self-protection (RASP) technologies.</td>
</tr>
<tr>
<td>PIA</td>
<td>Privacy Impact Assessment. A privacy review conducted on all products to determine if additional privacy activities are required before a product is released.</td>
</tr>
<tr>
<td>PLF</td>
<td>Product Lifecycle Framework. McAfee’s SDLC.</td>
</tr>
<tr>
<td>PSI</td>
<td>Potentially Shippable Increment. An Agile term meaning that each unit produced from a series of Sprints reaches a quality of completion. A governance checkpoint determines each release; PSCs participate in release decisions. There is no mandate to release a PSI.</td>
</tr>
<tr>
<td>PSIRT</td>
<td>Product Security Incident Response Team. Part of the Vulnerability Response team within McAfee that responds to product vulnerabilities in shipping products.</td>
</tr>
<tr>
<td>PSMM</td>
<td>Product Security Maturity Model. Measures how well each SDL activity is being performed.</td>
</tr>
<tr>
<td>SAST</td>
<td>Static Analysis Security Testing. Source code review using automated tools.</td>
</tr>
<tr>
<td>SDL</td>
<td>Security Development Lifecycle. The security aspects of an SDLC.</td>
</tr>
<tr>
<td>SDLC</td>
<td>Software Development Lifecycle. Describes the processes, activities, and deliverables for developing, testing, and shipping software.</td>
</tr>
<tr>
<td>SSA</td>
<td>Software Security Architect. A senior security architect within McAfee responsible for all security-related activities for a given product line.</td>
</tr>
<tr>
<td>SSE</td>
<td>Software Security Engineer. A security engineer within McAfee responsible for all security-related activities for a given product line. SSEs are typically not as experienced as SSAs.</td>
</tr>
</tbody>
</table>
## Revision History
<table>
<thead>
<tr>
<th>Name</th>
<th>Version</th>
<th>Change Description</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Brook Schoenfield</td>
<td>1</td>
<td>Initial Draft</td>
<td>7 Aug 2014</td>
</tr>
<tr>
<td>Brook Schoenfield</td>
<td>2</td>
<td>Minor content updates.</td>
<td>28 Aug 2014</td>
</tr>
<tr>
<td>Brook Schoenfield</td>
<td>3</td>
<td>Minor content updates.</td>
<td>28 Aug 2014</td>
</tr>
<tr>
<td>Brook Schoenfield</td>
<td>4</td>
<td>Six-month review. Reformatted document.</td>
<td>9 Dec 2014</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>5</td>
<td>Six-month review. Rebranded from McAfee Inc. to Intel Security.</td>
<td>6 Apr 2015</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>9</td>
<td>Add Points of Contact</td>
<td>18 May 2015</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>10</td>
<td>Updated Software Security Tool List.</td>
<td>22 May 2015</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>14</td>
<td>Annual review. Added Glossary.</td>
<td>2 Feb 2016</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>15</td>
<td>Six-month review. Updated Agile SDL Activities list.</td>
<td>25 July 2016</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>16</td>
<td>Six-month review. Rebranded from Intel Security to McAfee, LLC. Removed Software from the title.</td>
<td>28 Mar 2017</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>17</td>
<td>Minor updates.</td>
<td>28 Apr 2017</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>18</td>
<td>Six-month review. Removed terms from the glossary.</td>
<td>12 Oct 2017</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>19</td>
<td>Six-month review. Renamed PSCs to SSAs. Added Revision History table for FedRAMP. Renamed title from Product to Software to include IT Application security. Added all SDL Activities.</td>
<td>2 Apr 2018</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>20</td>
<td>Added Application and Enterprise SDL activities. Removed James Ransome who left McAfee in June 2018.</td>
<td>5 July 2018</td>
</tr>
<tr>
<td>Harold Toomey</td>
<td>21</td>
<td>Renamed Agile SDL to McAfee SDL. Updated SDL activities list.</td>
<td>1 Nov 2018</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>22</td>
<td>Updated points of contact</td>
<td>1 June 2019</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>23</td>
<td>Six-month review. Minor content updates</td>
<td>18 Feb 2020</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>24</td>
<td>Minor content update</td>
<td>2020</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>25</td>
<td>Bi-annual content review</td>
<td>29 June 2020</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>26</td>
<td>Renew expired links. Minor content update.</td>
<td>6 August 2020</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>27</td>
<td>Bi-annual content review</td>
<td>2 Feb 2021</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>28</td>
<td>Bi-annual content review</td>
<td>1 July 2021</td>
</tr>
<tr>
<td>Matt Valdes</td>
<td>29</td>
<td>Update resource links and contact information.</td>
<td>14 March 2022</td>
</tr>
</tbody>
</table>
A Environment Details
Here we provide detailed descriptions of each of the experiment environments. See (§6) for high-level descriptions and the accompanying code for implementations.
A.1 Cover Environment Details
- **Types:**
- The block type has features height, width, x, y, grasp.
- The target type has features width, x.
- The gripper type has features x, y, grip, holding.
- The allowed-region type, which is used to determine whether picking or placing at certain positions is allowed, has features lower-bound-x, upper-bound-x.
- **Action space:** \( \mathbb{R}^3 \). An action \((dx, dy, dgrip)\) is a delta on the gripper.
- **Predicates:** Covers, HandEmpty, Holding, IsBlock, IsTarget.
- **Contact-related predicates:** Covers, HandEmpty, Holding.
- **Notes:** This extends the environment of [12, 13, 21] to make the robot move in two dimensions. In the previous work, the environment was referred to as PickPlace1D.
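The typed state description above can be sketched as plain records; the feature names come from the list above (with hyphens mapped to underscores), while the dataclass layout itself is our illustration and may differ from the accompanying code.

```python
from dataclasses import dataclass

# Sketch of the Cover environment's object types, using the
# feature names listed above (hyphens replaced by underscores).

@dataclass
class Block:
    height: float
    width: float
    x: float
    y: float
    grasp: float

@dataclass
class Target:
    width: float
    x: float

@dataclass
class Gripper:
    x: float
    y: float
    grip: float
    holding: float

@dataclass
class AllowedRegion:
    lower_bound_x: float
    upper_bound_x: float

# An action is a 3-vector (dx, dy, dgrip) applied as a delta on the gripper.
action = (0.1, -0.05, 1.0)
```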
A.2 Doors Environment Details
- **Types:**
- The robot type has features x, y.
- The door type has features x, y, theta, mass, friction, rotation, target, is-open.
- The room type has features x, y.
- The obstacle type has features x, y, width, height, theta.
- **Action space:** \( \mathbb{R}^3 \). An action \((dx, dy, drotation)\) is a delta on the robot. The rotation acts on the door handle when the robot is close enough to the door.
- **Predicates:** InRoom, InDoorway, InMainRoom, TouchingDoor, DoorIsOpen, DoorInRoom, DoorsShareRoom.
- **Contact-related predicates:** TouchingDoor, InRoom.
- **Notes:** The rotation required to open the door is a complicated function of the door features.
A.3 Stick Button Environment Details
- **Types:**
- The gripper type has features x, y.
- The button type has features x, y, pressed.
- The stick type has features x, y, held.
- The holder type has features x, y.
- **Action space:** \( \mathbb{R}^3 \). An action \((dx, dy, z\text{-force})\) is a delta on the gripper and a force in the z direction (see notes below).
- **Predicates:** Pressed, RobotAboveButton, StickAboveButton, AboveNoButton, Grasped, HandEmpty.
- **Contact-related predicates:** Grasped, Pressed.
- **Notes:** Picking and pressing succeed when (1) the \(z\text{-force}\) action exceeds a threshold; (2) there are no collisions; and (3) when the respective objects are close enough in \(x, y\) space.
A.4 Coffee Environment Details
- **Types:**
- The gripper type has features x, y, z, tilt-angle, wrist-angle, fingers.
- The pot type has features x, y, rotation, is-held, is-hot.
- The plate type has feature is-on.
- The cup type has features x, y, liquid-capacity, liquid-target, current-liquid.
- **Action space:** \( \mathbb{R}^6 \). An action \((dx, dy, dz, dtilt, dwrist, dfingers)\) is a delta on the gripper.
B Approach Details
Here we provide detailed descriptions of each approach evaluated in the experiments. See (§6) for high-level descriptions and the accompanying code for implementations.
B.1 Bilevel Planning with Neuro-Symbolic Skills (BPNS)
BPNS is our main approach, as described in the main paper.
**Planning:** The number of abstract plans \( N_{\text{abstract}} = 8 \) for Cover and Doors, and \( N_{\text{abstract}} = 1000 \) for Coffee and Stick Button. We would not expect performance to substantially improve for Cover or Doors with a larger \( N_{\text{abstract}} \), since we know that the first abstract plan is generally refinable in these environments; the smaller number was selected for the sake of experiments finishing faster. The number of samples per step \( N_{\text{samples}} = 10 \) for all environments.
**Operator Learning:** Operators whose skill datasets comprise less than 1% of the overall number of segments are filtered out. This filtering is helpful to speed up learning and planning in cases where there are rare effects or simulation noise in the demonstrations.
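The 1% rule amounts to a simple filter over the per-skill segment datasets; the dict layout here is illustrative, not the actual data structure:

```python
def filter_low_data_skills(skill_datasets, min_fraction=0.01):
    """Keep only skills whose segment count is at least min_fraction
    of all segments across skills."""
    total = sum(len(segments) for segments in skill_datasets.values())
    return {skill: segments for skill, segments in skill_datasets.items()
            if len(segments) / total >= min_fraction}
```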
**Policy Learning:** Policies are fully-connected neural networks with two hidden layers of size 32. Models are trained with Adam for 10,000 epochs with a learning rate of \( 1 \times 10^{-3} \) using MSE loss.
**Sampler Learning:** Following Chitnis et al. [13], each sampler consists of two neural networks: a generator and a discriminator. The generator outputs the mean and diagonal covariance of a Gaussian, using an exponential linear unit (ELU) to ensure a positive semi-definite covariance. The generator is a fully-connected neural network with two hidden layers of size 32, trained with Adam for 50,000 epochs with a learning rate of \( 1 \times 10^{-3} \) using Gaussian negative log likelihood loss. The discriminator is a binary classifier of samples output by the generator. Negative examples for the discriminator are collected from other skill datasets. The classifier is a fully-connected neural network with two hidden layers of size 32, trained with Adam for 10,000 epochs with a learning rate of \( 1 \times 10^{-3} \) using binary cross entropy loss. During planning, the generator is rejection sampled using the discriminator for up to 100 tries, after which the last sample is returned.
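At planning time, the generator/discriminator interaction amounts to rejection sampling with a fallback. This sketch abstracts both learned networks behind plain callables; the Gaussian stand-ins below are illustrative only.

```python
import random

def rejection_sample(generator, discriminator, max_tries=100):
    """Draw from the generator until the discriminator accepts a sample;
    after max_tries rejections, return the last sample anyway."""
    sample = None
    for _ in range(max_tries):
        sample = generator()
        if discriminator(sample):
            return sample
    return sample

# Illustrative stand-ins for the learned Gaussian generator and classifier.
rng = random.Random(0)
generator = lambda: rng.gauss(0.0, 1.0)
discriminator = lambda s: s > 0.0
```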
B.2 BPNS No Subgoal
BPNS No Subgoal is a variation of BPNS that does not use subgoal parameterization.
**Planning:** \( N_{\text{abstract}} \) is the same as BPNS and \( N_{\text{samples}} \) is not applicable.
**Learning:** Operator learning is the same as BPNS. For policy learning, for each input \( x \circ y \) in the training data, where \( y \) is the subgoal, we use \( x \) instead, i.e., the state alone. No samplers are learned.
B.3 Graph Neural Network Metacontroller (GNN Meta)
GNN Meta is a mapping from state, abstract state, and goal to a ground skill. This baseline offers a learning-based alternative to AI planning in the outer loop of bilevel planning.
**Planning:** Repeat until the goal is reached: query the model on the current state, abstract state, and goal to get a ground skill. Invoke the ground skill’s sampler up to 100 times to find a subgoal that leads to the abstract successor state predicted by the skill’s operator. If successful, simulate the state forward; otherwise, terminate with failure.
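That loop can be sketched as follows; the skill interface (`predicted_successor`, `sample_subgoal`) and the `abstract_fn`, `simulate`, and `goal_reached` callables are illustrative names, not the actual API:

```python
def gnn_meta_execute(state, goal, metacontroller, abstract_fn, simulate,
                     goal_reached, max_steps=50, max_sampler_tries=100):
    """Query the metacontroller for a skill, search for a subgoal whose
    abstract state matches the skill operator's prediction, then simulate."""
    for _ in range(max_steps):
        ab = abstract_fn(state)
        if goal_reached(ab, goal):
            return True
        skill = metacontroller(state, ab, goal)
        target = skill.predicted_successor(ab)  # from the skill's operator
        for _ in range(max_sampler_tries):
            subgoal = skill.sample_subgoal(state)
            if abstract_fn(subgoal) == target:
                break
        else:
            return False  # no subgoal achieved the predicted abstract successor
        state = simulate(state, skill, subgoal)
    return goal_reached(abstract_fn(state), goal)
```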
**Learning:** Skill learning is identical to BPNS. This approach additionally learns a metacontroller in the form of a GNN. Following the baselines presented in prior work [13], the GNN is a standard encode-process-decode architecture with 3 message passing steps. Node and edge modules are fully-connected neural networks with two hidden layers of size 16. We follow the method of Chitnis et al. [13] for encoding object-centric states, abstract states, and goals into graph inputs. To get graph outputs, we use node features to identify the object arguments for the skill and a global node with a one-hot vector to identify the skill identity. The models are trained with Adam for 1000 epochs with a learning rate of \( 1 \times 10^{-3} \) and batch size 128 using MSE loss.
B.4 GNN Meta No Subgoal
GNN Meta No Subgoal is the same as GNN Meta, but with skills learned via BPNS No Subgoal.
B.5 GNN Behavioral Cloning (GNN BC)
GNN BC is a mapping from states, abstract states, and goals directly to actions. This approach is model-free; at evaluation time, it is queried at each state, and the returned action is executed in the environment. The GNN architecture and training are identical to GNN Meta, except that output graphs consist only of a global node, which holds the fixed-dimensional action vector.
B.6 Samples=1
This ablation is identical to BPNS, except with \( N_{\text{samples}} = 1 \) during planning.
B.7 Abstract Plans=1
This ablation is identical to BPNS, except with \( N_{\text{abstract}} = 1 \) during planning.
C Additional Results
Here we present additional results to supplement the main results in §6.
C.1 Planning Time Analysis
Figure 5 reports evaluation task success rate as a function of planning time for BPNS, Samples=1, and Abstract Plans=1. In Cover and Doors, performance peaks within the first few seconds of wall-clock time. This is consistent with our finding that the first abstract plan is generally refinable in these two environments. In Stick Button and Coffee, performance increases more gradually for BPNS over time. Furthermore, given a small time budget, the Samples=1 ablation sometimes solves more evaluation tasks than BPNS. In these two environments, the first abstract plan is typically not refinable; BPNS exhaustively attempts to sample that abstract plan and others before arriving at a refinable abstract plan. With Samples=1, the unrefinable abstract plans are quickly discarded after one sampling attempt. The gap that later emerges between BPNS and Samples=1 is due to tasks where one sampling attempt of a refinable abstract plan is not enough. This trend is fairly specific to the details of our bilevel planning implementation. Other search-then-sample TAMP techniques that do not exhaustively sample abstract plans before moving onto the next one may converge faster [1].
C.2 GNN Meta Additional Analysis
We were initially surprised by the poor performance of GNN Meta in Stick Button and Coffee, given that the target functions are intuitively straightforward. In Stick Button, the model should be able to use the positions of the buttons in the low-level state to determine whether they can be directly reached, or if the stick should be used instead. In Coffee, the model should use the rotation of the pot to determine whether it needs to be twisted before being picked up.
Figure 6: **GNN Meta additional results.** Task success rates for the GNN Meta baseline on tasks with more objects (Eval) or the same number of objects (Train) as seen during training. The gap suggests that the poor performance of GNN Meta in the main results (Figure 4) is largely attributable to a failure to generalize over object count. All results are over 10 random seeds. Lines are means and error bars are standard deviations. Note that in Cover and Doors, the distribution of object number is the same between train and evaluation.
C.3 Comparison to [12, 13]
Here we elaborate on the relationship between this work and our prior work [12, 13]. In brief, [13] extends [12] by removing the assumption that samplers are given. Our work here extends [13] by removing the assumptions that high-level controllers are given (e.g., pick, place, move), and that demonstration data is provided in terms of those high-level controllers.
Removing the assumption that high-level controllers are given requires several nontrivial steps. First, in [12, 13], the demonstration data is given in terms of high-level controllers: each transition corresponds to the execution of an entire controller, and the controller identity is known. This setting makes operator learning much easier because each transition corresponds to exactly one operator. The controller identity is also used to make operator learning easier. In contrast, we are given demonstration data where the actions are low-level (e.g., end effector positions of the robot), and we seek to learn operators that correspond to the execution of many low-level actions in sequence. This necessitates segmentation. Second, because the controllers are fully defined in the previous work, including their continuous parameterizations, it is straightforward to set up the sampler learning problems. In contrast, we have no such continuous controller parameterization given to us in this work. One of our main insights is that subgoal states can be used as the basis for continuous parameterization. This insight follows from KD1 and has the benefit that we can automatically derive targets for learning from the demonstration data. Finally, we must learn the controllers themselves. In the previous work, these controllers were hardcoded.
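The segmentation step can be sketched as splitting a demonstration wherever the abstract state changes; this is a minimal sketch, and the actual pipeline operates on richer trajectory objects:

```python
def segment_by_abstract_change(states, actions, abstract_fn):
    """Split a low-level demonstration into segments at abstract state changes.

    Assumes len(states) == len(actions) + 1. Each segment is a run of
    consecutive transitions with a constant abstract state, ending at
    (and including) the transition that changes it."""
    segments, current = [], []
    for t in range(len(actions)):
        current.append((states[t], actions[t]))
        if abstract_fn(states[t + 1]) != abstract_fn(states[t]):
            segments.append(current)
            current = []
    if current:  # trailing transitions with no abstract change
        segments.append(current)
    return segments
```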
With these differences in mind, to motivate our work, we designed a version of our approach that ablates policy learning, and can be seen as an application of [13]. This ablation works as follows:
- For each transition in the demonstration data, we create one (single-step) segment.
- Partitioning and lifting are unmodified. Operator learning is also unmodified.
- Instead of policy learning, we create a “pass-through” policy architecture that consumes a continuous action (instead of a subgoal) and returns that action.
- Sampler learning is unmodified, except rather than sampling subgoals, we sample actions to give to the pass-through policy, consistent with the sampler learning of [13].
Another way to understand this ablation is that we are applying the method of [13] but supposing that the given action space consists of a single parameterized controller, with the parameter space equal to the low-level action space. We ran this ablation in the Cover environment with 1000 demonstrations and hyperparameters unchanged from our main experiments. Results are shown in the table on the right, with the numerical entries representing means (standard deviations) over 10 seeds.
Qualitatively, we see that the learned skills are the same between the two approaches, except that the ablation learns an additional skill with empty operator preconditions and effects. This additional skill is important and provides insight into the discrepancy between the two approaches. In the demonstration data, the majority of transitions do not correspond to any change in the abstract state. For example, as the robot moves in preparation for a pick or place, the abstract state is constant. These transitions lead to the empty operator effects. The preconditions are empty because there is nothing in common between the cases where no abstract effect occurs — sometimes the robot is holding an object, and sometimes it is not. This empty skill makes abstract planning very difficult because there is no signal for the planner to realize when using the skill would bring the robot closer to the goal. Note, though, that this empty skill is needed for planning, and simply removing it would make performance even worse; the remaining two skills can only handle the single step immediately preceding the respective effects (i.e., the step where an object is picked or placed).
C.4 Learning from Human Demonstrations
We ran an additional experiment with human demonstrations. To collect the demonstrations, we created a graphical user interface (GUI) and a simplified version of the Stick Button environment. The GUI is shown in Figure 7. Here the robot is a red circle, the buttons are yellow circles, the stick is a brown rectangle, the stick holder is a gray rectangle, and all buttons outside the green region are unreachable by the robot. Clicking a point on the screen initiates a translational robot movement, with the magnitude clipped so that the robot can only move so far in one action. Pressing any key initiates a grasp of the stick if the robot is close enough, or a press of the button if either the robot or stick head is close enough.
We used the GUI to collect 994 human demonstrations. We then ran BPNS with hyperparameters identical to the main results. Results are shown in Table 1. We find that the number of evaluation tasks solved by the approach trained on human demonstrations is slightly below that of automated demonstrations. This small gap can be attributed to noise, suboptimality, and inconsistencies in the human demonstrations. Overall, the strong performance of BPNS on human demonstrations suggests that the approach can be scaled.
Demos | % Tasks Solved | Test Time (s) | Nodes Created
---|---|---|---
Automated | 83.60 | 51.80 | 421.51
Human | 80.00 | 40.07 | 430.93

Table 1: Human demonstration results in Stick Button.

C.5 Impact of Irrelevant Predicates

We conducted three additional experiments in the Cover environment to investigate the influence of irrelevant predicates. We used 1000 demonstrations, with all hyperparameters the same as in the main results.
First, we added a variable number of static predicates, i.e., predicates whose evaluation is always True or False for an object regardless of the low-level state. Second, we added a variable number of dynamic (i.e., not static) predicates. Concretely, we created predicates that randomly threshold the \( y \) position of the robot. Third, in an attempt to create a setting that is adversarially bad for our framework, we added a variable number of random predicates, where the evaluation of each predicate is completely random on each input, with 50% probability of being True and without regard for the low-level state.
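The three distractor constructions can be sketched as follows; the state layout (a dict with a `robot_y` key) is illustrative only:

```python
import random

def make_static_predicate(value):
    """Always returns the same truth value, regardless of state."""
    return lambda state, obj: value

def make_dynamic_predicate(threshold):
    """Thresholds the robot's y position (the dynamic distractors above)."""
    return lambda state, obj: state["robot_y"] > threshold

def make_random_predicate(rng):
    """True with probability 0.5, ignoring the low-level state entirely."""
    return lambda state, obj: rng.random() < 0.5
```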
Results for each of the three experiments are shown in Figure 8. Static predicates have no apparent impact on evaluation performance. Qualitatively, we see that the learned operators have additional preconditions, corresponding to the static predicates that are always True. The form of the operators, and the rest of the learned skills, are otherwise unchanged. The dynamic predicates also have little to no impact on evaluation performance. The learned operators again have additional preconditions, but also have additional dynamic predicates in the effects. However, these dynamic predicates are relatively “well-behaved”, whereas in more complicated environments, predicate evaluations could be much less regular. This motivates the random predicates experiments, where indeed we see a substantial drop in evaluation performance. This drop is precipitated by a much larger and more complex set of learned operators, which makes abstract planning and learning more difficult. Altogether, these results confirm that the choice of predicates is important.
C.6 Impact of Irrelevant Objects
We conducted two additional experiments in the Cover environment to test the influence of irrelevant objects. We used 1000 demonstrations and all hyperparameters the same as in the main results.
In the first experiment, we added a variable number of irrelevant blocks during training, and in the second experiment, we added them instead during evaluation. The blocks are irrelevant because they are not involved in goals; they are also placed off the table so as to not cause collisions with the original blocks. The blocks do have the same type as the original blocks, so they will not be simply filtered out during type matching.
Results for each of the experiments are shown in Figure 9. Adding the irrelevant objects during training has no impact on evaluation performance. This is expected, since our preprocessing pipeline naturally filters out the irrelevant objects. The irrelevant objects have a small impact on learning time due to the cost of evaluating predicates. Adding the irrelevant objects during evaluation has a small impact on evaluation performance, although the success rate remains robust up to 100 irrelevant objects. The increase in evaluation time is due to predicate evaluation and an increased branching factor during abstract planning.
C.7 Disabling Filtering of Low-Data Skills
Environment | Filter? | % Tasks Solved | Test Time (s) | Nodes Created
---|---|---|---|---
Cover | Yes | 99.40 (0.64) | 2.02 (2.84) | 7.15 (0.42)
Cover | No | 99.40 (0.64) | 1.99 (2.79) | 7.30 (0.59)
Doors | Yes | 98.80 (0.80) | 1.33 (0.59) | 41.94 (4.56)
Doors | No | 98.80 (0.80) | 1.70 (1.05) | 50.84 (24.27)
Stick Button | Yes | 83.60 (1.78) | 51.80 (11.47) | 421.51 (81.28)
Stick Button | No | 23.80 (9.68) | 121.43 (57.14) | 2725.17 (1898.47)
Coffee | Yes | 98.00 (1.18) | 49.39 (10.22) | 53.68 (7.07)
Coffee | No | 98.20 (1.04) | 47.72 (9.52) | 53.95 (7.01)

Table 2: Disabling filtering of low-data skills.
We ran an additional experiment where we disabled the filtering out of skills with low data. We used 1000 demonstrations and identical hyperparameters to our main experiments. Results are shown in Table 2, with the numerical entries representing means (standard deviations) over 10 seeds. The results show that evaluation performance in Cover, Doors, and Coffee is largely unaffected by filtering, while the performance in Stick Button is substantially affected. The performance in Stick Button can be traced back to the rare situation illustrated in the
image on the right. Typically, when the robot (red) presses a button (yellow) with the stick (brown), the robot is not above any other button. However, in this case, the robot is coincidentally above a second button while executing the stick press. This leads to an operator with an effect set that includes both the button being pressed and the robot being above a second button. That operator is ultimately detrimental to planning because in the vast majority of cases, it is not possible for the robot to press the button while being above a second button, so refining this operator usually fails. This operator also has a very small amount of training data, which makes the associated policy and sampler unreliable. For similar reasons, we prefer to filter out skills with too little training data by default.
C.8 Comparison to Oracle
We collected statistics for an oracle approach that uses manually-designed skills. We use identical hyperparameters to the main results. The statistics are reported in Table 3, with the numerical entries representing means (standard deviations) over 10 seeds, and where “Ours” is the main approach, BPNS, trained with 1000 demonstrations. In Doors and Coffee, the learned skills require fewer node creations during abstract search to find a plan. This difference can be attributed to (1) overly general preconditions in the operators of our manually designed skills; and (2) more targeted sampling when using the learned samplers versus manually designed samplers. In all environments, the wall-clock time taken to plan with our learned skills is far greater than that of the oracle. From profiling, we can see that this difference is largely due to neural network inference time in both the learned samplers and learned skill policies. In contrast, the manually designed skills are written in pure Python, and can therefore be evaluated very efficiently.
C.9 Learned Operator Examples
See Figure 10 for examples of learned operators in each environment. Below, we describe the operators that are typically learned at convergence for each of the environments. These descriptions are based on inspection of the operator syntax, speaking to their interpretability.
- **Cover:**
1. Pick up a block.
2. Place a block on a target.
- **Doors:**
1. Move to a door from the main part of a room (not in a doorway).
2. Move to a door from another doorway.
3. Move through an open door.
4. Open a door.
- **Stick Button:**
1. Move from free space to press a button with the gripper.
2. Move from above another button to press a button with the gripper.
3. Move from free space to pick up a stick.
4. Move from above a button to pick up a stick.
5. Move from free space to press a button with the stick.
6. Move from above another button to press a button with the stick.
- **Coffee:**
1. Pick up the coffee pot.
2. Put the coffee pot on the hot plate.
3. Turn on the hot plate.
4. Move from above no cup to pour into a cup.
5. Move from above another cup to pour into a cup.
6. Twist the coffee pot.
7. Pick up the coffee pot after twisting.

Env | Approach | % Tasks Solved | Test Time (s) | Nodes Created
---|---|---|---|---
Cover | Ours | 99.40 (0.64) | 2.02 (2.84) | 7.15 (0.42)
Cover | Oracle | 98.80 (0.30) | 0.03 (0.01) | 7.01 (0.17)
Doors | Ours | 98.80 (0.80) | 1.33 (0.59) | 41.94 (4.56)
Doors | Oracle | 100.00 (0.00) | 1.04 (0.09) | 84.12 (20.17)
Stick Button | Ours | 83.60 (1.78) | 51.80 (11.47) | 421.51 (81.28)
Stick Button | Oracle | 90.40 (1.99) | 0.18 (0.03) | 320.32 (46.92)
Coffee | Ours | 98.00 (1.18) | 49.39 (10.22) | 53.68 (7.07)
Coffee | Oracle | 100.00 (0.00) | 0.18 (0.03) | 67.25 (8.12)

Table 3: Comparison to oracle skills.
C.10 Predicate Invention Preliminary Results
We ultimately envision a continually learning robot that uses symbols to learn skills and skills to learn symbols in a virtuous cycle of self-improvement. One plausible path toward realizing this vision would start with a set of manually designed symbols, as we did in this work, or skills, as done by Konidaris et al. [4]. Alternatively, we could start with demonstrations alone. In this case, we need to answer the chicken-or-the-egg question: which should be learned first, skills or symbols? Here we present very preliminary results suggesting the viability of learning symbols (predicates) first, and then skills from those learned symbols.
Metric | Manual | Learned
---|---|---
% Eval Tasks Solved | 99.40 (0.64) | 99.20 (0.92)
# Predicates | 5.00 (0.00) | 5.40 (0.80)
Eval Time (s) | 2.00 (2.78) | 2.82 (3.10)
Learning Time (s) | 372.01 (4.55) | 4898.16 (692.09)
We follow the approach of Silver et al. [21], which starts with a minimal set of goal predicates that are sufficient for describing the task goals, and then uses demonstrations to invent new predicates. Specifically, we focus on the Cover environment, where there is only one goal predicate: Covers. Given 1000 demonstrations, after learning predicates (see [21] for a description of the approach), we run BPNS skill learning and planning, with the configuration identical to the main experiments. Results are shown in the table above. Each entry is a mean (standard deviation) over 10 random seeds. The Manual column uses the manually designed predicates from our main experiments and the Learned column uses learned predicates. Further investigation is needed, but the results do suggest that learning skills on top of learned predicates is a viable direction. We also report the number of predicates learned, evaluation time, and learning time. Consistent with the prior work, we see that additional predicates can be learned that sometimes lead to faster planning during evaluation. We also see that the time bottleneck for the overall system is predicate invention, not skill learning.
Learned-Op0:
Parameters: [?x0:block, ?x1:gripper]
Preconditions: [HandEmpty(), IsBlock(?x0:block)]
Add Effects: [Holding(?x0:block, ?x1:gripper)]
Delete Effects: [HandEmpty()]
Example learned operator for picking a block in Cover.
Learned-Op0:
Parameters: [?x0:door, ?x1:robot]
Preconditions: [InDoorway(?x1:robot, ?x0:door),
TouchingDoor(?x1:robot, ?x0:door)]
Add Effects: [DoorIsOpen(?x0:door)]
Delete Effects: [TouchingDoor(?x1:robot, ?x0:door)]
Example learned operator for opening a door in Doors.
Learned-Op0:
Parameters: [?x0:gripper, ?x1:stick]
Preconditions: [AboveNoButton(),
HandEmpty(?x0:gripper)]
Add Effects: [Grasped(?x0:gripper, ?x1:stick)]
Delete Effects: [HandEmpty(?x0:gripper)]
Example learned operator for grasping the stick in Stick Button.
Learned-Op0:
Parameters: [?x0:cup, ?x1:pot, ?x2:gripper]
Preconditions: [Holding(?x2:gripper, ?x1:pot),
PotHot(?x1:pot),
NotAboveCup(?x2:gripper, ?x1:pot)]
Add Effects: [CupFilled(?x0:cup),
PotAboveCup(?x1:pot, ?x0:cup),
RobotAboveCup(?x2:gripper, ?x0:cup)]
Delete Effects: [NotAboveCup(?x2:gripper, ?x1:pot)]
Example learned operator for pouring in Coffee.
Figure 10: Learned operator examples.
2009 IEEE International Conference on Web Services
(ICWS 2009)
Los Angeles, California, USA
6-10 July 2009
Pages 1-526
Editors:
E. Damiani
J. Zhang
R. Chang
Regular Research Papers
The SCIFC Model for Information Flow Control in Web Service Composition ..............................................1
Wei She, I-Ling Yen, Bhavani Thuraisingham, and Elisa Bertino
Markov-HTN Planning Approach to Enhance Flexibility of Automatic Web
Service Composition ...........................................................................................................................................9
Kun Chen, Jiuyun Xu, and Stephan Reiff-Marganiec
Control Flow Requirements for Automated Service Composition ............................................................17
Piergiorgio Bertoli, Raman Kazhamiakin, Massimo Paolucci, Marco Pistore, Heorhi Raik, and Matthias Wagner
WS-OBJECTS: Extending Service-Oriented Architecture with Hierarchical
Composition of Client-Side Asynchronous Event-Processing Logic ..............................................................25
Krzysztof Ostrowski and Ken Birman
A Plug-in Architecture for Self-Adaptive Web Service Compositions ......................................................35
Anis Charfi, Tom Dinkelaker, and Mira Mezini
Selective Querying for Adapting Hierarchical Web Service Compositions
Using Aggregate Volatility .............................................................................................................................43
John Harney and Prashant Doshi
What are the Problem Makers: Ranking Activities According to their Relevance for Process Changes .................................................................51
Chen Li, Manfred Reichert, and Andreas Wombacher
Distributed Cross-Domain Change Management ........................................59
Bruno Wassermann, Heiko Ludwig, Jim Laredo, Kamal Bhattacharya,
and Liliana Pasquale
Applying Sanitizable Signature to Web-Service-Enabled Business Processes:
Going Beyond Integrity Protection .................................................................67
Kar Way Tan and Robert H. Deng
MACE: A Dynamic Caching Framework for Mashups ................................75
Osama Al-Haj Hassan, Lakshmish Ramaswamy, and John A. Miller
Wrap Scientific Applications as WSRF Grid Services Using gRAVI ...........83
Kyle Chard, Wei Tan, Joshua Boverhof, Ravi Madduri, and Ian Foster
Web Service Mashup Middleware with Partitioning of XML Pipelines ..........91
Eric Wohlstadter, Peng Li, and Brett Cannon
Towards Probabilistic Estimation of Quality of Online Services .................99
Le-Hung Vu and Karl Aberer
Flexible Probabilistic QoS Management of Transaction Based Web Services
Orchestrations .........................................................................................107
Sidney Rosario, Albert Benveniste, and Claude Jard
Service Provenance in QoS-Aware Web Service Runtimes .........................115
Anton Michlmayr, Florian Rosenberg, Philipp Leitner, and Schahram Dustdar
Scenario-Driven Approach for Business Process Modeling .........................123
Anna Ruokonen, Lasse Pajunen, and Tarja Systä
From Workflow Models to Executable Web Service Interfaces ....................131
Armin Haller, Mateusz Marmolowski, Walid Gaaloul, Eyal Oren,
Brahmanada Sapkota, and Manfred Hauswirth
Privacy Time-Related Analysis in Business Protocols ................................141
Karima Mokhtari, Salima Benbernou, Mohsen Rouached, Mohand-Said Hacid,
and Frank Leymann
Discovery of Optimized Web Service Configurations Using a Hybrid
Semantic and Statistical Approach ............................................................149
Maciej Zaremba, Jacek Migdal, and Manfred Hauswirth
An Efficient Service Discovery Algorithm for Counting Bloom Filter-Based
Service Registry .........................................................................................157
Shuxing Cheng, Carl K. Chang, and Liang-Jie Zhang
Efficient Discovery of Collision-Free Service Combinations .......................165
Roman Vaculin and Katia Sycara
Towards a Model-Driven Process for Designing ReSTful Web Services ..........173
Markku Laitkorpi, Petri Selonen, and Tarja Systä
RETRO: A Consistent and Recoverable RESTful Transaction Model ..........................................................181
Alexandros Marinos, Amir Razavi, Sotiris Moschoyiannis, and Paul Krause
Towards Automated RESTful Web Service Composition ..................................................................................189
Haibo Zhao and Prashant Doshi
Efficient Testing of Service-Oriented Applications Using Semantic Service Stubs .................................................................197
Senthil Mani, Vibha Singh Sinha, Saurabh Sinha, Pankaj Dhoolia,
Debdoot Mukherjee, and Soham Chakraborty
An Abstract GFSM Model for Optimal and Incremental Conformance Testing of Web Services ..............................................................205
Li Li and Wu Chou
Timed Model Checking Based Approach for Web Services Analysis ..................................................................................213
Nawal Guermouche and Claude Godart
BPEL’n’Aspects: Adapting Service Orchestration Logic .........................................................................................222
Dimka Karastoyanova and Frank Leymann
Service Supervision: Coordinating Web Services in Open Environment ...........................................................................238
Masahiro Tanaka, Toru Ishida, Yohei Murakami, and Satoshi Morimoto
Domain-Specific Processing of Policies or: WS-Policy Intersection Revisited ..........................................................246
Bernhard Hollunder
Integrating Abductive Logic Programming and Description Logics in a Dynamic Contracting Architecture ...........................................................................254
Marco Alberti, Massimiliano Cattafi, Federico Chesani, Marco Gavanelli,
Evelina Lamma, Marco Montali, Paola Mello, and Paolo Torroni
An Automated Method for Web Service Orchestration Based on Reusable Building Blocks .................................................................................262
Frank Alexander Kraemer, Haldor Samset, and Rolv Braek
QoS-Driven Adaptation of BPEL Scenario Execution ...............................................................................................271
Kareliotis Christos, Costas Vassilakis, Efstathios Rouvas, and Panayiotis Georgiadis
Towards Adaptation of Service Interface Semantics .......................................................................................................279
Li Kuang, Shuiguang Deng, Jian Wu, and Ying Li
An Adaptive Tradeoff Model for Service Performance and Security in Service-Based Systems ...........................................................................287
Stephen S. Yau, Yin Yin, and Ho G. An
Reputation Propagation in Composite Services ...........................................................................................................295
Surya Nepal, Zaki Malik, and Athman Bouguettaya
An Approach to Incentive-Based Reputation for Communities of Web Services .....................................................................................................303
Babak Khosravifar, Jamal Bentahar, Philippe Thiran, Ahmad Moazin, and Adrien Guiot
Applying Knowledge Sharing for Business Intelligence Collaboration .................................................................311
Bo Yang, Hao Wang, and Fred Douglis
Discovery of Semantic Web Service Flow Based on Computation .................................................................319
Fangfang Liu, Yuliang Shi, Xiangfeng Luo, Guoning Liang, and Zheng Xu
Exploiting Metrics for Similarity-Based Semantic Web Service Discovery ......................................................327
Stefan Dietze, Alessio Gugliotta, and John Domingue
SAWSDL-MX2: A Machine-Learning Approach for Integrating Semantic Web Service Matchmaking Variants ....................................................................................................................335
Matthias Klusch, Patrick Kapahnke, and Ingo Zinnikus
Behavioral Attestation for Business Processes ..........................................................343
Masoom Alam, Mohammad Nauman, Xinwen Zhang, Tamleek Ali, and Patrick C.K. Hung
Interoperability Changes in an Adaptive Service Orchestration ..........................................................351
Marcel Hiel and Hans Weigand
A Dependency Impact Analysis Model for Web Services Evolution ..............................................................359
Shuying Wang and Miriam A.M. Capretz
Reducing User Perceived Latency with a Middleware for Mobile SOA Access ...............................................366
Andreas Göb, Daniel Schreiber, Louenas Hamdi, Erwin Aitenbichler, and Max Mühlhäuser
A Mobility-Based Clustering and Discovery of Web Services in Mobile Ad-hoc Networks ..........................................................374
Yoo-Seok Shim, Yeon-Seok Kim, and Kyong-Ho Lee
Efficient Access to Composite M-services ..................................................................................381
Xu Yang, Athman Bouguettaya, and Xumin Liu
Scalable Optimized Composition of Web Services with Complexity Analysis .........................................389
Rattikorn Hewett, Phongphun Kijsanayothin, and Bach Nguyen
Improving Web Services Robustness .........................................................................................397
Nuno Laranjeiro, Marco Vieira, and Henrique Madeira
Enforcement from the Inside: Improving Quality of Business in Process Management ..........................................................405
Hanna Eberle, Stefan Föll, Klaus Herrmann, Frank Leymann, Annapaola Marconi, Tobias Unger, and Hannes Wolf
Scientific Workflows as Services in caGrid: A Taverna and gRAVI Approach ...............................................413
Wei Tan, Kyle Chard, Dinanath Sulakhe, Ravi Madduri, Ian Foster, Stan Soiland-Reyes, and Carole Goble
SPA: A Comprehensive Framework for Hybrid Solution Provisioning ..................................................421
Yuhui Wu, Zhile Zou, Ying Chen, Yang Zhao, and Qingbo Wang
DIALOG: Distributed Auditing Logs ..........................................................................................429
Christoph Ringelstein and Steffen Staab
WSRec: A Collaborative Filtering Based Web Service Recommender System ........................................437
Zibin Zheng, Hao Ma, Michael R. Lyu, and Irwin King
Personalized Web Service Ranking via User Group Combining Association Rule ............................................................................................................................................................................445
Wenge Rong, Kecheng Liu, and Lin Liang
Integrating Behavioral Trust in Web Service Compositions ..................................................................................................................................................453
Sharon Paradesi, Prashant Doshi, and Sonu Swaika
Modeling Cost-Aware Web Services Composition Using PTCCS ..................................................................................................................................................461
Fangxiong Xiao, Zhiqiu Huang, Zining Cao, Jun Hu, and LinYuan Liu
Towards Scalability of Quality Driven Semantic Web Service Composition ..................................................................................................................................................469
Freddy Lécué and Nikolay Mehandjiev
DHT-Based Range Query Processing for Web Service Discovery ..................................................................................................................................................477
Yiming Zhang, Ling Liu, Dongsheng Li, Feng Liu, and Xicheng Lu
Gradual Removal of QoS Constraint Violations by Employing Recursive Bargaining Strategy for Optimizing Service Composition Execution Path ..................................................................................................................................................485
Kaijun Ren, Nong Xiao, Junqiang Song, Chi Yang, Min Zhu, and Jinjun Chen
A Framework for Optimal Decentralized Service-Choreography ..................................................................................................................................................493
Saayan Mitra, Ratnesh Kumar, and Samik Basu
Equivalence of Web Services in Process-Aware Service Compositions ..................................................................................................................................................501
Stefanie Rinderle-Ma, Manfred Reichert, and Martin Jurisch
Application and Industry Track
Building Collaboration Applications that Mix Web Services Hosted Content with P2P Protocols ..................................................................................................................................................509
Ken Birman, Jared Cantwell, Daniel Freedman, Qi Huang, Petko Nikolov, and Krzysztof Ostrowski
Application of Management Frameworks to Manage Workflow-Based Systems: A Case Study on a Large Scale E-science Project ..................................................................................................................................................519
Srinath Perera, Suresh Marru, Thilina Gunarathe, Dennis Gannon, and Beth Plale
Collaborative Scientific Workflows ..................................................................................................................................................527
Shiyong Lu and Jia Zhang
Identity Attribute-Based Role Provisioning for Human WS-BPEL Processes ..................................................................................................................................................535
Federica Paci, Rodolfo Ferrini, and Elisa Bertino
Towards More Secure Web Services: Pitfalls of Various Approaches to XML Signature Verification Process ..................................................................................................................................................543
Tomáš Knap and Irena Mlýnková
A Web Service Architecture for Decentralised Identity- and Attribute-Based Access Control ..................................................................................................................................................551
Regina N. Hebig, Christoph Meinel, Michael Menzel, Ivonne Thomas, and Robert Warschofsky
Aspect-Oriented Quality of Service for Web Services: A Model-Driven Approach ..................................................................................................................................................559
Guadalupe Ortiz and Behzad Bordbar
Design of SOA Based Web Service Systems Using QFD for Satisfaction of Quality of Service Requirements ............................................................... 567
Xiaoqing (Frank) Liu and Lianzhang Zhu
Analysis of Signature Wrapping Attacks and Countermeasures ................................................................. 575
Sebastian Gajek, Meiko Jensen, Lijun Liao, and Jörg Schwenk
Discovery and On-demand Provisioning of Real-World Web Services .................................................. 583
Dominique Guinard, Vlad Trifa, Patrik Spiess, Bettina Dober, and Stamatis Karnouskos
Contract-First Design Techniques for Building Enterprise Web Services ........................................ 591
Youliang Zhong and Jian Yang
Rapid Identification Approach for Reusable SOA Assets Using Component Business Maps .................................................. 599
Islam Elgedawy and Lakshmish Ramaswamy
CCOA: Cloud Computing Open Architecture ...................................................................................... 607
Liang-Jie Zhang and Qun Zhou
Virtualizing Services and Resources with ProBus: The WS-Policy-Aware Service and Resource Bus ........................................................................ 617
Ralph Mietzner, Tammo van Lessen, Alexander Wiese, Matthias Wieland,
Dimka Karastoyanova, and Frank Leymann
Vulnerable Cloud: SOAP Message Security Validation Revisited ................................................................ 625
Nils Gruschka and Luigi Lo Iacono
Scalable and Reliable Location Services through Decentralized Replication ......................................... 632
Gong Zhang, Ling Liu, Sangeetha Seshadri, Bhuvan Bamba, and Yuehua Wang
A Contract-Based Accountability Service Model .................................................................................. 639
Chen Wang, Shiping Chen, and John Zic
User-Perceived Service Availability: A Metric and an Estimation Approach ........................................ 647
Lingshuang Shao, Junfeng Zhao, Tao Xie, Lu Zhang, Bing Xie, and Hong Mei
A Dynamic Approach toward QoS-Aware Service Workflow Composition ....................................... 655
David Chiu, Sagar Deshpande, Gagan Agrawal, and Rongxing Li
A MapReduce-Enabled Scientific Workflow Composition Framework ........................................ 663
Xubo Fei, Shiyong Lu, and Cui Lin
An Abstraction Framework for Service Composition in Event-Driven SOA Systems .......................................................... 671
Sourish Dasgupta, Satish Bhat, and Yugyung Lee
LiveMig: An Approach to Live Instance Migration in Composite Service Evolution .............................................. 679
Jin Zeng, Jinpeng Huai, Hailong Sun, Ting Deng, and Xiang Li
Using Model Customization for Variability Management in Service Compositions ........................................... 687
Hadaytullah, Kai Koskimies, and Tarja Systä
An Approach to Composing Web Services with Context Heterogeneity ...................................................... 695
Xitong Li, Stuart Madnick, Hongwei Zhu, and Yushun Fan
Infoset for Service Abstraction and Lightweight Message Processing ...........................................................703
Li Li and Wu Chou
A Novel Dynamic Priority Scheduling Algorithm of Process Engine in SOA ..............................................711
QiMing Tian, Li Li, Ling Jin, and XinXin Bai
A Relational Approach for Efficient Service Selection ......................................................................................719
Qi Yu and Manjeet Rege
Mobile In-store Personalized Services .................................................................................................................727
Jun Li, Ismail Ari, Jhilmil Jain, Alan H. Karp, and Mohamed Dekhil
A Service-Oriented System for Optimizing Residential Energy Use ..............................................................735
Chen Wang, Martin de Groot, and Peter Marendy
The Web Service Browser: Automatic Client Generation and Efficient Data Transfer for Web Services ......................................................................................................................................................................743
Steffen Heinzl, Markus Mathes, Thilo Stadelmann, Dominik Seiler, Marcel Diegelmann, Helmut Dohmann, and Bernd Freisleben
A Conceptual Modeling Approach to Business Service Mashup Development ........................................................751
Alessandro Bozzon, Marco Brambilla, Federico Michele Facca, and Giovanni Toffetti Carughi
Intelligent Matching for Public Internet Web Services—Towards Semi-Automatic Internet Services Mashup ...........................................................................................................................................759
Chen Wu, Tharam Dillon, and Elizabeth Chang
Service-Oriented Architecture for Privacy-Preserving Data Mashup ................................................................767
Thomas Trojer, Benjamin C.M. Fung, and Patrick C.K. Hung
Reiki: Serviceability Architecture and Approach for Reduction and Management of Product Service Incidents ...........................................................................................................................................................................................................775
Chris Connelly, Brian Cox, Tim Forell, Rui Liu, Dejan Milojicic, Alan Nemeth, Peter Piet, Suhas Shivanna, and Wei-Hong Wang
Establishing and Monitoring SLAs in Complex Service Based Systems ..........................................................783
Marco Comuzzi, Constantinos Kotsokalis, George Spanoudakis, and Ramin Yahyapour
Inferring Behavioural Models from Traces of Business Applications ................................................................791
Arnaud Dury, Hesham H. Hallal, and Alexandre Petrenko
A Performance Evaluation Study for Web Services Attachments ........................................................................799
Julio Cezar Estrella, André Takeshi Endo, Rubens Kenji T. Toyohara, Regina H.C. Santana, Marcos J. Santana, and Sarita Mazzini Bruschi
Analytic Architecture Assessment in SOA Solution Design and its Engineering Application ...........................................807
Nianjun Zhou and Liang-Jie Zhang
SOA Middleware Support for Service Process Reconfiguration with End-to-End QoS Constraints ...................................................815
Yanlong Zhai, Jing Zhang, and Kwei-Jay Lin
Composing Services for Third-party Service Delivery .........................................................................................823
Ingo Weber, Alistair Barros, Norman May, Jörg Hoffmann, and Tomasz Kaczmarek
An Extensible Abstract Service Orchestration Framework ........................................................................831
Stéphanie Chollet and Philippe Lalanda
Pat: A P2P Based Publish/Subscribe System for QoS Information Dissemination of Web Services ........................................................................................................839
Xiao Zheng, Junzhou Luo, and Jiuxin Cao
A Flexible Approach for Automatic Process Decentralization Using Dependency Tables ........................................847
Walid Fdhila, Ustun Yildiz, and Claude Godart
A Tool for Choreography Analysis Using Collaboration Diagrams .........................................................856
Tevfik Bultan, Chris Ferguson, and Xiang Fu
Supporting Rebinding in BPEL ........................................................................................................864
Anja Strunk, Iris Braun, Sandro Reichert, and Alexander Schill
Web Service Ranking Using Semantic Profile Information ........................................................................872
Umesh Bellur and Harin Vadodaria
A Unified Test Framework for Continuous Integration Testing of SOA Solutions .........................................880
Hehui Liu, Zhongjie Li, Jun Zhu, Huafang Tan, and Heyuan Huang
Service Composition as Generative Constraint Satisfaction ........................................................................888
Wolfgang Mayer, Rajesh Thiagarajan, and Markus Stumptner
Collaborative Web Data Record Extraction ..........................................................................................896
Gengxin Miao, Firat Kart, L.E. Moser, and P.M. Melliar-Smith
Adaptive Prefetching Scheme Using Web Log Mining in Cluster-Based Web Systems ................................903
Heung Ki Lee, Baik Song An, and Eun Jung Kim
RDF Data-Centric Storage ....................................................................................................................911
Justin J. Levandoski and Mohamed F. Mokbel
Static vs. Dynamic Validation of BSP Conformance .................................................................................919
Stefan Premenschutz-Schützenau, Nirmal K. Mukhi, Satoshi Hada, Naoto Sato, Fumiko Satoh, and Naohiko Uramoto
A Process Modeling-Based Approach for Web Service Management .....................................................928
Yan Liu
WS-Policy: On Conditional and Custom Assertions ................................................................................936
Bernhard Hollunder
Design Quality Analytics of Traceability Enablement in Service-Oriented Solution Design Environment ......944
Liang-Jie Zhang, Zhi-Hong Mao, and Nianjun Zhou
A Petri Net Siphon Based Solution to Protocol-Level Service Composition Mismatches ........................952
Pengcheng Xiong, Mengchu Zhou, and Calton Pu
Dynamic Collaborative Business Process Formulation via Ontologised Hierarchical Task Network (HTN) Planning .................................................................................................................959
SOA-Based Integration of the Internet of Things in Enterprise Services .................................................................................................................................968
Patrik Spiess, Stamatis Karnouskos, Dominique Guinard, Domnic Savio,
Oliver Baecker, Luciana Moreira Sá de Souza, and Vlad Trifa
Patterns for Enterprise Mashups in B2B Collaborations to Foster Lightweight Composition and End User Development .................................................................................................................................976
Till Janner, Robert Siebeck, Christoph Schroth, and Volker Hoyer
BluInfo: Open Architecture for Deploying Web Services in WPAN Hotspots .................................................................................................................................984
Hannu Kukka, Fabio Kruger, and Timo Ojala
XDM-Compatible Service Repository for User-Centric Service Creation and Discovery .................................................................................................................................992
Jian Yu, Paolo Falcarin, Sancho Rego, Isabel Ordas, Eduardo Martins, Quan Sun,
Ruben Trapero, and Quan Z. Sheng
Work-in-Progress
Risk Management Framework for Service-Oriented Architecture .................................................................................................................................1000
R. William Maule and William C. Lewis
An Approach to Non-functional Property Evaluation of Web Services .................................................................................................................................1004
Pei Li, Marco Comerio, Andrea Maurino, and Flavio De Paoli
Posters
Mutation Test Based on OWL-S Requirement Model .................................................................................................................................1006
Xiaojuan Wang, Ning Huang, and Rui Wang
Modeling and Analysis of Flexible Transaction for Web Services .................................................................................................................................1008
Min Yuan, Zhiqiu Huang, and Fangxiang Xiao
Formal Analysis for Multimedia Conferencing Communication Services Orchestration .................................................................................................................................1010
Bo Cheng, Xiangtao Lin, Xiaoxiao Hu, and Junliang Chen
Change Detection and Correction Facilitation for Web Applications and Services .................................................................................................................................1012
Alfredo Alba, Varun Bhagwan, Tyrone Grandison, Daniel Gruhl, and Jan Pieper
Deactivation of Unwelcomed Deep Web Extraction Services through Random Injection .................................................................................................................................1014
Varun Bhagwan and Tyrone Grandison
Web Services SIP Based Open Multimedia Conferencing on Internet .................................................................................................................................1016
Bo Cheng, Xiaoxiao Hu, Xiangtao Lin, Yang Zhang, and Junliang Chen
A Framework for Building Reliable Distributed Bioinformatics Service Repositories .................................................................................................................................1018
Francois Moreews
A Governance Model for SOA
Pierre de Leusse, Theo Dimitrakos, and David Brossard
Out of the Confusion of Tongues: A Unified Database Programming Paradigm
Rui Liu and Weihong Wang
A Semantic Repository for Geological Modeling Workflows
Nabil Belaid, Yamine Ait-Ameur, and Jean-François Rainaud
Generic Web Services—Extensible Functionality with Stable Interface
Vadym Vorovskiy, Sebastian Enderlein, and Alexander Zeier
GroupSpeak: High-level Language Extension for Workflow Capability
Moshe Gutman, Sridhar Radhakrishnan, Changwook Kim, Chandra N. Sekharan, and Konstantin Laufer
Enabling Scaleable, Efficient, Non-visual Web Browsing Services
Ashish Verma, Tyrone Grandison, and Himanshu Chauhan
The Fourth Party Service Platform and Service Charging
Xuhui He, Xiaolin Zheng, Deren Chen, and Jianyue Wang
Missing Papers in ICWS 2008
Application and Industry Track
A Policy-Based Middleware for Web Services SLA Negotiation
Farhana Zulkernine, Patrick Martin, Chris Craddock, and Kirk Wilson
Work in Progress Track
Author Index
BGP Link-State
BGP Link-State (LS) is an Address Family Identifier (AFI) and Sub-address Family Identifier (SAFI) defined to carry interior gateway protocol (IGP) link-state databases through BGP. BGP-LS delivers network topology information to topology servers and Application Layer Traffic Optimization (ALTO) servers. BGP-LS allows policy-based control over aggregation, information hiding, and abstraction. BGP-LS supports IS-IS and OSPFv2.
- Finding Feature Information, page 1
- Overview of Link-State Information in BGP, page 1
- Information About BGP-LS, page 3
- BGP-LS OSPF, page 5
- BGP-LS IS-IS, page 6
- BGP-LS Show Commands, page 7
- BGP-LS Debug Commands, page 10
- Additional References for BGP-LS, page 10
- Feature Information for BGP-LS, page 11
Finding Feature Information
Your software release may not support all the features documented in this module. For the latest caveats and feature information, see Bug Search Tool and the release notes for your platform and software release. To find information about the features documented in this module, and to see a list of the releases in which each feature is supported, see the feature information table.
Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.
Overview of Link-State Information in BGP
In a number of environments, a component external to a network is called upon to perform computations based on the network topology and current state of the connections within the network, including Traffic Engineering (TE) information. This is information typically distributed by IGP routing protocols within the network.
This module describes a mechanism by which Link-State (LS) and Traffic Engineering (TE) information from IGPs can be collected from networks and shared with external components using BGP. This is achieved using a new BGP Network Layer Reachability Information (NLRI) encoding format. The mechanism is applicable to physical and virtual links. Applications of this technique include Application-Layer Traffic Optimization (ALTO) servers and Path Computation Elements (PCEs). These components, while external to the network, require network state information on a real-time basis. Specifically, they require the link-state database information of each IGP node (OSPF or IS-IS) from the entire network. The BGP protocol is used to collect the necessary information and share it with the external components, and this is achieved using the NLRI encoding format.
In order to address the need for applications that require topological visibility across IGP areas, or even across Autonomous Systems, the BGP-LS address family and sub-address family have been defined to allow BGP to carry link-state information. The identifying key of each Link-State object, namely a node, link, or prefix, is encoded in the NLRI, and the properties of the object are encoded in the BGP-LS attribute.
The figure below describes a typical deployment scenario of a network that utilizes BGP-LS. In each IGP area, one or more nodes are configured with BGP-LS. These BGP speakers form an IBGP mesh by connecting to one or more route reflectors. This way, all BGP speakers (specifically the route reflectors (RR)) obtain link-state information from all IGP areas (and from other ASes through EBGP peers). An external component connects to a route reflector to obtain this information (perhaps moderated by a policy regarding what information is or is not advertised to the external component). The external component (for example, a controller) can then collect this information in the "northbound" direction across IGP areas or ASes and construct the end-to-end path (with its associated SIDs) that needs to be applied to an incoming packet to achieve the desired end-to-end forwarding.
*Figure 1: Relation between IGP nodes and BGP*
Information About BGP-LS
Carrying Link-State Information in BGP
This specification contains two parts:
- Definition of a new BGP NLRI that describes links, nodes, and prefixes comprising IGP link-state information
- Definition of a new BGP path attribute (BGP-LS attribute) that carries link, node, and prefix properties and attributes, such as the link and prefix metric or auxiliary Router-IDs of nodes, and so on.
TLV Format
Information in the new Link-State NLRIs and attributes is encoded in Type/Length/Value (TLV) triplets. The TLV format is shown in the figure below.
```
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type | Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// Value (variable) //
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
The Length field defines the length of the value portion in octets (thus, a TLV with no value portion would have a length of zero).
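As a sketch (not part of the specification itself), the TLV layout above can be walked with a few lines of Python, assuming 2-octet Type and Length fields in network byte order:

```python
import struct

def parse_tlvs(buf: bytes):
    """Parse a sequence of TLV triplets: 2-byte type, 2-byte length,
    then `length` octets of value. Returns a list of (type, value) pairs."""
    tlvs = []
    offset = 0
    while offset + 4 <= len(buf):
        tlv_type, tlv_len = struct.unpack_from("!HH", buf, offset)
        offset += 4
        value = buf[offset:offset + tlv_len]
        if len(value) != tlv_len:
            raise ValueError("truncated TLV value")
        tlvs.append((tlv_type, value))
        offset += tlv_len
    if offset != len(buf):
        raise ValueError("trailing bytes after last TLV")
    return tlvs
```

Note that, as the text states, a TLV with no value portion simply carries a Length of zero and contributes an empty value.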
Link-State NLRI
The MP_REACH_NLRI and MP_UNREACH_NLRI attributes are BGP’s containers for carrying opaque information. Each Link-State Network Layer Reachability Information (NLRI) describes either a node, a link, or a prefix. NLRI body is a set of Type/Length/Value triplets (TLV) and contains the data that identifies an object.
```
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| NLRI Type | Total NLRI Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Link-State NLRI (variable) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
NLRI Types
The Total NLRI length field contains the cumulative length, in octets, of the rest of the NLRI, not including the NLRI Type field or itself.
Figure 2: The NLRI Types
```
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type | NLRI Type |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
The NLRI Types are shown in the following figures:
**Figure 3: The Node NLRI Format**
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+--------+
| Protocol-ID |
+---------------------------+
| Identifier |
| (64 bits) |
+---------------------------+
// Local Node Descriptors (variable) //
```
**Figure 4: The Link NLRI Format**
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+--------+
| Protocol-ID |
+---------------------------+
| Identifier |
| (64 bits) |
+---------------------------+
// Local Node Descriptors (variable) //
// Remote Node Descriptors (variable) //
// Link Descriptors (variable) //
```
The IPv4 and IPv6 Prefix NLRIs (NLRI Type = 3 and Type = 4) use the same format, as shown in the following figure.
**Figure 5: The IPv4/IPv6 Topology Prefix NLRI Format**
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+--------+
| Protocol-ID |
+---------------------------+
| Identifier |
| (64 bits) |
+---------------------------+
// Local Node Descriptors (variable) //
// Prefix Descriptors (variable) //
```
**Node Descriptors**
Each link is anchored by a pair of Router-IDs that are used by the underlying IGP, namely, a 48-bit ISO System-ID for IS-IS and a 32-bit Router-ID for OSPFv2 and OSPFv3. An IGP may use one or more additional auxiliary Router-IDs, mainly for traffic engineering purposes. For example, IS-IS may have one or more IPv4 and IPv6 TE Router-IDs. These auxiliary Router-IDs must be included in the link attribute.
Link Descriptors
The Link Descriptor field is a set of Type/Length/Value (TLV) triplets. The link descriptor TLVs uniquely identify a link among multiple parallel links between a pair of anchor routers. A link described by the link descriptor TLVs actually is a "half-link", a unidirectional representation of a logical link. In order to fully describe a single logical link, two originating routers advertise a half-link each, that is, two Link NLRIs are advertised for a given point-to-point link.
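Pairing the two half-links into one logical link is done by the consumer of the data, not by BGP itself. As an illustrative sketch (the tuple representation of node descriptors is an assumption of this example), a collector could match half-links like this:

```python
def pair_half_links(half_links):
    """Group unidirectional half-links into logical bidirectional links.

    Each half-link is a (local_node, remote_node) tuple taken from the
    Local/Remote Node Descriptors of a Link NLRI; a logical link is
    complete once both directions have been advertised.
    """
    seen = set()
    logical = []
    for local, remote in half_links:
        if (remote, local) in seen:
            logical.append(frozenset((local, remote)))
        seen.add((local, remote))
    return logical
```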
Prefix Descriptors
The Prefix Descriptor field is a set of Type/Length/Value (TLV) triplets. Prefix Descriptor TLVs uniquely identify an IPv4 or IPv6 prefix originated by a node.
BGP-LS Attribute
The BGP-LS attribute is an optional, non-transitive BGP attribute that is used to carry link, node, and prefix parameters and attributes. It is defined as a set of Type/Length/Value (TLV) triplets. This attribute should only be included with Link-State NLRIs. This attribute must be ignored for all other address families.
BGP-LS OSPF
OSPF is one of the IGP protocols that feeds its topology into the BGP LS cache. Link-state information can be passed to BGP in two ways:
- When communication between OSPF and BGP is first established, or when BGP-LS functionality is initially enabled under OSPF, all LSA information is downloaded to BGP via the LS library.
- As new LSA information is being processed or received from remote OSPF nodes, this information is added or updated in BGP.
Configuring BGP-LS OSPF
Perform the following steps to configure OSPF with BGP-LS:
1. Enable the OSPF routing protocol and enter router configuration mode.
```
router ospf
```
For example,
```
Device(config-router)# router ospf 10
```
2. Distribute BGP link-state.
```
distribute link-state
```
For example,
```
Device(config-router)# distribute link-state instance-id <instid>
Device(config-router)# distribute link-state throttle <time>
```
- throttle (optional): Sets the throttle time for processing the LS distribution queue. Default: 5 seconds. Range: 1 to 3600 seconds.
In scenarios where an area is deleted, the throttle timer is not honored: OSPF walks the queue completely and sends updates for all areas to BGP.
If you do not specify values for the instance ID and throttle, the defaults are used.
Example:
```
#show run | sec router ospf
router ospf 10
distribute link-state instance-id 33 throttle 6
```
Do not use the same instance ID for two OSPF instances; doing so produces an "instance ID already in use" error.
---
**BGP-LS IS-IS**
IS-IS distributes routing information into BGP. It processes the routing information in its LSP database, extracts the relevant objects, and advertises IS-IS node, link, and prefix information and their attributes into BGP. This update from IS-IS into BGP happens only when there is a change in the LSP fragments belonging to either the local router or any remote router.
---
**Configuring IS-IS With BGP-LS**
Perform the following steps to configure IS-IS with BGP-LS:
1. Enable the IS-IS routing protocol and enter router configuration mode.
```
router isis
```
For example,
```
Device(config-router)# router isis
```
2. Distribute BGP link-state.
```
distribute link-state
```
For example,
```
Device(config-router)# distribute link-state instance-id <instid>
Device(config-router)# distribute link-state throttle <time>
```
**throttle** (optional): Sets throttle time to process LS distribution queue. Range: 5-20 seconds.
---
**Configuring BGP**
Perform the following steps to configure BGP with BGP-LS:
1. Enable the BGP routing protocol and enter router configuration mode.
```
router bgp
```
For example,
```
Device(config-router)# router bgp 100
```
2. Configure the address-family link-state.
```
address-family link-state link-state
```
For example,
```
Device(config-router)# address-family link-state link-state
```
3. Exit the address-family.
```
exit-address-family
```
For example,
```
Device(config-router)# exit-address-family
```
Example: ISIS With BGP-LS Configuration
Example: IS-IS Configuration
```
router isis 1
 net 49.0001.1720.1600.1001.00
 is-type level-1
 metric-style wide
 distribute link-state level-1
 segment-routing mpls
 segment-routing prefix-sid-map advertise-local
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-1
!
interface GigabitEthernet2/2/2
 ip address 30.0.0.2 255.255.255.0
 ip router isis 1
 negotiation auto
 mpls traffic-eng tunnels
 isis network point-to-point
```
Example: BGP Configuration
```
router bgp 100
 bgp log-neighbor-changes
 neighbor 19.0.0.6 remote-as 100
 neighbor 19.0.0.79 remote-as 100
 !
 address-family ipv4
  neighbor 19.0.0.6 activate
  neighbor 19.0.0.79 activate
 exit-address-family
 !
 address-family link-state link-state
  neighbor 19.0.0.6 activate
  neighbor 19.0.0.79 activate
 exit-address-family
```
BGP-LS Show Commands
show ip ospf ls-distribution
Displays the status of LS distribution.
```
R1#show ip ospf ls-distribution
 OSPF Router with ID (1.3.0.1) (Process ID 10)
 OSPF LS Distribution is Enabled
 Instance Id: 0
 Throttle time: 5
 Registration Handle: 0x0
 Status: Ready Active
 Num DBs Queued for LSCache Update: 0
 Num of DBs with Unresolved Links: 0
```
show ip ospf database dist-ls-pending
Displays the LSAs that are pending, to be sent to BGP.
Sample Output:
```
R1#show ip ospf database dist-ls-pending
 OSPF Router with ID (1.3.0.1) (Process ID 10)
```
<table>
<thead>
<tr>
<th>Link ID</th>
<th>ADV Router</th>
<th>Age</th>
<th>Seq#</th>
<th>Checksum</th>
<th>Link count</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.2.0.2</td>
<td>1.2.0.2</td>
<td>4</td>
<td>0x80000006</td>
<td>0x009678</td>
<td>1</td>
</tr>
<tr>
<td>3.3.3.3</td>
<td>3.3.3.3</td>
<td>1110</td>
<td>0x80000018</td>
<td>0x00CAF9</td>
<td>2</td>
</tr>
</tbody>
</table>
(show has unresolved links)
show isis distribute-ls [level-1 | level-2]
Displays the IS-IS internal LS cache information that is distributed to BGP.
```
r1#sh isis distribute-ls
ISIS distribute link-state: configured
distslevels:0x3, distls-initialized:1,
dists_instance_id:0, distls_throttle_delay:10
LS DB: ls_init_started(0) ls_initialized(1) ls_pending_delete(0)
dists_enabled[1]:1
dists_enabled[2]:1
Level 1:
Node System ID:0003.0003.0003 Pseudonode-Id:0 ls_change_flags:0x0
 LSP: lsapid(0003.0003.0003.00-00), lsptype(0) lsp_change_flags(0x0)
 Node Attr: name(r3) bitfield(0x81) node_flags(0x0)
  area_len/area_addr(2/33) num_mtid/mtid(0/0) ipv4_id(33.33.33.1)
  num_alg/sr_alg(0/0) num_srgb/srgb(1/(start:16000, range:8000)
  srgb_flags(0x80)
  opaque_len/opaque(0/0x0)
 ISIS LS Links:
  mtid(0): nid:0002.0002.0002.00, {0, 0}, {6.6.6.1, 6.6.6.6}
  Link Attr: bitfield:0x940F, local_ipv4_id:6.6.6.1, remote_ipv4_id:6.6.6.6,
   max_link_bw:10000, max_resv_bw:10000,
   num_unresv_bw/unresv_bw:8/
    [0]: 10000 kbits/sec, [1]: 8000 kbits/sec
    [2]: 8000 kbits/sec, [3]: 8000 kbits/sec
    [4]: 8000 kbits/sec, [5]: 8000 kbits/sec
    [6]: 8000 kbits/sec, [7]: 8000 kbits/sec,
   admin_group:0, protect_type:0, mpls_proto_mask:0x0,
   te_metric:0, metric:0, link_name:,
   num_srlg/srlg:0/
   num_adj_sid/adj:2/
 Address Family IPv4 ISIS LS Prefix:
  mtid(0): 1.1.1.0/24
  Prefix Attr: bitfield:0x0, metric:10, igp_flags:0x0,
   num_route_tag:0, route_tag:0
   num_pfx_sid:0, pfx_sid:
   opaque_len:0, opaque_data:0x0
  mtid(0): 3.3.3.0/24
  Prefix Attr: bitfield:0x0, metric:10, igp_flags:0x0,
   num_route_tag:0, route_tag:0
   num_pfx_sid:0, pfx_sid:
   opaque_len:0, opaque_data:0x0
```
### show bgp link-state link-state
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal, r RIB-failure, S stale, m multipath, b backup-path, f RT-Filter, x best-external, a additional-path, c RIB-compressed, t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Prefix codes: E link, V node, T4 IPv4 reachable route, T6 IPv6 reachable route, I Identifier, N local node, R remote node, L link, P prefix, L1/L2 ISIS level-1/level-2, O OSPF, a area-ID, l link-ID, t topology-ID, s ISO-ID, c confed-ID/ASN, b bgp-identifier, r router-ID, i if-address, n nbr-address, o OSPF Route-type, p IP-prefix, d designated router address, u/U Unknown, x/X Unexpected, m/M Malformed
<table>
<thead>
<tr>
<th>Network</th>
<th>Next Hop Metric LocPrf Weight Path</th>
</tr>
</thead>
<tbody>
<tr>
<td><code><v>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.1001.00]]> [x]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><v>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.2002.00]]> [x]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><v>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.3003.00]]> [x]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><v>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.4004.00]]> [x]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><v>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.5005.00]]> [x]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.1001.00]]> [r][c100][b0.0.0.0][s1720.1600.2002.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.2002.00]]> [r][c100][b0.0.0.0][s1720.1600.3003.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.3003.00]]> [r][c100][b0.0.0.0][s1720.1600.4004.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.5005.00]]> [r][c100][b0.0.0.0][s1720.1600.4004.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.5005.00]]> [r][c100][b0.0.0.0][s1720.1600.5005.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.5005.00]]> [r][c100][b0.0.0.0][s1720.1600.2002.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
<tr>
<td><code><e>[l1][i0x43][n[c100][b0.0.0.0][s1720.1600.1001.00]]> [r][c100][b0.0.0.0][s1720.1600.5005.00]]> [l]15.0.0.1 0 0 100 i</code></td>
<td></td>
</tr>
</tbody>
</table>
show bgp link-state link-state nlri <nlri string>
```
BGP routing table entry for [V][L1][I0x43][N[c100][b0.0.0.0][s1720.1600.4004.00]], version 95
Paths: (1 available, best #1, table link-state link-state)
  Not advertised to any peer
  Refresh Epoch 4
  Local
    16.16.16.16 (metric 30) from 15.15.15.15 (15.15.15.15)
      Origin IGP, metric 0, localpref 100, valid, internal, best
      Originator: 16.16.16.16, Cluster list: 15.15.15.15
      LS Attribute: Node-name: R4, ISIS area: 49.12.34
      rx pathid: 0, tx pathid: 0x0
```
BGP-LS Debug Commands
• debug ip ospf dist-ls [detail]
Turns on LS-distribution-related debugs in OSPF.
• debug isis distribute-ls
Displays the items being advertised into the BGP from IS-IS.
Additional References for BGP-LS
<table>
<thead>
<tr>
<th>Related Topic</th>
<th>Document Title</th>
</tr>
</thead>
<tbody>
<tr>
<td>Cisco IOS commands</td>
<td>Cisco IOS Master Commands List, All Releases</td>
</tr>
</tbody>
</table>
MIBs
<table>
<thead>
<tr>
<th>MIB</th>
<th>MIBs Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>• CISCO-MIB</td>
<td>To locate and download MIBs for selected platforms, Cisco IOS releases, and feature sets, use Cisco MIB Locator found at the following URL: <a href="http://www.cisco.com/go/mibs">http://www.cisco.com/go/mibs</a></td>
</tr>
</tbody>
</table>
Technical Assistance
<table>
<thead>
<tr>
<th>Description</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>The Cisco Support website provides extensive online resources, including documentation and tools for troubleshooting and resolving technical issues with Cisco products and technologies. To receive security and technical information about your products, you can subscribe to various services, such as the Product Alert Tool (accessed from Field Notices), the Cisco Technical Services Newsletter, and Really Simple Syndication (RSS) Feeds. Access to most tools on the Cisco Support website requires a Cisco.com user ID and password.</td>
<td><a href="http://www.cisco.com/cisco/web/support/index.html">http://www.cisco.com/cisco/web/support/index.html</a></td>
</tr>
</tbody>
</table>
Feature Information for BGP-LS
The following table provides release information about the feature or features described in this module. This table lists only the software release that introduced support for a given feature in a given software release train. Unless noted otherwise, subsequent releases of that software release train also support that feature.
Use Cisco Feature Navigator to find information about platform support and Cisco software image support. To access Cisco Feature Navigator, go to www.cisco.com/go/cfn. An account on Cisco.com is not required.
Table 1: Feature Information for BGP-LS
<table>
<thead>
<tr>
<th>Feature Name</th>
<th>Releases</th>
<th>Feature Information</th>
</tr>
</thead>
<tbody>
<tr>
<td>BGP-LS</td>
<td>16.4.1</td>
<td>This is a new feature.</td>
</tr>
</tbody>
</table>
This specification defines a way for XMPP servers to deliver information for use in push notifications to mobile and other devices.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
1 Introduction
The purpose of push notifications is to inform users of new messages or other pertinent information even when they have no XMPP clients online. Typically, these notifications are delivered to a user’s mobile device, displaying a notice that can trigger opening an XMPP client to continue a conversation or answer a Jingle session request.
There have been several push notification implementations by mobile XMPP client vendors. However, experience has shown that these implementations had several drawbacks:
- Treated the XMPP client and XMPP server as one unified service, such that push notifications only worked using the “official” client.
- Proxied a user’s session through the client provider’s backend services in order to monitor for and trigger push notifications.
The goal for this document is to make the generalized case possible, whereby a user may use their XMPP client of choice with their own server of choice. The requirements are thus:
- Allow XMPP servers to support push notifications to multiple client implementations, via multiple external or proprietary push services.
- Allow clients to receive push notifications from multiple third-party XMPP servers.
- Eliminate the need for clients to proxy a user’s XMPP session in order to enable push notifications.
Note: Any publish-subscribe use cases not described herein are described in Publish-Subscribe (XEP-0060) ¹. Also, this document does not show error flows related to the generic publish-subscribe use cases referenced herein, since they are exhaustively defined in XEP-0060. The reader is referred to XEP-0060 for all relevant protocol details related to the XMPP publish-subscribe extension. This document merely defines a “subset” or “profile” of XMPP publish-subscribe.
2 Concepts and Approach
XMPP Push works between the user’s XMPP server and two push notification services in tandem:
1. The user’s XMPP server publishes notifications to the XMPP Push Service of each of the user’s client applications.
2. The XMPP Push Service (as defined here) for a client application then delivers the notification to a third-party notification delivery service.
3. The third-party (and potentially proprietary or platform-dependent) push service delivers the notification from the client application’s backend service to the user’s device.
This two-tiered push architecture allows the user’s XMPP server to deliver notifications to arbitrary third-party clients, and in turn allows those clients to use the appropriate delivery mechanism for their platforms without having to share any private keys or other credentials with the XMPP server.
2.1 General Architecture of a Push Notification Service
The current state-of-the-art for a generic push notification service requires four actors:
**App Client** The app client is the software installed and run by the user, and is the final receiver of a push notification.
**App Server** The app server is a backend service for the app client. At minimum, the app server exists to trigger push notifications, but it often also performs business logic for the app.
**User Agent** The user agent is a service running locally on the user’s device which receives push notifications and delivers them to the appropriate application.
**Push Service** The push service ferries notifications from the App Server to the User Agent. How it does so is often proprietary and vendor/platform dependent.
Enabling notifications is a five step process:
1. The App Client asks the User Agent to authorize the delivery of notifications.
2. The User Agent then requests a token from the Push Service which authorizes delivery of notifications to that User Agent and App Client.
3. The Push Service issues the token to the User Agent.
4. The User Agent gives the token to the App Client.
5. The App Client sends the token to the App Server for later use.
**Listing 1:** The five general steps to enable push notifications
```
+------------+      +------------+      +--------------+      +------------+
| App Client |      | User Agent |      | Push Service |      | App Server |
+-----+------+      +-----+------+      +------+-------+      +-----+------+
      | (1) authorize     |                    |                    |
      |------------------>| (2) request token  |                    |
      |                   |------------------->|                    |
      |                   | (3) issue token    |                    |
      |                   |<-------------------|                    |
      | (4) token         |                    |                    |
      |<------------------|                    |                    |
      | (5) send token    |                    |                    |
      |-------------------------------------------------------------->
```
To send a push notification, the App Server sends the notification data to the Push Service along with the saved token.
### Listing 2: General delivery of a push notification
```
+------------+                      +------------+
| App Client |                      | App Server |
+-----+------+                      +-----+------+
      ^                                   |
      | deliver                           | notification + token
      |                                   v
+-----+------+                      +--------------+
| User Agent | <------------------- | Push Service |
+------------+                      +--------------+
```
### 2.2 Mapping the General Architecture to XMPP
To build an XMPP Push service on top of a general push service, we perform the following mapping:
- The general App Client becomes the XMPP User Agent
- The general App Server becomes the XMPP Push Service
- The XMPP server is now the new logical "App Server"
- The XMPP client portion of the application is the new logical "App Client"
3 XMPP Push Service
An XMPP Push Service is a PubSub service as defined by the XMPP XEP-0060 extension. The functional difference between a Push Service and a generic pubsub service is that a Push Service will generally summarize and forward published content via non-XMPP mechanisms. Note: a Push Service is provided by a specific client application as part of the App Server. A user’s XMPP server will typically not act as a Push Service itself, but will instead publish to the Push Services for the user’s client applications.
3.1 Recommended Defaults
A Push Service MUST:
- Support the ‘whitelist’ access model and set it to the default.
- Support the ‘publish-only’ affiliation.
3.2 Business Rules
Each PubSub node is a delivery target for the Push Service, which could represent multiple devices for a single user.
In order to prevent information leaks, each node SHOULD be configured with a ‘whitelist’
access model so that only trusted entities are able to view or subscribe to published notifications. Furthermore, the ‘publish-only’ affiliation SHOULD be used to allow acceptable entities (such as the server JID and the user’s bare JID) to publish to the node to trigger notifications. Care SHOULD be taken to ensure that publish requests are coming from the user’s server and not from other third-party client applications using the full JID of a user. A Push Service MAY opt to only accept or further process publish requests from server JIDs and bare user JIDs to ensure that only a user’s server is able to publish, but it SHOULD instead use publish options with credentials shared only with the user’s server (see Enabling Notifications).
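As a sketch, a node owner could apply the 'whitelist' access model with a standard XEP-0060 node-configuration request (the service JID and node name here are hypothetical):

```xml
<iq type='set' to='push-5.client.example' id='config1'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub#owner'>
    <configure node='yxs32uqsflafdk3iuqo'>
      <x xmlns='jabber:x:data' type='submit'>
        <field var='FORM_TYPE' type='hidden'>
          <value>http://jabber.org/protocol/pubsub#node_config</value>
        </field>
        <field var='pubsub#access_model'>
          <value>whitelist</value>
        </field>
      </x>
    </configure>
  </pubsub>
</iq>
```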
4 Discovering Support
4.1 Account Owner Service Discovery
Before enabling or disabling push services, a client SHOULD determine whether the user’s server supports publishing push notifications; to do so, it MUST send a Service Discovery (XEP-0030) \(^2\) information request to the user’s bare JID:
Listing 4: Client queries server regarding protocol support
```
<iq from='user@example.com/mobile'
to='user@example.com'
id='x13'
type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
If the user’s server supports publishing push notifications and the account is provisioned to allow them, the server MUST include the feature ‘urn:xmpp:push:0’ in its list of supported features.
Listing 5: Server communicates protocol support
```
<iq from='juliet@capulet.lit'
to='juliet@capulet.lit/balcony'
id='disco1'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='account' type='registered'/>
<feature var='urn:xmpp:push:0'/>
...
</query>
</iq>
```
4.2 Push Service Discovery
If a service supports the XMPP Push Service publish-subscribe profile described herein, it MUST include an identity of "pubsub/push" in "disco#info" results.
Listing 6: Service identifies as a Push Service
```xml
<iq from='push-5.client.example'
to='user@example.com/mobile'
id='x23'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='pubsub' type='push'/>
<feature var='urn:xmpp:push:0'/>
...
</query>
</iq>
```
5 Enabling Notifications
The full process for enabling notifications requires initializing two separate push services: between the App Client and App Server, and between the App Server and the user’s XMPP server.
Note: It is assumed that an App Client is able to perform any registration procedures it requires to bootstrap its own preferred push notification system. Furthermore, it is assumed that the App Client or App Server is able to provision a node on its own XMPP Push Service. It is possible, but not required, to perform these actions over XMPP using In-Band Registration (XEP-0077)\(^3\).
1. The App Client performs any necessary bootstrapping and registration for its preferred push service.
2. The App Client registers itself with the App Server.
3. The App Server allocates or reuses a node on the App Server’s XMPP Push Service.
4. The App Server informs the App Client of the provisioned node, along with any additional parameters required for publishing to that node.
5. The App Client requests the XMPP server to publish notifications to the given node.
Listing 7: The full flow of enabling push notifications for an application
For the last step, the App Client sends an IQ-set to the user’s bare JID with an <enable /> element qualified by the 'urn:xmpp:push:0' namespace, which MUST contain a 'jid' attribute of the XMPP Push Service being enabled. It SHOULD contain a 'node' attribute which is set to the provisioned node specified by the App Server.
Listing 8: Enabling Notifications
```xml
<iq type='set' id='x42'>
<enable xmlns='urn:xmpp:push:0' jid='push-5.client.example' node='yxs32uqsflafdk3iuqo'/>
</iq>
```
An App Server MAY require additional information to be provided with each published notification, such as authentication credentials. These parameters are included in the enable request by adding a Data Forms (XEP-0004) \(^4\) data form with a FORM_TYPE of 'http://jabber.org/protocol/pubsub#publish-options'.
Listing 9: Enabling Notifications, with provided publish options
```xml
<iq type='set' id='x43'>
<enable xmlns='urn:xmpp:push:0' jid='push-5.client.example' node='yxs32uqsflafdk3iuqo'>
<x xmlns='jabber:x:data' type='submit'>
<field var='FORM_TYPE' value='http://jabber.org/protocol/pubsub#publish-options'/>
<field var='secret' value='eruio234vzxc2kla-91'/>
</x>
</enable>
</iq>
```
The JID for a Push Service MAY be enabled multiple times for a user only if different node values are provided. If the combination of JID and node has already been enabled, then the server SHOULD use the last received request for any publish options.
6 Disabling Notifications
If the user decides to stop push notifications for a particular client application, the App Client SHOULD send an IQ-set to the user's bare JID with a <disable /> element qualified by the 'urn:xmpp:push:0' namespace, which MUST include a 'jid' attribute of the service to be removed.
Listing 10: Disabling all notifications to a given service
```xml
<iq type='set' id='x97'>
<disable xmlns='urn:xmpp:push:0' jid='push-5.client.example'/>
</iq>
```
A 'node' attribute MAY be included to remove a particular JID and node combination if multiple nodes have been enabled for a single service JID.
Listing 11: Disabling notifications
```xml
<iq type='set' id='x97'>
<disable xmlns='urn:xmpp:push:0' jid='push-5.client.example' node='yxs32uqsflafdk3iuqo'/>
</iq>
```
If a 'node' attribute is provided, then only that combination of JID and node SHOULD be removed from the set of enabled services. Otherwise, the server SHOULD disable all enabled entries for the specified service for the user.
When a service is not enabled, the server MUST NOT attempt publishing notifications to the service.
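The enable/disable semantics above (a service is keyed by the JID and node combination, a repeated enable overwrites the publish options, and a node-less disable removes every entry for that service JID) can be sketched as server-side bookkeeping. This is a hypothetical illustration, not part of the protocol; all class and method names are invented.

```python
# Hypothetical sketch (names invented): the server-side registry of enabled
# push services, keyed by (service JID, node) as sections 5 and 6 describe.
class PushRegistry:
    def __init__(self):
        self.enabled = {}                      # (jid, node) -> publish options

    def enable(self, jid, node, options=None):
        # enabling the same (jid, node) again: the last received request
        # wins for any publish options
        self.enabled[(jid, node)] = options or {}

    def disable(self, jid, node=None):
        if node is not None:
            # only that combination of JID and node is removed
            self.enabled.pop((jid, node), None)
        else:
            # no node given: drop every enabled entry for that service
            for key in [k for k in self.enabled if k[0] == jid]:
                del self.enabled[key]

    def targets(self):
        # only enabled combinations may ever be published to
        return list(self.enabled)
```

A disable with no 'node' attribute thus clears all combinations for the service, matching the SHOULD behaviour described above.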
7 Publishing Notifications
When the user's server detects an event warranting a push notification, it performs a PubSub publish to all XMPP Push Services registered for the user, where the item payload is a <notification /> element in the 'urn:xmpp:push:0' namespace.
A Data Forms (XEP-0004) data form whose FORM_TYPE is 'urn:xmpp:push:summary' MAY be included to provide summarized information such as the number of unread messages or number of pending subscription requests.
Other elements MAY be included if relevant for the notification.
Listing 12: Server publishes a push notification
```xml
<iq type='set' from='example.com' to='push-5.client.example' id='n12'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='yxs32uqsflafdk3iuqo'>
<item>
<notification xmlns='urn:xmpp:push:0'>
<x xmlns='jabber:x:data'>
<field var='FORM_TYPE'><value>urn:xmpp:push:summary</value></field>
<field var='message-count'><value>1</value></field>
<field var='last-message-sender'><value>juliet@capulet.example/balcony</value></field>
<field var='last-message-body'><value>Wherefore art thou, Romeo?</value></field>
</x>
<additional xmlns='http://example.com/custom'>Additional custom elements</additional>
</notification>
</item>
</publish>
</pubsub>
</iq>
```
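The 'urn:xmpp:push:summary' data form embedded in Listing 12 can be assembled programmatically. The helper below is a hypothetical sketch (not defined by this specification) using Python's standard `xml.etree.ElementTree`; only the field names come from the spec.

```python
import xml.etree.ElementTree as ET

# Hypothetical helper (not part of the XEP): build the <x/> data form with
# FORM_TYPE 'urn:xmpp:push:summary' that Listing 12 embeds in <notification/>.
NS = "jabber:x:data"

def summary_form(message_count, last_sender=None, last_body=None):
    x = ET.Element(f"{{{NS}}}x")

    def add_field(var, value):
        f = ET.SubElement(x, f"{{{NS}}}field", var=var)
        ET.SubElement(f, f"{{{NS}}}value").text = value

    add_field("FORM_TYPE", "urn:xmpp:push:summary")
    add_field("message-count", str(message_count))
    if last_sender is not None:
        add_field("last-message-sender", last_sender)
    if last_body is not None:
        add_field("last-message-body", last_body)
    return x
```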
If additional data was provided when enabling the service, the publish request SHOULD include the data as publish options.
Listing 13: Server publishes a push notification with provided publish options
```xml
<iq type='set' from='example.com' to='push-5.client.example' id='n12'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='yxs32uqsflafdk3iuqo'>
<item>
<notification xmlns='urn:xmpp:push:0'>
<x xmlns='jabber:x:data'>
<field var='FORM_TYPE'><value>urn:xmpp:push:summary</value></field>
<field var='message-count'><value>1</value></field>
<field var='last-message-sender'><value>juliet@capulet.example/balcony</value></field>
<field var='last-message-body'><value>Wherefore art thou, Romeo?</value></field>
</x>
<additional xmlns='http://example.com/custom'>Additional custom elements</additional>
</notification>
</item>
</publish>
</pubsub>
</iq>
```
7.1 Publish Errors
If a publish request is returned with an IQ-error, then the server SHOULD consider the particular JID and node combination to be disabled. However, a server MAY choose to keep a service enabled if the error is deemed recoverable or transient, until a sufficient number of errors have been received in a row. A server MAY retry an automatically disabled JID and node combination after a period of time (e.g. 1 day).
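The error handling described above can be modelled as a per-(JID, node) counter: errors accumulate until a threshold disables the target, while a success resets the count. This is a hypothetical sketch; the class name, threshold, and reset policy are illustrative choices, not mandated by the spec.

```python
# Hypothetical sketch: per-(jid, node) error handling for section 7.1 --
# disable after a run of consecutive errors, reset the count on success.
class PublishTarget:
    def __init__(self, max_consecutive_errors=3):
        self.max_consecutive_errors = max_consecutive_errors
        self.consecutive_errors = 0
        self.enabled = True

    def record_result(self, ok):
        if ok:
            # a success means earlier errors were transient: forgive them
            self.consecutive_errors = 0
        else:
            self.consecutive_errors += 1
            if self.consecutive_errors >= self.max_consecutive_errors:
                self.enabled = False   # may be retried after e.g. 1 day
```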
7.2 Notification Delivery
Once the notification has been published to the XMPP Push Service, it is left to the implementation how to deliver the notification to the user’s device. However, the general flow for the process looks like so:
Listing 14: The full path of a push notification, from XMPP server to user client
8 Remote Disabling of Notifications
It can be desirable for an XMPP Push Service to stop accepting notifications from the user's XMPP server. To do so, the XMPP Push Service removes the 'publish-only' (or other publish-enabling affiliation) from the user's JID, and MAY send an affiliation change notice to the user's bare JID:
Listing 15: Push Service announces stop of push support
Upon receiving an affiliation change event, the server MAY remove the received JID and node combination from the set of enabled services. If a server does not do so, then the service will be removed from the enabled set through the error handling process.
9 Security Considerations
Push notifications require routing private information, such as message bodies, through third parties. As such, servers SHOULD allow users to limit the information sent via push notifications.
It is NOT RECOMMENDED to allow in-band modification of push notification content settings. Such operations SHOULD be done out-of-band to prevent privilege escalation.
10 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA). *
11 XMPP Registrar Considerations
11.1 Protocol Namespaces
The XMPP Registrar includes ‘urn:xmpp:push:0’ in its registry of protocol namespaces (see <https://xmpp.org/registrar/namespaces.html>).
- urn:xmpp:push:0
11.2 Protocol Versioning
If the protocol defined in this specification undergoes a revision that is not fully backwards-compatible with an older version, the XMPP Registrar shall increment the protocol version number found at the end of the XML namespaces defined herein, as described in Section 4 of XEP-0053.
---
*The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
11.3 Field Standardization
Field Standardization for Data Forms (XEP-0068)\(^8\) defines a process for standardizing the fields used within Data Forms scoped by a particular namespace, and the XMPP Registrar maintains a registry of such FORM_TYPES (see <https://xmpp.org/registrar/formtypes.html>).
11.3.1 urn:xmpp:push:summary FORM_TYPE
```xml
<form_type>
  <name>urn:xmpp:push:summary</name>
  <desc>Provides summarized information about a user for use in push notifications.</desc>
  <field var='message-count'
         type='text-single'
         label='The number of unread or undelivered messages'/>
  <field var='pending-subscription-count'
         type='text-single'
         label='The number of pending incoming presence subscription requests'/>
  <field var='last-message-sender'
         type='jid-single'
         label='The sender of the last received message'/>
  <field var='last-message-body'
         type='text-single'
         label='The body text of the last received message'/>
</form_type>
```
11.4 Service Discovery Category/Type
The XMPP Registrar includes a category of "component" in its registry of Service Discovery identities (see <https://xmpp.org/registrar/disco-categories.html>); as a result of this document, the Registrar includes a type of "jidprep" to that category. The registry submission is as follows:
```xml
<category>
  <name>pubsub</name>
  <type>
    <name>push</name>
    <desc>
      A push notification service that supports the
      publish-subscribe profile defined in XEP-XXXX.
    </desc>
  </type>
</category>
```
12 XML Schema
```xml
<?xml version='1.0' encoding='UTF-8'?>
<xs:schema
xmlns:xs='http://www.w3.org/2001/XMLSchema'
targetNamespace='urn:xmpp:push:0'
xmlns='urn:xmpp:push:0'
elementFormDefault='qualified'>
<xs:annotation>
<xs:documentation>
The protocol documented by this schema is defined in
XEP-xxxx: http://www.xmpp.org/extensions/xep-xxxx.html
</xs:documentation>
</xs:annotation>
<xs:import
namespace='jabber:x:data'
schemaLocation='http://xmpp.org/schemas/x-data.xsd' />
<xs:element name='enable'>
<xs:complexType>
<xs:sequence minOccurs='0' maxOccurs='unbounded' xmlns:xdata='jabber:x:data'>
<xs:element ref='xdata:x' />
</xs:sequence>
<xs:attribute name='jid' type='xs:string' use='required' />
<xs:attribute name='node' type='xs:string' use='required' />
</xs:complexType>
</xs:element>
<xs:element name='disable'>
<xs:complexType>
<xs:attribute name='jid' type='xs:string' use='required' />
<xs:attribute name='node' type='xs:string' use='optional' />
</xs:complexType>
</xs:element>
<xs:element name='notification'>
<xs:complexType>
<xs:sequence minOccurs='0' maxOccurs='unbounded' xmlns:xdata='jabber:x:data'>
<xs:element ref='xdata:x' />
<xs:any />
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
```
---
18-447 Lecture 8: Data Hazard and Resolution
James C. Hoe
Department of ECE
Carnegie Mellon University
Housekeeping
- Your goal today
- detect and resolve data hazards in in-order pipelines
- control flow will come next lecture
- Notices
- Lab 2, status check next week, due wk of 2/27
- Midterm 2/27 in class; covers Lectures 1~10
- Readings
- P&H Ch 4
Instruction Pipeline Reality
- Identical tasks ... NOT!
- coalescing instruction types
- external fragmentation (some idle stages)
- Uniform suboperations ... NOT!
- balance pipeline stages
- group or sub-divide steps to minimize variance
- internal fragmentation (some too-fast stages)
- Independent tasks ... NOT!
- resolve data and resource hazards
- duplicate contended resources
- inter-instruction dependency detection and resolution
MIPS ISA features are engineered for 5-stage pipelining
Data Dependence (on registers)
Data dependence
\[ r_3 \leftarrow r_1 \text{ op } r_2 \]
\[ \ldots \]
\[ r_5 \leftarrow r_3 \text{ op } r_4 \]
Read-after-Write (RAW)
Anti-dependence
\[ r_3 \leftarrow r_1 \text{ op } r_2 \]
\[ \ldots \]
\[ r_1 \leftarrow r_4 \text{ op } r_5 \]
Write-after-Read (WAR)
Output-dependence
\[ r_3 \leftarrow r_1 \text{ op } r_2 \]
\[ \ldots \]
\[ r_3 \leftarrow r_6 \text{ op } r_7 \]
Write-after-Write (WAW)
We discuss control dependence next lecture
RAW Dependency and Hazard
- Following RAW dependencies lead to hazards in the 5-stage pipeline (from last lecture)
```
              t0   t1   t2   t3   t4   t5   t6   t7   t8   t9
addi ra r- r- IF   ID   EX   MEM  WB
addi r- ra r-      IF   ID   EX   MEM  WB
addi r- ra r-           IF   ID   EX   MEM  WB
addi r- ra r-                IF   ID   EX   MEM  WB
addi r- ra r-                     IF   ID   EX   MEM  WB
addi r- ra r-                          IF   ID   EX   MEM  WB
```
Register Data Hazard Analysis
<table>
<thead>
<tr>
<th>Stage</th>
<th>R/I-Type</th>
<th>LW</th>
<th>SW</th>
<th>Bxx</th>
<th>Jal</th>
<th>Jalr</th>
</tr>
</thead>
<tbody>
<tr>
<td>IF</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ID</td>
<td>read RF</td>
<td>read RF</td>
<td>read RF</td>
<td>read RF</td>
<td></td>
<td>read RF</td>
</tr>
<tr>
<td>EX</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>MEM</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>WB</td>
<td>write RF</td>
<td>write RF</td>
<td></td>
<td></td>
<td>write RF</td>
<td>write RF</td>
</tr>
</tbody>
</table>
- For a given pipeline, when is there a register data hazard between 2 instructions?
- dependence type: RAW, WAR, WAW?
- instruction types involved?
- distance between the two instructions?
Necessary Condition for Data Hazard
\[ \text{dist}_{\text{dependence}}(i,j) \leq \text{dist}_{\text{hazard}}(X,Y) \Rightarrow \text{Hazard!!} \]
\[ \text{dist}_{\text{dependence}}(i,j) > \text{dist}_{\text{hazard}}(X,Y) \Rightarrow \text{Safe} \]
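The necessary condition above reduces to a single comparison; the sketch below states it as a predicate (the constant `DIST_ID_WB` assumes the 5-stage pipeline of these slides, where registers are read in ID and written in WB).

```python
# The necessary condition as a predicate: a hazard exists only when the
# dependence distance does not exceed the pipeline's hazard distance.
def has_hazard(dist_dependence, dist_hazard):
    return dist_dependence <= dist_hazard

# 5-stage pipeline, register read in ID and write in WB: dist(ID, WB) = 3
DIST_ID_WB = 3
```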
RAW Hazard Analysis Example
<table>
<thead>
<tr>
<th>Stage</th>
<th>R/I-Type</th>
<th>LW</th>
<th>SW</th>
<th>Bxx</th>
<th>Jal</th>
<th>Jalr</th>
</tr>
</thead>
<tbody>
<tr>
<td>IF</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ID</td>
<td>read RF</td>
<td>read RF</td>
<td>read RF</td>
<td>read RF</td>
<td></td>
<td>read RF</td>
</tr>
<tr>
<td>EX</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>MEM</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>WB</td>
<td>write RF</td>
<td>write RF</td>
<td></td>
<td></td>
<td>write RF</td>
<td>write RF</td>
</tr>
</tbody>
</table>
- Older \( I_A \) and younger \( I_B \) have RAW hazard iff
- \( I_B \) \((R/I, \text{LW}, \text{SW}, \text{Bxx} \text{ or } \text{JALR})\) reads a register written by \( I_A \) \((R/I, \text{LW}, \text{or } \text{JAL}/\text{R})\)
- \( \text{dist}(I_A, I_B) \leq \text{dist}(\text{ID}, \text{WB}) = 3 \)
What about WAW and WAR hazard?
What about memory data hazard?
Pipeline Stall:
universal hazard resolution
Stall==make the younger instruction wait until the hazard has passed
1. stop all up-stream stages
2. drain all down-stream stages
What should happen in this case?
### Pipeline Stall

```
      t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF    i    j    k    k    k    k    l
ID    h    i    j    j    j    j    k    l
EX         h    i    bub  bub  bub  j    k    l
MEM             h    i    bub  bub  bub  j    k    l
WB                   h    i    bub  bub  bub  j    k    l
```
i: \( rx \leftarrow \_ \)
j: \( \_ \leftarrow rx \)
### Stall
- disable \( pc \) and \( ir \) latching
- control should set \( \text{RegWrite}=0 \) and \( \text{MemWrite}=0 \)
*Based on original figure from [P&H, CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED]*
Stall Condition
- Older I_A and younger I_B have RAW hazard iff
- I_B (R/I, LW, SW, Bxx or JALR) reads a register written by
I_A (R/I, LW, or JAL/R)
- dist(I_A, I_B) ≤ dist(ID, WB) = 3
- Stated constructively, before I_B in ID reads a register, I_B needs to check if any I_A in EX, MEM or WB is going to update it (if so, value currently in RF is “stale”)
Watch out for x0!!
Stall Condition
- Helper functions
- \( rs1(I) \) returns the \( rs1 \) field of \( I \)
- \( use_{rs1}(I) \) returns true if \( I \) requires RF[rs1] and rs1 \( \neq x0 \)
- Stall IF and ID when
- \((rs1(IR)^{rd}_{EX})\&\& use_{rs1}(IR)\&\& RegWrite_{EX}\) or
- \((rs1(IR)^{rd}_{MEM})\&\& use_{rs1}(IR)\&\& RegWrite_{MEM}\) or
- \((rs1(IR)^{rd}_{WB})\&\& use_{rs1}(IR)\&\& RegWrite_{WB}\) or
- \((rs2(IR)^{rd}_{EX})\&\& use_{rs2}(IR)\&\& RegWrite_{EX}\) or
- \((rs2(IR)^{rd}_{MEM})\&\& use_{rs2}(IR)\&\& RegWrite_{MEM}\) or
- \((rs2(IR)^{rd}_{WB})\&\& use_{rs2}(IR)\&\& RegWrite_{WB}\)
It is crucial that the EX, MEM and WB continue to advance normally during stall cycles
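The stall test above can be sketched as a small predicate: compare the ID-stage source registers against the destinations still in flight. This is an illustrative model, not HDL; register numbers follow the slides' convention that x0 never triggers a stall.

```python
# Sketch of the stall condition: compare the ID-stage sources against the
# destination registers of the instructions currently in EX, MEM and WB.
def use_reg(rs):
    # mirrors use_rs1/use_rs2: x0 is hardwired to zero and never stalls
    return rs is not None and rs != 0

def must_stall(rs1, rs2, in_flight):
    """in_flight: [(rd, reg_write), ...] for the instructions in EX, MEM, WB."""
    for rd, reg_write in in_flight:
        if reg_write and ((use_reg(rs1) and rs1 == rd) or
                          (use_reg(rs2) and rs2 == rd)):
            return True
    return False
```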
Impact of Stall on Performance
- Each stall cycle corresponds to 1 lost ALU cycle
- For a program with N instructions and S stall cycles, Average IPC=N/(N+S)
- S depends on
- frequency of hazard-causing dependencies
- exact distance between the hazard-causing instruction pair
- distance between hazard-causing dependencies
(suppose i_1, i_2 and i_3 all depend on i_0; once i_1's hazard is resolved by stalling, i_2 and i_3 do not stall)
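The IPC formula above is a one-liner; the sketch below just restates it so the cost of stalls can be computed directly.

```python
# Average IPC under stalls: every stall cycle is a lost issue slot, so
# IPC = N / (N + S) for N instructions and S stall cycles.
def average_ipc(n_instructions, n_stall_cycles):
    return n_instructions / (n_instructions + n_stall_cycles)
```

For example, 100 instructions with 25 stall cycles yields an average IPC of 0.8.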
Sample Assembly [P&H]
```assembly
for (j=i-1; j>=0 && v[j] > v[j+1]; j-=1) { ...... }
addi $s1, $s0, -1 # 3 stalls
for2tst:
slti $t0, $s1, 0 # 3 stalls
bne $t0, $zero, exit2
sll $t1, $s1, 2 # 3 stalls
add $t2, $a0, $t1 # 3 stalls
lw $t3, 0($t2) # 3 stalls
lw $t4, 4($t2) # 3 stalls
slt $t0, $t4, $t3 # 3 stalls
beq $t0, $zero, exit2
...........
addi $s1, $s1, -1 # 3 stalls
j for2tst
exit2:
```
Data Forwarding (aka Register Bypassing)
- It is intuitive to think of RF as state
- “add rx ry rz” literally means get input values from RF[ry] and RF[rz] and put result in RF[rx]
- But, RF is just a part of a computing abstraction
- “add rx ry rz” means 1. inputs are the results of the last instructions to have defined the values of RF[ry] and RF[rz], and 2. until another instruction redefines RF[rx], younger instructions that refers to RF[rx] should use this instruction’s result
- What matters is to maintain the correct “dataflow” between operations, thus
```
add  ra r- r-   IF   ID   EX   MEM  WB
addi r- ra r-        IF   ID   EX   MEM  WB
                          └── ra forwarded from the older EX result
```
Resolving RAW Hazard by Forwarding
- Older IA and younger IB have RAW hazard iff
- IB (R/I, LW, SW, Bxx or JALR) reads a register written by IA (R/I, LW, or JAL/R)
- dist(IA, IB) ≤ dist(ID, WB) = 3
- Stated constructively, before IB in ID reads a register, IB needs to check if any IA in EX, MEM or WB is going to update it (if so, value currently in RF is “stale”)
- If the value is already produced, don’t stall!
- retrieve value from datapath before RF write
- retrieve from the youngest definition if multiple definitions are outstanding
Forwarding Paths (v1)
With forwarding
dist(i, j) = 1
dist(i, j) = 2
dist(i, j) = 3
internal forward?
better if EX is the fastest stage
**Forwarding Logic (for v1)**
if \( (rs_{1,ID} \neq 0) \&\& (rs_{1,ID} == rd_{EX}) \&\& \text{RegWrite}_{EX} \) then
forward operand from EX // dist=1
else if \( (rs_{1,ID} \neq 0) \&\& (rs_{1,ID} == rd_{MEM}) \&\& \text{RegWrite}_{MEM} \) then
forward operand from MEM // dist=2
else if \( (rs_{1,ID} \neq 0) \&\& (rs_{1,ID} == rd_{WB}) \&\& \text{RegWrite}_{WB} \) then
forward operand from WB // dist=3
else
use \( A_{ID} \) (operand from RF) // dist > 3
Ordering matters!! Must check youngest match first
Why doesn’t \texttt{use\_rs1()} appear in the forwarding logic?
Wrong value forwarded if matched against LW in EX?
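The priority ordering in the forwarding logic can be modelled as a search from the youngest producer outward, falling back to the register file only when no in-flight match exists. A sketch of this mux (all names invented, not HDL):

```python
# Sketch of the v1 forwarding priority: check the youngest in-flight
# producer first, so the most recent definition of the register wins.
def forward_operand(rs, ex, mem, wb, rf_value):
    """Each of ex/mem/wb is (rd, reg_write, value), or None if empty."""
    for stage in (ex, mem, wb):        # ordering matters: youngest match first
        if stage is None:
            continue
        rd, reg_write, value = stage
        if reg_write and rs != 0 and rs == rd:
            return value
    return rf_value                    # dist > 3: the RF copy is up to date
```

If the same register is pending in both EX and MEM, the EX (younger) value is returned, matching the "must check youngest match first" rule above.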
---
**Data Hazard Analysis (with Forwarding)**
<table>
<thead>
<tr>
<th>Stage</th>
<th>R/I-Type</th>
<th>LW</th>
<th>SW</th>
<th>Bxx</th>
<th>Jal</th>
<th>Jalr</th>
</tr>
</thead>
<tbody>
<tr>
<td>IF</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ID</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>EX</td>
<td>use / produce</td>
<td>use</td>
<td>use</td>
<td>use</td>
<td>produce</td>
<td>use / produce</td>
</tr>
<tr>
<td>MEM</td>
<td></td>
<td>produce</td>
<td>(use)</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>WB</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
- Even with data-forwarding, RAW dependence on an immediate preceding LW instruction produces a hazard
- \( \text{Stall} = (rs(IR_{ID}) == rd_{EX}) \;\&\&\; \text{use\_rs}(IR_{ID}) \;\&\&\; \text{MemRead}_{EX} \) (i.e., the instruction in EX is a load)
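The load-use stall condition above is the one case forwarding cannot remove: the load's value only exists after MEM. As a predicate (illustrative names, matching the stall formula above):

```python
# Load-use stall: even with forwarding, a dependent instruction directly
# behind a load must stall one cycle, since the value appears after MEM.
def load_use_stall(rs_id, uses_rs, rd_ex, mem_read_ex):
    return mem_read_ex and uses_rs and rs_id == rd_ex
```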
**MIPS Load “Delay Slot”**
- **R2000 defined load with arch. latency of 1 inst**
- instruction immediately following a load (in the “delay slot”) still sees the old value
- dependent instruction at least distance 2, **no more hazard!**
- **Delay slot vs dynamic stalling**
- fill with an independent instruction (no difference)
- if not, fill with a would-be WAR instruction (gain 1 cycle)
- if not, fill with a NOP (no difference)
- **Can’t lose on 5-stage . . . good idea?**
Hint: ISA feature made to fit microarchitecture choice
---
**Sample Assembly [P&H]**
```assembly
for (j=i-1; j>=0 && v[j] > v[j+1]; j--) { .... }
addi $s1, $s0, -1
for2tst:
slti $t0, $s1, 0
bne $t0, $zero, exit2
sll $t1, $s1, 2
add $t2, $a0, $t1
lw $t3, 0($t2)
lw $t4, 4($t2)
nop
slt $t0, $t4, $t3
beq $t0, $zero, exit2
........
addi $s1, $s1, -1
j for2tst
exit2:
```
Terminology
- **Dependency**
- ordering requirement between instructions
- **Pipeline Hazard**
- (potential) violation of dependencies
- **Hazard Resolution**
- static \(\Rightarrow\) schedule instructions at compile time to avoid hazards
- dynamic \(\Rightarrow\) detect hazard and adjust pipeline operation
- **Pipeline Interlock** (i.e., stall)
MIPS = Microprocessor without Interlocked Pipeline Stages
---
Dividing into Stages
Is this the correct partitioning? Why not 4 or 6 stages? Why not different boundaries?
Why not very deep pipelines?
- 5-stage pipeline still has plenty of combinational delay between registers
- “Superpipelining” ⇒ increase pipelining such that even intrinsic operations (e.g. ALU, RF access, memory access) require multiple stages
- What’s the problem?
![Diagram]
Inst₀: \( r₁ \leftarrow r₂ + r₃ \)
Inst₁: \( r₄ \leftarrow r₁ + 2 \)
Intel P4’s Superpipelined Adder Hack
32-bit addition pipelined over 2 stages, \( BW=1/\text{latency}_{16\text{-bit-add}} \)
No stall between back-to-back dependencies
When you can’t split a stage . . .
I (BW=2/T) → A (BW=1/T) → B (BW=1/T) → O (BW=2/T)
Dependencies and Pipelining
(architecture vs. microarchitecture)
Sequential and atomic instruction semantics
True dependence between two instructions may only require ordering of certain sub-operations
This is an overspecification. It defines what is correct but doesn’t say we must actually do it this way
---
TUTORIAL
TRACK II
ADVANCED ADA TOPICS
By
Major Patricia Lawlis, Air Force Institute of Technology
and
Captain Dean Gonzalez, U.S. Air Force Academy
and
Lieutenant David Cook, U.S. Air Force Academy
Tutorial Track II. Advanced Ada Topics
MAJ Patricia Lawlis, CAPT Dean Gonzalez, and LT David Cook
Ada Software Education and Training Team
Ada Joint Program Office
3E 114, The Pentagon
Washington, DC 20301-3081
Ada Joint Program Office
Approved for public release; distribution unlimited.
This document contains prints of viewgraphs presented at the Advanced Ada Topics Tutorial, Track II June 9, 1987. Topics covered were Data Abstraction, Tasking, Strong Typing, and Exceptions.
Ada* Tasking
Abstraction of Process
by
Dean W. Gonzalez
David A. Cook
303-472-2136
AV 259-2136
U.S. Air Force Academy
*Ada is a registered trademark of the U.S. Government, Ada Joint Program Office.
Ada Tasking
Overview
Define Ada Tasking
Define Synchronization Mechanism
Examples
Ada Tasking
Task Definition
• A program unit for concurrent execution
• Never a library unit
• started after elaboration of parent, and before the parent's first statement
• may also be a type and treated as an object
• Master is a ...
Library Package
Subprogram
Block Statement
Other Task
Callee Provides Service
1. Immediate Response
2. Wait for a while
3. Wait forever
Service is Requested with an entry call statement
Service is provided with an accept statement
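The entry-call/accept handshake can be loosely imitated outside Ada with a request queue plus a per-call reply queue. This Python sketch is only an analogue (all names are ours, and it does not reproduce full rendezvous semantics): the server "task" accepts one call at a time, and the caller blocks until the rendezvous completes.

```python
import queue
import threading

def server(requests):
    """Plays the callee: repeatedly 'accept' a call and reply."""
    while True:
        reply_box = requests.get()      # like an accept statement
        if reply_box is None:           # shutdown signal (our convention)
            return
        reply_box.put("food")           # body of the rendezvous

def entry_call(requests):
    """Plays the caller: issue an entry call and wait to be served."""
    reply_box = queue.Queue(maxsize=1)
    requests.put(reply_box)             # like McD.SERVE(...)
    return reply_box.get()              # caller is blocked until served

requests = queue.Queue()
t = threading.Thread(target=server, args=(requests,))
t.start()
tray = entry_call(requests)
requests.put(None)                      # let the server terminate
t.join()
print(tray)                             # -> food
```

The key property mirrored here is that both parties meet: the caller cannot proceed until the callee has executed the rendezvous body.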
Ada Tasking
Synchronization Mechanisms
- Global Variables
- Rendezvous
Main Program in a Task
Caller Requests Service
1. Immediate Request
2. Wait for a While
3. Wait Forever
Scenario I
"The Golden Arches"
McD Tasks:
Service Provided: Food
Service Requested: None
Gonzo Tasks:
Service Provided: None
Service Requested: Food
Ada Tasking
- Select statement provides ability to program the different 'request' and 'provide' modes.
- Guards are "if statements" for the providing service.
- Termination is an alternative if a service is no longer needed.
Task McD is
entry SERVE(TRAY_OF : out FOOD_TYPE);
end McD;
Task GONZO;
Task Body McD is
NEW_TRAY : FOOD_TYPE;
function COOK return FOOD_TYPE is ...
begin
loop
accept SERVE (TRAY_OF : out FOOD_TYPE) do
TRAY_OF := COOK;
end;
end loop;
end McD;
Task Body GONZO is
MY_TRAY : FOOD_TYPE;
procedure CONSUME (MY_TRAY : in FOOD_TYPE) is ... begin
loop
McD.Serve (MY_TRAY);
CONSUME (MY_TRAY);
end loop;
end GONZO;
Task Body McD is
NEW_TRAY : FOOD_TYPE;
function COOK return FOOD_TYPE is
...
end COOK;
begin
loop
NEW_TRAY := COOK;
accept SERVE (TRAY_OF : out FOOD_TYPE) do
TRAY_OF := NEW_TRAY;
end SERVE;
end loop;
end McD;
loop
NEW_TRAY := COOK;
select
accept SERVE (TRAY_OF : out FOOD_TYPE) do...
TRAY_OF := NEW_TRAY;
end SERVE;
else
null;
end select;
end loop;
loop
NEW_TRAY := COOK;
select
accept SERVE (TRAY_OF : out FOOD_TYPE) do...
TRAY_OF := NEW_TRAY;
end SERVE;
or
terminate;
end select;
end loop;
loop
NEW_TRAY := COOK;
select
accept SERVE (TRAY_OF : out FOOD_TYPE) do...
TRAY_OF := NEW_TRAY;
end SERVE;
or
delay 15 * MINUTES;
end select;
end loop;
loop
select
McD.SERVE(MY_ORDER); consume(MY_ORDER);
else
select
BK.SERVE(MY_ORDER); consume(MY_ORDER);
else
exit;
end select;
end select;
end loop;
loop
select
McD.SERVE(MY_ORDER); consume (MY_ORDER);
or
delay 16.0 * MINUTES;
select
BK.SERVE(MY_ORDER); consume (MY_ORDER);
or
delay 5.0 * MINUTES;
exit;
end select;
end select;
end loop;
loop
select
McD.SERVE (MY_ORDER);
or
BK.SERVE (MY_ORDER);
end select;
consume;
end loop;
loop
select
McD.SERVE (MY_ORDER);
or
BK.SERVE (MY_ORDER);
else
delay 10 * MINUTES;
exit;
end select;
consume;
end loop;
ADA TASKING
SCENARIO II
"NO FREE LUNCH"
MCD TASK
Service Requested: Money
Service Provided: Food
GONZO TASK
Service Requested: Food
Service Provided: Money
Task McD is
entry SERVE ( ORDER : out FOOD_TYPE;
COST : in MONEY_TYPE);
end McD;
TASK GONZO;
-- OR
Task McD is
entry SERVE ( ORDER : out FOOD_TYPE);
end McD;
Task GONZO is
entry PAY ( COST : in MONEY_TYPE;
PAYMENT : out MONEY_TYPE);
end GONZO;
Task Body McD is
CASH_DRAWER : MONEY_TYPE;
NEW_ORDER : FOOD_TYPE;
COST, AMOUNT_PAID : MONEY_TYPE;
function COOK ................
function CALC_COST (ORDER : in FOOD_TYPE )
return MONEY_TYPE is ...........
begin
loop
NEW_ORDER := COOK;
select
accept SERVE(ORDER : out FOOD_TYPE) do
ORDER := NEW_ORDER;
COST := CALC_COST (NEW_ORDER);
GONZO.PAY (COST, AMOUNT_PAID);
CASH_DRAWER := CASH_DRAWER + AMOUNT_PAID;
end SERVE;
or
delay 15.0 * MINUTES;
end select;
end loop;
end McD;
Task Body GONZO is
ACCOUNT_BALANCE : MONEY_TYPE;
MY_ORDER : FOOD_TYPE;
function GO_TO_WORK return MONEY_TYPE is...
begin
ACCOUNT_BALANCE := GO_TO_WORK * ACCOUNT_BALANCE;
loop
Mcd.SERVE (MY_ORDER);
accept PAY (COST : in MONEY_TYPE;
PAYMENT : out MONEY_TYPE) do
ACCOUNT_BALANCE := ACCOUNT_BALANCE - COST;
PAYMENT := COST;
end PAY;
end loop;
end GONZO;
Ada Tasking
Scenario II A
"No wait for the waiters"
McD Task:
Service Requested: Money
Service Provided: Food
GONZO Task:
Service Requested: Food
Service Provided: Money
Manager Task:
Service Requested: None
Service Provided: Make new waiter
Task type McD is
entry SERVE....
end McD;
Task GONZO is
entry PAY....
end GONZO;
Task MANAGER;
Type CASHIER_POINTER is access McD;
Type REGISTER_TYPE is array (1..NO_REGISTERS)
of CASHIER_POINTER;
THE_REGISTERS : REGISTER_TYPE := (others => new McD);
Task Body McD is
...
...
...
begin
loop
NEW_ORDER := COOK;
select
accept SERVE.....
...
end SERVE;
or
delay 2.0 * MINUTES;
exit;
end select;
end loop;
end McD;
Task Body GONZO is
...
...
begin
...
...
---Now, GONZO has to search for the open
-- registers, and select the one with
-- the shortest line
...
...
THE_REGISTERS(MY_REGISTER).SERVE;
...
end GONZO;
Task Body MANAGER is
...
...
begin
loop
-- The MANAGER will look at the queue lengths of
-- the open registers, and, when necessary
-- will open registers that are currently
-- closed
...
if ............ then
THE_REGISTERS(CLOSED_REGISTER) := new McD;
end if;
end loop;
end MANAGER;
Ada Tasking
Scenario III
"A Sugar Cone, Please"
BR Task
Service Provided: Ice Cream
Service Requested: An Order
Servomatic Task
Service Provided: A Number
Customer Task
Service Provided: An Order
Service Requested: Ice Cream
task BR is
entry SERVE (ICE_CREAM : out DESSERT_TYPE);
end BR;
task SERVOMATIC is
entry TAKE (A_NUMBER : out SERVOMATIC_NUMBERS);
end SERVOMATIC;
task type CUSTOMER_TASK is
entry REQUEST (ORDER : out ORDER_TYPE);
end CUSTOMER_TASK;
type CUSTOMER is access CUSTOMER_TASK;
CUSTOMERS : array (SERVOMATIC_NUMBERS) of CUSTOMER;
task body BR is
NEXT_CUSTOMER: SERVOMATIC_NUMBERS := SERVOMATIC_NUMBERS'last;
CURRENT_ORDER: ORDER_TYPE;
ICE_CREAM: DESSERT_TYPE;
function MAKE (ORDER: in ORDER_TYPE) return DESSERT_TYPE is ...
begin
loop
begin
NEXT_CUSTOMER := (NEXT_CUSTOMER + 1) mod SERVOMATIC_NUMBERS'last;
CUSTOMERS(NEXT_CUSTOMER).REQUEST (CURRENT_ORDER);
ICE_CREAM := MAKE(CURRENT_ORDER);
accept SERVE(ICE_CREAM: out DESSERT_TYPE) do
ICE_CREAM := BR.ICE_CREAM;
end SERVE;
exception
when TASKING_ERROR => null;
--customer not here
end;
end loop;
end;
task body SERVOMATIC is
NEXT_NUMBER : SERVOMATIC_NUMBERS :=
SERVOMATIC_NUMBERS'first;
begin
loop
accept TAKE(A_NUMBER : out SERVOMATIC_NUMBERS) do
A_NUMBER := NEXT_NUMBER;
end TAKE;
NEXT_NUMBER := (NEXT_NUMBER + 1) mod
SERVOMATIC_NUMBERS'last;
end loop;
end SERVOMATIC;
task body CUSTOMER_TASK is
MY_ORDER : ORDER_TYPE := ...; -- some value
MY_DESSERT : DESSERT_TYPE;
begin
accept REQUEST ( ORDER : out ORDER_TYPE) do
ORDER := MY_ORDER;
end REQUEST;
BR.SERVE(MY_DESSERT);
-- eat the dessert, or do whatever
end;
Ada Tasking
Scenario IV
"Lets Hide The Spooler Task"
Printer Package
Action - "Hides" The Print Spooler by Renaming Task Entry
Spooler Task:
Service Requested: Print
Service Provided: Virtual Print
Printer Task:
Service Requested: File Name
Service Provided: Print
Package PRINTER_PACKAGE is
... task SPOOLER is
entry PRINT_FILE (NAME : in STRING;
PRIORITY : in NATURAL);
entry PRINTER_READY;
end SPOOLER;
...
procedure PRINT (NAME : in STRING;
PRIORITY : in NATURAL := 10)
renames SPOOLER.PRINT_FILE;
end PRINTER_PACKAGE;
Package Body PRINTER_PACKAGE is
... task PRINTER is
entry PRINT_FILE (NAME : in STRING);
end PRINTER;
...
...
end PRINTER_PACKAGE;
task body SPOOLER is
begin
loop
select
accept PRINTER_READY do
PRINTER.PRINT_FILE ( REMOVE (QUEUE) );
-- Remove would determine the next job and
-- send it to the actual printer
end PRINTER_READY;
else
null;
end select;
select
accept PRINT_FILE ( NAME : in STRING;
PRIORITY : NATURAL ) do
INSERT ( NAME, PRIORITY);
-- put name on queue or queues according
-- to priority
end PRINT_FILE;
else
null;
end select;
end loop;
end SPOOLER;
task body PRINTER is
begin
loop
SPOOLER.PRINTER_READY;
accept PRINT_FILE ( NAME : in STRING ) do
if NAME'length /= 0 then .......
--print the file
else
delay 10.0 * seconds;
end if;
end PRINT_FILE;
end loop;
end PRINTER;
with PRINTER_PACKAGE;
procedure MAIN is
begin
loop
--process several files
PRINTER_PACKAGE.PRINT (A_FILE, A_PRIORITY);
end loop;
end MAIN;
APPLICATIONS FOR TASKS
- CONCURRENT OPERATIONS
- ROUTING MESSAGES
- SHARED RESOURCE MANAGEMENT
- INTERRUPT HANDLING
**MATRIX MULTIPLICATION**
\[
\begin{bmatrix}
1 & 1 & 1 \\
2 & 2 & 0
\end{bmatrix}
\times
\begin{bmatrix}
2 \\
1 \\
1
\end{bmatrix}
=
\begin{bmatrix}
4 \\
6
\end{bmatrix}
\]
type ROW_OR_COL is array (integer range <>) of integer;
type PTR is access ROW_OR_COL;
task type PARTIAL is
entry SEND (ROW, COL : ROW_OR_COL);
entry RECEIVE (RESULT : out integer);
end PARTIAL;
MAIN
begin
-- send row and col
-- receive partial product
end
task body PARTIAL is
PRODUCT : integer := 0;
ROW_PTR : PTR;
COL_PTR : PTR;
begin
accept SEND (ROW,COL : ROW_OR_COL) do
ROW_PTR := new ROW_OR_COL'(ROW);
COL_PTR := new ROW_OR_COL'(COL);
end SEND;
for I in ROW_PTR.all'range loop
PRODUCT := PRODUCT +
ROW_PTR(I) * COL_PTR(I);
end loop;
accept RECEIVE (RESULT : out integer) do
RESULT := PRODUCT;
end RECEIVE;
end PARTIAL;
procedure MAIN is
COLS : constant := 10;
ROWS : constant := 10;
type MATRIX is array (1 .. ROWS) of
ROW_OR_COL (1 .. COLS);
MAT : MATRIX;
VECTOR : ROW_OR_COL (1 .. COLS);
FINAL : ROW_OR_COL (1 .. ROWS);
....
declare
WORKER : array (1 .. ROWS) of PARTIAL; -- tasks
begin
for I in 1 .. ROWS loop
WORKER(I).SEND(ROW => MAT(I),
COL => VECTOR);
end loop;
for I in 1 .. ROWS loop
WORKER(I).RECEIVE (FINAL(I));
end loop;
end; -- block
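The per-row worker-task pattern above (one PARTIAL task per row, then collect the results) maps onto threads in most languages. A Python sketch of the same 2×3 example — an analogue only, with our own names, where `join` stands in for the RECEIVE rendezvous:

```python
import threading

def partial(row, col, out, i):
    """One worker's job: the dot product of its row with the vector."""
    out[i] = sum(r * c for r, c in zip(row, col))

mat = [[1, 1, 1], [2, 2, 0]]
vector = [2, 1, 1]
final = [0] * len(mat)

workers = [threading.Thread(target=partial, args=(mat[i], vector, final, i))
           for i in range(len(mat))]
for w in workers:
    w.start()                  # like WORKER(I).SEND(ROW => ..., COL => ...)
for w in workers:
    w.join()                   # like WORKER(I).RECEIVE(FINAL(I))
print(final)                   # -> [4, 6]
```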
- Write task specifications to send an integer from task A to task B.
* WRITE SPECIFICATIONS AND BODIES FOR THE FOLLOWING SYSTEM. TASK C WILL REPEATEDLY GET AN INTEGER FROM TASK A AND SEND IT ON TO TASK B.
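One way to picture the exercise's A → C → B relay outside Ada is with queues in place of entries. This Python sketch uses invented names and a `None` end-marker convention of our own; it is not a solution in Ada terms, just the data flow:

```python
import queue
import threading

a_to_c = queue.Queue()   # stands in for the A-to-C entry
c_to_b = queue.Queue()   # stands in for the C-to-B entry
received = []

def task_a():
    for i in range(3):
        a_to_c.put(i)        # A hands an integer to C
    a_to_c.put(None)         # end marker

def task_c():
    while True:
        item = a_to_c.get()  # C repeatedly gets from A ...
        c_to_b.put(item)     # ... and sends it on to B
        if item is None:
            return

def task_b():
    while True:
        item = c_to_b.get()
        if item is None:
            return
        received.append(item)

threads = [threading.Thread(target=f) for f in (task_a, task_c, task_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(received)              # -> [0, 1, 2]
```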
type PRIORITY is (LOW, MEDIUM, HIGH);
task SWITCH is
entry SEND (PRIORITY)
(M : in string);
end SWITCH;
task body SWITCH is
begin
loop
select
accept SEND(HIGH) do ... end SEND;
or
when SEND(HIGH)'count = 0 =>
accept SEND(MEDIUM) do ... end SEND;
or
when SEND(HIGH)'count = 0 and
SEND(MEDIUM)'count = 0 =>
accept SEND(LOW) do ... end SEND;
end select;
end loop;
end SWITCH;
task SYNCHRONIZER is
entry PUT (ITEM : in SOME_TYPE);
entry GET (ITEM : out SOME_TYPE);
end SYNCHRONIZER;
task body SYNCHRONIZER is
SPOT : SOME_TYPE;
begin
loop
accept PUT (ITEM : in SOME_TYPE) do
SPOT := ITEM;
end PUT;
accept GET (ITEM : out SOME_TYPE) do
ITEM := SPOT;
end GET;
end loop;
end SYNCHRONIZER;
CONTROLLING RESOURCES
- Several concerns are present when dealing with parallelism that are not present when dealing in a purely sequential mode.
- It is important to be able to assure that a value is not being changed by one user at the precise moment that it is being referenced by another user.
- Ada provides a pragma 'shared' which can help:
INDEX : integer;
pragma SHARED(INDEX);
- Enforces mutually exclusive access.
- Available for scalar and access types only.
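The hazard being guarded against — one user changing a value at the precise moment another references it — shows up in any language with threads. A Python sketch of the same idea (our own names; a lock plays the mutual-exclusion role, though it is a looser analogue than pragma SHARED):

```python
import threading

index = 0
index_lock = threading.Lock()   # enforces mutually exclusive access

def bump(times):
    global index
    for _ in range(times):
        with index_lock:        # no other thread can touch index here
            index += 1          # read-modify-write is now safe

workers = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(index)                    # -> 40000
```

Without the lock, the four read-modify-write sequences can interleave and lose updates; with it, the total is deterministic.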
task SEMAPHORE is
entry SEIZE;
entry RELEASE;
end SEMAPHORE;
task body SEMAPHORE is
IN_USE : boolean := false;
begin
loop
select
when not IN_USE =>
accept SEIZE do
IN_USE := true;
end SEIZE;
or
when IN_USE =>
accept RELEASE do
IN_USE := false;
end RELEASE;
end select;
end loop;
end SEMAPHORE;
ENCAPSULATING A DATA ITEM
task PROTECTED is
entry SET (OBJ : in integer);
entry GET (OBJ : out integer);
end PROTECTED;
task body PROTECTED is
LOCAL : integer;
begin
loop
select
accept SET (OBJ : in integer) do
LOCAL := OBJ;
end SET;
or
accept GET (OBJ : out integer) do
OBJ := LOCAL;
end GET;
end select;
end loop;
end PROTECTED;
task PUMP;
task SENDER is
entry READ (ITEM : out SOME_TYPE);
end SENDER;
task RECEIVER is
entry WRITE (ITEM : in SOME_TYPE);
end RECEIVER;
task body PUMP is
THE_ITEM : SOME_TYPE;
begin
loop
SENDER.READ(THE_ITEM);
RECEIVER.WRITE(THE_ITEM);
end loop;
end PUMP;
task body SENDER is separate;
task body RECEIVER is separate;
HARDWARE INTERRUPTS
- For architectures that 'jump' to a certain hardware address upon receipt of an interrupt
- A task entry is associated with the address
- Priority is higher than any user-defined
```
task INTERRUPT_HANDLER is
entry DONE;
for DONE use at 16#40#;
end INTERRUPT_HANDLER;
task body INTERRUPT_HANDLER is
begin
accept DONE do
...
end DONE;
end INTERRUPT_HANDLER;
```
A cyclic executive might deal with several levels of processing:
- Event driven processing (high priority, perhaps interrupt handling)
- Periodic (cyclic) processing
- Background processing (low priority)
procedure EXECUTIVE is
task TASK_1 is
pragma PRIORITY (10);
entry EVENT;
end TASK_1;
task TASK_2 is
entry EVENT;
for EVENT use at 16#110#; -- one tick per cycle
end TASK_2;
task BACKGROUND is
pragma PRIORITY (0);
end BACKGROUND;
task PERIODIC is
pragma PRIORITY (5);
entry TICK;
end PERIODIC;
task body PERIODIC is
...
begin
loop
accept TICK;
... -- process a frame
end loop;
end PERIODIC;
-- bodies (or stubs) of other tasks go here
end EXECUTIVE;
Tutorial on Ada® Exceptions
by
Major Patricia K. Lawlis
lawlis%asu@csnet-relay
Air Force Institute of Technology (AFIT)
and
Arizona State University (ASU)
9 June 1987
© Ada is a registered trademark of the U. S. Government - Ada Joint Program Office
References
Outline
=> Overview
- Naming an exception
- Creating an exception handler
- Raising an exception
- Handling exceptions
- Turning off exception checking
- Tasking exceptions
- More examples
Overview
- What is an exception
- Ada exceptions
- Comparison
- the American way
- using exceptions
What Is an Exception
- A run time error
- An unusual or unexpected condition
- A condition requiring special attention
- Other than normal processing
Ada Exceptions
- An exception has a name
- may be predefined
- may be declared
- The exception is raised
- may be raised implicitly by run time system
- may be raised explicitly by raise statement
- The exception is handled
- exception handler may be placed in any frame
- exception propagates until handler is found
- if no handler anywhere, process aborts
package Stack_Package is
type Stack_Type is limited private;
procedure Push (Stack : in out Stack_Type;
Element : in Element_Type;
Overflow_Flag : out boolean);
...
end Stack_Package;
with Text_IO;
with Stack_Package; use Stack_Package;
procedure Flag_Waving is
...
Stack : Stack_Type;
Element : Element_Type;
Flag : boolean;
begin
...
Push (Stack, Element, Flag);
if Flag then
Text_IO.Put ("Stack overflow");
...
end if;
...
end Flag_Waving;
package Stack_Package is
type Stack_Type is limited private;
Stack_Overflow,
Stack_Underflow : exception;
procedure Push (Stack : in out Stack_Type;
Element : in Element_Type);
-- may raise Stack_Overflow
end Stack_Package;
with Text_IO;
with Stack_Package; use Stack_Package;
procedure More_Natural is
Stack : Stack_Type;
Element : Element_Type;
begin
...
Push (Stack, Element);
...
exception
when Stack_Overflow =>
Text_IO.Put ("Stack overflow");
...
end More_Natural;
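The flag-versus-exception contrast reads the same way in Python. A sketch of both halves (Stack names are borrowed from the slide; the class itself is ours) — the exception version keeps the normal path clean and moves the error response into a handler:

```python
class StackOverflow(Exception):
    """Plays the role of the declared Stack_Overflow exception."""

class Stack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, element):
        if len(self.items) == self.capacity:
            raise StackOverflow          # like: raise Stack_Overflow;
        self.items.append(element)

stack = Stack(capacity=1)
messages = []
try:
    stack.push(1)
    stack.push(2)                        # second push overflows
except StackOverflow:                    # like the handler in More_Natural
    messages.append("Stack overflow")

print(messages)                          # -> ['Stack overflow']
```

The flag-based version would instead thread a boolean out of every `push` call and test it after each one, as Flag_Waving does.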
Outline
• Overview
=> Naming an exception
• Creating an exception handler
• Raising an exception
• Handling exceptions
• Turning off exception checking
• Tasking exceptions
• More examples
Naming an Exception
- Predefined exceptions
- Declaring exceptions
- I/O exceptions
Predefined Exceptions
- In package STANDARD (also see chap 11 of LRM)
- CONSTRAINT_ERROR
violation of range, index, or discriminant constraint...
- NUMERIC_ERROR
execution of a predefined numeric operation cannot deliver a correct result
- PROGRAM_ERROR
attempt to access a program unit which has not yet been elaborated...
- STORAGE_ERROR
storage allocation is exceeded...
- TASKING_ERROR
exception arising during intertask communication
Declaring Exceptions
```
exception_declaration ::= identifier_list : exception;
```
- Exception may be declared anywhere an object declaration is appropriate
- However, exception is not an object
- may not be used as subprogram parameter, record or array component
- has same scope as an object, but its effect may extend beyond its scope
Example:
```
procedure Calculation is
Singular : exception;
Overflow, Underflow : exception;
begin
...
end Calculation;
```
• Exceptions relating to file processing
• In predefined library unit IO_EXCEPTIONS
(also see chap 14 of LRM)
• TEXT_IO, DIRECT_IO, and SEQUENTIAL_IO with it
package IO_EXCEPTIONS is
NAME_ERROR : exception; -- invalid file name
USE_ERROR : exception; -- operation invalid for the file
STATUS_ERROR : exception; -- file not open, or already open
MODE_ERROR : exception; -- operation wrong for the file mode
DEVICE_ERROR : exception; -- hardware malfunction
END_ERROR : exception; -- attempt to read beyond end of file
DATA_ERROR : exception; -- input element has wrong type
LAYOUT_ERROR : exception; -- for text processing
end IO_EXCEPTIONS;
Outline
- Overview
- Naming an exception
=> Creating an exception handler
- Raising an exception
- Handling exceptions
- Turning off exception checking
- Tasking exceptions
- More examples
Creating an Exception Handler
- Defining an exception handler
- Restrictions
- Handler example
Defining an Exception Handler
- Exception condition is "caught" and "handled" by an exception handler
- Exception handler may appear at the end of any frame (block, subprogram, package or task body)
```
begin
...
exception
-- exception handler(s)
end;
```
- Form similar to case statement
```
exception_handler ::=
when exception_choice { | exception_choice} =>
sequence_of_statements
exception_choice ::= exception_name | others
```
Restrictions
- Exception handlers must be at the end of a frame
- Nothing but exception handlers may lie between exception and end of frame
- A handler may name any visible exception declared or predefined
- A handler includes a sequence of statements
- response to exception condition
- A handler for others may be used
- must be the last handler in the frame
- handles all exceptions not listed in previous handlers of the frame (including those not in scope of visibility)
- can be the only handler in the frame
procedure Whatever is
Problem_Condition : exception;
begin
...
exception
when Problem_Condition =>
Fix_It;
when CONSTRAINT_ERROR =>
Report_It;
when others =>
Punt;
end Whatever;
Outline
- Overview
- Naming an exception
- Creating an exception handler
=> Raising an exception
- Handling exceptions
- Turning off exception checking
- Tasking exceptions
- More examples
Raising an Exception
- How exceptions are raised
- Effects of raising an exception
- Raising example
How Exceptions are Raised
- Implicitly by run time system
- predefined exceptions
- Explicitly by raise statement
```
raise_statement ::= raise [exception_name];
```
- the name of the exception must be visible at the point of the raise statement
- a raise statement without an exception name is allowed only within an exception handler
Effects of Raising an Exception
- Control transfers to exception handler at end of frame (if one exists)
- Exception is lowered
- Sequence of statements in exception handler is executed
- Control passes to end of frame
- If frame does not contain an appropriate exception handler, the exception is propagated
procedure Whatever is
Problem_Condition : exception;
Real_Bad_Condition : exception;
begin
...
if Problem_Arises then
raise Problem_Condition;
end if;
...
if Serious_Problem then
raise Real_Bad_Condition;
end if;
...
exception
when Problem_Condition =>
Fix_It;
when CONSTRAINT_ERROR =>
Report_It;
when others =>
Punt;
end Whatever;
Outline
- Overview
- Naming an exception
- Creating an exception handler
- Raising an exception
=> Handling exceptions
- Turning off exception checking
- Tasking exceptions
- More examples
Handling Exceptions
- How exception handling can be useful
- Which exception handler is used
- Sequence of statements in exception handler
- Propagation
- Propagation example
How Exception Handling Can Be Useful
- Normal processing could continue if
- cause of exception condition can be "repaired"
- alternative approach can be used
- operation can be retried
- Degraded processing could be better than termination
- for example, safety-critical systems
- If termination is necessary, "clean-up" can be done first
Which Exception Handler Is Used
- If exception is raised during normal execution, system looks for an exception handler at the end of the frame in which the exception occurred.
- If exception is raised during elaboration of the declarative part of a frame:
- elaboration is abandoned and control goes to the end of the frame with the exception still raised.
- exception part of the frame is not searched for an appropriate handler.
- effectively, the calling unit will be searched for an appropriate handler.
- if elaboration of library unit, program execution is abandoned.
-- all library units are elaborated with the main program.
- If exception is raised in exception handler:
- handler may contain block(s) with handler(s).
- if not handled locally within handler, control goes to end of frame with exception raised.
Sequence of Statements in Exception Handler
- Handler completes the execution of the frame
- handler for a function should usually contain a return statement
- Statements can be of arbitrary complexity
- can use most any language construct that makes sense in that context
- cannot use goto statement to transfer into a handler
- if handler is in a block inside a loop, could use exit statement
- Handler at end of package body applies only to package initialization
Propagation
- Occurs if no handler exists in frame where exception is raised
- Also occurs if `raise` statement is used in handler
- Exception is propagated dynamically
- propagates from subprogram to unit calling it
(not necessarily unit containing its declaration)
- this can result in propagation outside its scope
- Propagation continues until
- an appropriate handler is found
- exception propagates to main program (still with no handler) and program execution is abandoned
procedure Do_Nothing is
--------------
procedure Has_It is
Some_Problem : exception;
begin ...
raise Some_Problem;
...
exception
when Some_Problem =>
Clean_Up;
raise;
end Has_It;
--------------
procedure Calls_It is
begin...
Has_It;
...
end Calls_It;
--------------
begin -- Do_Nothing
...
Calls_It;
...
exception
when others => Fix_Everything;
end Do_Nothing;
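The re-raise-and-propagate pattern in Has_It has a direct counterpart in most languages: handle locally, clean up, then propagate the same exception to the caller. A Python rendering of the same control flow (function names mirror the slide; the log list is ours, added so the order of events is visible):

```python
log = []

def has_it():
    try:
        raise RuntimeError("some problem")   # like: raise Some_Problem;
    except RuntimeError:
        log.append("clean up")               # like Clean_Up
        raise                                # bare raise, like Ada's "raise;"

def calls_it():
    has_it()                 # no handler here: exception propagates through

def do_nothing():
    try:
        calls_it()
    except Exception:        # like "when others =>"
        log.append("fix everything")         # like Fix_Everything

do_nothing()
print(log)                   # -> ['clean up', 'fix everything']
```

As on the slide, the exception travels dynamically through Calls_It, which neither declares nor handles it.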
Outline
- Overview
- Naming an exception
- Creating an exception handler
- Raising an exception
- Handling exceptions
=> Turning off exception checking
- Tasking exceptions
- More examples
Turning Off Exception Checking
- Overhead vs efficiency
- Pragma SUPPRESS
- Check identifiers
Overhead vs Efficiency
- Exception checking imposes run time overhead
- interactive applications will never notice
- real-time applications have legitimate concerns but must not sacrifice system safety
- When efficiency counts
- first and foremost, make program work
- be sure possible problems are covered by exception handlers
- check if efficient enough - stop if it is
- if not, study execution profile
-- eliminate bottlenecks
-- improve algorithm
-- avoid "cute" tricks
- check if efficient enough - stop if it is
- if not, trade-offs may be necessary
- some exception checks may be expendable since debugging is done
- however, every suppressed check poses new possibilities for problems
-- must re-examine possible problems
-- must re-examine exception handlers
- always keep in mind
-- problems will happen
-- critical applications must be able to deal with these problems
Improving the algorithm is far better - and easier in the long run - than suppressing checks
Pragma SUPPRESS
- Only allowed immediately within a declarative part or immediately within a package specification
```
pragma SUPPRESS (identifier [, [ ON =>] name]);
```
- identifier is that of the check to be omitted (next slide lists identifiers)
- name is that of an object, type, or unit for which the check is to be suppressed
-- if no name is given, it applies to the remaining declarative region
- An implementation is free to ignore the suppress directive for any check which may be impossible or too costly to suppress
Example:
```
pragma SUPPRESS (INDEX_CHECK, ON => Index);
```
Check Identifiers
- These identifiers are explained in more detail in chap 11 of the LRM
- Check identifiers for suppression of CONSTRAINT_ERROR checks
ACCESS_CHECK
DISCRIMINANT_CHECK
INDEX_CHECK
LENGTH_CHECK
RANGE_CHECK
- Check identifiers for suppression of NUMERIC_ERROR checks
DIVISION_CHECK
OVERFLOW_CHECK
- Check identifier for suppression of PROGRAM_ERROR checks
ELABORATION_CHECK
- Check identifier for suppression of STORAGE_ERROR check
STORAGE_CHECK
Overview
- Naming an exception
- Creating an exception handler
- Raising an exception
- Handling exceptions
- Turning off exception checking
=> Tasking exceptions
- More examples
Tasking Exceptions
- Exception handling is trickier for tasks
- Exceptions during task rendezvous
- Tasking example
Exception Handling is Trickier for Tasks
- Rules are not really different, just more involved
- local exceptions handled the same within frames
If exception is raised
- during elaboration of task declarations
- the exception TASKING_ERROR will be raised at the point of task activation
- the task will be marked completed
- during execution of task body (and not resolved there)
- task is completed
- exception is not propagated
- during task rendezvous
- this is the really tricky part
Exceptions During Task Rendezvous
- If the **called** task terminates abnormally
exception TASKING_ERROR is raised in **calling** task at the point of the entry call
- If the **calling** task terminates abnormally
no exception propagates to the **called** task
- If an exception is raised in **called** task within an **accept** (and not handled there locally)
the same exception is raised in the **calling** task at the point of the entry call
(even if exception is later handled outside of the accept in the called task)
- If an entry call is made for entry of a task that becomes completed before accepting the entry
exception TASKING_ERROR is raised in **calling** task at the point of the entry call
procedure Critical_Code is
Failure : exception;
-----------
task Monitor is
entry Do_Something;
end Monitor;
task body Monitor is
...
begin
accept Do_Something do
...
raise Failure;
...
end Do_Something;
...
exception -- exception handled here
when Failure =>
Termination_Message;
end Monitor;
-----------
begin -- Critical_Code
...
Monitor.Do_Something;
...
exception -- same exception will be handled here
when Failure =>
Critical_Problem_Message;
end Critical_Code;
Outline
- Overview
- Naming an exception
- Creating an exception handler
- Raising an exception
- Handling exceptions
- Turning off exception checking
- Tasking exceptions
=> More examples
Interactive Data Input
with Text_io; use Text_io;
procedure Get_Input (Number : out integer) is
type Input_Type is new integer range 0..100;
package Int_io is new Integer_io (Input_Type);
In_Number : Input_Type;
begin -- Get_Input
loop -- to try again after incorrect input
begin -- inner block to hold exception handler
put ("Enter a number 0 to 100");
Int_io.get (In_Number);
Number := In_Number;
exit; -- to exit loop after correct input
exception
when DATA_ERROR | CONSTRAINT_ERROR =>
put ("Try again, fat fingers!");
Skip_Line; -- must clear buffer
end; -- inner block
end loop;
end Get_Input;
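Get_Input's retry loop — attempt, catch, scold, try again — translates naturally. A Python sketch (our own names; canned answers stand in for a terminal so the loop can be exercised without interaction):

```python
def get_input(read=input):
    """Keep asking until a value in 0..100 parses; mirrors Get_Input."""
    while True:
        try:
            number = int(read("Enter a number 0 to 100: "))
            if not 0 <= number <= 100:
                raise ValueError         # like CONSTRAINT_ERROR
            return number                # like the exit after correct input
        except ValueError:               # like DATA_ERROR | CONSTRAINT_ERROR
            print("Try again, fat fingers!")

# Simulate a user who types garbage, then an out-of-range value, then 42.
answers = iter(["garbage", "999", "42"])
value = get_input(read=lambda prompt: next(answers))
print(value)                             # -> 42
```

As in the Ada version, the handler lives in an inner scope so the loop survives each failure.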
package Container is
procedure Has_Handler;
procedure Raises_Exception;
end Container;
procedure Not_in_Package is
begin
Container.Raises_Exception;
exception
when others => raise;
end Not_in_Package;
package body Container is
Crazy : exception;
procedure Has_Handler is
begin
Not_in_Package;
exception
when Crazy => Tell_Everyone;
end Has_Handler;
procedure Raises_Exception is
begin
raise Crazy;
end Raises_Exception;
end Container;
begin
Container.Has_Handler;
end;
Keeping a Task Alive
task Monitor is
entry Do_Something;
end Monitor;
task body Monitor is
begin
loop -- for never-ending repetition
...
select
accept Do_Something do
begin -- block for exception handler
...
raise Failure;
...
exception
when Failure => Recover;
end; -- block
end Do_Something; -- exception must be
-- lowered before exiting
...
end select;
...
end loop;
exception
when others =>
Termination_Message;
end Monitor;
[30157, 30348, null], [30348, 30348, null], [30348, 30348, null], [30348, 31096, null], [31096, 31591, null], [31591, 32130, null], [32130, 32154, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32154, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32154, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 201, 3], [201, 762, 4], [762, 967, 5], [967, 1053, 6], [1053, 1179, 7], [1179, 1356, 8], [1356, 1538, 9], [1538, 1718, 10], [1718, 1869, 11], [1869, 2098, 12], [2098, 2376, 13], [2376, 2552, 14], [2552, 2817, 15], [2817, 2979, 16], [2979, 3152, 17], [3152, 3358, 18], [3358, 3524, 19], [3524, 3733, 20], [3733, 3833, 21], [3833, 3968, 22], [3968, 4130, 23], [4130, 4415, 24], [4415, 4945, 25], [4945, 5301, 26], [5301, 5547, 27], [5547, 5809, 28], [5809, 5994, 29], [5994, 6197, 30], [6197, 6552, 31], [6552, 6783, 32], [6783, 7115, 33], [7115, 7680, 34], [7680, 8019, 35], [8019, 8291, 36], [8291, 8559, 37], [8559, 8964, 38], [8964, 9453, 39], [9453, 9726, 40], [9726, 9871, 41], [9871, 9988, 42], [9988, 10434, 43], [10434, 10834, 44], [10834, 11284, 45], [11284, 11354, 46], [11354, 11490, 47], [11490, 12068, 48], [12068, 12418, 49], [12418, 12904, 50], [12904, 13291, 51], [13291, 13798, 52], 
[13798, 14150, 53], [14150, 14549, 54], [14549, 14755, 55], [14755, 15298, 56], [15298, 15552, 57], [15552, 16159, 58], [16159, 16350, 59], [16350, 16455, 60], [16455, 16606, 61], [16606, 16981, 62], [16981, 17534, 63], [17534, 18119, 64], [18119, 18316, 65], [18316, 18401, 66], [18401, 18861, 67], [18861, 19347, 68], [19347, 19916, 69], [19916, 20108, 70], [20108, 20204, 71], [20204, 20661, 72], [20661, 21189, 73], [21189, 21407, 74], [21407, 21599, 75], [21599, 21701, 76], [21701, 22044, 77], [22044, 22358, 78], [22358, 22748, 79], [22748, 22940, 80], [22940, 23116, 81], [23116, 23467, 82], [23467, 24308, 83], [24308, 24786, 84], [24786, 25282, 85], [25282, 25677, 86], [25677, 25869, 87], [25869, 25964, 88], [25964, 26898, 89], [26898, 26991, 90], [26991, 27591, 91], [27591, 28086, 92], [28086, 28268, 93], [28268, 28385, 94], [28385, 28889, 95], [28889, 29620, 96], [29620, 30157, 97], [30157, 30348, 98], [30348, 30348, 99], [30348, 30348, 100], [30348, 31096, 101], [31096, 31591, 102], [31591, 32130, 103], [32130, 32154, 104]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32154, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
8d9952104241a56a478f259e701f26f8e7ceb742
|
Trusted Execution Environment Provisioning (TEEP) Protocol
draft-ietf-teep-protocol-00
Abstract
This document specifies a protocol that installs, updates, and deletes Trusted Applications (TAs) in a device with a Trusted Execution Environment (TEE). This specification defines an interoperable protocol for managing the lifecycle of TAs.
The protocol name is pronounced teepee. This conjures an image of a wedge-shaped protective covering for one’s belongings, which sort of matches the intent of this protocol.
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on June 9, 2020.
Copyright Notice
Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.
1. Introduction
The Trusted Execution Environment (TEE) concept has been designed to separate a regular operating system, also referred to as a Rich Execution Environment (REE), from security-sensitive applications. In a TEE ecosystem, different device vendors may use different operating systems in the REE and may use different types of TEEs. When application providers or device administrators use Trusted Application Managers (TAMs) to install, update, and delete Trusted Applications (TAs) on a wide range of devices with potentially different TEEs, an interoperability need arises.
This document specifies the protocol for communicating between a TAM and a TEEP Agent, involving a TEEP Broker.
The Trusted Execution Environment Provisioning (TEEP) architecture document [I-D.ietf-teep-architecture] sets out to provide design guidance for such an interoperable protocol and introduces the necessary terminology. Note that the term Trusted Application may include more than code; it may also include configuration data and keys needed by the TA to operate correctly.
2. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
This specification re-uses the terminology defined in [I-D.ietf-teep-architecture].
3. Message Overview
The TEEP protocol consists of a small set of messages exchanged between a TAM and a TEEP Agent via a TEEP Broker. The messages are encoded either in JSON or CBOR and designed to provide end-to-end security. TEEP protocol messages are signed and/or encrypted by the endpoints, i.e., the TAM and the TEEP Agent, but trusted applications may as well be encrypted and signed by the service provider. The TEEP protocol re-uses not only JSON and CBOR but also the respective security wrappers, namely JOSE (JWS [RFC7515] and JWE [RFC7516], to be more specific) and COSE [RFC8152]. Furthermore, the Entity Attestation Token (EAT) [I-D.ietf-rats-eat] is re-used for attestation and the SUIT manifest format [I-D.ietf-suit-manifest] for software updates.
This specification defines six messages.
A TAM queries a device’s current state with a QueryRequest message. A TEEP Agent will, after authenticating and authorizing the request, report attestation information, list all TAs, and provide information about supported algorithms and extensions in a QueryResponse message. An error message is returned if the request could not be processed. A TAM will process the QueryResponse message and determine whether subsequent message exchanges to install, update, or delete trusted applications shall be initiated.
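The TAM's decision step described above can be sketched as a set comparison between the TAs the TAM wants on the device and the TA_LIST reported in the QueryResponse. The function name and action tuples below are illustrative only and are not defined by this specification:

```python
def plan_actions(desired: set, installed: set) -> list:
    """Derive follow-up TEEP exchanges from a QueryResponse TA_LIST (sketch).

    `desired` holds the ta_ids the TAM wants installed; `installed` holds the
    ta_ids reported by the TEEP Agent. Sorting keeps the plan deterministic.
    """
    actions = [("TrustedAppInstall", ta) for ta in sorted(desired - installed)]
    actions += [("TrustedAppDelete", ta) for ta in sorted(installed - desired)]
    return actions
```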
With the TrustedAppInstall message a TAM can instruct a TEEP Agent to install a TA. The TEEP Agent will process the message, determine whether the TAM is authorized and whether the TA has been signed by an authorized SP. In addition to the binary, the TAM may also provide personalization data. If the TrustedAppInstall message was processed successfully then a Success message is returned to the TAM, an Error message otherwise.
 TAM                                    TEEP Agent

 TrustedAppInstall  ---->

                                  Success
                        <----       or
                                  Error
With the TrustedAppDelete message a TAM can instruct a TEEP Agent to delete one or multiple TA(s). A Success message is returned when the operation has been completed successfully, and an Error message otherwise.
 TAM                                    TEEP Agent

 TrustedAppDelete   ---->

                                  Success
                        <----       or
                                  Error
4. Detailed Messages Specification
For a CBOR-based encoding the following security wrapper is used (described in CDDL format [I-D.ietf-cbor-cddl]).
Outer_Wrapper = {
    msg-authenc-wrapper => bstr .cbor Msg_AuthEnc_Wrapper / nil,
    teep-message => (QueryRequest /
                     QueryResponse /
                     TrustedAppInstall /
                     TrustedAppDelete /
                     Error /
                     Success ),
}
msg-authenc-wrapper = 1
teep-message = 2

Msg_AuthEnc_Wrapper = [ * (COSE_Mac_Tagged /
                           COSE_Sign_Tagged /
                           COSE_Mac0_Tagged /
                           COSE_Sign1_Tagged) ]
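To make the wrapper's shape concrete, the sketch below hand-encodes a minimal outer wrapper as a CBOR map with the integer keys defined above (1 for msg-authenc-wrapper, 2 for teep-message). The encoder covers only the tiny CBOR subset needed here (small unsigned integers, short byte strings, small maps), and the two values are placeholder byte strings rather than real COSE structures; a real implementation would use a full CBOR/COSE library.

```python
def cbor_uint(n: int) -> bytes:
    # Major type 0: unsigned integers below 24 fit in a single byte.
    assert 0 <= n < 24
    return bytes([n])

def cbor_bstr(b: bytes) -> bytes:
    # Major type 2 (0x40): short byte strings carry the length in the head byte.
    assert len(b) < 24
    return bytes([0x40 | len(b)]) + b

def cbor_map(pairs: list) -> bytes:
    # Major type 5 (0xa0): definite-length map of already-encoded key/value pairs.
    assert len(pairs) < 24
    return bytes([0xa0 | len(pairs)]) + b"".join(k + v for k, v in pairs)

# Outer_Wrapper as a CBOR map: {1: <auth wrapper>, 2: <teep message>}.
# Both values are illustrative placeholders, not real COSE objects.
wrapper = cbor_map([
    (cbor_uint(1), cbor_bstr(b"\x00")),  # msg-authenc-wrapper = 1
    (cbor_uint(2), cbor_bstr(b"\x01")),  # teep-message = 2
])
print(wrapper.hex())
```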
A future version of this specification will also describe the security wrapper for JSON (in CDDL format).
4.1. QueryRequest
suite = int
version = int
data_items = (
    attestation: 1,
    trusted_apps: 2,
    extensions: 3,
    suit_commands: 4
)

QueryRequest = (
    TYPE : int,
    TOKEN : bstr,
    REQUEST : [+data_items],
    ? CIPHER_SUITE : [+suite],
    ? NONCE : bstr,
    ? VERSION : [+version],
    ? OCSP_DATA : bstr,
    * $$extensions
)
A QueryRequest message is signed by the TAM and has the following fields:
TYPE TYPE = 1 corresponds to a QueryRequest message sent from the TAM to the TEEP Agent.
TOKEN The value in the TOKEN field is used to match requests to responses.
REQUEST The REQUEST field indicates what information the TAM requests from the TEEP Agent in the form of a list of integer values. Each integer value corresponds to an IANA registered information element. This specification defines the initial set of information elements:
attestation (1) With this value the TAM requests the TEEP Agent to return an entity attestation token (EAT) in the response.
trusted_apps (2) With this value the TAM queries the TEEP Agent for all installed TAs.
extensions (3) With this value the TAM queries the TEEP Agent for supported capabilities and extensions, which allows a TAM to discover the capabilities of a TEEP Agent implementation.
suit_commands (4) With this value the TAM queries the TEEP Agent for supported commands offered by the SUIT manifest implementation.
Further values may be added in the future via IANA registration.
CIPHER_SUITE The CIPHER_SUITE field lists the ciphersuite(s) supported by the TAM. Details about the ciphersuite encoding can be found in Section 5.
NONCE NONCE is an optional field used for ensuring the freshness of the Entity Attestation Token (EAT) contained in the response.
VERSION The VERSION field lists the version(s) supported by the TAM. For this version of the specification this field can be omitted.
OCSP_DATA The OCSP_DATA field contains a list of OCSP stapling data respectively for the TAM certificate and each of the CA certificates up to the root certificate. The TAM provides OCSP data so that the TEEP Agent can validate the status of the TAM certificate chain without making its own external OCSP service call. OCSP data MUST be conveyed as a DER-encoded OCSP response (using the ASN.1 type OCSPResponse defined in [RFC2560]). The use of OCSP is optional to implement for both the TAM and the TEEP Agent. A TAM can query the TEEP Agent for the support of this functionality via the capability discovery exchange, as described above.
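The field rules above can be captured in a small validation sketch. The class and method names are illustrative (the specification defines the wire format, not an API), and the structure mirrors the CDDL: required TYPE, TOKEN, and non-empty REQUEST, with the remaining fields optional.

```python
from dataclasses import dataclass
from typing import Optional

# data_items codes from the CDDL: attestation=1, trusted_apps=2,
# extensions=3, suit_commands=4. Further values come via IANA registration.
KNOWN_DATA_ITEMS = {1, 2, 3, 4}

@dataclass
class QueryRequest:
    token: bytes                          # TOKEN: matches requests to responses
    request: list                         # REQUEST: [+data_items]
    cipher_suites: Optional[list] = None  # ? CIPHER_SUITE
    nonce: Optional[bytes] = None         # ? NONCE
    versions: Optional[list] = None       # ? VERSION
    ocsp_data: Optional[bytes] = None     # ? OCSP_DATA
    TYPE: int = 1                         # TYPE = 1 identifies a QueryRequest

    def validate(self) -> None:
        if self.TYPE != 1:
            raise ValueError("QueryRequest must have TYPE = 1")
        if not self.request:
            raise ValueError("REQUEST must contain at least one data item")
        unknown = set(self.request) - KNOWN_DATA_ITEMS
        if unknown:
            raise ValueError(f"unregistered data items: {unknown}")

req = QueryRequest(token=b"\x01\x02", request=[1, 2])
req.validate()
```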
4.2. QueryResponse
ta_id = (
    Vendor_ID : bstr,
    Class_ID : bstr,
    Device_ID : bstr,
    * $$extensions
)
ext_info = int

QueryResponse = (
    TYPE : int,
    TOKEN : bstr,
    ? SELECTED_CIPHER_SUITE : suite,
    ? SELECTED_VERSION : version,
    ? EAT : bstr,
    ? TA_LIST : [+ta_id],
    ? EXT_LIST : [+ext_info],
    * $$extensions
)
The QueryResponse message is signed and encrypted by the TEEP Agent and returned to the TAM. It has the following fields:
TYPE TYPE = 2 corresponds to a QueryResponse message sent from the TEEP Agent to the TAM.
TOKEN The value in the TOKEN field is used to match requests to responses. The value MUST correspond to the value received with the QueryRequest.
SELECTED_CIPHER_SUITE The SELECTED_CIPHER_SUITE field indicates the selected ciphersuite. Details about the ciphersuite encoding can be found in Section 5.
SELECTED_VERSION The SELECTED_VERSION field indicates the protocol version selected by the TEEP Agent.
EAT The EAT field contains an Entity Attestation Token following the encoding defined in [I-D.ietf-rats-eat].
TA_LIST The TA_LIST field enumerates the trusted applications installed on the device in the form of ta_ids, i.e., a vendor id/class id/device id triple.
EXT_LIST The EXT_LIST field lists the supported extensions. This document does not define any extensions.
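The TOKEN matching rule (the response TOKEN MUST equal the value sent in the request) implies simple bookkeeping on the TAM side, sketched below. The class and its attribute names are illustrative assumptions, not part of the specification:

```python
import secrets

class TamSession:
    """Illustrative TOKEN bookkeeping on the TAM side."""

    def __init__(self):
        self.pending = {}  # token -> kind of request awaiting a response

    def new_query_request(self) -> bytes:
        token = secrets.token_bytes(8)  # fresh token per outstanding request
        self.pending[token] = "QueryRequest"
        return token

    def on_response(self, token: bytes) -> str:
        # The TOKEN in a QueryResponse MUST match the one from the request;
        # anything else is rejected.
        kind = self.pending.pop(token, None)
        if kind is None:
            raise ValueError("response token does not match any pending request")
        return kind
```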
4.3. TrustedAppInstall
TrustedAppInstall = (
    TYPE : int,
    TOKEN : bstr,
    ? MANIFEST_LIST : [+ SUIT_Outer_Wrapper],
    * $$extensions
)
The TrustedAppInstall message is MACed and encrypted by the TAM and has the following fields:
TYPE TYPE = 3 corresponds to a TrustedAppInstall message sent from the TAM to the TEEP Agent. In case of successful processing, a Success message is returned by the TEEP Agent. In case of an error, an Error message is returned. Note that the TrustedAppInstall message is used for initial TA installation but also for TA updates.
TOKEN The value in the TOKEN field is used to match requests to responses.
MANIFEST_LIST The MANIFEST_LIST field is used to convey one or multiple SUIT manifests. A manifest is a bundle of metadata about the trusted app, where to find the code, the devices to which it applies, and cryptographic information protecting the manifest. The manifest may also convey personalization data. TA binaries and personalization data are typically signed and encrypted by the SP. Other combinations are, however, possible as well. For example, it is also possible for the TAM to sign and encrypt the personalization data and to let the SP sign and/or encrypt the TA binary.
4.4. TrustedAppDelete
TrustedAppDelete = (
    TYPE : int,
    TOKEN : bstr,
    ? TA_LIST : [+ta_id],
    * $$extensions
)
The TrustedAppDelete message is MACed and encrypted by the TAM and has the following fields:
TYPE TYPE = 4 corresponds to a TrustedAppDelete message sent from the TAM to the TEEP Agent. In case of successful processing, a Success message is returned by the TEEP Agent. In case of an error, an Error message is returned.
TOKEN The value in the TOKEN field is used to match requests to responses.
TA_LIST The TA_LIST field enumerates the TAs to be deleted.
4.5. Success
Success = (
    TYPE : int,
    TOKEN : bstr,
    ? MSG : tstr,
    * $$extensions
)
The Success message is MACed and encrypted by the TEEP Agent and has the following fields:
TYPE TYPE = 5 corresponds to a Success message sent from the TEEP Agent to the TAM.
TOKEN The value in the TOKEN field is used to match requests to responses.
MSG The MSG field contains optional diagnostics information encoded in UTF-8 [RFC3629] returned by the TEEP Agent.
4.6. Error
Error = (
    TYPE : int,
    TOKEN : bstr,
    ERR_CODE : int,
    ? ERR_MSG : tstr,
    ? CIPHER_SUITE : [+suite],
    ? VERSION : [+version],
    * $$extensions
)
If possible, the Error message is MACed and encrypted by the TEEP Agent. Unprotected Error messages MUST be handled with care by the TAM due to possible downgrading attacks. It has the following fields:
TYPE TYPE = 6 corresponds to an Error message sent from the TEEP Agent to the TAM.
TOKEN The value in the TOKEN field is used to match requests to responses.
ERR_CODE The ERR_CODE field is populated with values listed in a registry (with the initial set of error codes listed below). Only selected error codes are applicable to each message.
ERR_MSG The ERR_MSG message is a human-readable diagnostic message that MUST be encoded using UTF-8 [RFC3629] using Net-Unicode form [RFC5198].
VERSION The VERSION field enumerates the protocol version(s) supported by the TEEP Agent. This field is optional but MUST be returned with the ERR_UNSUPPORTED_MSG_VERSION error message.
CIPHER_SUITE The CIPHER_SUITE field lists the ciphersuite(s) supported by the TEEP Agent. This field is optional but MUST be returned with the ERR_UNSUPPORTED_CRYPTO_ALG error message.
This specification defines the following initial error codes. Additional error codes can be registered with IANA.
ERR_ILLEGAL_PARAMETER The TEEP Agent sends this error message when a request contains incorrect fields or fields that are inconsistent with other fields.
ERR_UNSUPPORTED_EXTENSION The TEEP Agent sends this error message when it recognizes an unsupported extension or unsupported message.
ERR_REQUEST_SIGNATURE_FAILED The TEEP Agent sends this error message when it fails to verify the signature of the message.
ERR_UNSUPPORTED_MSG_VERSION The TEEP Agent receives a message but does not support the indicated version.
ERR_UNSUPPORTED_CRYPTO_ALG The TEEP Agent receives a request message encoded with an unsupported cryptographic algorithm.
ERR_BAD_CERTIFICATE The TEEP Agent returns this error when processing of a certificate failed. For diagnosis purposes it is RECOMMENDED to include information about the failing certificate in the error message.
ERR_UNSUPPORTED_CERTIFICATE The TEEP Agent returns this error when a certificate was of an unsupported type.
ERR_CERTIFICATE_REVOKED The TEEP Agent returns this error when a certificate was revoked by its signer.
ERR_CERTIFICATE_EXPIRED The TEEP Agent returns this error when a certificate has expired or is not currently valid.
ERR_INTERNAL_ERROR The TEEP Agent returns this error when a miscellaneous internal error occurred while processing the request.
ERR_RESOURCE_FULL This error is reported when a device resource is no longer available, such as when storage is full.
ERR_TA_NOT_FOUND This error will occur when the target TA does not exist. This error may happen when the TAM has stale information and tries to delete a TA that has already been deleted.
ERR_TA_ALREADY_INSTALLED While installing a TA, a TEE will return this error if the TA has already been installed.
ERR_TA_UNKNOWN_FORMAT The TEEP Agent returns this error when it does not recognize the format of the TA binary.
ERR_TA_DECRYPTION_FAILED The TEEP Agent returns this error when it fails to decrypt the TA binary.
ERR_TA_DECOMPRESSION_FAILED The TEEP Agent returns this error when it fails to decompress the TA binary.
ERR_MANIFEST_PROCESSING_FAILED The TEEP Agent returns this error when manifest processing failures occur that are less specific than ERR_TA_UNKNOWN_FORMAT, ERR_TA_DECRYPTION_FAILED, and ERR_TA_DECOMPRESSION_FAILED.
ERR_PD_PROCESSING_FAILED The TEEP Agent returns this error when it fails to process the provided personalization data.
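A TEEP Agent implementation would typically hold these error names in an enumeration and attach the conditionally required fields when building an Error message. The numeric values below are illustrative placeholders (the draft delegates actual code assignment to an IANA registry), and `make_error` is an assumed helper, not a specified API:

```python
from enum import IntEnum

# Illustrative numbering only; real values come from the IANA registry.
class TeepErrorCode(IntEnum):
    ERR_ILLEGAL_PARAMETER = 1
    ERR_UNSUPPORTED_EXTENSION = 2
    ERR_REQUEST_SIGNATURE_FAILED = 3
    ERR_UNSUPPORTED_MSG_VERSION = 4
    ERR_UNSUPPORTED_CRYPTO_ALG = 5
    ERR_INTERNAL_ERROR = 10

def make_error(token, code, msg="", versions=None, cipher_suites=None):
    # TYPE = 6 identifies an Error message; ERR_MSG is optional UTF-8 text.
    out = {"TYPE": 6, "TOKEN": token, "ERR_CODE": int(code)}
    if msg:
        out["ERR_MSG"] = msg
    # Per the field descriptions: VERSION MUST accompany
    # ERR_UNSUPPORTED_MSG_VERSION, and CIPHER_SUITE MUST accompany
    # ERR_UNSUPPORTED_CRYPTO_ALG.
    if code == TeepErrorCode.ERR_UNSUPPORTED_MSG_VERSION:
        assert versions is not None, "VERSION required for this error"
        out["VERSION"] = versions
    if code == TeepErrorCode.ERR_UNSUPPORTED_CRYPTO_ALG:
        assert cipher_suites is not None, "CIPHER_SUITE required for this error"
        out["CIPHER_SUITE"] = cipher_suites
    return out
```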
5. Ciphersuites
A ciphersuite consists of an AEAD algorithm, an HMAC algorithm, and a signature algorithm. Each ciphersuite is identified with an integer value, which corresponds to an IANA registered ciphersuite. This document specifies two ciphersuites.
   +-------+------------------------------------------------+
   | Value | Ciphersuite                                    |
   +-------+------------------------------------------------+
   | 0     | AES-CCM-16-64-128, HMAC 256/256, X25519, EdDSA |
   | 1     | AES-CCM-16-64-128, HMAC 256/256, P-256, ES256  |
   +-------+------------------------------------------------+
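The table above maps directly to a lookup structure, and negotiation (the TAM offers suites in CIPHER_SUITE, the TEEP Agent picks one for SELECTED_CIPHER_SUITE) reduces to a first-match search. The function name and the "offered order expresses preference" assumption are illustrative, not mandated by the draft:

```python
# Ciphersuite registry values from the table above:
# (AEAD, HMAC, key agreement/curve, signature algorithm).
CIPHERSUITES = {
    0: ("AES-CCM-16-64-128", "HMAC 256/256", "X25519", "EdDSA"),
    1: ("AES-CCM-16-64-128", "HMAC 256/256", "P-256", "ES256"),
}

def select_ciphersuite(offered: list, supported: set) -> int:
    """Pick the first TAM-offered suite the agent supports (sketch)."""
    for suite in offered:
        if suite in supported:
            return suite
    # A real agent would answer with an ERR_UNSUPPORTED_CRYPTO_ALG Error.
    raise ValueError("no common ciphersuite")
```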
6. Security Considerations
This section summarizes the security considerations discussed in this specification:
Cryptographic Algorithms This specification relies on the cryptographic algorithms provided by the security wrappers JOSE and COSE, respectively. A companion document makes algorithm recommendations but this document is written in an algorithm-agnostic way. TEEP protocol messages exchanged between the TAM and the TEEP Agent are protected using JWS and JWE (for JSON-encoded messages) and COSE (for CBOR-encoded messages). Public-key-based authentication is used by the TEEP Agent to authenticate the TAM and vice versa.
Attestation A TAM may rely on the attestation information provided by the TEEP Agent and the Entity Attestation Token is re-used to convey this information. To sign the Entity Attestation Token it is necessary for the device to possess a public key (usually in the form of a certificate) along with the corresponding private key. Depending on the properties of the attestation mechanism it is possible to uniquely identify a device based on information in the attestation information or in the certificate used to sign the attestation token. This uniqueness may raise privacy concerns. To lower the privacy implications the TEEP Agent MUST present its attestation information only to an authenticated and authorized TAM.
TA Binaries TA binaries are provided by the SP. It is the responsibility of the TAM to relay only verified TAs from authorized SPs. Delivery of that TA to the TEEP Agent is then the responsibility of the TAM and the TEEP Broker, using the security mechanisms provided by the TEEP protocol. To protect the TA binary the SUIT manifest is re-used and it offers a variety of security features, including digital signatures and symmetric encryption.
Personalization Data An SP or a TAM can supply personalization data along with a TA. This data is also protected by a SUIT manifest. The personalization data itself is (or can be) opaque to the TAM.
TEEP Broker The TEEP protocol relies on the TEEP Broker to relay messages between the TAM and the TEEP Agent. When the TEEP Broker is compromised it can drop messages, delay the delivery of messages, and replay messages, but it cannot modify those messages. (A replay would, however, be detected by the TEEP Agent.) A compromised TEEP Broker could reorder messages in an attempt to install an old version of a TA. Information in the manifest ensures that TEEP Agents are protected against such downgrading attacks based on features offered by the manifest itself.
CA Compromise The QueryRequest message from a TAM to the TEEP Agent may include OCSP stapling data for the TAM’s signer certificate and for intermediate CA certificates up to the root certificate so that the TEEP Agent can verify the certificate’s revocation status.
A certificate revocation status check on a TA signer certificate is OPTIONAL for a TEEP Agent. A TAM is responsible for vetting TAs before distributing them to TEEP Agents. TEEP Agents will trust the TA signer certificate validation performed by a TAM.
CA Compromise The CA issuing certificates to a TAM or an SP may get compromised. Compromised intermediate CA certificates can be detected by a TEEP Agent using OCSP information, assuming the revocation information is available. Additionally, it is RECOMMENDED to provide a way to update the trust anchor store used by the device, for example using a firmware update mechanism.
If the CA issuing certificates to devices gets compromised then these devices might be rejected by a TAM, if revocation is available to the TAM.
Compromised TAM The TEEP Agent SHOULD use OCSP information to verify the validity of the TAM-provided certificate (as well as the validity of intermediate CA certificates). The integrity and accuracy of the clock within the TEE determine the ability to detect an expired or revoked certificate, since OCSP stapling data includes a signature generation time and certificate validity dates are compared against the current time.
7. IANA Considerations
7.1. Media Type Registration
IANA is requested to assign a media type for application/teep+json.
Type name: application
Subtype name: teep+json
Required parameters: none
Optional parameters: none
Encoding considerations: Same as encoding considerations of application/json as specified in Section 11 of [RFC7159]
Security considerations: See Security Considerations Section of this document.
Interoperability considerations: Same as interoperability considerations of application/json as specified in [RFC7159]
Published specification: This document.
Applications that use this media type: TEEP protocol implementations
Fragment identifier considerations: N/A
Additional information:
- Deprecated alias names for this type: N/A
- Magic number(s): N/A
- File extension(s): N/A
- Macintosh file type code(s): N/A
Person to contact for further information: teep@ietf.org
Intended usage: COMMON
Restrictions on usage: none
Author: See the "Authors’ Addresses" section of this document
Change controller: IETF
IANA is requested to assign a media type for application/teep+cbor.
Type name: application
Subtype name: teep+cbor
Required parameters: none
Optional parameters: none
Encoding considerations: Same as encoding considerations of application/cbor
Security considerations: See Security Considerations Section of this document.
Interoperability considerations: Same as interoperability considerations of application/cbor as specified in [RFC7049]
Published specification: This document.
Applications that use this media type: TEEP protocol implementations
Fragment identifier considerations: N/A
Additional information:
Deprecated alias names for this type: N/A
Magic number(s): N/A
File extension(s): N/A
Macintosh file type code(s): N/A
Person to contact for further information: teep@ietf.org
Intended usage: COMMON
Restrictions on usage: none
Author: See the "Authors' Addresses" section of this document
Change controller: IETF
7.2. Error Code Registry
IANA is also requested to create a new registry for the error codes defined in Section 4.
Registration requests are evaluated after a three-week review period on the teep-reg-review@ietf.org mailing list, on the advice of one or more Designated Experts [RFC8126]. However, to allow for the allocation of values prior to publication, the Designated Experts may approve registration once they are satisfied that such a specification will be published.
Registration requests sent to the mailing list for review should use an appropriate subject (e.g., "Request to register an error code: example"). Registration requests that are undetermined for a period longer than 21 days can be brought to the IESG’s attention (using the iesg@ietf.org mailing list) for resolution.
Criteria that should be applied by the Designated Experts includes determining whether the proposed registration duplicates existing functionality, whether it is likely to be of general applicability or whether it is useful only for a single extension, and whether the registration description is clear.
IANA must only accept registry updates from the Designated Experts and should direct all requests for registration to the review mailing list.
7.3. Ciphersuite Registry
IANA is also requested to create a new registry for ciphersuites, as defined in Section 5.
8. References
8.1. Normative References
8.2. Informative References
[I-D.ietf-cbor-cddl]
           Birkholz, H., Vigano, C., and C. Bormann, "Concise Data
           Definition Language (CDDL)", work in progress.

[I-D.ietf-teep-architecture]
           "Trusted Execution Environment Provisioning (TEEP)
           Architecture", draft-ietf-teep-architecture-04 (work in
           progress), December 2019.

[I-D.ietf-teep-opentrustprotocol]
           Pei, M., Atyeo, A., Cook, N., Yoo, M., and H. Tschofenig,
           "The Open Trust Protocol (OTrP)", draft-ietf-teep-
           opentrustprotocol-03 (work in progress), May 2019.

[RFC8126]  Cotton, M., Leiba, B., and T. Narten, "Guidelines for
           Writing an IANA Considerations Section in RFCs", BCP 26,
           RFC 8126, DOI 10.17487/RFC8126, June 2017.
Appendix A. Acknowledgements
This work is based on the initial version of OTrP [I-D.ietf-teep-opentrustprotocol] and hence credits go to those who have contributed to it.

We would like to thank Eve Schooler for the suggestion of the protocol name.
Appendix B. Contributors
We would like to thank the following individuals for their contributions to an earlier version of this specification.
- Brian Witten
Symantec
brian_witten@symantec.com
- Tyler Kim
Solacia
tylerkim@iotrust.kr
- Nick Cook
Arm Ltd.
nicholas.cook@arm.com
- Minho Yoo
IoTrust
minho.yoo@iotrust.kr
Authors’ Addresses
Hannes Tschofenig
Arm Ltd.
110 Fulbourn Rd
Cambridge, CB1 9NJ
Great Britain
Email: hannes.tschofenig@arm.com
Mingliang Pei
Broadcom
350 Ellis St
Mountain View, CA 94043
USA
Email: mingliang.pei@broadcom.com
David Wheeler
Intel
US
Email: david.m.wheeler@intel.com
Dave Thaler
Microsoft
US
Email: dthaler@microsoft.com
Inheritance of Interface Specifications
(Extended Abstract)

Gary T. Leavens*
Department of Computer Science, 229 Atanasoff Hall
Iowa State University, Ames, Iowa 50011-1040, USA
leavens@cs.iastate.edu

TR #93-23
September 14, 1993

Abstract
Four alternatives for the semantics of inheritance of specifications are discussed. The information loss and frame axiom problems for inherited specifications are also considered.

Keywords: specification, inheritance, subtype, subclass, modularity, object-oriented, abstract data type.
Disciplines: Systems Architecture
Submitted to the Workshop on Interface Definition Languages.
© Gary T. Leavens, 1993. All rights reserved.
1 Introduction
An interface specification language (ISL) defines both how to call a module and its (functional) behavior [Win83] [Win87] [Lam89] [GHG+93]. The details of how to call a module and some aspects of its behavior are specific to the particular programming language; hence, in the Larch approach to interface specification [GHG+93], each ISL is tailored to a particular programming language. What does this tailoring involve?
- The syntax for specifying interfaces is a subset of the syntax for the programming language, so it can be directly compared to the interface of a candidate implementation. This means that the ISL must use the type system of the programming language.
- Most of the semantic concepts of the programming language should be reflected in the ISL’s semantics.
- The ISL should allow the specifier to use the programming language’s abstraction mechanisms. For example, an ISL tailored to an object-oriented programming language (OOPL) should allow inheritance of specifications.
*This work was supported in part by the National Science Foundation under Grant CCR-9108654.
- The ISL should ease ways of reasoning about the correctness of code that are common or otherwise important. For example, an ISL for an OOPL should ease reasoning that uses supertype abstraction (thinking only about the types written in the program, not about the dynamically possible subtypes that expressions may denote [IW90]).
It follows that when one is designing an ISL for an OOPL, one must consider both subtyping and inheritance.
1.1 Modularity
The last point in the list of ways to tailor an ISL is especially important in an object-oriented context. In order to support supertype abstraction one must pay attention to modularity. An ISL is modular if:
- at any point where an argument is specified to have type $T$, an actual argument of some subtype of $T$ is allowed,
- when adding new subtypes of existing types, or when using specification inheritance, one need not change any existing specifications, and the meaning of the existing specifications does not change (except for allowing the new subtypes to be used),
- to specify a type, one need not be aware of the specification of any other types, except those of the supertypes of the type being specified (and their supertypes, etc.), and the argument and result types of the operations.
1.2 Subtype versus Subclass
It is important to carefully define the terms “type”, “class”, “subtype” and “subclass”. By a type we mean an abstract data type (ADT), as characterized by a (behavioral) specification. By a class we mean an implementation module (such as classes in Smalltalk or C++).
A subclass is formed from another class (its superclass) by inheritance of code; this has no particular relation to behavior, as an object of a subclass may behave quite differently from an object of one of its superclasses. For example the class IntStack may be a subclass of IntDEQueue, although an IntStack object cannot respond to all the messages that one would want to send to an IntDEQueue [Sny86].
If one were to make an analogy between the notions of supertype and subclass, one would say that, by contrast, a supertype is formed from another ADT by inheritance of specifications, not code. For example, one might specify the type IntDEQueue by inheriting from IntStack, even if one chooses to implement the class IntStack as a subclass of the class IntDEQueue. The inheritance analogy is only approximate, however, as subtyping has to do with specified behavior,
not with how that behavior is specified. That is, subtyping is independent of specification inheritance. More precisely, a type $S$ is a subtype of $T$ if one can use objects of type $S$ in a program where objects of type $T$ are expected without any surprising results. This certainly implies that each object of type $S$ acts like some object of type $T$ [Sny86] [SCB+86] [Lea89]. In many practical cases it also implies that there is a homomorphic coercion function, $f_{S,T}$, that maps the values of objects of type $S$ to values of objects of type $T$ in such a way that for all instance operations of the supertype and for all $x$ of the subtype $S$,
$$\text{SuperPreCond}(f_{S,T}(x)) \Rightarrow \text{SubPreCond}(x) \quad (1)$$
$$\text{SuperPostCond}(f_{S,T}(x)) \Leftarrow \text{SubPostCond}(x) \quad (2)$$
where "SuperPreCond" is the precondition of the supertype's instance operation, etc. [Ame87] [Ame89] [Ame91] [IW93a] [IW93b].
1.3 Plan
In the following we discuss inheritance of specifications in ISLs. Our ideas come from our work on the ISLs Larch/Smalltalk (for Smalltalk) [Che91] and Larch/C++ (for C++) [LC93a] [CL93] [LC93b], and our work on the semantics of subtyping in OOPLs [Lea89] [IW90] [Lea90] [LP91] [LW92].
2 Inheritance of Specifications
For an example, consider the types BankAccount and PlusAccount. The supertype, BankAccount, has just a savings account. The subtype, PlusAccount, also has a ("free") checking account. We want to specify instance operations such as balance and pay_interest, for BankAccount and have these specifications be inherited by PlusAccount.
The Larch/C++ interface specification of BankAccount is given in Figure 1. The LSL [GHG+93] trait BankAccountTrait it uses is presented in Figure 2. The trait Rational which is included by BankAccountTrait, is found in the Larch Shared Language Handbook [GHG+93, Appendix A.16]. The member functions are specified as virtual, which means that the code executed in a call such as ba->pay_interest() will execute code determined by the dynamic class of the object pointed to by ba.
In order to state the inheritance problem as clearly as possible, we specify the subtype PlusAccount in Figure 3 without using inheritance. The LSL trait PlusAccountTrait is specified in Figure 4.
2.1 The Specification Inheritance Problem
The specification inheritance problem is to state the specification of types like PlusAccount as succinctly as possible, and to give the specification with inheritance a semantics that matches the intended specification as nearly as possible.

class BankAccount {
    uses BankAccountTrait(BankAccount for Acct);
public:
    BankAccount(double amt) {
        requires (1/100) ≤ rational(amt);
        constructs self;
        ensures approximates(amt, balance(self'), 1/100);
    }
    virtual double balance() const {
        ensures approximates(result, balance(self'), (1/100));
    }
    virtual void pay_interest(double rate) {
        requires (0/1) ≤ rate ∧ rate ≤ (1/1);
        modifies self;
        ensures approximates(toDouble(balance(self')),
                             ((1/1) + rational(rate)) × balance(self^),
                             1/100);
    }
    virtual void update(double amt) {
        modifies self;
        ensures approximates(toDouble(balance(self')),
                             balance(self^) + rational(amt), 1/100);
    }
};

Figure 1: Larch/C++ interface specification of the (super)type BankAccount.

BankAccountTrait(Acct): trait
    includes Rational % defines the sort Q
    introduces
        createAcct: Q → Acct
        balance: Acct → Q
    asserts ∀ q: Q
        balance(createAcct(q)) == q

Figure 2: The trait that specifies the abstract values of BankAccount objects.
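The trait of Figure 2 can be given a direct executable reading. The minimal Python sketch below is an illustration (not LSL tooling); it represents Acct abstract values directly by rationals and checks the trait's single axiom on a few sample values.

```python
from fractions import Fraction

# Illustrative model of BankAccountTrait: an Acct value carries
# exactly its balance, so both trait functions are trivial.

def createAcct(q: Fraction) -> Fraction:
    return q

def balance(acct: Fraction) -> Fraction:
    return acct

# The trait's axiom: balance(createAcct(q)) == q, checked on samples.
for q in (Fraction(0), Fraction(1, 100), Fraction(355, 113)):
    assert balance(createAcct(q)) == q
print("axiom holds on samples")
```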
For PlusAccount, what one wants to write is something like the specification in Figure 5. In this specification, the type PlusAccount inherits the specifications of balance, pay_interest, and update. It is hard to imagine the interface specification of PlusAccount being more succinct. The question is, what does Figure 5 mean? And how close is that meaning to the meaning of Figure 3?

class PlusAccount : public BankAccount {
    uses PlusAccountTrait(PlusAccount for PA);
public:
    PlusAccount(double savings_balance, double checking_balance) {
        requires (1/100) ≤ rational(savings_balance)
                 ∧ (0/1) ≤ rational(checking_balance);
        constructs self;
        ensures approximates(savings_balance, savings(self'), 1/100)
                ∧ approximates(checking_balance, checking(self'), 1/100);
    }
    virtual double balance() const {
        ensures approximates(result, savings(self') + checking(self'), (1/100));
    }
    virtual void pay_interest(double rate) {
        requires (0/1) ≤ rate ∧ rate ≤ (1/1);
        modifies self;
        ensures approximates(toDouble(savings(self')),
                             ((1/1) + rational(rate)) × savings(self^), 1/100)
                ∧ approximates(toDouble(checking(self')),
                               ((1/1) + rational(rate)) × checking(self^), 1/100);
    }
    virtual void update(double amt) {
        modifies self;
        ensures approximates(toDouble(savings(self')),
                             savings(self^) + rational(amt), 1/100)
                ∧ checking(self') = checking(self^);
    }
};

Figure 3: Larch/C++ interface specification of the (sub)type PlusAccount, done without using inheritance.
2.2 Possible Semantics of Specification Inheritance
A little reflection is enough to convince one that what should be done is to copy each inherited operation specification from the parent specification to the inheriting type's specification [CDD+89] [DD90] [Cus91] [LC93b]. The semantic question then becomes: given that the parent's specification was written in terms of the abstract values of the parent type, how can one interpret it for the abstract values of the inheriting type? We have identified four potential answers to this question in the context of a Larch-style ISL.
PlusAccountTrait: trait
introduces
savNchk: Q, Q → PA
checking: PA → Q
savings: PA → Q
asserts
∀ q1, q2: Q, pa: PA
savings(savNchk(q1, q2)) == q1;
checking(savNchk(q1, q2)) == q2
Figure 4: The trait that specifies the abstract values of PlusAccount objects.
class PlusAccount : public BankAccount {
    uses PlusAccountTrait(PlusAccount for PA);
public:
    PlusAccount(double savings_balance, double checking_balance) {
        requires (1/100) ≤ rational(savings_balance)
                 ∧ (0/1) ≤ rational(checking_balance);
        constructs self;
        ensures approximates(savings_balance, savings(self'), 1/100)
                ∧ approximates(checking_balance, checking(self'), 1/100);
    }
};
Figure 5: Ideal interface specification of PlusAccount as a subtype of BankAccount, using specification inheritance.
1. Use the same sort of abstract values (i.e., extending the same LSL trait) for the subtype as for the supertype [GM87] [MOM90]. Since the abstract values of the types are the same, there is no problem in interpreting the parent type's specification.
2. Define a homomorphic\(^1\) coercion function that maps the abstract values of the inheriting type to the abstract values of the parent type [Ame87] [Ame89] [Ame91] [LW93a] [LW93b]. The parent type's specification is interpreted by using this function to coerce the inheriting type's abstract values to the types assumed in its specification.
\(^1\)Homomorphic in the sense that it commutes with the trait functions of the parent type; for subtypes it should also commute with the instance operations of the supertype in the sense of Formulae (1) and (2) above.
3. Define a homomorphic relation that relates each inheriting type's abstract value to at least one parent type abstract value, which is used to coerce the abstract values of the inheriting type to the parent type [Lea89]. The parent type's specification is interpreted by using this relation to obtain a set of parent type abstract values, and these are all used to interpret the parent type's specification.
4. Overload each trait function that takes an argument of the parent type's abstract values so that it is defined on abstract values of the inheriting type [LW90] [Lea91] [LW92]. For the abstract values of an inheriting type, these overloaded trait functions are used to interpret the inherited specification.
These approaches are discussed and compared below.
2.2.1 Using the Same Sort of Abstract Values
The approach of using the same sort for the inheriting type's specification as for the parent type specification is slick when it works. It often works when the inheriting type is a simple restriction on the parent type; for example, when a subtype's abstract values are a subset of the supertype's. However, it does not work well when the subtype's objects contain more information than the supertype's, as is the case with PlusAccount. While it is always possible to specify both sets of abstract values by using a disjoint union as the abstract value set of the parent type, doing so is not modular.
2.2.2 Using a Coercion Function
In our example, one can define a trait function, toAcct, in PlusAccountTrait that maps values of sort PA to values of sort Acct. There are infinitely many such mappings. The mapping which makes the meaning of Figure 5 closest to the meaning of Figure 3 is the one defined by the following axiom.
$$\text{toAcct}(\text{savNchk}(q_1, q_2)) = \text{createAcct}(q_1 + q_2)$$
Since there are so many possible mappings, the desired coercion function should be specified for the inheriting type. In Larch/C++ this can be done by specifying toAcct in the trait PlusAccountTrait, and adding to the interface specification of Figure 5 a line of the following form.
```
simulates BankAccount by toAcct
```
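One hypothetical operational reading of the `simulates` clause is: coerce the subtype's pre- and post-state values with toAcct, and evaluate the inherited post-condition on the results. The Python sketch below illustrates that reading under stated assumptions (tolerance omitted; function names are illustrative, not part of any Larch tool).

```python
from fractions import Fraction

# PA abstract values are (savings, checking) pairs; Acct values are
# rationals. toAcct follows the axiom toAcct(savNchk(q1, q2)) = q1 + q2.

def toAcct(pa):
    savings, checking = pa
    return savings + checking

def interpret_inherited(parent_post, pre, post, *args):
    """Evaluate a parent-type post-condition on coerced subtype values."""
    return parent_post(toAcct(pre), toAcct(post), *args)

def bank_account_pay_interest_post(acct_pre, acct_post, rate):
    # Inherited BankAccount post-condition (exact, tolerance dropped).
    return acct_post == (1 + rate) * acct_pre

pre = (Fraction(200), Fraction(100))
rate = Fraction(1, 10)
post = (pre[0] * (1 + rate), pre[1] * (1 + rate))  # interest on each part

print(interpret_inherited(bank_account_pay_interest_post, pre, post, rate))  # True
```

An implementation that pays interest on each part separately satisfies the inherited specification under this reading.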
However the requirement that the coercion be a function, and that it be homomorphic, sometimes makes specification inconvenient. Consider the specification of an abstract class Graph, which has no way to create objects, but is intended as the common supertype of DirectedGraph and UndirectedGraph.
One way to describe the abstract values of Graph is as a pair of sets: a set of nodes and a set of edges. The edges cannot be undirected, as this would make them useless for DirectedGraph. But then one cannot specify the abstract values of UndirectedGraph by identifying the edges [n,m] and [m,n], because doing that makes having a homomorphic function from the abstract values of UndirectedGraph to the abstract values of Graph impossible. So the specifier of UndirectedGraph is forced to specify the abstract values without making this identification, which certainly complicates the specification of UndirectedGraph (see [CL93] for how this is done). Note, however, that this is not a modularity problem.
2.2.3 Using a Coercion Relation
A homomorphic relation is a generalization of a homomorphic function. Because it can coerce an inheriting type’s abstract value to a set of abstract values of the supertype (viewing the relation as a set-valued function), one can avoid the inconvenience described in the previous paragraph. However the disadvantage of homomorphic relations is that there is much to prove before one is convinced that assertion evaluation is well-defined, because of the possible ambiguity in dealing with sets of abstract values [Lea89] [Lea90].
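A small Python sketch of this idea for the graph example (illustrative only, not the paper's formal development): the relation sends an UndirectedGraph value to every orientation of its edges, and a parent-type assertion is interpreted by requiring it to hold for every related Graph value.

```python
from itertools import product

def coercion_relation(nodes, undirected_edges):
    """Yield each Graph abstract value (nodes, directed edges) related to
    the UndirectedGraph value, one orientation choice per edge."""
    edge_list = [tuple(sorted(e)) for e in undirected_edges]
    for choices in product((False, True), repeat=len(edge_list)):
        directed = frozenset(
            (m, n) if flip else (n, m)
            for (n, m), flip in zip(edge_list, choices))
        yield (frozenset(nodes), directed)

def parent_assertion_holds(pred, nodes, undirected_edges):
    """Interpret a Graph assertion: it must hold for all related values."""
    return all(pred(g) for g in coercion_relation(nodes, undirected_edges))

values = list(coercion_relation({1, 2, 3}, [{1, 2}, {2, 3}]))
print(len(values))  # 4: two orientations for each of the two edges

# "The graph has exactly two edges" holds for every related value:
print(parent_assertion_holds(lambda g: len(g[1]) == 2,
                             {1, 2, 3}, [{1, 2}, {2, 3}]))  # True
```

The cost the text mentions is visible here: each assertion must be shown well-defined over the whole set of related values, not just a single coerced value.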
2.2.4 Overloading the Trait Functions
This approach attacks the problem of how to interpret the parent type’s specification directly. It is clearly more general than the approaches above, because the others also, in effect, overload the trait functions so that they are defined on abstract values of the inheriting type. Permitting the trait functions to be overloaded in any way at all allows more flexibility, which can be exploited to partially solve the information loss problem (described in Section 3 below).
However, we recently discovered that this approach is not completely modular. Consider the specification of a type Node, which is used in the specification of the type Graph. Let us suppose that there is a trait function in the trait defining the abstract values of type Graph called includesNode, with the signature: includesNode: Graph, Node -> Bool. When one defines a subtype of Node, say ColoredNode, then one must overload the trait function includesNode so that it is defined when its second argument is a ColoredNode. But this violates the definition of modularity of specifications, because the specifier of ColoredNode should not have to know about the specification of Graph, which is not a supertype of Node.
Another problem with this approach is that it requires a nonstandard interpretation of equality (=).
2.3 Discussion
Recall that in an OOPL, inheritance of code does not have to be used to make subtypes. Should inheritance of specifications necessarily be used to make subtypes? We can see no good reason for this in general. All of the approaches mentioned above work for inheriting specifications, even if the inheriting type is not a subtype of the parent type. Indeed this is a plus, as one would want the specification to be well-defined so that one can prove whether a claimed subtype relationship is legal or not.
There is also the semantic issue of what to do when multiple specifications of the same operation are inherited from different parent types. For the sake of brevity, we only offer our opinion that the best thing for a specification language to do is to prohibit the use of inheritance for such specifications; after all, the understandability of the specification is not decreased by giving the specification explicitly.
3 The Information Loss Problem
The information loss problem occurs when an inheriting type’s abstract values contain more information than its parent type’s. Consider what the inherited specification of `pay_interest` in Figure 5 says compared to its specification in Figure 3. From Figure 5 one can conclude that the total balance is increased by the specified interest, but the distribution between checking and savings is not specified. Thus, besides paying interest, an implementation of `pay_interest` for `PlusAccount` is allowed to transfer money between checking and savings!
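The loophole can be demonstrated concretely. In the hypothetical sketch below (abstract values as (savings, checking) pairs, the sum coercion of Section 2.2.2, tolerance omitted), an implementation that redistributes the money still satisfies the inherited, coerced post-condition.

```python
from fractions import Fraction

def inherited_post(pre, post, rate):
    # Inherited BankAccount post-condition for pay_interest, read through
    # the sum coercion: only the TOTAL balance is constrained.
    return sum(post) == (1 + rate) * sum(pre)

pre, rate = (Fraction(200), Fraction(100)), Fraction(1, 10)

intended = (Fraction(220), Fraction(110))  # interest paid on each part
sneaky = (Fraction(330), Fraction(0))      # pays interest AND moves all funds

print(inherited_post(pre, intended, rate))  # True
print(inherited_post(pre, sneaky, rate))    # True: the transfer is allowed
```

Both behaviours total 330, so the inherited specification cannot distinguish them; that is the information loss.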
For post-conditions of the form `self' = tf(self')`, where `tf` is a trait function, the problem may be solved by specially overloading the trait function `tf` to avoid the information loss. However, not all post-conditions take this form—witness `pay_interest`. So overloading the trait functions is not a general solution.
Meyer’s OOPL Eiffel has a way to avoid information loss without resorting to complete respecification. In subtypes, the Eiffel specifier can (only) conjoin an additional assertion to the post-condition using the keyword `then` [Mey92]. For example, one would specify `pay_interest` by stating that the ratio of the savings to the checking parts of a `PlusAccount` is unchanged by `pay_interest`, as in the following.
```cpp
virtual void pay_interest(double rate) {
    ensures then if checking(self^) = 0 then checking(self') = 0
                 else (savings(self')/checking(self'))
                      = (savings(self^)/checking(self^));
}
```
However, there is no reason to limit such shorthands to the specification of subtypes.
The information loss problem may also be amenable to solutions similar to those proposed for the frame problem (see below).
4 The Frame Problem
The frame problem is how to say "and nothing else changes" in a specification [BMR93]. In Larch/C++, a function specification has a modifies clause that says what objects the function is allowed to change. However, when a modifies clause of the form modifies self is inherited, it means that the abstract value as a whole may change. This may be less restrictive than intended, as extra information in the subtype's abstract values may or may not be intended to change. The approach advocated in [BMR93] looks promising as a way to solve this.
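Under a simplified field-based reading of states (an assumption for illustration, not Larch/C++ semantics), the difference between the inherited frame and the intended one can be sketched as follows.

```python
# A modifies clause is read as the set of fields allowed to change;
# every other field must be equal in the pre- and post-states.

def frame_respected(pre: dict, post: dict, may_change: set) -> bool:
    return all(pre[f] == post[f] for f in pre if f not in may_change)

pre = {"savings": 200, "checking": 100}
post = {"savings": 210, "checking": 90}  # shifted money between parts

# Reading the inherited "modifies self" as "any field may change"
# permits the shift ...
print(frame_respected(pre, post, {"savings", "checking"}))  # True
# ... while the intended frame for a savings-only update rules it out.
print(frame_respected(pre, post, {"savings"}))              # False
```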
5 Summary Position
For each inheriting type, the specifier should state a coercion function (or relation). This avoids all modularity problems. For maximum flexibility, a specification language could also allow individual trait functions to be defined, not by the coercion, but by explicit overloading. That way, only overloadings that differ from those produced by the coercion function need be defined. In effect this gives inheritance with overriding to trait functions.
Allowing conjuncts to be added to post-conditions eases the information loss problem without requiring complete respecification.
Acknowledgements
These ideas were developed in conjunction with Yoonsik Cheon, and other members of the local Iowa State formal methods community, especially Tim Wahls, K. Kishore Dhara, and Albert Baker. Thanks to Yoonsik, Tim, and Kishore for comments on drafts.
References
Multi-tier agent architecture for open service ecosystems
Kutvonen, Lea
CEUR Workshop Proceedings
2012-10-15
http://hdl.handle.net/10138/37520
Downloaded from Helda, University of Helsinki institutional repository.
Abstract. The present trend of enterprise computing is towards networked business; thus there is a high demand for a new layer of global infrastructure that facilitates inter-enterprise collaboration management and provides utilities for interoperability. We address the larger problem of collaboration management in cases where the collaboration is dynamically composed of business services from independent organisations. The Pilarcos architecture shows how generic collaboration management utilities can support, with a significant level of automation, the whole lifecycle of a collaboration from service selection with interoperability checking, negotiation with trust decisions and commitment to the operational phase, monitoring, and breach management. We analyse the multi-tier agent architecture nature of the open service ecosystems, and the collaborative management of inter-enterprise collaborations within them.
1 Introduction
The present trend of enterprise computing is towards networked business, and thus there is a high demand for a new layer of global infrastructure that facilitates inter-enterprise collaboration management and provides utilities for interoperability. This development has largely been enabled by the evolution of service-oriented computing (SOC) [1], business process management (BPM) and the adoption of Web services as a common family of technology. Interoperability issues have received attention from ontology researchers as well as from semantic web development. Even organisational differences have been addressed to detect and alleviate heterogeneity at process and conceptual levels, and this work has also been applied to service engineering (e.g., [2–5]). Electronic business collaborations have also received interest in the domains of multi-agent technologies (e.g., [6, 7]) and enterprise architectures (such as Zachman [8], TOGAF [9]), not to forget the enterprise interoperability domain (e.g., [10]) or virtual organisation environments (e.g., ECOLEAD [11–13] and CrossWork [14]). All of these aim to align business needs with the computing solutions available.
While common facilities for discovery of services and composition of them are emerging, a few essential challenges remain: i) preservation of partner autonomy of actors when establishing, controlling and dissolving collaborations, as well as
formulation of contracts between agents and balancing between human-involved decision-making and automated support; ii) adherence to regulatory systems; and iii) evolution of the ecosystem and collaborations.
In our earlier work, we introduced open service ecosystems as such a collaboration governance environment [15–18]. Here we discuss the ecosystem architecture from a multi-agent system point of view: the basic actors include organisation-representing agents, and services that are organisation-governed and initiative-taking. Indeed, in the design of the ecosystem support and processes, agent technologies and speech act concepts, as well as deontic logic, have played an essential role. In particular, focus is given to the responsibilities of agents, as there are some essential differences in comparison to related work that we find powerful. The key difference is the use of an explicit ecosystem entity that determines regulatory, methodological and ecosystem-specific disciplines that allow correctness control in the dynamic inter-enterprise collaborations.
Section 2 provides an overview of the open service ecosystem: its stakeholders, management of inter-enterprise collaborations within it, and the management of services within each involved organisation. It also comments on the interrelated model-driven methods of developing collaboration types, i.e., templates for the eContracts (used for governing collaborations across organisational boundaries), and private and public ecosystem infrastructure services. Section 3 adds details on obligations the ecosystem and organisation-representing agents have, and the agent protocols at collaboration lifecycle, collaboration type design, ecosystem management, trust and privacy management, and decision-making at each organisation. Section 4 clarifies the contributions of the ecosystem design by providing some comparison to various elements of related frameworks.
2 Inter-enterprise collaborations in service ecosystems
In the future, we envision that individual users, enterprises or public organisations can easily compose new services from open service markets. Furthermore, these contract-governed collaborations can be managed by all their partners. All this is supported by a global infrastructure with facilities for interoperability control and contract-based community management (establishment, control and breach recovery) among autonomous organisations; this infrastructure also takes responsibility for governing trust and privacy-preservation issues.
The embedded interoperability support spans technical, semantic and pragmatic interoperability. We define interoperability, i.e. the capability to collaborate, as the effective capability to mutually communicate information in order to exchange proposals, requests, results, and commitments. Technical interoperability is concerned with connectivity between the computational services, allowing messages to be transported from one application to another. Semantic interoperability means that the message content becomes understood in the same way by senders and receivers, both in terms of information representation and messaging sequences. Pragmatic interoperability captures the willingness of partners to perform the collaborative actions. This willingness to participate refers both to
the capability of performing a requested action, and to policies dictating whether it is preferable for the enterprise to allow that action to take place.
Three key concepts in the Pilarcos open service ecosystems are those of inter-enterprise collaborations, eContract agents and ecosystem infrastructure. The Pilarcos architecture views inter-enterprise collaboration as a loosely-coupled, dynamic constellation of business services; it involves multiple partners through their software-based business services and their mutual interactions.
A business service is a software-supported service with a functionality suitable for a business need on the market and thus relevant for the networked business. In itself, each business service is an agent, in terms of being able to take initiative on some activity, being reactive to requests by other business services, and being governed by policies set by its owner. The relationship between a business service and the software supporting it resembles that between an agent and a web service [19]. Business services provide business protocol interfaces for each other, but also utilise locally provided agents for connecting to peer services through channels with appropriately configured properties (e.g., security, transactionality, nonrepudiation).
The type of the service constellations is declared as business network model (BNM), expressed in terms of the roles and interactions within the collaboration, the involved member services, and policies governing the joint behaviour [15]. Intuitively, a BNM describes a business scenario.
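The structure of a BNM as described above — roles, interactions and joint policies — can be pictured as a small data structure. The Python sketch below is illustrative only; the class and field names (`BusinessNetworkModel`, `Role`, the toy `order-fulfilment` scenario) are our own invention, not part of the Pilarcos specification.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    """A role in the business network, typed by the service it requires."""
    name: str
    service_type: str

@dataclass
class BusinessNetworkModel:
    """Sketch of a BNM: roles, interactions between roles, and joint policies."""
    name: str
    roles: list = field(default_factory=list)         # list of Role
    interactions: list = field(default_factory=list)  # (from_role, to_role, protocol)
    policies: list = field(default_factory=list)      # rules governing joint behaviour

    def role_names(self):
        return {r.name for r in self.roles}

# A toy "order fulfilment" scenario expressed as a BNM instance.
bnm = BusinessNetworkModel(
    name="order-fulfilment",
    roles=[Role("buyer", "PurchasingService"), Role("seller", "SalesService")],
    interactions=[("buyer", "seller", "order-protocol")],
    policies=["seller must confirm within 24h"],
)
```

In this reading, a BNM instance is a passive description: the eContract discussed next adds the operational state and governance on top of it.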
The eContract agent governs the inter-enterprise collaboration and captures both business and technical level aspects of control, as well as the large-granule state information to govern the dynamism of the collaboration. The eContract is structured according to a selected BNM.
We define the service ecosystem as an environment – open service market – where service providers and clients can meet, establish contract-governed collaborations and gain experience on the services and partners involved. Our goal is to create a consistent but evolving environment to support governance of inter-enterprise collaborations, and to provide for consistency control of the inter-enterprise collaborations themselves [20].
An essential part of the ecosystem is its ecosystem infrastructure, a set of CaaS agents (Collaboration-as-a-Service) that provide shared utilities for enterprises to discover and select services available in the ecosystem, negotiate and establish collaborations, govern those collaborations through eContract agents, and utilise reputation information and collaboration type information.
As the ecosystems are required to intertwine engineering, governance and operational needs of collaborations, the Pilarcos ecosystem architecture involves:
- enterprises providing and needing each other's business services, with their published business service portfolios [15];
- business-domain governing consortia, with their published models of business networks and business models [15];
- infrastructure service providers of individual functions such as service discovery and selection, contract negotiation and commitment to new collaborations, monitoring of contracted behaviour of partners, breach detection and recovery [15] and reputation flows from past collaborations [21];
- consortia and agencies that define legislative rules for acceptable contracts [18] and joint ontology about vocabulary to be used for contract negotiation, commitment and control [22, 23, 20, 18]; and
- infrastructure knowledge-base providers that maintain the information underlying the ecosystem infrastructure functions; this role is essential in enforcing all conformance rules of all ecosystem activities [23].
The main ecosystem activities are illustrated in Figure 1, involving service engineering (left and bottom), ecosystem and collaboration governance (left and right) and operational-time collaboration support (right and bottom). The left side of Figure 1 depicts processes related to engineering steps at each involved enterprise or consortium. Here, metainformation is brought to the system by designers and analyzers. First, available services are published by service providers (enterprises including public and private sector providers). Second, the publicly known BNMs are created by teams of designers and published after acceptability analysis. Third, regulations for conducting collaboration at administrative domains are fed in by enterprise and ecosystem administrators knowledgeable about local and international laws and business domain practices. This body of knowledge accumulates into metainformation repositories within the globally accessible infrastructure layer. The repositories only accept models that fulfil the set consistency criteria, thus providing a point of control.
The arrows leading to the right at the right side of Figure 1 depict the life-cycle of each collaboration from the negotiation to the termination phase. The collaboration establishment is initiated by one of the partners suggesting the use of a commonly known BNM that can be picked from the infrastructure repositories. Further, the infrastructure services help in discovering and selecting suitable partner services for the collaboration and in running a negotiation protocol between the selected partners. Within the negotiation step, the local, private support agents of each partner consider especially the suitability of the collaboration for the enterprise's strategies and the sufficiency of trust in the other partners in the collaboration. In the enactment and control phase, the local support agents provide protective monitoring and the required contract-related communication.
The arrows leading to the left at the right side of Figure 1 depict the experience information gathered from all the collaborations in the ecosystem, providing feedback for re-engineering and future decision-making processes in the ecosystem. Especially important is the generation of reputation information, as the reputation-based trust management concept facilitates the scalability of the ecosystem. Here we can rely on social ecosystem studies [24]: the number of potential partners in the ecosystem is very limited if there are no established behaviour norms, and only slightly higher if misbehaviour is sanctioned. However, if leaving misbehaviour unreported is itself considered misbehaviour, an increasingly large ecosystem can be kept alive. The reputation production mechanism, together with the negotiation step where partners can reflect on the collaboration's suitability for their strategies and the potential risk predicted from reputation information, creates a cycle that has this necessary control function. It effectively emulates the social or legal pressure of a business domain. This functionality is largely missing from other approaches.
In conclusion, the bottom part of Figure 1 represents the global, federated infrastructure services that participate in the governance, engineering and collaboration management processes. The ecosystem infrastructure services provide generic protocols and knowledge-bases for enterprises to
- meet their collaboration management needs, including service discovery and selection, eContracting, and breach management [15];
- evolve the ecosystem with processes to keep and enhance a coherent knowledge base; this includes the control of collaboration types and available services and interoperability management information [22, 23];
- regulate collaborations so that only acceptable collaboration types are allowed, and control the behaviour of services through contracts and enterprise policies [23];
- perform private decision-making on collaborations; this includes enterprises’ expert system for making decisions related to contracts, breaches and trust [21];
- perform (globally) distributed service production; this involves production methods for service software and coordination models, and definition of open service ecosystem quality requirements for software; and
- collect feedback about the reputation of the (successfully or prematurely) terminating collaborations and their component services [21, 25].
From the business point of view, the ecosystem provides a) a global infrastructure for collaborations that utilise services provided by other ecosystem members; b) a natural environment for innovating new services and new collaboration types; and c) support for enterprises in adjusting to rapidly changing business situations and participating in the natural competition between collaborations and ecosystem members.
3 Multi-tier agent system
The ecosystem-involved agents form four agent tiers to be discussed separately: i) the inter-enterprise collaboration that is governed by an eContract agent and in which enterprise-representing agents participate; ii) the collaboration-as-a-service (CaaS) community, formed by the ecosystem infrastructure services, including the populator agent, service type repository, BNM repository, and reputation-providing agents; there are simultaneously several CaaS communities from which enterprises can choose; iii) the BNM defining community; and iv) the open service ecosystem that is grounded in a CaaS community but in addition has regulatory and policy-based restrictions, so it can direct the acceptability rules of BNMs and service offers in its domain, for example.
These communities are intertwined: CaaS agents and enterprise-representing agents appear in more than one of the above mentioned communities. Below, each of these communities will be discussed in further detail.
3.1 CaaS tier
The ecosystem infrastructure provides generic service agents for ecosystem and collaboration management support. These agents are used by enterprises’ private agents during the collaboration lifecycle from establishment, through operation to dissolution. Also present as agents are those knowledge-bases that are essential for the consistency of the ecosystem behaviour and correctness of collaborations.
In the collaboration establishment phase, service discovery and selection is supported by a populator agent. The collaboration initiator selects a model from the public BNM repository (agent) and invokes the populator to find matching service offers from the trading service for each of the roles [15, 22]. The populator performs a static interoperability check to ensure that the service offers fit the collaboration model, and are compatible with other offers proposed into the same collaboration. New proposed service offer sets can be picked within given resource limits. The populator returns a contract proposal that ensures that the set of services it proposes do match to the roles for their service types, are not denied to work together by regulations, and are interoperable on technical, semantic and pragmatic levels.
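The populator's static interoperability check described above can be approximated as a constraint-satisfaction search over published offers: assign one offer to each role so that service types match and every pair in the proposed set is mutually compatible. This is a minimal sketch; the `populate` function and the single `compatible` predicate (standing in for the technical, semantic and pragmatic checks) are hypothetical names, not the Pilarcos implementation.

```python
from itertools import product

def populate(roles, offers, compatible):
    """Pick one offer per role so that service types match and all pairs
    in the proposed set pass the compatibility predicate."""
    candidates = [[o for o in offers if o["type"] == service_type]
                  for _, service_type in roles]
    for combo in product(*candidates):
        pairs = [(a, b) for i, a in enumerate(combo) for b in combo[i + 1:]]
        if all(compatible(a, b) for a, b in pairs):
            return {role: offer["provider"]
                    for (role, _), offer in zip(roles, combo)}
    return None  # no interoperable population found

offers = [
    {"provider": "AcmeBuy", "type": "PurchasingService", "channel": "https"},
    {"provider": "BestSell", "type": "SalesService", "channel": "https"},
    {"provider": "OldSell", "type": "SalesService", "channel": "ftp"},
]
# Toy compatibility predicate: both services support the same channel.
same_channel = lambda a, b: a["channel"] == b["channel"]
proposal = populate([("buyer", "PurchasingService"), ("seller", "SalesService")],
                    offers, same_channel)
```

The real populator additionally consults regulations and environment contract information; here a single pairwise predicate stands in for the whole check.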
In comparison with other service offer repositories (UDDI [26], ODP/OMG trader [27]), the fundamental difference is that the populator service provides multi-partner matching instead of a client-server setup, and checks not only technical and semantic interoperability but also pragmatic interoperability aspects. The pragmatic aspects include views to BNMs, acceptable role combinations and environment contract information (i.e., requirements on the communication channel properties). The information base utilised by the populator agent is based on the ODP trading service.
As service discovery and selection is separate from contract negotiations, it can be done without access to sensitive information; this makes it possible to have this task implemented as a public agent [15]. Automated negotiation supports the agreement phase of the collaboration [15] (see Collaboration tier) and leads to the formation of the specialised eContract agent for the collaboration instance. The eContract captures the business network model, the players in each role, requirements for communication channels between services, and requirements for non-functional properties of the collaboration, as well as policies providing invariants the collaborative behaviour should hold. The commitment concept in place at eContract establishment time follows the ontology for commitments in multi-agent systems [28]: discharging, assigning, delegating, and cancelling. It also supports a new model of business transactions [25].
In the policies embedded in the eContracts we consider deontic logic [29] rules to be appropriate; this is in line with the usage of ODP RM (open distributed processing reference model) [30] concepts and viewpoints as part of the formalisation of our architecture. Deontic logic is not binary (denied/compulsory), but uses rules of prohibition, obligation and permission instead. This is necessary in an environment where there is no single policy maker or enforcer of the policies and the actors are independent of each other. Thus it is not possible to force a partner to refrain from an action, or to force that partner to take another action. However, it is possible to agree that taking certain actions violates a prohibition and, in addition, to agree on the consequences of violations. The detailed behaviour on functional or non-functional aspects of the partners cannot (practically) be agreed on either, but some optional behaviour patterns can be allowed without causing violation management. This is where permissions clarify the behaviour: something may optionally take place, and a specification exists about the follow-up behaviour.
This policy approach allows us to make a clear distinction between violations of the contracts and acceptable behaviour according to those contracts [31]. However, each partner in the collaboration uses subjective rules for decision-making on whether to join the collaboration, or on whether to report to the eContract agent some violation detected in the sequence of actions it is exposed to.
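A minimal way to picture the deontic rule evaluation discussed above: prohibitions are violated when the action occurs, obligations when it does not, while permissions merely mark optional behaviour as acceptable. The `violations` function and the example rules below are hypothetical illustrations, not the contract language actually used.

```python
def violations(rules, performed_actions):
    """Check a trace of performed actions against deontic rules.
    Prohibitions fail if the action occurs; obligations fail if it
    does not; permissions never produce a violation."""
    found = []
    performed = set(performed_actions)
    for modality, action in rules:
        if modality == "prohibition" and action in performed:
            found.append(("violated prohibition", action))
        elif modality == "obligation" and action not in performed:
            found.append(("unfulfilled obligation", action))
        # "permission": the action may or may not occur; nothing to report
    return found

rules = [("obligation", "deliver-goods"),
         ("prohibition", "share-customer-data"),
         ("permission", "send-newsletter")]
```

The consequences of a detected violation are then a separate, agreed part of the contract, rather than being prevented up front.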
The eContract provides interfaces for the collaboration partners for renegotiation, epoch changes (where membership or responsibilities can be changed), progressing to defined milestones in the business processes, and declaring detected breaches. The eContract is the key agent also at the collaboration termination phase, when feedback on the success of the collaboration is collected for business process improvement, service improvement, ecosystem improvement and for partners' service reputation information feeds. Part of the information is best produced from the shared state information, while most of the data is best produced by the local monitors at each enterprise. The eContract forwards this information to appropriate repository agents.
Essential agents in the CaaS tier are also the reputation flow agents. For each successful or unsuccessful collaboration termination, reports are fed to them. These agents aggregate reputation information on several asset aspects including monetary, reputation, and control assets. These in turn are available to private decision-making agents at future eContract negotiations. Thereby, a dynamic incentive mechanism is effectively created for ecosystem members to keep to their service offers and eContract commitments (including privacy rules), and especially to the reporting protocols [21, 32].
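The aggregation performed by the reputation flow agents can be sketched as per-service, per-asset averaging of experience reports. The report tuple format and the `aggregate_reputation` name are assumptions for illustration; the real agents additionally evaluate the credibility of the reports they receive.

```python
from collections import defaultdict

def aggregate_reputation(reports):
    """Average experience scores per service and per asset aspect
    (monetary, reputation, control, ...)."""
    collected = defaultdict(lambda: defaultdict(list))
    for service, aspect, score in reports:
        collected[service][aspect].append(score)
    return {svc: {asp: sum(scores) / len(scores)
                  for asp, scores in aspects.items()}
            for svc, aspects in collected.items()}

reports = [("BestSell", "monetary", 0.9),
           ("BestSell", "monetary", 0.7),
           ("BestSell", "control", 1.0)]
rep = aggregate_reputation(reports)
```

A trust decision agent would later read such aggregates per aspect rather than a single scalar score, matching the multi-asset view of risk.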
3.2 Inter-enterprise collaboration tier
The inter-enterprise collaboration tier involves the private agents of each participating enterprise, the eContract agent, and the reputation flow agent(s) of the ecosystem. The local support agents subjectively represent the enterprise, and provide a local interface to the ecosystem infrastructure services for the local business services. The essential tasks for which enterprises need their agents to control the collaboration contract or collaboration behaviour include i) contract negotiation, ii) monitoring during collaboration operation, and iii) experience reporting when the collaboration terminates, either having reached its purpose or terminating prematurely due to breaches.
A contract negotiator agent represents an enterprise and is responsible for running collaboration management protocols on behalf of the enterprise delegating rights to it. The agent provides interfaces for application software or administrative interfaces to initiate collaboration establishment, or for responding to suggestions from other enterprises. While the initial service selection is based on public information, the need for privacy of decision-making on the enterprise's commitments is incorporated into a negotiation phase. In the negotiation phase each suggested collaborating party can agree to join the collaboration, or refrain. In routine cases, it is possible for the enterprise's agent to provide an automated response to collaboration proposals: an explicit meta-policy guides the agent to pick routine rejections or commitments. Other situations can be recognised, for example, by uncertainty of the trustworthiness of the peers, uncertainty of the strategic benefit of the collaboration, or uncertainty of the acceptability of negative reputation effects caused by a refusal.
The contract negotiator is aware of enterprise policies that govern all negotiations and commitments, and all services. Where the contract negotiator is not able to make a decision (meta-policies deny it the right) on whether to accept or reject a proposal, it passes the proposal on for human intervention, with the decision-making support information presented in an expert support system style [33, 21]. The decision-making is governed by enterprise policies [33, 21, 20] related to a) strategic policies indicating what type of collaborations or which partners are of interest and worth investing the resources to collaborate with; b) reputation-based trust that weighs the anticipated risk against the tolerated risk level [21]; and c) privacy preservation that may overrule otherwise acceptable collaborations due to high privacy costs involved.
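The meta-policy routing described above — routine commitments and rejections handled automatically, everything else escalated to human intervention — can be sketched as an ordered list of predicate/decision pairs. The `negotiate` function and the example policies are hypothetical.

```python
def negotiate(proposal, metapolicy):
    """Route a collaboration proposal: the first matching meta-policy rule
    yields 'commit' or 'reject'; with no routine rule, escalate to a human."""
    for predicate, decision in metapolicy:
        if predicate(proposal):
            return decision
    return "escalate"  # no routine rule applies: human decision needed

metapolicy = [
    # Blacklisted partners are rejected outright.
    (lambda p: p["partner"] in {"KnownBadCo"}, "reject"),
    # Small, familiar collaborations are committed to automatically.
    (lambda p: p["bnm"] == "order-fulfilment" and p["value"] < 1000, "commit"),
]
```

The escalation path is where the expert-system-style support information of [33, 21] would be assembled for the human decision-maker.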
Although trust and privacy are closely related, the decision-making processes on the issues are separate and parallel. Trust decisions weight expected benefits against anticipated losses in a specific business case; privacy decisions guard access to private information, metainformation and behaviour patterns.
We define trust as the extent to which one party is willing to participate in a given action with a given partner in a given situation, considering the risks and incentives involved. Trust decisions are subjective evaluations made by the trustor, targeting a given trustee and a given action in terms of standard assets shared between organizations: monetary, reputation, control and satisfaction [21]. A trust decision is based on a comparison of the uncontrollable risks that allowing the action would cause, and the willingness to accept them, i.e., risk tolerance. The risk evaluation is expressed as probabilities of different outcomes, estimating how the partner will behave in the future. This estimate is based on earlier experience with the trustee. First-hand experiences and experiences shared by other ecosystem members form the trustee’s reputation, which is the trustor’s subjective perception of how trustworthy the trustee is. Risk tolerance builds on the business importance of allowing the action: different kinds of benefits may be realized by a positive decision alone, such as building a partnership, helping the inter-enterprise collaboration towards realizing its goals, and not triggering compensation clauses in the contract.
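The comparison of risk evaluation against risk tolerance can be illustrated numerically: outcome probabilities estimated from reputation are combined with per-outcome losses, and the resulting expected loss is weighed against the tolerated level set by the business importance of the action. This single-asset sketch simplifies the multi-asset model of [21]; the names and numbers are illustrative.

```python
def trust_decision(outcome_probs, losses, risk_tolerance):
    """Allow the action if the expected loss over the estimated outcome
    distribution does not exceed the tolerated risk level."""
    expected_loss = sum(outcome_probs[o] * losses.get(o, 0.0)
                        for o in outcome_probs)
    return expected_loss <= risk_tolerance

# Reputation-based estimate of how the trustee will behave (probabilities
# of outcomes) and the monetary loss attached to each negative outcome.
probs = {"fulfils": 0.85, "delays": 0.10, "defaults": 0.05}
losses = {"delays": 100.0, "defaults": 1000.0}
```

Raising the risk tolerance (e.g. because the collaboration is strategically important) flips the same evaluation from a rejection into an acceptance, which matches the subjective, situation-dependent nature of the decision.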
We define privacy as the right of subjects to determine themselves for whom, for what purpose, to what extent, and how information about them, or information held by them, is communicated to others [32]. Here, the subject can be a person, social group, organisation or organisational group. Privacy control is the set of actions by which a subject makes decisions on refraining from or involving itself in information exchange or sharing, and takes action on detected privacy violations. A privacy violation arises in circumstances where information is held or used in a way that breaches the privacy declaration of the information-owning subject. A privacy declaration is an expression created by the owner of the protected information that gathers together rules on to whom, for what purpose, to what extent and how information can be made available. The negotiation phase allows each partner to reject or agree to the collaboration without exposing its private policies about the decision. Reasons for rejecting can involve the type of collaboration, partner identity, partner reputation, or strategy on committing to the collaboration load. Privacy enforcement during collaborations is mainly performed by trusted infrastructure, because in many scenarios the actual enforcement of the policies is performed in systems that are not under the direct control of the owners of the processed private information. The local monitor agents (see below) can intercept (and stop) both i) incoming requests that violate the privacy declarations at the contract level or cause a discrepancy against the receiver's local policies, and ii) outgoing information exchanges that violate, or are at risk of indirectly compromising, its local privacy declarations or those of the collaboration.
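A privacy declaration as defined above — rules on to whom, for what purpose and to what extent information may flow — can be checked mechanically, denying by default anything not declared. The `permitted` function and the declaration format are hypothetical simplifications of what an intercepting monitor would evaluate.

```python
def permitted(declaration, request):
    """Check an information-flow request against a privacy declaration.
    The declaration lists, per information item, the allowed recipients
    and purposes; anything not declared is denied by default."""
    rule = declaration.get(request["item"])
    return (rule is not None
            and request["recipient"] in rule["recipients"]
            and request["purpose"] in rule["purposes"])

# Example declaration by the information owner.
declaration = {
    "customer-address": {"recipients": {"seller", "courier"},
                         "purposes": {"delivery"}},
}
```

The deny-by-default stance matches the setting where enforcement happens in systems the information owner does not directly control: only explicitly declared flows pass.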
Monitoring agents support the enactment and control phase of the collaboration [34] by checking the acceptability of the behaviour (messaging) it can
directly assess. The monitors receive rules from the eContract and from their local policy repositories. These rules can be contradictory: at the negotiation phase only those policies are checked that are explicated both in the eContract and in the enterprise policies; moreover, the enterprise policies can change during the collaboration without consulting the collaboration peers. The contradictions can mean failing to fulfil an obligation, or failing to provide the agreed quality level of the service, such as availability, timeliness, privacy-preservation, non-repudiation and immutability. In detected breach situations, the partner needs to decide (automatically or through human intervention) whether the breach is serious enough for terminating or leaving the collaboration. When an essential breach is detected, the eContract agent is notified. The eContract agent then triggers recovery steps, for example, terminating the collaboration, or changing the faulty member to a new one. The recovery capabilities are dependent on the BNM, and therefore the breach recovery process is defined as part of the BNM.
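The local monitoring described above can be sketched as running each observed message past two rule sets: eContract rules, whose breaches should be reported to the eContract agent, and local enterprise rules, which are handled privately. The rule representation below (a predicate returning a verdict and a reason) is an assumption for illustration.

```python
def check_message(message, contract_rules, local_rules):
    """Check a message against both rule sets; each rule returns
    (ok, reason). Breaches are tagged with their origin so that
    contract breaches can be reported to the eContract agent while
    local-policy breaches stay private."""
    breaches = []
    for origin, rules in (("contract", contract_rules), ("local", local_rules)):
        for rule in rules:
            ok, reason = rule(message)
            if not ok:
                breaches.append((origin, reason))
    return breaches

# Toy rules: an agreed timeliness level and a local privacy rule.
timely = lambda m: (m["latency_ms"] <= 500, "response too slow")
no_pii = lambda m: ("ssn" not in m["fields"], "PII leaked")
breaches = check_message({"latency_ms": 900, "fields": ["name"]},
                         contract_rules=[timely], local_rules=[no_pii])
```

Tagging the origin of each breach mirrors the asymmetry in the text: only contract-level breaches trigger eContract-mediated recovery, while local-rule discrepancies feed the enterprise's own decision-making.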
When terminating the collaboration, experience reporting is required [21]. The experience reporting forms the core of social control in the open service ecosystem. As contract violations are detected by monitors, they become known to other actors as well. This creates a direct reputation impact that limits the damage that misbehaving actors can achieve in other collaborations. The storage, processing and reporting of globally shared reputation information is a challenging problem, as it requires support for evaluating the credibility of experience reports [21]. False reports do not only affect the targeted service, but also inhibit the other actors' ability to assess its behaviour, reducing the social control impact of reputation. The experience reporting from all parties provides a reference point for ecosystem-level membership that relies on a democratic measure.
The eContract agent comprises the collaboration metamodel and operations changing it; thus, the eContract provides a shared-language view on the collaboration structure, behaviour, policies and abstracted state. The eContract is a logical agent that is physically replicated to the computing systems of each collaboration member. The private contract agents are responsible for keeping the local services in their governance in synchrony with the committed eContract status. Between themselves these local contract agents need to use protocols familiar from multi-agent systems or speech act theories: definitions and declarations, requests, suggestions, commitments, and opinions.
3.3 Modeling tier
The purpose of the modeling tier is to create BNM models and service types into the ecosystem. We also group distributed, service-oriented, model-driven software engineering processes into this domain as they produce metainformation to the custody of the CaaS repository agents. The Pilarcos architecture includes four essential metainformation repositories: service offer repository, service type repository, BNM repository, and reputation information flow. The repositories must control the publication of offers or models strictly, following the rules provided by the ecosystem management tier.
We have separated the BNM design phase from the collaboration establishment phase, to further automate the commitment phase. The traditional virtual organisation breeding environment way (e.g., in ECOLEAD and CrossWork [35]) of first choosing the partners and basing the business processes on their capabilities actually forces a design phase for each individual collaboration. The business network models can instead be separately but collaboratively designed, verified and validated for their suitability for the market domain. These models also provide a common vocabulary for enterprises to use at the business network establishment negotiations: when a collaboration is being established, the pragmatic interoperability (processes and policies) is tested between partners. Thus, the business services forming a collaboration do not necessarily have a joint history in the breeding environment that would enforce interoperability, but are just introduced to each other in a refining negotiation of the eContract.
The BNMs must be verified and validated carefully before being accepted to the repository. The engineering methodology used must declare the authority for submitting the model to a business network model repository on behalf of the designing team; we assume a team here, because BNM design requires expertise from multiple domains, such as business best practices, associated regulation systems, enterprise and process modelling, market situation and room for new process innovations, and access to the feedback and experience from past collaborations. From the technical point of view, BNMs are compositions of business processes. Thus many of the existing business process or protocol verification tools can check for well-formedness, liveness, deadlock freeness and other process properties. In addition to this control flow point of view, the information flows must also be considered carefully to detect privacy-threatening exchange of information, excess exchange of information causing bad performance, and privacy-threatening accumulation of information to roles.
Service type definitions form a basic vocabulary for declaring business network models and publishing service offers. The type definitions can be reused in multiple collaborations, too, thus creating opportunities for business services to be used in cross-domain business networks.
The metainformation repositories form an ontology forest in which the BNMs form the roots. The service types created can be inserted into the name space of the first BNM in which they are used, to differentiate them from other similarly named types in other application domains.
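The namespacing rule above can be sketched directly: a service type is qualified by the BNM that first introduces it, so same-named types from different application domains remain distinct. The `::` qualifier notation and the `register_service_type` helper are our own invention for illustration.

```python
def register_service_type(forest, type_name, bnm_name):
    """Insert a newly created service type into the name space of the
    BNM that introduces it; the returned qualified name keeps same-named
    types from other application domains distinct."""
    forest.setdefault(bnm_name, set()).add(type_name)
    return f"{bnm_name}::{type_name}"

forest = {}
a = register_service_type(forest, "PaymentService", "order-fulfilment")
b = register_service_type(forest, "PaymentService", "travel-booking")
```

Here the two `PaymentService` registrations yield distinct qualified names, so a service offer can state unambiguously which domain's type it implements.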
3.4 Open service ecosystem tier
The open service ecosystem management tier is responsible for capturing the consistency, acceptability and regulatory aspects into the ecosystem knowledge-bases. These models and rules regulate the behaviour of the generic CaaS agents. Furthermore, the knowledge-base is not static but evolvable, thus allowing both the ecosystem and the individual collaborations within it to evolve.
For consistency enforcement, the ecosystem repositories are governed by four ontology metamodels [23, 20]: i) domain ontology metamodel that defines basic ecosystem concepts like collaboration, service, and contract; ii) methodology
metamodel that defines the phases of service engineering processes and especially the produced artefacts; iii) domain reference metamodel that captures the infrastructure elements by defining the operational-time support functions and the artefacts manipulated; iv) knowledge management metamodel that defines the language for storing each knowledge item and relationship.
The ontology metamodels are interlinked so that a basic concept can be connected to its representation format in a methodology and an operational-time infrastructure function, and has a technical storage representation. Thus, the design and production time artefacts become also artefacts at the operational time. The metamodels are also extendable at each abstraction layer: it is possible to add new top level concepts and relate them to the existing ontology.
As an example of the metamodel effects on the collaborations, we can consider the route by which regulations are embedded into the monitors of a collaboration. At the design time of a BNM, regulations are used to validate the BNM before acceptance, and suitable policies can be embedded into the BNM itself to allow optional behaviours or adaptability to changes in the regulations restricting the use of the optional behaviours. At collaboration negotiation time, this BNM is used as the basis of the eContract and thus the regulatory system is inherited. The partners of the collaboration each use both the eContract policies and their local enterprise policies to govern their local behaviour. Eventually, the regulatory rules get monitored by each of the partners locally and can thus trigger breach management and trust consequences, for example.
4 Discussion
The contributions of the open service ecosystem architecture include i) the CaaS tier; ii) evolution and regulatory system support to collaborations through the ecosystem tier; iii) collaboration governance through the eContract agent; iv) private agents that provide subjective, adjustable monitoring of breaches; and v) incentives and breach management processes through the ecosystem tier.
Our research methods include i) prototyping of the CaaS tier services [15, 34, 21, 23]; ii) creating and analysing a Coloured Petri Net model of the collaboration lifecycles [25]; iii) providing a hierarchy of metamodels to ensure that the design-time and run-time artefacts meet the needs of both phases properly and that the non-functional property framework with trust, privacy and business-enhanced service level agreement needs can be systematically satisfied [17, 23, 16]; iv) specifying key parts of the system using ODP RM [18]; and v) simulations and performance measurements of the trust management facilities and CaaS service prototypes.
In related work, concepts of business ecosystems and digital ecosystems appear more frequently. However, some goal-level differences are often present. The term service ecosystem is often used to refer to an environment where a platform exists for applications to be available in a shared manner from multiple organisations or individuals for maintaining their information flow needs. The difference of open service ecosystems to these is the introduction of the CaaS and
ecosystem tiers that allow changing the platform and the model and rule bases for governing the acceptability and correctness criteria in the ecosystem.
For digital ecosystems (e.g. Digital Business Ecosystems project 507953, [36]) the purpose of the ecosystem introduction is similar; agents represent software-based services, and information exchange patterns join them together. Habitats in digital ecosystems seem to represent a phenomenon similar to business networks in our work. In digital ecosystems, an essential goal is to help agents migrate to different habitats and, by evolution, also increase their fitness. Fitness is measured in terms of suitability to a sufficient amount of requests made in the ecosystem. In our work, services belong to the service portfolio of an enterprise that is a member in an ecosystem; the services become available for other ecosystem members as SaaS services. In terms of fitness measures, populators match service offers to collaboration requests based on the suitability of the service type, and in the negotiations each potential partner decides on the trustworthiness of the composed business network. However, individual services are not evolved within the collaboration lifecycle but behind the modeling and production tier, and thus appear as new, improved services, while non-requested services just stay passive and can eventually be administratively removed. Also with fitness in mind, monitors at each enterprise constantly collect information about the usefulness of the services utilised, thus enabling re-engineering feedback.
The ALIVE project [37, 38] (FP7-215890) addresses service governance through a model for dynamic SOA using agent-based technology. The model captures different levels of abstraction for the specification of governance, expectations, and behaviour of the services, not just their functionality, so that their run-time interactions are predictable, observable, and controllable. As the contracts between partner services are dynamic and autonomously configurable, the participants need the ability to negotiate at run-time to establish the SLAs, to monitor compliance with them, and to take actions to maintain them. While the work addresses trust issues and quality of service level agreements, the presented classification of ecosystems [38] does not include environments where the surrounding ecosystem would have control principles or functionalities for disciplining member behaviour. However, the ALIVE solution proposes a set of tools for organisations to create choreographies between components of agentified services across multiple organisations, as well as tools for monitoring and controlling these compositions. Thus this toolset also helps humans to manage each independent collaboration through models.
Work from CONSOLIDER project on open system communication [39] provides an interesting counterpart for our work. We can relate collaborations with electronic institutions, BNMs with scene networks, and the details of our application-level business protocols between business services with utterance patterns. Interestingly, the Z language is utilised for expressing information state changes in transitions; the ODP RM information viewpoint semantics is defined in Z, and thus the formal foundations in joining utterances (computational viewpoint) and information state changes (information viewpoint) together seem similar in nature.
Our future work plans include projects on the processes of organisations to interface with the described tiers of eContracts, ecosystems and local policy-governing agents, especially in terms of trust and privacy decisions.
HAL Id: lirmm-00720157
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00720157
Submitted on 23 Jul 2012
A Survey of View Selection Methods
Imene Mami and Zohra Bellahsene
Université Montpellier 2 - INRIA, LIRMM, Montpellier, France
{imen.mami,bella@lirmm.fr}
ABSTRACT
Materialized view selection is a critical problem in many applications such as query processing, data warehousing, distributed and semantic web databases, etc. We refer to the problem of selecting an appropriate set of materialized views as the view selection problem. Many different view selection methods have been proposed in the literature to address this issue. The present paper provides a survey of view selection methods. It defines a framework for highlighting the view selection problem by identifying the main dimensions that form the basis for classifying view selection methods. Based on this classification, this study reviews most of the view selection methods, identifying their respective potentials and limits.
1. INTRODUCTION
The view selection issue has been investigated in several contexts: query optimization, warehouse design, data placement in a distributed setting, web databases, etc. Many diverse solutions to the view selection problem have been proposed and analyzed through surveys [13, 21, 32]. The survey [21] concentrates on methods of finding a rewriting of a query using a set of materialized views. The study presented in [32] focuses on the state of the art in materialization for web databases. A critical analysis of methodologies for selecting materialized views in data warehousing is provided in [13]. However, none of the above mentioned surveys provides a classification of view selection approaches in order to identify their advantages and disadvantages. Our survey fills this gap.
This paper aims at studying the view selection in relational databases and data warehouses as well as in a distributed setting. First, we define a framework for highlighting the view selection problem. Thus, we present a classification of view selection methods based on the main view selection dimensions that we have identified. This study also reviews existing view selection methods by identifying respective potentials and limits.
The rest of the paper is organized as follows: Section 2 gives some definitions. Section 3 identifies the main view selection dimensions along which view selection methods can be classified. Section 4 presents a critical survey of existing view selection methods. Section 5 contains the conclusion and discusses open issues.
2. PROBLEM SPECIFICATION
2.1 Preliminaries
Here, we introduce the main notions used in this paper and related to the view selection context.
**View:** A view is a derived relation, defined by a query in terms of base relations and/or other views.
**Materialized View:** A view is said to be materialized if its query result is persistently stored; otherwise it is said to be virtual. We refer to a set of selected views to materialize as a set of materialized views.
**Workload:** A workload or a query workload is a given set of queries \( Q = \{Q_1, Q_2, \cdots, Q_q\} \). Each query \( Q_i \) has an associated non-negative weight \( f_{Q_i} \) which describes the query frequency. The set of materialized views is dependent on the query workload. In a distributed scenario, the queries are executed on different computer nodes. Each computer node has an associated query workload.
**View Selection:** Given a database schema and a query workload, the objective is to select an appropriate set of materialized views to improve query performance. The process of selecting a set of materialized views is known as view selection.
**View Maintenance:** Whenever a base relation is changed, the materialized views built on it have to be updated in order to compute up-to-date query results. The process of updating a materialized view is known as view maintenance. Different maintenance policies (deferred or immediate) and maintenance strategies (incremental or rematerialization) have been proposed in the literature.
2.2 Problem Definition
The use of materialized views is a common technique to reduce query response time [6]. Indeed, materializing an appropriate set of views and answering queries using these views can significantly speed up the query processing since the access to materialized views can be much faster than recomputing the views. Therefore, materializing all the input queries can achieve the lowest query processing cost but the highest view maintenance cost since materialized views have to be maintained in order to keep them consistent with the data at sources. Besides, the query result can be too large to fit in the available storage space. Hence, there is a need for selecting a set of views to materialize by taking into account three important parameters: query processing cost, view maintenance cost and storage space. The problem of choosing which views to materialize so as to achieve a desirable balance among these three costs is known as the view selection problem. This is one of the most challenging problems in data warehousing [50] and it is known to be an NP-complete problem [26]. In a distributed environment consisting of many heterogeneous nodes with different resource constraints, the distributed view selection problem is to decide which view has to be materialized at which node of the network. The view selection problem in a distributed case is much more difficult than the view selection problem in a central case because of the immense challenges associated to distributed settings [16] (i.e., data granularity, degrees of replication, heterogeneity of information sources, etc.).
Problem Formulation: The problem of view selection can be formulated as follows. Given a database schema \( R = \{R_1, R_2, \ldots , R_t\} \), a query workload \( Q = \{Q_1, Q_2, \ldots , Q_q\} \) defined over \( R \), the problem is to select an appropriate set of materialized views \( M = \{V_1, V_2, \ldots , V_m\} \) such that the query workload is answered with the lowest cost under a limited amount of resources, e.g., storage space and/or view maintenance cost.
The view selection problem in a distributed context consisting of a set of nodes \( N = \{N_1, N_2, \ldots , N_n\} \) in which each node has an associated query workload, is to choose a set of views \( M = \{V_1, V_2, \ldots , V_m\} \) and a set of nodes \( N_e \subseteq N \) at which \( M \) should be materialized. The distributed view selection is designed so that the full query workload is answered with the lowest cost subject to resource constraints. Resources may be storage space per node, view maintenance cost, communication costs or a combination of them.
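As a concrete illustration of this formulation, a problem instance can be represented with a few record types. This is only a sketch; the class and field names below are hypothetical, not taken from any surveyed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Query:
    name: str
    frequency: float         # non-negative weight f_Qi of the query

@dataclass(frozen=True)
class View:
    name: str
    size: float              # storage space occupied if materialized
    update_frequency: float  # f_u(Vi): how often the view must be refreshed

@dataclass
class ViewSelectionInstance:
    queries: list            # the workload Q = {Q1, ..., Qq}
    candidates: list         # candidate views from which M is chosen
    storage_limit: float     # resource constraint S on total view size

# A toy instance: two weighted queries, two candidate views, bounded space.
instance = ViewSelectionInstance(
    queries=[Query("Q1", 10.0), Query("Q2", 4.0)],
    candidates=[View("V1", 50.0, 1.0), View("V2", 30.0, 2.0)],
    storage_limit=100.0,
)
```

A solver then searches for the subset of `candidates` that minimizes the workload cost while the sizes of the chosen views stay within `storage_limit`.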
2.3 Cost Model
The cost model is an important issue for the view selection process [9]. The main objective in the view selection problem is the minimization of the weighted query processing cost, defined by the formula:
\[
\text{QueryProcessingCost} = \sum_{Q_i \in Q} f_{Q_i} \times Qc(Q_i, M)
\]
where \( f_{Q_i} \) is the query frequency of the query \( Q_i \) and \( Qc(Q_i, M) \) is the processing cost corresponding to \( Q_i \) given a set of materialized views \( M \).
Because materialized views have to be kept up to date, the view maintenance cost has to be considered. This cost is weighted by the update frequency indicating the frequency of updating materialized views. The view maintenance cost is computed as follows:
\[
\text{ViewMaintenanceCost} = \sum_{V_i \in M} f_u(V_i) \times Mc(V_i, M)
\]
where \( f_u(V_i) \) is the update frequency of the view \( V_i \) and \( Mc(V_i, M) \) is the maintenance cost of \( V_i \) given a set of materialized views \( M \).
The cost model is extended to the distributed setting by taking into account the communication cost, which is the cost for transferring data from its origin to the node that initiated the query. Given a query \( Q_i \) which is asked at a node \( N_j \) and denoting by \( V_k \) a view used to answer \( Q_i \), the communication cost is zero if \( V_k \) is materialized at \( N_j \). Otherwise, let \( N_l \) be the node containing \( V_k \); then the communication cost for transferring \( V_k \) from \( N_l \) to \( N_j \) is:
\[
\text{CommunicationCost}(V_k, N_l \rightarrow N_j) = C_{N_j, N_l} \times size(V_k)
\]
where \( C_{N_j, N_l} \) is the network transmission cost per unit of data transferred between \( N_j \) and \( N_l \) and \( size(V_k) \) is the size of the view \( V_k \).
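Putting the three formulas together, the cost of a candidate set of materialized views can be sketched as below. The names are illustrative: `query_cost` and `maint_cost` stand in for the abstract \( Qc \) and \( Mc \) functions, whose concrete definitions depend on the underlying cost model.

```python
def total_cost(queries, views, query_cost, maint_cost, query_freq, update_freq):
    """Weighted query processing cost plus view maintenance cost for a
    candidate set of materialized views (the first two formulas above)."""
    processing = sum(query_freq[q] * query_cost(q, views) for q in queries)
    maintenance = sum(update_freq[v] * maint_cost(v, views) for v in views)
    return processing + maintenance

def communication_cost(view_size, per_unit_cost, local=False):
    """Distributed extension: cost of shipping a view to the querying node;
    zero when the view is already materialized locally."""
    return 0.0 if local else per_unit_cost * view_size
```

A distributed cost model would sum `total_cost` per node and add `communication_cost` for every view fetched from a remote node.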
2.4 Static View Selection vs. Dynamic View Selection
A static view selection approach is based on a given workload and chooses accordingly the set of views to materialize. In a dynamic view selection approach, by contrast, the view selection is applied as queries arrive. Therefore, the workload is built incrementally and changes over time. Because the view selection has to be in synchronization with the workload, any change to the workload should be reflected in the view selection as well. Indeed, in a system of a dynamic nature [4, 5, 29], the set of
materialized views can be changed over time and replaced with more beneficial views in case the query workload changes. In order to reduce view maintenance cost and storage space requirements, [57] aims at materializing the most frequently accessed tuples of the view rather than materializing all tuples of the view. The set of materialized tuples can be changed dynamically as the queries change, either manually or automatically by an internal cache manager using a feedback loop. However, the task of constantly monitoring the query pattern and periodically recalibrating the materialized views is rather complicated and time consuming, especially in a large data warehouse where many users with different profiles submit their queries.
A dynamic view selection is often referred to as view caching. With caching, the cache is initially empty and data are inserted or deleted from the cache during the query processing. Materialization could be performed even if no queries have been processed, and materialized views have to be updated in response to changes on the base relations. A detailed comparison of these two techniques is given in [27]. Traditional caching approaches aim at caching the results of queries, in other words to cache views. Another alternative is to cache only a part of a view. Indeed, a chunk based scheme has been introduced in [12] for fine granularity caching. Chunk based caching allows caching of only few, frequently used tuples of views. To facilitate the computation of chunks required by a query but not found in the cache, a new organization for base relations has been proposed, called a chunked file. Caching has been adopted in data warehousing [43], distributed databases [28] and peer to peer systems [25]. Dynamic view indexing has also been considered in [44]. In this paper, we focus only on static view selection methods because most existing view selection approaches are of a static nature.
3. VIEW SELECTION DIMENSIONS
In order to identify the advantages and disadvantages of view selection methods, we propose two main dimensions along which they can be classified: (i) Frameworks; and (ii) Resource Constraints.
3.1 Frameworks
Generally, approaches to the view selection problem consist of two main steps. The first step identifies the candidate views which are promising for materialization. Techniques based on multiquery DAG, syntactical analysis of the workload or query rewriting have been used to obtain the candidate views. Based on the set of candidate views, the second step selects the set of views to materialize under the resource constraints and by applying heuristic algorithms.
3.1.1 Multiquery DAG
Most of the proposed view selection methods operate on query execution plans. The plans can be derived from multiple query optimization techniques or by merging multiple query plans. The main interest of such techniques lies in detecting common sub-expressions between the different queries of the workload and capturing the dependencies among them. This feature can be exploited for sharing updates and storage space. The dependence relation on queries (or views) has been represented by using a directed acyclic graph, also called a DAG. However, these methods require optimizer calls which can be expensive in complex scenarios.
The most commonly used DAGs in literature are:
AND/OR View Graph: The union of all possible execution plans of each query forms an AND/OR view graph [40]. The AND/OR view graph described by Roy [42] is a Directed Acyclic Graph (DAG) composed of two types of nodes: operation nodes and equivalence nodes. Each operation node represents an algebraic expression (Select-Project-Join) with a possible aggregate function. An equivalence node represents a set of logical expressions that are equivalent (i.e., that yield the same result). The operation nodes have only equivalence nodes as children and equivalence nodes have only operation nodes as children. The root nodes are the query results and the leaf nodes represent the base relations. A sample AND-OR view graph is shown in figure 1. Circles represent operation nodes (Op-Nodes) and boxes represent equivalence nodes (Eq-Nodes). For example, in figure 1, view $V_1$, corresponding to a single query $Q_1$, can be computed from $V_6$ and $V_3$ or from $R_1$ and $V_4$. If there is only one way to answer or update a given query, the graph becomes an AND view graph. In the data cube, which is a specific model of a data warehouse, the AND-OR view graph is an OR view graph, as for each view there are zero or more ways to construct it from other views, but each way involves only one other view [19]. In other words, an OR view graph is an AND-OR view graph in which any node is an equivalence node that can be computed from any one of its children.
**Multi-View Processing Plan (MVPP):** The MVPP defined by Yang et al [52] is a directed acyclic graph in which the root nodes are the queries, the leaf nodes are the base relations and all other intermediate nodes are selection, projection, join or aggregation views that contribute to the construction of a given query. The MVPP is obtained after merging into a single plan either individual optimal query plans (similar to the AND view graph) or all possible plans for each query (similar to the AND-OR view graph). The difference between the MVPP representation and the AND-OR view graph or the AND view graph representation is that all intermediate nodes in the MVPP represent operation nodes. A sample MVPP is shown in figure 2.
**Data Cube Lattice:** Harinarayan et al. [22] propose modeling data in multiple dimensions. The lattice is built from the queries involved in the data warehouse application, e.g., OLAP-style queries. The Data Cube Lattice is a DAG whose nodes represent queries (or views) which are characterized by the attributes of the Group by clause. The edges denote the derivability relation between views. That is, if there is a path from view $V_i$ to a view $V_j$ (see figure 3), then grouping attributes on $V_j$ can be calculated from grouping attributes on $V_i$. The node labeled none corresponds to an empty set of group-by attributes (tuples are not grouped). The benefit of this representation is that a query can be used to answer or update another query. An extension of the data cube lattice in order to adapt it to a distributed case was proposed in [3, 53]. Indeed, the cube has been modified by adding edges that mark the derivation relationship between views on different computer nodes.
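The derivability relation of the data cube lattice can be sketched directly: each node is a set of group-by attributes, and a view is computable from any node whose attribute set contains its own. The dimension names below are illustrative, not taken from the paper.

```python
from itertools import combinations

def cube_lattice(dimensions):
    """All group-by attribute sets of a data cube, from the full
    group-by down to the empty set (the `none` node)."""
    dims = list(dimensions)
    return [frozenset(c) for r in range(len(dims), -1, -1)
            for c in combinations(dims, r)]

def derivable(child, parent):
    """A view grouped on `child` can be computed from one grouped on
    `parent` iff its attributes are a subset of the parent's."""
    return child <= parent

nodes = cube_lattice(["part", "supplier", "customer"])
# Immediate lattice edges: parent -> child differing by exactly one attribute.
edges = [(p, c) for p in nodes for c in nodes if c < p and len(p - c) == 1]
```

For three dimensions this yields the familiar 8-node lattice; the distributed extension of [3, 53] would additionally replicate nodes per compute node and add cross-node derivation edges.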
**3.1.2 Query Rewriting**
Query rewriting based approaches not only compute the set of materialized views but also find a complete rewriting of the queries over it. Here, the input to the view selection is not a multiquery DAG but the query definitions. The view selection problem is modeled as a state search problem using a set of transformation rules. These rules detect and exploit common subexpressions between the queries of the workload and guarantee that all the queries can be answered using exclusively the materialized views. Nevertheless, the completeness of the transformation rules may make the complexity of state space search strategies exponential.
**3.1.3 Syntactical Analysis of the Workload**
Some view selection methods are based on a syntactical analysis of the workload to identify candidate views. These approaches analyze the workload and pick a subset of relations from which to materialize one or more views, only if this has the potential to reduce the cost of the workload significantly. However, the search space for computing the optimal set of views to be materialized may be very large.
### 3.2 Resource Constraints
Resource constraints considered during the view selection can be taken into account when classifying view selection methods. There are three main models presented in literature: unbounded, space constrained and maintenance cost constrained.
#### 3.2.1 Unbounded
In the unbounded setting, there is no limit on available resources (storage, computation, etc.). Thus, the view selection problem consists in choosing a set of views to materialize that minimizes the query processing cost and the view maintenance cost. Formally, the problem is:
$$\text{minimize } \left( \sum_{Q_i \in Q} f_{Q_i} \times Qc(Q_i, M) + \sum_{V_i \in M} f_u(V_i) \times Mc(V_i, M) \right)$$
However, this approach may lead to two kinds of problems. First, sometimes the selected views may be too large to fit in the available space. Second, the cost of the view maintenance may offset the performance advantages provided by the view materialization.
#### 3.2.2 Space Constrained
Due to the storage space limitation, materializing all views is not always possible. In this setting, a useful notion is that of a view benefit (or query benefit). This is defined as the reduction in the workload evaluation cost, that can be achieved by materializing this view. Also relevant in this context is the per-unit benefit, obtained by dividing the view benefit by its space occupancy. It has been shown [19] that the per-space unit benefit of a view can only decrease as more views are selected (monotonic property). The space constrained model minimizes the query processing cost plus the view maintenance cost under a space constraint.
$$\text{minimize } \left( \sum_{Q_i \in Q} f_{Q_i} \times Qc(Q_i, M) + \sum_{V_i \in M} f_u(V_i) \times Mc(V_i, M) \right)$$
$$\text{under } \sum_{V_i \in M} \text{size}(V_i) \leq S$$
where $S$ is the storage space capacity.
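A greedy selection of the kind discussed later, in section 4.1, can exploit the per-space-unit benefit directly. The following is only a sketch of that idea: `benefit` is assumed to be recomputed against the views already selected, and by the monotonic property it can only decrease as the selection grows.

```python
def greedy_select(candidates, benefit, size, capacity):
    """Space-constrained greedy view selection: repeatedly materialize
    the fitting view with the highest benefit per unit of storage."""
    selected, used = [], 0.0
    remaining = set(candidates)
    while remaining:
        fitting = [v for v in remaining if used + size[v] <= capacity]
        if not fitting:
            break
        best = max(fitting, key=lambda v: benefit(v, selected) / size[v])
        if benefit(best, selected) <= 0:
            break  # no remaining fitting view reduces the workload cost
        selected.append(best)
        used += size[best]
        remaining.remove(best)
    return selected
```

As with any greedy heuristic, the result may be sub-optimal; the point is only that each step is guided by the per-space-unit benefit defined above.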
#### 3.2.3 Maintenance Cost Constrained
This model constrains the time that can be allotted to keep up to date the materialized views in response to updates on base relations. In the maintenance cost constrained model, the maintenance cost of a view may decrease with selection of other views for materialization. Therefore, the query benefit per unit of maintenance cost of a view can increase [20]. This non monotonic nature of maintenance cost makes the view selection problem more difficult. The maintenance cost constrained model minimizes the query processing cost under a maintenance cost constraint.
$$\text{minimize } \left( \sum_{Q_i \in Q} f_{Q_i} \times Qc(Q_i, M) \right)$$
$$\text{under } \sum_{V_i \in M} f_u(V_i) \times Mc(V_i, M) \leq U$$
where $U$ is the view maintenance cost limit.
The models that we have presented in section 3.2 can be extended to the distributed setting by taking into account the distributed specific features (i.e., the communication cost between the computer nodes).
### 4. REVIEW OF VIEW SELECTION METHODS
In this section, we classify the view selection methods according to several dimensions characterizing their algorithms: (i) the resource constraints they consider during the view selection process and (ii) the frameworks they use to obtain the candidate views (see figure 4). Based on this classification, we review most of the view selection methods and the best-known heuristic algorithms proposed in the literature to solve the view selection problem, namely deterministic algorithms, randomized algorithms, hybrid algorithms and constraint programming.
#### 4.1 Deterministic Algorithms Based Methods
Much research work on view selection uses deterministic strategies to address the view selection problem. [41] is the first paper that provides a solution for materializing view indexes, which can be seen as a special case of materialized views. The solution is based on the A* algorithm [37]. An exhaustive approach is also presented in [31, 39] for finding the best set of views to materialize. Nevertheless, an exhaustive search cannot compute the optimal solution in a reasonable time.
The authors in [22] present and analyze algorithms for view selection in the case of OLAP-style queries. They provide a polynomial-time greedy algorithm to select a set of views to materialize that minimizes the query processing cost subject to a space constraint. However, this approach does not consider the view maintenance cost.
Figure 4: A Classification of view selection methods.
The work in [51] deals with more general SQL queries which include select, project, join, and aggregation operations. A greedy algorithm has been designed to select a set of materialized views so that the combined query processing and view maintenance cost is minimized. However, the view maintenance cost has been overrated, since the maintenance cost for a materialized view is taken to be the cost of constructing this view. Besides, the view selection is done without any resource constraint.
A theoretical framework for the view selection problem in data warehousing setting has been developed in [19]. Their work provides a near-optimal exponential time greedy algorithm for the case of AND-OR view graph and near-optimal polynomial time greedy algorithm for the cases of AND view graph and OR view graph. This approach was extended in [20] to study the view selection under a maintenance cost constraint.
The authors in [42] demonstrate that using multi-query optimization techniques in conjunction with a greedy heuristic is practical and provides significant benefit. The greedy heuristic is used to iteratively pick from the AND-OR view graph the set of views to materialize that minimizes the query processing cost. This study was extended in [36] to consider how to optimize view maintenance cost. In addition to speeding up the query workload by selecting materialized views, algorithms exploit common sub-expressions between view maintenance expressions to compute an efficient plan for the maintenance of the materialized views. However, the view selection has been studied without any resource constraint.
The view selection algorithm proposed in [2] is based on the notion of level in the query tree (each view of the query tree is associated to a level). In this approach, the view selection problem is studied under a space constraint and solved in two phases. The first phase depends on local optimization, by taking each query and pre-selecting a set of views which reduce the query processing cost without increasing significantly the view maintenance cost. The second phase computes the cost for each level of the query graph and selects the one which has the minimal sum of query processing and view maintenance cost.
The view selection has been studied in [34, 45, 46, 47, 48] under the condition that the input queries can be answered using exclusively the materialized views. An exhaustive algorithm has been designed in [47] to select a set of materialized views while minimizing the combination of the query processing and view maintenance cost. This work was extended in [34] by developing greedy algorithms that expand only a small fraction of the states produced by the exhaustive algorithm. The view selection problem in [45, 46, 48] is addressed under a space constraint. However, their view selection algorithm still runs in exponential time. A survey of work on answering queries using views can be found in [21].
The study in [1] is based on a syntactical analysis of the workload to address the problem of selecting both views and indexes to be materialized. This approach proceeds in three main steps. The first step analyzes the workload and chooses subsets of base relations with a high impact on the query processing cost. Based on the base relations subsets, the second step identifies syntactically relevant views and indexes that can potentially be materialized. In the third step, the system runs a greedy enumeration algorithm to pick a set of views and indexes to materialize based on the result of the second step, by taking into account the space constraint. Nevertheless, this approach does not take into account the view maintenance cost.
The works published in [3, 53] address the view selection problem in a distributed data warehouse environment. An extension of the concept of a data cube lattice to capture the distributed semantics has been proposed. Moreover, they extend a greedy based selection algorithm for the distributed case. However, the cost model that they have used does not include the view maintenance cost. Furthermore, the network transmission costs are not considered, which is a serious limitation in a distributed context: the communication cost is computed only from the size of the query result.
The above methods take a deterministic approach, either by exhaustive search or by some heuristics such as greedy. However, greedy search is subject to well-known caveats: sub-optimal solutions may be retained instead of the globally optimal one, since the initial solutions greatly influence the outcome. As a result, many paradigms and programming techniques have been developed to improve the solutions of the view selection problem: randomized algorithms, hybrid algorithms and constraint programming, which we describe in the next subsections.
4.2 Randomized Algorithms Based Methods
Typical randomized algorithms are genetic algorithms [14] or simulated annealing [30]. Genetic algorithms generate solutions using techniques inspired by the natural evolution process such as selection, mutation, and crossover. The search strategy for these algorithms is very similar to biological evolution. Genetic algorithms start with a random initial population and generate new populations by random crossover and mutation. The fittest individual found is the solution. The algorithms terminate as soon as there is no further improvement over a period.
A genetic algorithm has been used in [23, 55] in conjunction with the MVPP framework to solve the view selection problem. The materialized views have been selected according to their reduction in the combined query processing and view maintenance cost. However, because of the random characteristic of the genetic algorithm, some solutions can be infeasible. For example, in the maintenance cost constrained model, when a view is selected, the benefit will not only depend on the view itself but also on other views that are selected. One solution to this problem is to add a penalty value as part of the fitness function to ensure that infeasible solutions will be discarded. For instance, a penalty function has been applied in [33] which reduces the fitness each time the maintenance cost constraint is not satisfied. This approach minimizes the query processing cost given varying upper bounds on the view maintenance cost, assuming unlimited amount of storage space. In order to let the genetic algorithm converge faster, they represent the initial population as a favorable configuration based on external knowledge about the problem and its solution rather than a random sampling, i.e., the views with a high query frequency are most likely selected for materialization. However, the genetic algorithm may tend to get stuck at a poor local optimum fairly early. A solution was provided in [54] to avoid premature convergence and keep improving the solution by incorporating constraints into the algorithm through a stochastic ranking procedure where no penalty functions are used.
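The penalty-function idea can be sketched in a few lines: infeasible individuals are kept in the population, but their fitness is degraded in proportion to the constraint violation. The cost functions and the penalty weight below are placeholders, not values from the cited works.

```python
def fitness(selection, query_cost, maint_cost, maint_limit, penalty=1e6):
    """Fitness (lower is better) for the maintenance-cost-constrained
    model: the query processing cost, plus a penalty proportional to
    any excess over the allowed view maintenance cost."""
    excess = max(0.0, maint_cost(selection) - maint_limit)
    return query_cost(selection) + penalty * excess
```

A genetic algorithm minimizing this fitness will strongly disfavour, without outright discarding, selections that violate the maintenance cost constraint.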
The study presented in [8], which is based on a syntactical analysis of the workload, deals with the distributed view selection problem. This approach consists of three main steps. The first one extends the base relations selection algorithm described in [1] to the distributed scenario. Based on the result of the first step and the similarity between queries, the second step generates the candidate views which are promising for materialization. In the third step, a genetic algorithm is applied to select a set of materialized views, together with the nodes of the network on which they will be materialized, that minimizes the query processing and view maintenance cost. However, this approach takes into account neither the space constraint nor the maintenance cost constraint.
The approaches proposed in [10, 11, 24] use simulated annealing algorithms to address the view selection problem. These algorithms are motivated by an analogy to annealing in solids. Simulated Annealing algorithms start with an initial configuration, generate new configurations by random walk along the different solutions of the solution space according to a cooling schedule and terminate as soon as no applicable ones exist or lose all the energy in the system.
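A minimal sketch of such an annealing loop on a toy combined-cost model (the cost numbers, geometric cooling schedule, and single-flip neighborhood are illustrative assumptions, not from the cited papers):

```python
import math
import random

random.seed(1)

# Hypothetical instance: materializing view i saves BENEFIT[i] of query cost
# but adds MAINT[i] of maintenance cost. BASE is the query cost with nothing
# materialized. All numbers are made up.
BENEFIT = [10, 7, 5, 3, 9]
MAINT   = [ 6, 3, 2, 1, 8]
BASE    = 40

def combined_cost(bits):
    saving = sum(b for b, x in zip(BENEFIT, bits) if x)
    maint  = sum(m for m, x in zip(MAINT, bits) if x)
    return BASE - saving + maint

def neighbor(bits):
    """Random walk: flip the materialization decision of one view."""
    out = list(bits)
    out[random.randrange(len(out))] ^= 1
    return out

state = [0] * len(BENEFIT)          # initial configuration
temp = 10.0
while temp > 0.01:                  # cooling schedule
    cand = neighbor(state)
    delta = combined_cost(cand) - combined_cost(state)
    # Always accept improvements; accept uphill moves with shrinking probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
    temp *= 0.99                    # cool down; terminate when "energy" is gone

print(state, combined_cost(state))
```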
Materialized views have been selected in [10] so that the combined query processing and view maintenance cost is minimized. The view selection problem is solved in [24] under the case where either the space constraint or the maintenance cost constraint is considered. Further, randomized search has been applied to solve two more issues. First, they considered the case where both space and maintenance constraints exist. Next they applied randomized search in the context of dynamic view selection.
In order to support scalability when the number of views and queries becomes large, a new approach has been introduced in [11] using Parallel Simulated Annealing (PSA) for materialized view selection. By performing simulated annealing with multiple inputs over multiple compute nodes concurrently, PSA is able to improve the quality of the obtained sets of materialized views. Moreover, PSA is able to perform view selection on an MVPP having a much larger number of views, which reflects the real data warehousing environment. However, the view selection problem is solved without any bound on either the storage space or the view maintenance cost.
Randomized algorithms can be applied to complex problems dealing with large or even unlimited search spaces. Thus, the use of randomized algorithms can be considered for solving large combinatorial problems such as the view selection problem. However, the quality of the solution \(^1\) depends on the set-up of the algorithm as well as on the extremely difficult fine-tuning of the algorithm that must be performed over many test runs.
4.3 Hybrid Algorithms Based Methods
Hybrid algorithms combine the strategies of deterministic and randomized algorithms in their search in order to provide better performance in terms of solution quality. Solutions obtained by deterministic algorithms are used as initial configuration for simulated annealing algorithms or as initial population for genetic algorithms.
A hybrid approach has been applied in [56] which combines heuristic algorithms, i.e., greedy algorithms, and genetic algorithms to solve three related problems. The first one is to optimize queries. The second one is to choose the best global processing plan from multiple processing plans for each query. The third one is to select materialized views from a given global processing plan. Their experimental results confirmed that hybrid algorithms provide better performance, in terms of solution quality, than either genetic algorithms or greedy algorithms used alone. However, their algorithms are more time consuming and may be impractical due to their excessive computation time.
4.4 Constraint Programming Based Methods
Constraint programming is a descendant of declarative programming. This programming technique has been exploited in many applications for solving combinatorial problems [49]. The success of using constraint programming for combinatorial optimization is due to its combination of high level modeling, constraint propagation and facilities to control the search behavior.
A constraint programming based approach has been presented in [35] to address the view selection problem. More specifically, the view selection problem has been modeled as a constraint satisfaction problem. Its resolution has been supported automatically by a constraint solver embedded in the constraint programming language. The authors proved experimentally that a constraint programming based approach provides better performance, in terms of cost savings, compared with a randomized method, i.e., a genetic algorithm. The view selection has been studied under the case where (i) only the maintenance cost constraint is considered and (ii) both
\(^1\)The solution quality represents the quality of the set of materialized views found by the algorithm. For example, the solution quality may be measured in terms of cost savings.
maintenance cost and space constraints exist. They have also shown that their approach is scalable.
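To make the constrained formulation concrete, the following sketch enumerates all assignments and checks both constraints directly; a real constraint solver would prune this search through constraint propagation, and the instance numbers are made up:

```python
from itertools import product

# Hypothetical instance: maximize cost saving subject to both a storage
# constraint and a maintenance-cost constraint.
SAVING = [10, 7, 5, 3, 9]
SPACE  = [ 5, 4, 2, 1, 6]
MAINT  = [ 4, 3, 2, 1, 5]
MAX_SPACE, MAX_MAINT = 9, 8

best, best_saving = None, -1
for bits in product([0, 1], repeat=len(SAVING)):   # enumerate all assignments
    if sum(s for s, x in zip(SPACE, bits) if x) > MAX_SPACE:
        continue                                   # violates space constraint
    if sum(m for m, x in zip(MAINT, bits) if x) > MAX_MAINT:
        continue                                   # violates maintenance constraint
    saving = sum(s for s, x in zip(SAVING, bits) if x)
    if saving > best_saving:
        best, best_saving = bits, saving

print(best, best_saving)
```

On this instance the optimum materializes views 0, 2, and 3 for a saving of 18, which satisfies both bounds.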
5. CONCLUSION
This study provides a critical survey of the different approaches in which view selection has been studied in relational databases and data warehouses as well as in a distributed setting. We have defined the view selection problem formally and identified the main view selection dimensions along which view selection methods can be classified. Based on this classification, we have discussed most of the existing view selection methods.
Analysis of the state of the art of view selection has shown that there is very little work on view selection in distributed databases and data warehouses [3, 8, 53] and no effective solution for peer to peer systems. Indeed, [16] seems to be the only paper which deals with the view selection problem in a peer to peer environment. It provides a full definition of the problem but no algorithm or detail on how to select an effective set of views to materialize and place them at appropriate peers. Thus, one of the challenging directions of future work is to address the view selection problem in a distributed setting. More recently, materialized view selection has been explored in semantic web databases [7, 15] in order to facilitate efficient processing of RDF queries and updates. However, these works consider a static workload, which contradicts the dynamic nature of the web. Indeed, any change to the workload should be reflected in the view selection as well. This issue will be an important aspect of future studies of view selection in semantic web databases.
6. ACKNOWLEDGEMENTS
We would like to thank the reviewers for their valuable comments to improve this paper.
7. REFERENCES
1 Branch Prediction Overview
2 Software-Based Branch Prediction
2.1 Static Software Hints
2.2 Branch Delay Slots
2.3 Predication
3 Hardware-Based Branch Prediction
3.1 Fixed Branch Predictor
3.2 Branch History Table (BHT) Predictor
3.3 Two-Level Predictor For Temporal Correlation
3.4 Two-Level Predictor For Spatial Correlation
3.5 Generalized Two-Level Predictors
3.6 Tournament Predictors
3.7 Branch Target Buffers (BTBs) Predictor
1. Branch Prediction Overview
Assume incorrect branch prediction in dual-issue I2OL processor.
```
bne
opA
opB
opC
opD
opE
opF
opG
opTARG
```
Assume correct branch prediction in dual-issue I2OL processor.
```
bne
opA
opTARG
opX
opY
opZ
```
Three critical pieces of information we need to predict control flow:
- (1) Is this instruction a control flow instruction?
- (2) What is the target of this control flow instruction?
- (3) Do we redirect control flow to the target or next instr?
2. Software-Based Branch Prediction
When do we know these critical pieces of information?
<table>
<thead>
<tr>
<th></th>
<th>jal</th>
<th>jr</th>
<th>bne</th>
</tr>
</thead>
<tbody>
<tr>
<td>(1) Is this instruction a control flow instruction?</td>
<td>D</td>
<td>D</td>
<td>D</td>
</tr>
<tr>
<td>(2) What is the target of this control flow instruction?</td>
<td>D</td>
<td>X</td>
<td>D</td>
</tr>
<tr>
<td>(3) Do we redirect ctrl flow to the target or next instr?</td>
<td>D</td>
<td>D</td>
<td>X</td>
</tr>
</tbody>
</table>
What do we need to predict in F stage vs. D stage?
<table>
<thead>
<tr>
<th></th>
<th>jal</th>
<th>jr</th>
<th>bne</th>
</tr>
</thead>
<tbody>
<tr>
<td>F stage</td>
<td>predict 1,2,3</td>
<td>predict 1,2,3</td>
<td>predict 1,2,3</td>
</tr>
<tr>
<td>D stage</td>
<td>no prediction</td>
<td>predict 2</td>
<td>predict 3</td>
</tr>
</tbody>
</table>
- Static software hints
- Branch delay slots
- Predication
2.1. Static Software Hints
Software provides hints about whether a control flow instruction is likely to be taken or not taken. These hints are part of the instruction and thus are available earlier in the pipeline (e.g., in the D stage).
```
bne.t
opA
opTARG
bne.nt
opY
opZ
```
What if the hint is wrong?
```
bne.t
opA
opTARG
bne.nt
opA
opB
```
2.2. Branch Delay Slots
Without branch delay slots must squash fall through instructions if branch is taken.
<table>
<thead>
<tr>
<th>bne</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>opA</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>opB</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>targ</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
With branch delay slots compiler can put useful work in the slots. Instructions in the delay slots are always executed regardless of branch condition.
<table>
<thead>
<tr>
<th>bne</th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>opA</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>opB</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>targ</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
2.3. Predication
Not really “prediction”. Idea is to turn control flow into dataflow completely eliminating the control hazard.
**Conditional move instructions** conditionally move a source register to a destination register.
\[
\text{movn } rd, rs1, rs2 \quad \text{if (} R[rs2] \neq 0 \text{ ) } R[rd] \leftarrow R[rs1] \\
\text{movz } rd, rs1, rs2 \quad \text{if (} R[rs2] = 0 \text{ ) } R[rd] \leftarrow R[rs1]
\]
<table>
<thead>
<tr>
<th>Pseudocode</th>
<th>w/o Predication</th>
<th>w/ Predication</th>
</tr>
</thead>
<tbody>
<tr>
<td>if ( a < b )</td>
<td>slt x1, x2, x3</td>
<td>slt x1, x2, x3</td>
</tr>
<tr>
<td>x = a</td>
<td>beq x1, x0, L1</td>
<td>movn x4, x2, x1</td>
</tr>
<tr>
<td>else</td>
<td>addi x4, x2, x0</td>
<td>movz x4, x3, x1</td>
</tr>
<tr>
<td>x = b</td>
<td>jal x0, L2</td>
<td></td>
</tr>
<tr>
<td></td>
<td>L1:</td>
<td></td>
</tr>
<tr>
<td></td>
<td>addi x4, x3, x0</td>
<td></td>
</tr>
<tr>
<td></td>
<td>L2:</td>
<td></td>
</tr>
</tbody>
</table>
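The semantics of these conditional moves can be modeled in a few lines (a hypothetical Python model, not part of the handout; `movn`/`movz` follow the register-transfer definitions above):

```python
def movn(rd, rs1, rs2):
    """movn: R[rd] <- R[rs1] if R[rs2] != 0, else rd keeps its old value."""
    return rs1 if rs2 != 0 else rd

def movz(rd, rs1, rs2):
    """movz: R[rd] <- R[rs1] if R[rs2] == 0, else rd keeps its old value."""
    return rs1 if rs2 == 0 else rd

def select_min(a, b):
    """Compute x = (a < b) ? a : b entirely in dataflow, with no branch."""
    x1 = 1 if a < b else 0        # slt x1, x2, x3
    x4 = 0
    x4 = movn(x4, a, x1)          # if a < b : x = a
    x4 = movz(x4, b, x1)          # else     : x = b
    return x4

print(select_min(3, 7), select_min(9, 2))  # prints: 3 2
```

Both moves always execute; the predicate register decides which one actually updates the destination, which is exactly how the control hazard is eliminated.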
**Full predication** enables almost all instructions to be executed under a predicate. If predicate is false, instruction should turn into a NOP.
<table>
<thead>
<tr>
<th>Pseudocode</th>
<th>w/ Predication</th>
</tr>
</thead>
<tbody>
<tr>
<td>if ( a < b )</td>
<td>slt.p p1, x2, x3</td>
</tr>
<tr>
<td>opA</td>
<td>( p1) opA</td>
</tr>
<tr>
<td>opB</td>
<td>( p1) opB</td>
</tr>
<tr>
<td>else</td>
<td></td>
</tr>
<tr>
<td>opC</td>
<td>(!p1) opC</td>
</tr>
<tr>
<td>opD</td>
<td>(!p1) opD</td>
</tr>
</tbody>
</table>
- What if both sides of branch have many instructions?
- What if one side of branch has many more than the other side?
3. Hardware-Based Branch Prediction
- Fixed branch predictor
- Branch history table (BHT) predictor
- Two-level predictor for temporal correlation
- Two-level predictor for spatial correlation
- Generalized two-level predictors
- Tournament predictor
- Branch target buffer (BTB) predictor
3.1. Fixed Branch Predictor
- Always predict not taken
- What we have been assuming so far
- Simple to implement and can perform prediction in F
- Poor accuracy, especially on very important backwards branch in loops
- Always predict taken
- Difficult to implement: we don’t know if this is a branch until D
- Difficult to implement: we don’t know target until at least D
- Could predict not taken in F, and then adjust in D
- Poor accuracy, especially on if/then/else
- Predict taken for backward branches and predict not taken for forward branches
- Difficult to implement: we don’t know if this is a branch until D
- Difficult to implement: we don’t know target until at least D
- Could predict not taken in F, and then adjust in D
- Better accuracy
```
loop: <------------------.
lw x1, 0(x2) | backward
lw x3, 0(x4) | branches
slt x5, x1, x3 | taken on avg
beq x5, x0, L1 --. forward | 90%
addi x6, x1, x0 | branches |
jal x0, L2 | taken on avg |
L1: <-’ 50%
addi x6, x3, x0 |
L2: |
sw x6, 0(x7) |
addi x2, x2, 4 |
addi x4, x4, 4 |
addi x7, x7, 4 |
addi x8, x8, -1 |
bne x8, x0, loop -------------------’
```
- For now let’s focus on conditional branches as opposed to unconditional jumps
- Let’s assume we always predict not-taken in the F stage
- In the D stage, we know if the instruction is a branch and we know the target of the branch
- So key goal is to predict whether or not we need to redirect the control flow, i.e., to predict the branch outcome in the D stage instead of waiting until the X stage
- By doing this prediction in the D stage we can reduce the branch misprediction penalty by several cycles although it is still not zero if we predict the branch is taken
3.2. Branch History Table (BHT) Predictor
How can we do better? Exploit structure in the program, namely **temporal correlation**: the outcomes of specific static branch in the past may be a good indicator of the outcomes of future dynamic instances of the same static branch.
**One-Bit Saturating Counter**
Remember the previous outcome of a specific static branch and predict the outcome will be the same for the next dynamic instance of the same branch.
Consider how this saturating counter would behave for a backwards branch in a loop with four iterations. Assume the entire loop is executed several times.
<table>
<thead>
<tr>
<th>Iteration</th>
<th>Prediction</th>
<th>Actual</th>
<th>Mispredict?</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>4</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>4</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Exploiting temporal correlation works well, but a one-bit saturating counter will always mispredict the backwards branch in a loop twice. Loops are very common!
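A quick simulation confirms the two-mispredictions-per-loop behavior (a Python sketch; the outcome trace is T T T N per loop execution, matching the four-iteration loop above):

```python
def one_bit_predictor(outcomes):
    """Count mispredictions of a 1-bit last-outcome predictor."""
    state = 0                        # 0 = predict not-taken, 1 = predict taken
    miss = 0
    for taken in outcomes:
        if (state == 1) != taken:    # prediction differs from actual outcome
            miss += 1
        state = 1 if taken else 0    # remember only the last outcome
    return miss

# Backwards loop branch, four iterations (T T T N), loop executed 3 times.
trace = [True, True, True, False] * 3
print(one_bit_predictor(trace))      # prints: 6  (2 misses per loop execution)
```

The first taken branch and the final not-taken branch of every loop execution are both mispredicted, giving exactly two misses per pass.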
**Two-Bit Saturating Counter**
Remember the last two outcomes of a specific static branch. Require two consecutive “counter examples” before changing the prediction.
Consider how this saturating counter would behave for a backwards branch in a loop with four iterations. Assume the entire loop is executed several times.
<table>
<thead>
<tr>
<th>Iteration</th>
<th>Prediction</th>
<th>Actual</th>
<th>Mispredict?</th>
<th>ST</th>
<th>WT</th>
<th>WNT</th>
<th>SNT</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>2</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>3</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>4</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>1</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>2</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>3</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>4</td>
<td>Predict</td>
<td>NT</td>
<td></td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
</tbody>
</table>
What if start state is strongly taken?
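The same kind of simulation answers this for the two-bit counter (a Python sketch; states 0 to 3 encode SNT, WNT, WT, ST):

```python
def two_bit_predictor(outcomes, state=0):
    """2-bit saturating counter: 0=SNT, 1=WNT, 2=WT, 3=ST. Returns misses."""
    miss = 0
    for taken in outcomes:
        if (state >= 2) != taken:                # predict taken iff WT or ST
            miss += 1
        # Saturating update toward the actual outcome.
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return miss

# Four-iteration loop branch (T T T N), loop executed 3 times.
trace = [True, True, True, False] * 3
print(two_bit_predictor(trace))            # starting from strongly not-taken
print(two_bit_predictor(trace, state=3))   # starting from strongly taken
```

Starting from strongly not-taken costs 5 misses (2 of warm-up plus the final not-taken of each pass); starting from strongly taken costs only 3, one per loop execution, because the single not-taken outcome never flips the prediction.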
Other Two-Bit FSM Branch Predictors
See Fig 5.8 in Shen & Lipasti for other alternatives.
Branch History Table
- So far we have focused on a simple FSM that exploits temporal correlation to make a prediction for a specific static branch.
- To make predictions for many different static branches, we need to keep track of a dedicated FSM per static branch.
- A branch history table (BHT) is a table where each entry is the state of the FSM for a different static branch.
- Two PC’s can “alias” to the same entry in BHT.
- Aliasing is similar to a cache conflict.
- We could store the PC as a tag along with the FSM state to make sure we don’t mix up the FSM state across two static branches.
- Storing the PC is too expensive though, so we can just let branches alias and this just reduces the branch prediction accuracy.
- Can reduce aliasing with larger BHT.
BHT with 4k entries and 2bits/entry = 80–90% accuracy
How do we continue to improve prediction accuracy? Exploit even more complicated temporal correlation.
Often a branch exhibits more complicated patterns than just "always taken" or "always not taken." We could develop a more complicated FSM, but the patterns vary per branch; we want per-branch customized FSMs.
```c
void convolve( int B[], int A[], int size )
{
  for ( int i = 2; i < size - 2; i++ )
    for ( int j = 0; j < 5; j++ )
      B[i] += A[i - (2 - j)] * COEFF[j];
}
```
Can we predict that every fifth dynamic instance of the backwards loop branch will be not taken?
3.3. Two-Level Predictor For Temporal Correlation
When a branch is taken or not taken, we shift either a one (taken) or a zero (not taken) into the least significant bit of the corresponding branch history shift register (BHSR).
<table>
<thead>
<tr>
<th>Index</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td></td>
</tr>
<tr>
<td>0111</td>
<td>ST</td>
</tr>
<tr>
<td>1000</td>
<td>WT</td>
</tr>
<tr>
<td>1001</td>
<td>WT</td>
</tr>
<tr>
<td>1010</td>
<td>WT</td>
</tr>
<tr>
<td>1011</td>
<td>ST</td>
</tr>
<tr>
<td>1100</td>
<td>WT</td>
</tr>
<tr>
<td>1101</td>
<td>ST</td>
</tr>
<tr>
<td>1110</td>
<td>ST</td>
</tr>
<tr>
<td>1111</td>
<td>SNT</td>
</tr>
</tbody>
</table>
- BHSR captures temporal pattern for that branch
- We use the BHSR to index into the PHT. A BHT has an entry per branch, but a PHT has an entry per branch pattern.
- The PHT says for a given pattern over the past n executions of a branch, should I take or not take the next execution of this branch?
- Once the two-level predictor is warmed up for previous nested loop example, the state of the PHT would be what is shown on the left
- Need at least four bits of “history” to learn this pattern and perfectly predict this branch
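A small simulation of this per-branch BHSR + PHT scheme shows it learning the T T T T N pattern of the convolve inner-loop branch perfectly after warm-up (a Python sketch; 2-bit PHT counters, 4 history bits):

```python
def two_level_predictor(outcomes, hist_bits=4):
    """Per-branch BHSR indexing a PHT of 2-bit saturating counters."""
    bhsr = 0
    mask = (1 << hist_bits) - 1
    pht = [1] * (1 << hist_bits)          # counters 0..3, init weakly not-taken
    miss = 0
    for taken in outcomes:
        ctr = pht[bhsr]
        if (ctr >= 2) != taken:           # predict taken iff counter >= 2
            miss += 1
        # Update the counter for this history, then shift the outcome in.
        pht[bhsr] = min(ctr + 1, 3) if taken else max(ctr - 1, 0)
        bhsr = ((bhsr << 1) | taken) & mask
    return miss

# Inner-loop branch pattern T T T T N (five iterations), repeated 20 times.
trace = ([True] * 4 + [False]) * 20
print(two_level_predictor(trace))
```

After warming up the handful of PHT entries reached by the five recurring 4-bit histories, the predictor makes zero further mispredictions; doubling the trace length does not add a single miss.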
Problem: Multiple branches with same history might need different predictions. In other words, aliasing in the PHT can lose accuracy.
Solution: Add multiple PHTs, use bits from PC to choose which PHT to use.
[Diagram of a two-level predictor for temporal correlation]
Isomorphic - two different ways of drawing the same two-level structure.
3.4. Two-Level Predictor For Spatial Correlation
The way one branch is resolved may be a good indicator of the way a later (different) branch will resolve.
Consider two branches: branch 1 guards `if ( x < 7 )` and branch 2 guards `if ( x < 5 )`, with some instructions in between.
If branch 1 is taken (i.e., x >= 7), then branch 2 is always taken (since x >= 7 implies x >= 5).
So whether branch 1 is taken or not can be used to predict if we should take branch 2.
For this example, the BHSR will capture the outcome of branch 1, and that value of the BHSR will point to an entry in the PHT that predicts taken for branch 2.
As before, multiple PHTs selected by bits of the PC can help avoid aliasing in the PHT.
**GENERALIZED TWO-LEVEL PHTS**
Combined approach to exploit both complex temporal correlation and spatial correlation.
The difference from the discussion of complex temporal correlation is that we purposely choose a smaller m to cause aliasing so as to allow us to capture spatial correlation.
<table>
<thead>
<tr>
<th></th>
<th>( k = 0 )</th>
<th>( 0 < k < 30 )</th>
<th>( k = 30 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>m = 0</td>
<td>GAg</td>
<td>GAs</td>
<td>GAp</td>
</tr>
<tr>
<td>0 < m < 30</td>
<td>PAg</td>
<td>PAs</td>
<td>PAp</td>
</tr>
<tr>
<td>m = 30</td>
<td>SAg</td>
<td>SAs</td>
<td>SAp</td>
</tr>
</tbody>
</table>
97% accuracy
3.5. Generalized Two-Level Predictors
**gselect** (isomorphic to the previous figure) concatenates bits from the PC with the global BHSR to index the PHT.
**gshare** instead XORs bits from the PC with the BHSR to form the PHT index. Compared to concatenating the two, XORing helps avoid aliasing in the PHT more effectively.
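The two index functions can be written down directly (a sketch; the 4-bit PC field and 8-bit history width here are illustrative assumptions):

```python
HIST_BITS = 8
MASK = (1 << HIST_BITS) - 1

def gselect_index(pc, bhsr):
    """gselect: concatenate 4 low PC bits with the 8-bit global history."""
    return ((pc & 0xF) << HIST_BITS) | (bhsr & MASK)

def gshare_index(pc, bhsr):
    """gshare: XOR PC bits with the global history register."""
    return ((pc >> 2) & MASK) ^ (bhsr & MASK)

print(hex(gselect_index(0x400123, 0xB2)), hex(gshare_index(0x400123, 0xB2)))
```

Note that gshare folds a full byte of PC information into the same 8-bit index that gselect must split between PC and history bits, which is why it spreads branches over the PHT more effectively.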
Topic 10: Advanced Processors – Branch Prediction
Tournament Predictors
Different predictors are better at predicting different types of branches:
- one-level 2-bit saturating counter - loops
- two-level gshare - irregular code
Branch predictor selection table - predicts which branch predictor we should use
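The selection table's behavior can be sketched with a 2-bit chooser fed by two hypothetical component prediction streams (not the handout's exact design):

```python
def tournament(actual, pred_a, pred_b):
    """2-bit selector: states 0-1 trust predictor A, 2-3 trust predictor B.
    The selector moves toward whichever component was correct when the
    two components disagree."""
    sel, miss = 1, 0
    for t, a, b in zip(actual, pred_a, pred_b):
        choice = a if sel < 2 else b
        if choice != t:
            miss += 1
        if a != b:                      # only learn when the components disagree
            if a == t:
                sel = max(sel - 1, 0)   # move toward A
            else:
                sel = min(sel + 1, 3)   # move toward B
    return miss

actual = [True, False] * 8
pred_a = [True] * 16            # e.g., a component stuck on "taken"
pred_b = [True, False] * 8      # a component that has learned the pattern
print(tournament(actual, pred_a, pred_b))
```

After a single misprediction the chooser shifts its trust to the component that matches the actual pattern and stays there.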
Even with best possible prediction of branch outcome, still need to wait for target address to be determined.
**Branch Target Buffer**
Can put BTB in fetch stage:
- Predicting if PC points to a branch
- Predicting target of branch
- Predicting if branch is taken.
A BTB might hit only ~62% of the time; on a hit, assume the branch is predicted taken.
3.7. Branch Target Buffers (BTBs) Predictor
- **Return Address Stack (RAS) Predictor**
- A BTB only works for JR-based function-call returns if the function is always called from the same place (not realistic).
- Push the return address onto a stack for JAL/JALR.
- Pop the target address off the stack to predict the target of JR.
- Move the stack predictor into fetch and predict which PCs are JR instructions.
Use tournament predictor to choose between BTB and Stack predictor.
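The push/pop discipline can be sketched as follows (a hypothetical Python model; `max_depth` models the finite hardware stack, and the `+ 4` models the link address of a 4-byte instruction):

```python
def predict_returns(trace, max_depth=16):
    """Return-address-stack model: push on JAL/JALR calls, pop on JR returns."""
    ras = []
    predictions = []
    for op, pc in trace:
        if op == "call":                 # JAL/JALR: push the return address
            if len(ras) < max_depth:     # finite hardware stack
                ras.append(pc + 4)       # link address = PC + 4
        else:                            # JR: pop the predicted target
            predictions.append(ras.pop() if ras else None)
    return predictions

# Two nested calls followed by their two returns.
trace = [("call", 0x1000), ("call", 0x2000), ("ret", 0), ("ret", 0)]
print([hex(p) for p in predict_returns(trace)])
```

The LIFO order naturally matches call/return nesting, which is exactly what a BTB cannot capture when a function is called from many sites.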
A Meta-Framework for Generating Ontologies from Legacy Schemas
Manuel Wimmer
Vienna University of Technology
Austria
wimmer@big.tuwien.ac.at
Abstract—A huge number of schemas are expressed in outdated schema languages which are limited with respect to expressiveness, extensibility, readability, and understandability. Consequently, the actual intention of the schema developers is hard to grasp. Reverse engineering approaches try to tackle this problem by automatically transforming legacy schemas into ontologies, but they rarely enhance the semantics of the schemas by exploiting the higher expressiveness of modern schema languages.
Therefore, we propose a meta-framework for generating ontologies from legacy schemas that goes beyond existing approaches. This meta-framework can be instantiated for various schema reverse engineering scenarios and allows ontologies to be generated with improved structure and semantics compared to the original legacy schemas by exploiting the advanced expressiveness of modern schema languages. Finally, this meta-framework allows for the automatic migration of data from the legacy schemas into instances of the generated ontologies.
I. INTRODUCTION
Schemas have been the first choice for describing the data structures of applications since the introduction of database systems. The first generation of languages used for describing schemas, such as the relational model [1], was typically focused on the efficient implementation of data structures. However, it was soon realized that a conceptual view is also necessary [2] to develop larger schemas; thus various conceptual schema languages have been proposed which influenced not only data engineering but also software engineering in general. Owing to their different origins and evolution, huge differences concerning expressiveness, extensibility, readability, and understandability exist between practically used schema languages. Summarizing, modern schema languages offer various benefits for developing large schemas. For example, schemas expressed in UML class diagrams [3] provide sophisticated reuse mechanisms based on inheritance and composition, rich type systems, and explicit relationships between elements; furthermore, complex constraints are described with a dedicated constraint language [4], thus schemas become more and more like ontologies [5]. In the context of this paper, we use the term ontology for schemas which provide an explicit representation of all involved concepts, relationships, and constraints. In contrast, the term legacy schema is used for schemas which lack such an explicit representation.
In practice, explicit representations in the form of ontologies are highly needed for integration concerns such as the unification of schemas, for migrating data from one schema to another, and for extending schemas. For all these tasks, an explicit and precise representation is necessary in order to reconstruct the intentions of the schema developers. For this, merely switching the schema language and exploiting a visual representation is in general not sufficient, especially for large schemas. However, most current reverse engineering approaches support only such representation switches, which are necessary as a first step but not sufficient as a final goal. Furthermore, these approaches are limited to a specific combination of schema languages, e.g., relational schemas to UML class diagrams, and no general reverse engineering methodology exists which can be applied across several scenarios.
To tackle these problems, we propose a meta-framework which can be applied to various schema reverse engineering scenarios. The meta-framework provides a semi-automatic reverse engineering process consisting of two general phases. The first phase concerns the automatic transition from the legacy schema language to the ontology language, whereas the second phase supports the user in semantically enriching the initial ontology in order to resolve deficiencies of the legacy schema language. This meta-approach has been extracted from several successful reverse engineering activities of the ModelCVS project [6] and the MDWEnet project [7]. The benefits of applying this meta-framework are: (1) once the legacy schema language is aligned with the ontology language, each schema conforming to the schema language is representable as an ontology, (2) the semi-automatic generation process allows not only a switch in the representation but also an improvement of the structure and semantics of the schemas, and finally, (3) the systematic generation of the ontology allows data of the legacy schemas to be automatically migrated into ontology instances of the generated ontology.
The remainder of this paper is structured as follows. Section II proposes the conceptual meta-framework by discussing the steps required to come from an implicit legacy schema to an explicit ontology. Section III elaborates on architectural concerns for a reverse engineering tool supporting the conceptual meta-framework. Section IV investigates related work and finally, Section V concludes this paper with an outlook on future work.
II. A META-FRAMEWORK FOR REVERSE ENGINEERING OF LEGACY SCHEMAS
In this section, the meta-framework for generating ontologies from legacy schemas is presented at a conceptual level.
A. Two-phase Generation Process at a Glance
For producing semantically rich ontologies from legacy schemas, we propose a semi-automatic process for overcoming the limitations of existing automatic generation approaches. The process encompasses two phases, as illustrated in Figure 1. During the first phase, a preliminary version of an ontology is automatically generated from the available legacy schema, while in the second phase this preliminary version is semantically enriched with constraints not captured by the legacy schema as well as structurally improved by using dedicated features of the ontology language.

Figure 1. Two-phase generation process
As a running example throughout this paper, the generation of UML class diagrams from Document Type Definitions (DTDs) is used to exemplify the steps of the proposed meta-framework. Before going into the details of the meta-framework, however, the involved artifacts of the reverse engineering process are described and aligned based on the OMG's meta-layer architecture [8].
B. Meta-Layers Involved
In Figure 2, the meta-layers of the reverse engineering process are shown. On the lowest level, we have the so-called **Instance Layer** (cf. I in Figure 2) where the instances, i.e., the data, conforming to schemas reside. The middle layer is the so-called **Schema Layer** (cf. S in Figure 2) where the schema definitions are located. And finally, on top we have the **Language Layer** (cf. L in Figure 2) where the schema language definitions reside. Please note that each artifact on layer \( N \) has to conform to a specification on layer \( N+1 \), e.g., the schema has to conform to the schema language, and that any number of artifacts on layer \( N \) may be instantiated from a specification on layer \( N+1 \), e.g., data conforming to a schema may be instantiated as often as needed.
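The layer discipline above can be sketched in a few lines. This is a toy model, not part of the paper's tooling; the `Artifact` class and the layer names are assumptions made for illustration.

```python
# Toy sketch of the three-layer architecture: each artifact on layer N
# must conform to a specification on layer N+1, while a specification
# may be instantiated any number of times on the layer below.

LAYERS = ["I", "S", "L"]  # Instance, Schema, Language

class Artifact:
    def __init__(self, name, layer, conforms_to=None):
        self.name, self.layer, self.conforms_to = name, layer, conforms_to

def check_conformance(artifacts):
    """Verify that every non-top artifact conforms to something one layer up."""
    for a in artifacts:
        if a.layer == "L":
            continue  # language definitions form the top layer here
        above = LAYERS[LAYERS.index(a.layer) + 1]
        assert a.conforms_to is not None and a.conforms_to.layer == above, \
            f"{a.name} must conform to a layer-{above} artifact"

dtd_lang = Artifact("DTD language", "L")
schema = Artifact("application DTD", "S", conforms_to=dtd_lang)
doc1 = Artifact("document 1", "I", conforms_to=schema)
doc2 = Artifact("document 2", "I", conforms_to=schema)  # many instances per schema
check_conformance([dtd_lang, schema, doc1, doc2])
print("conformance ok")
```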
In this meta-layer architecture, the artifacts of the reverse engineering process are arranged. On the left-hand side of Figure 2, the legacy schema on layer S, its corresponding schema language on layer L, and its instantiated data on layer I are shown. On the right-hand side of Figure 2, we have only one given artifact, namely the ontology language on layer L. The remaining right-hand side artifacts need to be generated by the ontology generation and data migration processes located in the middle of Figure 2. Before these artifacts can be generated, the correspondences between the schema language and the ontology language have to be defined. By bridging the language definitions on layer L, it is possible to derive a transformation which is capable of generating initial ontologies from any legacy schema conforming to the bridged schema language. Furthermore, the correspondence definitions on layer L make it possible to migrate arbitrary data conforming to already transformed schemas as instances conforming to the generated ontologies. This means that the correspondences are the key for the required transformations on the layers below.
Considering our running example, when we bridge the DTD language with the UML language on layer L by defining correspondences, e.g., between **Entity Type** from DTD and **Class** from UML, a DTD, e.g., the (X)HTML recommendation or an application-specific schema, can be automatically transformed into a UML class diagram. Furthermore, an XML document conforming to the DTD, e.g., an (X)HTML document or a document of the application-specific schema, can be transformed into instances of the UML class diagram, called object diagrams [3].
C. From Implicit to Explicit Representations
The correspondences on layer L are the first ingredient of the reverse engineering process. After defining how each language element of the legacy schema language corresponds to an ontology language element, the schema definitions are transformable into initial ontologies. Correspondences are, for example, defined in a correspondence table and then implemented as transformation rules.
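A correspondence table implemented as transformation rules can be sketched as follows. The element kinds and the record layout are assumptions made for this sketch, not the paper's actual rule set.

```python
# Hypothetical correspondence table between DTD and UML language elements,
# implemented as a simple one-to-one transformation.

CORRESPONDENCES = {
    "element_type": "Class",        # DTD element type -> UML class
    "attribute":    "Property",     # DTD attribute   -> UML property
    "id_idref":     "Association",  # ID/IDREF pair   -> UML association
}

def transform(legacy_elements):
    """Map each legacy schema element to its ontology-language counterpart."""
    return [
        {"name": e["name"], "kind": CORRESPONDENCES[e["kind"]]}
        for e in legacy_elements
    ]

dtd = [{"name": "book", "kind": "element_type"},
       {"name": "title", "kind": "attribute"}]
print(transform(dtd))
```

As the paper notes, such an unambiguous mapping only yields a switch in representation; heuristics and refactorings are still needed to improve the result.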
By defining only unambiguous correspondences, one gains only a switch in the representation, which often allows for a graphical representation of the schemas, but no improvement of the schema in terms of structural properties or semantic enrichment is achieved. Therefore, in addition to unambiguous transformation rules, we need further mechanisms to improve the generated ontology. At this point, it has to be decided between automatic and manual improvements. Considering our running example, UML offers inheritance between classes, which is not available in DTDs. For enhancing the schema definition, we would like to introduce inheritance to gain a more compact representation. Now, we have to decide: is it better to automatically explore possible inheritance structures, or should the user decide where to introduce inheritance? The answer to this question may depend on the specific schemas. When a schema is used where classes which have overlapping attributes should also have a taxonomic relationship, inheritance may be introduced automatically. However, cases exist where this approach would lead to unintended schema specifications. For example, assume we have a class Professor and a class Bottle, which both have an attribute name and an attribute age. Although the two classes do not have a taxonomic relationship, a common superclass would be generated. For deciding between automatic and manual enhancements, the ratio between true positives and false positives of automatic improvement rules has to be considered.
For supporting both possibilities, the proposed meta-framework provides two steps for enhancing the quality of the schemas. First, heuristics are introduced which tackle deficiencies of the legacy schema language by automatically improving the generated initial ontology. The heuristics are automatically applied and are therefore located in the first phase of the meta-framework as is also indicated in Figure 3. Applied heuristics should be documented in the initial ontology, e.g., by using annotation mechanisms, in order to give the user the chance to validate the correctness of the heuristic applications. Considering again our running example, a possible heuristic is to transform each attribute of type Enumeration with values on and off from the DTD into a Boolean attribute of the corresponding UML class diagram with an annotation ≪Boolean attribute identified≫.
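The on/off heuristic of the running example can be sketched as a single rule. The attribute-record layout is an assumption made for this sketch; only the rule itself (Enumeration with values on and off becomes an annotated Boolean) comes from the text.

```python
# Sketch of the on/off heuristic: an attribute of type Enumeration with
# exactly the values {on, off} becomes a Boolean attribute, annotated so
# the user can later accept or reject the change during validation.

def apply_boolean_heuristic(attr):
    if attr["type"] == "Enumeration" and set(attr["values"]) == {"on", "off"}:
        return {"name": attr["name"], "type": "Boolean",
                "annotations": ["<<Boolean attribute identified>>"]}
    return attr  # all other attributes pass through unchanged

attr = {"name": "visible", "type": "Enumeration", "values": ["on", "off"]}
print(apply_boolean_heuristic(attr))
```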
The first source for refactorings is features of the ontology language which are missing in the legacy schema language. Considering the running example, one may define refactorings for introducing inheritance relationships between classes. Therefore, refactoring operations can be introduced which originate from object-oriented refactoring patterns [9] such as introduce abstract superclass, shift attributes to superclass, and so on. The second source for refactorings is missing constraints which have not been expressed in the legacy schema. Such cases are typically hard to identify when examining only the schemas. Therefore, often additional information resources have to be inspected, e.g., additional declarative specifications or source code of programs which use the schemas. However, in general it is tedious to derive the necessary constraints using such a white-box approach. A black-box approach would be to test the applications in an explorative manner or to study documentation such as user handbooks.
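The introduce abstract superclass refactoring named above can be sketched as follows. This is a toy stand-in under invented class records; as the Professor/Bottle example warns, the user decides which classes really belong under one parent.

```python
# Sketch of the manual "introduce abstract superclass" refactoring:
# attributes shared by all given classes are shifted to a new abstract
# parent, and the classes are rewired to inherit from it.

def introduce_superclass(name, classes):
    common = set.intersection(*(set(c["attrs"]) for c in classes))
    parent = {"name": name, "abstract": True, "attrs": sorted(common)}
    for c in classes:
        c["attrs"] = [a for a in c["attrs"] if a not in common]
        c["super"] = name
    return parent

student = {"name": "Student", "attrs": ["name", "age", "matNr"]}
prof = {"name": "Professor", "attrs": ["name", "age", "chair"]}
person = introduce_superclass("Person", [student, prof])
print(person)
print(student, prof)
```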
In order to verify that the ontology is properly semantically enriched, a list of deficiencies of the legacy schema language as well as lists of the defined heuristics and refactoring patterns should be established. Then, each entry of the deficiencies list must be mapped to at least one heuristic or refactoring pattern. This is required to verify that no deficiencies of the legacy schema remain in the generated ontology.
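This coverage check is mechanical. A minimal sketch, with illustrative list entries that are assumptions rather than the paper's actual catalogues:

```python
# Every listed deficiency of the legacy schema language must be mapped to
# at least one heuristic or refactoring pattern; otherwise it would remain
# unaddressed in the generated ontology.

deficiencies = ["no inheritance", "no typed relationships", "no booleans"]
covered_by = {
    "no inheritance":         ["refactoring: introduce abstract superclass"],
    "no typed relationships": ["refactoring: ID/IDREF to association"],
    "no booleans":            ["heuristic: on/off enumeration to Boolean"],
}

uncovered = [d for d in deficiencies if not covered_by.get(d)]
assert not uncovered, f"deficiencies without heuristic/refactoring: {uncovered}"
print("all deficiencies covered")
```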
III. TOOL SUPPORT
Because reverse engineering is a tedious and error-prone task, tool support is indispensable. Therefore, in this section we discuss how tools can be constructed to support the proposed meta-framework. First, we present a tool architecture for the reverse engineering of schemas, and second, to also support cases where it is necessary to migrate data from legacy schemas to ontology instances, we elaborate on a tool architecture for automatically migrating instances based on the schema transformation.
A. Ontology Generation Architecture
In this subsection, we elaborate on the core components for the reverse engineering of schemas. Figure 4 shows the details of a possible conceptual architecture. The architecture is divided into two areas according to the two-phase generation process. In a first step a specific parser, e.g., a DTD parser in the running example, builds an object graph of the legacy schema. Then each node in the object graph is visited and transformed according to the transformation rules and heuristics. As soon as the complete object graph of the ontology has been generated, the default serializer of the ontology language is activated in order to serialize the ontology, e.g., as an XML file. This file may be loaded into existing graphical ontology editors. In the second phase, the annotations which indicate the application of a heuristic are
validated by the user and the ontology is refactored accordingly. The editor should provide additional functionality such as the direct navigation to the heuristic annotations as well as their convenient acceptance and rejection. The refactoring patterns should be available as refactoring operations within the editor to allow their convenient application and their propagation to the instance level which is elaborated in the next subsection.
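The phase-1 pipeline (parse, visit and transform, serialize) can be condensed into a few lines. All function bodies here are toy stand-ins with invented inputs; a real DTD parser and the ontology language's own serializer would take their place.

```python
# Condensed sketch of the phase-1 pipeline: a parser builds an object graph
# of the legacy schema, each node is visited and transformed by the rules,
# and the result is serialized (here as JSON for simplicity).

import json

def parse_dtd(text):
    """Toy stand-in for a real DTD parser: one element type per token."""
    return [{"kind": "element_type", "name": n}
            for n in text.replace("<!ELEMENT", "").split()]

def visit_and_transform(graph):
    """Visit each node and apply the element-type-to-class rule."""
    return [{"kind": "Class", "name": node["name"]} for node in graph]

def serialize(ontology):
    return json.dumps({"classes": ontology}, indent=2)

print(serialize(visit_and_transform(parse_dtd("<!ELEMENT book chapter"))))
```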
B. Data Migration Architecture
After presenting the tool architecture for the ontology generation, we now have the background to discuss the architecture for data migration, which is also divided into two areas (cf. Phase 1 and Phase 2 in Figure 5). The prerequisites for migrating data from the legacy schema to ontology instances are the automatic transformation rules and heuristics (cf. Step 1 in Figure 5) as well as the validation of applied heuristics and the employed refactoring operations (cf. Step 2 in Figure 5). In particular, the automatic transformation of the schema implies how the data should be transformed into initial ontology instances (cf. Step 3). This dependency is not problematic, because the rules and heuristics are predefined, which allows a generic component to be provided for transforming data into ontology instances conforming to the automatically generated ontology. The more problematic dependency is the second one, namely between the validation and refactoring of the initially generated ontology. This dependency requires proper adaptation (cf. Step 4) of the initial instances generated by Step 3 in order to conform to the reworked ontology, which leads to the field of co-evolution [10].
In order to provide co-evolution rules for instances, it must be ensured that the generated initial ontology is systematically changed by predefined refactoring patterns and not by arbitrary modifications. The reason for this restriction is that for predefined refactoring patterns, rules can be derived for co-evolving the instances. Regarding their impact on the instances, two main categories of refactoring patterns can be distinguished.
(1) Refactorings without impact on instances: Several refactorings only restructure the ontologies without influencing their corresponding instances. For example, introducing an abstract class for collecting common properties has no impact on the automatically generated ontology instances. Therefore, no co-evolution rules have to be provided.
(2) Refactorings with impact on instances: Of course, some refactorings have an impact on the instance level. For example, if new ontology elements are introduced which can be instantiated (in contrast to the previous example of the abstract class), or if existing ontology elements are modified, co-evolution rules are necessary. This category can be further divided.
(2a) Automatically derivable: Some co-evolution rules can be automatically derived from the refactoring patterns and specified as transformation rules. Considering our running example, when an ID/IDREF attribute pair of a DTD is replaced by a typed relationship between two classes within the corresponding UML class diagram, a general rule can handle the co-evolution of the instances by replacing the attribute values by a link between the instances.
(2b) User defined: Some co-evolution rules have to be at least partly defined by the user. For example, if an attribute named mark of type Integer is refactored as an attribute of type Enumeration with values \{A,B,C,...\} then the user has to provide value mappings, e.g., in the form if 1 then A, to ensure an appropriate co-evolution of the ontology instances.
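Both co-evolution flavours can be sketched together. Rule (2a) is automatically derivable (IDREF values become links between instances); rule (2b) needs user input (the value mapping for the refactored mark attribute). The instance-record layouts are invented for this sketch.

```python
def coevolve_idref(instances, idref_attr):
    """(2a) Replace raw IDREF attribute values by links to the target instance."""
    by_id = {i["id"]: i for i in instances}
    for i in instances:
        ref = i.pop(idref_attr, None)
        if ref is not None:
            i["link"] = by_id[ref]["name"]  # link instead of a raw IDREF value
    return instances

def coevolve_marks(instances, mapping):
    """(2b) Apply the user-supplied value mapping, e.g. 'if 1 then A'."""
    for i in instances:
        i["mark"] = mapping[i["mark"]]
    return instances

chapters = [{"id": "c1", "name": "Intro"},
            {"id": "c2", "name": "Basics", "next": "c1"}]
print(coevolve_idref(chapters, "next"))

marks = [{"student": "s1", "mark": 1}, {"student": "s2", "mark": 3}]
print(coevolve_marks(marks, {1: "A", 2: "B", 3: "C"}))
```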
In summary, for migrating data, first a component supporting Step 3 is required in order to generate initial ontology instances; this is again a transformation component based on the correspondences between the legacy schema language and the ontology language. Second, a component supporting Step 4, namely the co-evolution of the generated initial instances, must be available, which takes a summary of the applied refactorings (cf. Evolution Report in Figure 5) as input and activates the necessary co-evolution rules on the ontology instances.
IV. RELATED WORK
Due to space limitations, only the work most closely related to the presented meta-framework is discussed, although we are aware that a huge number of reverse engineering approaches exist which are bound to specific schema language combinations, e.g., cf. [11], [12], [13] for approaches and cf. [14], [15], [16] for surveys.
The most closely related work is the ModelGen operator proposed by Bernstein [17] in the field of model management. The basic idea of model management is to provide a high-level language which is based on a set of generic operators such as ModelGen, Match, Merge, or Diff. These operators can be composed in so-called scripts in order to solve more complex integration scenarios. In particular, the ModelGen operator is used when a source schema $S_1$ defined in terms of the data model $M_1$ is given, as well as a target data model $M_2$. The ModelGen operator is capable of producing a target schema $S_2$ defined in terms of data model $M_2$ which semantically corresponds to the source schema $S_1$. Furthermore, instances of source schema $S_1$, i.e., the data, are transformed to instances of target schema $S_2$. Atzeni et al. [18], [19] provided an implementation of the ModelGen operator for commonly used data models such as ER, UML class diagrams, and the relational data model. For the implementation of ModelGen, the authors decided to use a so-called supermodel, a model that integrates the modeling constructs of commonly used data models and acts as a "pivot" model. In addition, adapters are used to transform ER, UML class diagrams, and relational models into the supermodel formalism. If one wants to transform an ER model into a relational model, the ER model is first transformed with simple one-to-one translations into a supermodel model, then a set of generic operators is applied to the model until it uses only concepts of the relational data model, e.g., many-to-many relationships are transformed into additional relations, and finally, the model can be translated with a simple one-to-one transformation into a relational model.
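The supermodel pivot idea can be illustrated with a toy rewrite step. This is our own sketch, not Atzeni et al.'s implementation; the construct names are invented, and only the one rule mentioned in the text (many-to-many becomes a relation) is shown.

```python
# Toy illustration of the supermodel idea: after a one-to-one translation
# into the pivot representation, generic rules rewrite constructs until
# only target-model concepts remain. Here, a many-to-many relationship is
# rewritten into an additional relation for the relational data model.

def reduce_for_relational(supermodel):
    out = []
    for c in supermodel:
        if c["construct"] == "many_to_many":
            out.append({"construct": "relation",
                        "name": c["name"],
                        "attrs": [c["left"] + "_id", c["right"] + "_id"]})
        else:
            out.append(c)
    return out

erm = [{"construct": "entity", "name": "Student"},
       {"construct": "entity", "name": "Course"},
       {"construct": "many_to_many", "name": "attends",
        "left": "Student", "right": "Course"}]
print(reduce_for_relational(erm))
```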
Although our meta-framework also exploits the fact that correspondences on the language layer can be used for transforming schemas on the next lower layer as well as for transforming instances of those schemas, we are aiming at the semantic enrichment of the schemas. In particular, in the second phase of our meta-framework, the user has to incorporate constraints in the schemas which have not been captured in the original schema definitions. Therefore, in addition to unambiguous transformation rules, we are using heuristics and refactorings to improve the design and precision of the schemas.
V. CONCLUSION AND FUTURE WORK
In this paper we have presented a meta-framework for the reverse engineering of schemas into ontologies, which is applicable in several scenarios. The presented meta-framework has proven useful in various integration scenarios [20], [6], [21]. Promising results have been achieved for the reverse engineering of languages supported by CASE tools and standards such as (X)HTML, but also for schemas of legacy web applications. One of the most comprehensive case studies was the creation of a metamodel for WebML from the accompanying tool WebRatio, where the language had been implemented with several DTDs [22]. By following the presented meta-framework, major improvements of the quality of the language description have been achieved. For example, the overall number of elements used to describe the language WebML has been reduced from 707 to 487. These quality improvements have been useful in subsequent projects in which WebML has been integrated with other web modeling languages [23] and extended with aspect-oriented modeling concepts [24].
Several issues remain open for future work. We plan to establish a generic tool platform which provides adapters for several schema languages, a high-level transformation language for implementing the schema transformations as well as the refactoring patterns, and finally, a generic co-evolution tool should be integrated which allows defining the co-evolution rules with an appropriate domain-specific language. For developing this platform, we plan to employ model-based technologies which show a high unification potential [25] as well as an appropriate abstraction level.
REFERENCES
<table>
<thead>
<tr>
<th><strong>Title</strong></th>
<th>Integrating Developer-related information across Open Source Repositories</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Author(s)</strong></td>
<td>Iqbal, Aftab; Hausenblas, Michael</td>
</tr>
<tr>
<td><strong>Publication Date</strong></td>
<td>2012</td>
</tr>
<tr>
<td><strong>Item record</strong></td>
<td><a href="http://www.deri.ie/content/integrating-developer-related-information-across-open-source-repositories">http://www.deri.ie/content/integrating-developer-related-information-across-open-source-repositories</a>; <a href="http://hdl.handle.net/10379/4510">http://hdl.handle.net/10379/4510</a></td>
</tr>
</tbody>
</table>
Some rights reserved. For more information, please see the item record link above.
Integrating Developer-related information across Open Source Repositories
Aftab Iqbal and Michael Hausenblas
Digital Enterprise Research Institute (DERI)
National University of Ireland, Galway (NUIG)
IDA Business Park, Galway, Ireland
Email: \{aftab.iqbal , michael.hausenblas\}@deri.org
Abstract
Software developers use various software repositories in order to interact with each other or to solve software-related problems. They are required to adopt an identity for each of the software repositories they want to use. Quite often, developers are also found on different code forges developing open source projects. It is worth mentioning that the information relevant to developers is distributed on the Web among different data sources, each requiring an ID and an authentication mechanism. In this paper, we propose to interlink the identities of a developer across different data sources on the Web. Further, we show the benefit of integrating developer-related information from different data sources using some real-world scenarios.
1. Introduction and Motivation
Software developers use tools with underlying repositories to support the collaboration of distributed software development. In order to interact with the many software repositories that are part of an open-source project, software developers usually have to adopt an identity for each repository. Often, they use multiple identities for the same repository [10]. Research has shown that these software repositories contain a rich amount of information about software projects. By mining the information contained in these software repositories, practitioners can depend less on their experience and more on the historical data [5]. However, software repositories are commonly used only as record-keeping repositories and rarely for design decision processes [4]. As pointed out by Conklin in [3], it is still surprisingly difficult to obtain and abstract information from these software repositories in order to answer even simple questions like: How many developers are working on the project? How big is the community surrounding a project? How many contributors are contributing to the project? What is the development ratio per developer? Is the project flourishing? and many more. These are a few of the many questions whose answers are hidden deep in the software repositories and which developers usually have in mind before joining or starting to contribute to a project. Having answers to such questions can give a clear picture to newcomers or other interested users/developers of the project.
Apart from mining information from software repositories in order to answer questions such as those mentioned above, there also exist analytic services which provide detailed analyses of different open source projects. One good example of such a service is Ohloh\(^1\). Ohloh is a free, public software directory which monitors the up-to-date development activity of open source projects. Ohloh allows software developers to join (i.e., adopt an identity) and claim their commits on existing projects, and also to add projects not yet on Ohloh, in order to assemble a complete profile of their open source project contributions. Ohloh provides several types of information about an open source project. For example, it provides detailed analyses of the per-developer commit ratio of the project, the per-programming-language commit ratio of the project, the longevity of the project, and software metrics such as total lines of source code, commit statistics, comment ratio, etc. Other global statistics like programming-language usage are also provided. Ohloh provides the types of information which a user, developer or project manager is keen to know. At the time of writing this paper, Ohloh indexed 552,103 open source projects connecting more than 1,534,084 open source developers/contributors, making it a valuable data source for collecting up-to-date metrics about open source projects.
With the success and adoption of open source software development, we have seen tremendous growth in the availability of different code forges. Different forges provide different kinds of features in order to retain existing projects and attract new ones. Because of this, an open source project is sometimes developed at multiple code forges. For example, the Apache Software Foundation (ASF)\(^2\)
\(^1\)http://www.ohloh.net
\(^2\)http://apache.org
manages projects on its own infrastructure but also provides GitHub mirrors for developers who prefer git over the svn versioning system. Therefore, certain developers use the GitHub infrastructure to contribute to the development of Apache projects. At the time of writing this paper, the ASF hosts mirrors of approximately 320 different Apache projects on GitHub. Further, an open source project sometimes migrates between different code forges during its lifetime. These code forges also require projects and developers to adopt an identity in order to host a project and to keep track of the developers' development activity. Eventually, developers end up developing multiple projects on different code forges. For example, Stefan Bodewig, who is a member of the ASF and is contributing to many Apache projects, has also developed a few projects on GitHub and SourceForge. Hence, the history of open source project development and a developer's contributions to different open source projects are distributed across multiple code forges.
It is worth mentioning that there is an implicit connection between a developer's activity in the different software repositories (i.e., mailing lists, bug tracking systems, source control, etc.) hosting a particular project, project development statistics (available via Ohloh), activity on social media platforms, and involvement in multiple projects on different code forges. Developers are required to adopt an identity for each of the repositories they want to use. For example, they have to adopt an email address in order to send an email to the project mailing list, an ID to interact with others on social media platforms, an ID on a particular code forge, an ID to push commits to the source code repository, an ID to comment on a bug in a bug tracking system, etc. Each repository implements its own proprietary ID management to authenticate developers and its own proprietary user profile system to manage information about developers. Hence, the information relevant to developers is distributed on the Web among different data sources (i.e., social platforms, code forges, software repositories, etc.). The different types of identities a developer adopts include OpenID, WebID, email addresses, etc. Hence, we need to not only make the interconnections between a developer's identities among the different software repositories within a project explicit, but also allow connecting them to other related data sources available on the Web. Having such an explicit representation of the interconnection between the data sources available, we will be able to support certain scenarios often found in the software development process:
1. Synthesis Scenarios
- A developer could effectively query the co-developers activities in different software repositories of the project.
- A developer could learn about the expertise of co-developers in different programming languages.
- A developer could easily track the contribution of co-developers in different projects.
2. Analysis Scenarios
- Different programming languages used in the project and the ratio of commits relevant to each programming language.
- Development ratio of a project across multiple code forges.
- Developer’s contribution statistics on each project.
The contribution of this paper is twofold: first, we identify the different types of identities which developers use in different data sources and provide a simple yet effective approach to interlink the identities of the same developer found in different data sources. This enables developers and other interested users not only to query facts which are hidden deep inside software repositories but also to query development statistics as well as the development activity of a developer across multiple code forges. Second, we show different use case scenarios which can easily be addressed by integrating data from multiple data sources.
The paper is structured as follows: in Section 2 we present use cases that describe the benefit of data integration from multiple sources. Then, we introduce the overall architecture of the data extraction, transformation and interlinking process in Section 3. We report on exemplary queries and their results in Section 4. In Section 5 we review related work and compare it to our approach. Finally, we conclude in Section 6 and outline future steps.
2. Use Cases
In the following we describe real-world scenarios from the software development domain that can benefit from our methodology. By establishing the identity of developers throughout two or more data sources on the Web (for example a code forge, social media platform and Ohloh) we can integrate the necessary information to meet the requirements of the following, non-exhaustive list of application scenarios:
**Identifying a potential contributor** Ryan is the initiator of an open source project dealing with dynamic Web content. He would like to extend the codebase and is looking for one or more developers he can approach based on a profile he defines as: “Must be familiar with HTTP and REST and should have at least five years experience in back-end development. Working experience with JavaScript is a plus”. How can Ryan, based on information from GitHub, Geekli.st, Twitter and the project mailing list, find appropriate candidates he can approach?
**Supporting team changes** Mary, a developer for a software company, has to relocate in the middle of a project. Julie, her supervisor, needs to hire someone who can replace Mary. Julie wants to analyse Mary’s expertise and recent activities: assigned bugs, committed code, mailing list and blog posts, etc. What Julie ultimately wants is to enable the new hire to hit the ground running, making the new team member as productive as possible from day one by benefitting from Mary’s experience.
**Selecting a project for corporate sponsorship** The board of VOZ, a big multinational company, has identified an opportunity to strategically (that is, both in-kind and financially) sponsor an open source project dealing with a MapReduce implementation. Ken, a new VP for this area, is responsible for suggesting an open source project to the board. Ken has access to the code repositories and issue trackers, the mailing lists, a few blog posts and white papers of the projects. How can Ken assess which of the many open source projects is trustworthy? Which project has a mature codebase and a healthy community? Which project fits best with both VOZ's corporate and technology culture? How can Ken rank the candidate projects, based on objective criteria, with as little manual work involved as possible?
3. Design and Architecture
In this section, we describe the data sources we used to extract information and the usage of a common model and standard format to represent the information extracted from different data sources to support better integration. One may think of questions like: what is the best way to express the knowledge so that it can be integrated easily across different data sources? Can the knowledge be further used to link to other data sources which contain extra information about a certain entity? Can it be done in an automated fashion?
We propose to use Semantic Web technologies to represent data from multiple data sources. As such, we propose to use RDF [7] (Resource Description Framework) as the core, target data model. Once modeled in RDF, the data can be indexed and queried using the SPARQL query standard and associated tools. Finally, the integrated data can be published on the Web using Linked Data principles, allowing third parties to discover and subsequently crawl the knowledge, and also allowing interlinking with background information available remotely on the Web. In Fig. 1, the overall architecture of our approach is depicted. The architecture covers the layers described in the following:
1. The project and developer’s information from different data sources are extracted and transformed into RDF, yielding RDF data sets.
2. Interlink the RDF data sets with each other and across different data sources, where necessary.
3. Load the interlinked RDF data sets into a SPARQL endpoint. This enables one to query the interlinked data sources in order to address many use cases (cf. Section 2).

3.1 **Transforming Data Sources into RDF**
We considered the Apache ANT project repositories, GitHub and Ohloh as our primary data sources. We generated RDF data sets from the mailing list archives, bug tracking systems, source control repositories and source code of the Apache ANT project. Further, we extracted project- and developer-related information from GitHub and Ohloh, producing more RDF data sets.
In order to generate and interlink information from Apache ANT project repositories, we used a Linked Data-driven approach for extracting and interlinking information, as we have argued elsewhere [6]. The overall concept of Linked Data Driven Software Development (LD2SD) is to extract information from software repositories of a particular project in RDF format by assigning URIs to each project entity (i.e., bug, email, developer id etc.) and interlink the URIs where necessary. Our LD2SD approach currently generates RDF data sets from bug tracking systems, mailing list archives, source control commit logs and the Java source code of a particular project. An excerpt of an exemplary RDF representation of a source code file is shown in Listing 1.
Listing 1. An Exemplary Java Source in RDF.
```
@prefix : <http://example.org/prj/org/> .
@prefix b: <http://vocab.deri.ie/linkedfloss#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
:connect a b:JavaClass;
    b:imports "java.io.IOException";
    b:author <http://example.org/ant/author/bodewig>;
    b:package <http://example.org/prj/org/>;
    b:hasMethod <http://example.org/prj/org/connect#write>;
    b:hasAttribute "_port" .
<http://example.org/prj/org/connect#write> a b:JavaMethod;
    b:parameter-type "byte[]";
    b:parameter-name "buff" .
```
An excerpt of an exemplary RDF representation of a developer extracted from GitHub is shown in Listing 2. The information extracted for a particular developer describes the developers he/she is following and is being followed by (see line #4), the projects he/she is working on (see lines #5-6) and basic profile information (see lines #7-9).
```
@prefix : <http://vocab.deri.ie/linkedfloss#> .
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/gh/dev/bodewig> a foaf:Person;
    :followers <http://example.org/gh/dev/larrys>;
    :repos <http://example.org/gh/prj/Ant4NantAndMSBuild>;
    :repos <http://example.org/gh/prj/ant>;
    :location "Germany";
    foaf:accountName "bodewig";
    foaf:name "Stefan Bodewig" .
```
Listing 2. An Exemplary Developer Information extracted from GitHub in RDF.
An excerpt of an exemplary RDF representation of a project extracted from GitHub is shown in Listing 3. The project information extracted from GitHub (cf. Listing 3) describes some basic information about the project (see lines #6-10), core developers of the project (see line #11), the developers who forked the project (i.e., contributors of the project) and source control commits relevant to the project (see lines #12-17).
```
@prefix : <http://vocab.deri.ie/linkedfloss#> .
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/gh/prj/ant> a doap:Project;
    :forkedby <http://example.org/gh/dev/terabyte>;
    :repos <http://example.org/gh/prj/ant>;
    doap:programming-language "Java";
    doap:creator <http://example.org/gh/dev/bodewig>;
    doap:creator <http://example.org/gh/dev/larrys>;
    doap:creator <http://example.org/gh/dev/terabyte>;
    :commit <http://example.org/gh/prj/commit/0035b01af> .
```
Listing 3. An Exemplary Project Information extracted from GitHub in RDF.
Ohloh provides a RESTful API to the Ohloh open source directory and returns XML-formatted data in response to HTTP GET requests. We used the Ohloh API to get statistical information about projects and developers in XML format and converted it into RDF data sets. For details on the metadata provided by the Ohloh API we recommend interested readers to have a look at their API tutorial\(^{19}\). An excerpt of an exemplary RDF representation of a developer information extracted from Ohloh is shown in Listing 4. The information extracted for a particular developer describes the basic profile information (see lines \#4-9) along with the projects he is working on (see lines \#11-13).
---
\(^{16}\)The URIs used in the Listings are just for illustration purposes and are not dereferenceable.
\(^{17}\)https://api.github.com
Listing 4. An Exemplary Developer Information extracted from Ohloh in RDF.
An excerpt of an exemplary RDF representation of project information extracted from Ohloh is shown in Listing 5. The project information extracted from Ohloh describes the number of users who use the project, the total number of lines of code, the developers who contributed to the project and the name of the project (see lines \#6-9) etc. Moreover, the total number of commits made by a particular developer, basic information about him/her and the total number of commits made using different programming or scripting languages are also extracted (see lines \#12-25), which helps in identifying the expertise of a developer in a certain programming or scripting language.
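The XML-to-RDF conversion step described above can be sketched in a few lines of Python. The XML element names and the output vocabulary below are assumptions for illustration, not the actual Ohloh API schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical excerpt of an Ohloh-style XML response. The element names
# are assumptions for illustration only, not the real Ohloh schema.
OHLOH_XML = """\
<response>
  <account>
    <name>bodewig</name>
    <country>Germany</country>
  </account>
</response>
"""

def ohloh_account_to_turtle(xml_text):
    """Convert one (hypothetical) Ohloh account record into Turtle."""
    account = ET.fromstring(xml_text).find("account")
    name = account.findtext("name")
    country = account.findtext("country")
    # Mirror the vocabulary used in Listing 2 for easier interlinking.
    return "\n".join([
        "@prefix : <http://vocab.deri.ie/linkedfloss#> .",
        "@prefix foaf: <http://xmlns.com/foaf/0.1/> .",
        "",
        "<http://example.org/ohloh/dev/%s> a foaf:Person;" % name,
        '    :location "%s";' % country,
        '    foaf:accountName "%s" .' % name,
    ])

print(ohloh_account_to_turtle(OHLOH_XML))
```

Reusing the same property names as the GitHub extraction keeps the later interlinking step simple, since both data sets then describe developers with a shared vocabulary.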
3.2 **Interlinking RDF Data Sources**
From the Listings, we are able to conclude that the RDF fragments are talking about two different entities, i.e., a project named “Apache Ant” and a developer named “Stefan Bodewig”. We can interlink these RDF fragments using an \texttt{owl:sameAs} property indicating that these URIs actually refer to the same entity (see Listing 6). By interlinking the Apache Ant repositories, GitHub and Ohloh RDF data sources, we will be able to query the projects which “Stefan Bodewig” is developing at GitHub, his development activity (e.g., last month's commits, bug fixes, social interaction etc.) in the Apache Ant repositories and his project development ratio in different programming languages (available via Ohloh).
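Each such interlink amounts to a single triple per identity pair. A sketch of what the `owl:sameAs` statements of Listing 6 may look like (the URIs are illustrative, following the patterns of the earlier listings):

```
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/ant/author/bodewig> owl:sameAs <http://example.org/gh/dev/bodewig> .
<http://example.org/gh/dev/bodewig> owl:sameAs <http://example.org/ohloh/dev/bodewig> .
```

Because `owl:sameAs` is transitive, a SPARQL engine with the corresponding inference support can combine facts attached to any of the three URIs into one view of the developer.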
\(^{19}\)http://meta.ohloh.net/getting_started/
Listing 5. An Exemplary Project Information extracted from Ohloh in RDF.
In order to interlink the software repositories of a particular project, we wrote our own scripts. For example, the \texttt{log extractor} generates the RDF data sets from source control commit logs and further links them to the RDF data sets of bugs and source code where necessary. An excerpt of an exemplary RDF representation of a source control log is shown in Listing 7. It exploits the convention of developers mentioning bug IDs in the summary of a commit while committing changes to the source control repository. The \texttt{log extractor} uses a simple text search to find phrases commonly used by developers, such as \texttt{bug#xxxx}, in the summary of a source control log. When a bug ID is detected, the \texttt{log extractor} adds a triple using the property \texttt{b:fixes} to interlink the source control log with that particular bug (see line \#8). The \texttt{log extractor} also links the URL of a source code file in the source control repository to the meta-information of that particular source code file using the \texttt{owl:sameAs} property (see line \#11).
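The bug-ID detection of the `log extractor` can be sketched in a few lines of Python; the regular expression and the bug URI pattern are assumptions for illustration:

```python
import re

# Matches the "bug#NNNN" convention developers commonly use in commit
# summaries; case-insensitive so "Bug#123" is caught as well.
BUG_REF = re.compile(r"bug#(\d+)", re.IGNORECASE)

def fixes_triples(commit_uri, summary):
    """Return one b:fixes triple per bug ID mentioned in the summary."""
    return [
        "<%s> b:fixes <http://example.org/apache/bug/%s> ." % (commit_uri, bug_id)
        for bug_id in BUG_REF.findall(summary)
    ]

triples = fixes_triples("http://example.org/apache/prj/SVN/39823",
                        "this patch fixes bug#123.")
print(triples[0])
# → <http://example.org/apache/prj/SVN/39823> b:fixes <http://example.org/apache/bug/123> .
```

A summary with no bug reference simply yields no triples, so the extractor can run over every commit log unconditionally.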
Generating `owl:sameAs` links between the developers of Ohloh and the Apache or GitHub developers is straightforward. Ohloh computes the statistics of a particular project by analyzing its source code repositories, which means that the developer names at Ohloh will be the same as the developer names in the source code repository of that particular project. In order to generate a set of `owl:sameAs` links between an Apache project and the corresponding Ohloh project, we extracted the list of developers who committed to the source control repository of that project and matched it against the developer names recorded by Ohloh.
In order to generate links between Ohloh and GitHub developers, we took a subset of 153 random projects from Ohloh and 4414 projects from GitHub. We extracted a list of developers who worked on the GitHub projects under consideration and compared it with the developers of selected 153 Ohloh projects. The string similarity approach resulted in 196 `owl:sameAs` links between Ohloh and GitHub data sets. This enables us to not only query the development activity of developers in GitHub project but also allows to query their contribution statistics which are stored by Ohloh.
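A minimal sketch of the string-similarity interlinking step using Python's standard library; the 0.9 threshold and the URI patterns are assumptions for illustration, not the exact heuristic we used:

```python
from difflib import SequenceMatcher

def same_as_links(ohloh_devs, github_devs, threshold=0.9):
    """Emit owl:sameAs triples for developer name pairs that are
    sufficiently similar across the two data sources."""
    links = []
    for o_name, o_uri in ohloh_devs:
        for g_name, g_uri in github_devs:
            # Case-insensitive similarity ratio in [0.0, 1.0].
            ratio = SequenceMatcher(None, o_name.lower(), g_name.lower()).ratio()
            if ratio >= threshold:
                links.append("<%s> owl:sameAs <%s> ." % (o_uri, g_uri))
    return links

ohloh = [("Stefan Bodewig", "http://example.org/ohloh/dev/bodewig")]
github = [("Stefan Bodewig", "http://example.org/gh/dev/bodewig"),
          ("Larry Shatzer", "http://example.org/gh/dev/larrys")]
print(same_as_links(ohloh, github))
```

A threshold close to 1.0 favors precision over recall, which matters here because a wrong `owl:sameAs` link merges two different developers' activity.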
Listing 7. An Exemplary Source Control Interlinking.
```
@prefix : <http://example.org/apache/prj/SVN/> .
@prefix b: <http://baetle.googlecode.com/svn/ns/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:39823 a b:Committing;
    b:revision "39823";
    b:summary "this patch fixes bug#123.";
    b:fixes <http://example.org/apache/bug/123> .

<http://example.org/prj/svn/org/connect.java> a b:JavaSource;
    owl:sameAs <http://example.org/prj/org/connect> .
```
4. Preliminary Experimental Results
We are currently in the phase of preparing a ground truth in order to validate our approach of interlinking data sources as well as comparing it to other duplicate detection algorithms and frameworks like Silk [11], Swoosh [1], Duke[20] etc. In this section we address a few of the use cases which we discussed in Section 2 and show how the linked data sets can be used to exploit them.
In order to show the benefit of integrating developer-related information from different data sources, we hosted a SPARQL endpoint[21] which contains the RDF data sets from the GitHub, Ohloh and Apache projects. We use this SPARQL endpoint to run the SPARQL queries presented in this section. We start with a simple query that lists all projects on which a developer is working or has worked in the past (cf. Listing 8).
Listing 8. Developer Projects Query Pattern.
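Assuming the vocabulary used in the earlier listings (the exact property names of Listing 8 may differ), such a query could be sketched as:

```
PREFIX : <http://vocab.deri.ie/linkedfloss#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT DISTINCT ?project
WHERE {
  ?dev foaf:accountName "bodewig" ;
       :repos ?project .
}
```

With the `owl:sameAs` links in place and inference enabled, the same pattern also returns projects recorded for the developer's other identities.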
In our current setting, we extracted Apache ANT developer data from one code forge only (i.e., GitHub). In the near future, we will incorporate other code forges (i.e., SourceForge, GoogleCode etc.) and further apply our interlinking approach in order to get an extensive list of projects on which a developer is working or has worked in the past. Given that the Ohloh data set contains the development statistics of a developer in different programming languages as part of a particular project, it is easy to query the number of commits he made to a particular project using different programming languages, as shown in Listing 9.
The results of the SPARQL query (cf. Listing 9) are shown in Table 1, which makes it easy to understand the expertise of a particular developer in different programming languages based on the number of commits he made to the project. For example, the result shows that the developer has most experience in “Java” compared to the “MetaFont” programming language. It also addresses, to a certain extent, our first use case scenario outlined in Section 2.
It is very likely that the developer has contributed to other open source projects which are also indexed by Ohloh. Hence, one can also query the development statistics of all the programming languages which a developer has used in developing different open source projects.
20http://code.google.com/p/duke/
21http://linkedfloss.srvgal85.deri.ie/sparql
<table>
<thead>
<tr>
<th>Programming Language</th>
<th>Commits</th>
</tr>
</thead>
<tbody>
<tr>
<td>Java</td>
<td>2501</td>
</tr>
<tr>
<td>HTML</td>
<td>1404</td>
</tr>
<tr>
<td>XML</td>
<td>1311</td>
</tr>
<tr>
<td>JavaScript</td>
<td>257</td>
</tr>
<tr>
<td>Shell Script</td>
<td>74</td>
</tr>
<tr>
<td>XSL Transformation</td>
<td>30</td>
</tr>
<tr>
<td>DOS Batch Script</td>
<td>28</td>
</tr>
<tr>
<td>CSS</td>
<td>23</td>
</tr>
<tr>
<td>Perl</td>
<td>8</td>
</tr>
<tr>
<td>Python</td>
<td>7</td>
</tr>
<tr>
<td>C#</td>
<td>6</td>
</tr>
<tr>
<td>XML Schema</td>
<td>1</td>
</tr>
<tr>
<td>MetaFont</td>
<td>1</td>
</tr>
</tbody>
</table>
Table 1. Developer Commits to the Project based on Programming Language.
Listing 10. Developer's Average Commit Ratio Query Pattern.
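A sketch of such an aggregation query, assuming hypothetical properties that link a developer to per-project, per-language commit counts (the actual vocabulary of Listing 10 may differ):

```
PREFIX : <http://vocab.deri.ie/linkedfloss#>

SELECT ?language (AVG(?commits) AS ?commitsAverage)
WHERE {
  ?dev :contributesTo ?project .
  ?project :languageCommits [ :language ?language ;
                              :commits ?commits ] .
}
GROUP BY ?language
ORDER BY DESC(?commitsAverage)
```

The `AVG` aggregate with `GROUP BY ?language` averages each language's commit counts over all projects the developer worked on, which is exactly the shape of Table 2.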
<table>
<thead>
<tr>
<th>Programming Language</th>
<th>Commits Average</th>
</tr>
</thead>
<tbody>
<tr>
<td>Java</td>
<td>327.00</td>
</tr>
<tr>
<td>JavaScript</td>
<td>257.00</td>
</tr>
<tr>
<td>HTML</td>
<td>171.88</td>
</tr>
<tr>
<td>XML</td>
<td>129.57</td>
</tr>
<tr>
<td>Shell Script</td>
<td>37.50</td>
</tr>
<tr>
<td>DOS Batch Script</td>
<td>28.00</td>
</tr>
<tr>
<td>C#</td>
<td>27.00</td>
</tr>
<tr>
<td>CSS</td>
<td>11.33</td>
</tr>
<tr>
<td>XSL Transformation</td>
<td>8.00</td>
</tr>
<tr>
<td>Perl</td>
<td>8.00</td>
</tr>
<tr>
<td>Python</td>
<td>7.00</td>
</tr>
<tr>
<td>XML Schema</td>
<td>1.50</td>
</tr>
<tr>
<td>MetaFont</td>
<td>1.00</td>
</tr>
<tr>
<td>Ruby</td>
<td>1.00</td>
</tr>
</tbody>
</table>
Table 2. Developer's Average Commit Ratio based on Programming Language.
Listing 10 returns an average commit ratio of a developer in different programming languages based on all the projects he worked on. The results returned by the query (cf. Listing 10) give an idea of the expertise level of a developer in different programming languages, as shown in Table 2.
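The aggregation behind Table 2 can be sketched as follows; the commit figures below are illustrative, not actual Ohloh data:

```python
from collections import defaultdict

def average_commits(per_project_commits):
    """Given (project, language, commits) records for one developer,
    return the average commits per language across all projects."""
    totals = defaultdict(int)    # language -> total commits
    projects = defaultdict(int)  # language -> number of projects
    for _project, language, commits in per_project_commits:
        totals[language] += commits
        projects[language] += 1
    return {lang: totals[lang] / projects[lang] for lang in totals}

# Illustrative records: two projects using Java, one using Perl.
data = [("ant", "Java", 2501), ("other", "Java", 153), ("ant", "Perl", 8)]
print(average_commits(data))
# → {'Java': 1327.0, 'Perl': 8.0}
```

Averaging per language over projects (rather than summing) keeps a developer's score comparable even when the number of projects per language differs.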
Further, one can query the total number of commits made by all developers to the project in different programming languages. This will help newcomers (i.e., volunteers) of an open source project to get an insight into which programming or scripting language they could potentially use to start contributing to the project. In fact, most or all of the questions pointed out by Conklin (cf. Section 1) can be answered by simple queries. Enabling such integration will not only allow developers to query abstract-level information about a project or developer (e.g., the number of commits made by a developer to the project) but also to query information which is hidden deep inside the project repositories (e.g., the contribution of a developer to the last release of the project). We have tried to show the benefit of integrating developer-related data from different data sources in order to serve a variety of use cases often found in the software development domain.
5. Related Work
To the best of our knowledge, there are only a few published works on identifying and relating the different identities that developers use to interact with different tools in the field of software engineering. In [2], Bird et al. proposed an approach to produce a list of \(<name, email>\) identifiers by parsing the emails and clustering them. The clustering algorithm to measure the similarity between every pair of IDs is based on string similarity between names, between names and emails, etc. We also use the string similarity approach in order to interlink the data sources but
our scope is broader in the sense that we do not apply the heuristics within a single project but across different data sources.
Robles et al. [10] discuss the problem of developer identification in general, but the work lacks details about the heuristics they propose to identify and match the different identities of developers. The authors propose a technique to build one identity from another by extracting the “real life” name from email addresses, such as nsurname@domain.com, name.surname@domain.com etc. Their approach also relies on a string similarity algorithm. In general, the problem is related to duplicate detection. Duplicate detection frameworks provide several techniques to effectively match different entities. In this regard, Kopcke et al. [8] analyzed and compared 11 different duplicate detection frameworks. While research in this area mostly refers to identifying duplicates in the same data set, the techniques might be mapped to the case of matching over different data sets. However, they are tailor-made for identifying different IDs of the same developer inside one repository. Naumann et al. [9] provide a nice overview of this research direction.
In the Semantic Web domain, Volz et al. [11] proposed an interlinking framework known as the SILK framework, which generates links between two RDF data sets based on string similarity measures specified by the user. Their framework supports different string similarity algorithms to decide whether two RDF resources are similar or not. In a next step, we will assess to what extent we can use the SILK framework to link different data sources.
6. Conclusion
We have motivated and proposed a simple yet effective approach of integrating developer-related information from different data sources on the Web. We have argued that Semantic Web technologies allow integrating and querying information across different data sources and have illustrated this through a number of real-world examples.
We have made some initial progress in integrating different data sources using a string-similarity-based interlinking approach. Currently, we are in the phase of preparing a ground truth and will compare our approach with other duplicate detection algorithms and frameworks. Additionally, we will improve the interlinking approach, yielding links of higher quality and quantity between different data sources. We also plan to extract more project- and developer-related information from different data sources (i.e., Apache, GitHub, Ohloh, SourceForge, GoogleCode etc.), transform it into RDF data sets, interlink them and host them via our SPARQL endpoint. In the near future, we also plan to provide an application on top of our interlinked data sources in order to address certain issues/use cases often found in software development processes.
References
Designing E-Banking Cardless Transaction Services Framework for Banking Sectors in Ethiopia
Abebaw Teshome
Temtim Assefa
This material is brought to you by the MENA at AIS Electronic Library (AISeL). It has been accepted for inclusion in MENACIS2021 by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org.
Abstract: E-banking cardless technologies enable cash withdrawal from an ATM without a virtual or physical card. They increase utilization of ATM banking services and improve the flexibility of services to customers. The purpose of this research was to develop cardless e-banking services. The study used the Design Science research methodology. It used requirement elicitation to identify and analyze the challenges of the existing system and then to design an E-banking cardless transaction services framework. Based on the proposed framework, software was developed that allows a customer to withdraw cash from an ATM machine using their mobile phone. Based on the study results, card expiration, capture, disputes and forgotten cards were the main challenges of the existing ATM-based banking services. All of the respondents used Mobile and ATM services. Domain experts evaluated both the framework and the prototype, and an acceptable result was found from the evaluation. The integration of ATM and Mobile banking services and the development of the ECTS framework can enhance utilization of E-banking services.
Introduction
Electronic banking (E-banking) is a form of banking services delivered through digital devices to increase flexibility in service delivery. It includes different electronic channels and E-banking services like Telephone, ATM, Mobile, Internet, Agent, etc. (Sekhon and Yap, 2010). This study predominantly focused on Mobile and ATM banking services to properly utilize E-banking services. In the existing card system, customer information like the customer account number and PIN is embedded in the ATM card using a magnetic strip. This information can easily be accessed by third parties, and fraudulent activities may happen on the customer's account (Iyabode et al., 2015). Due to this issue, customers may not trust the existing card system and may need another design solution. According to the report of the global ATM market and forecasts to 2021, there are 4 million ATMs around the globe, even though the use of Electronic Automated Teller Machine (EATM) services has declined in recent years (atmmarketplace, 2021). In order to overcome the problem, most banks in Ethiopia allocated large amounts of currency to acquire effective E-banking. However, the security of the existing card system has a lot of problems. An OTP, Secret code and registered Mobile number can be used as a second security layer, but the old card system is only authenticated by a PIN, which can reduce utilization of E-banking services (Ahmad et al., 2016). The above problems can be solved by using E-banking cardless transaction services which integrate ATM and Mobile banking applications. Currently, an EATM without any card can be designed to enhance the efficiency of ATM usage in banking sectors across the globe, because E-banking services using ATM cards have many problems like fraud and theft (Iyabode et al., 2015). Hegde and Sharath (2016), Iyabode, Nureni, et al.
(2015) and Kinsman (2019) conducted empirical research on card-based E-banking services and found problems of ATM cards being expired, lost, cloned, damaged, skimmed, captured and disputed, forgetting the wallet at home, accounts debited without paying, and long reconciliation times. Ahmad and Rifen (2016) proposed special biometric features (fingerprint and face recognition) on the ATM to perform cardless transactions. Developing ATMs with such special features is not cost effective. This study proposes cardless cash withdrawal from a normal ATM using a mobile phone authentication method.
Literature Review
**EATM Cardless Withdrawal:** An EATM is an electronic banking outlet that permits customers to complete one or more banking transactions without the help of any bank official or teller. It is a self-service technology in financial service delivery, usually adopted by financial institutions to reach their customers outside the banking hall. The user of an existing EATM machine uses a card to access their account and perform one or more financial transactions. Several problems are related to the usage of the card system.
**Mobile Banking:** Mobile banking involves the usage of a Mobile phone for the settlement of financial transactions. It supports account balance enquiry, fund transfer, recharging phones, changing users' passwords and bill payment. Cardless ATM withdrawal, as well as secured messaging for confirmation of receipt to the beneficiary, is also meant for low-value transactions. Speed of completing the transaction is the key issue with exciting potential, given the low infrastructure requirements and a rapidly increasing Mobile phone penetration (Onodugo, 2015). The increased prevalence of Mobile phones provides exciting opportunities for the growth of Mobile banking. Three billion people were expected to own Mobile phones around the globe by 2012 (Goyal, Pandey and Batra, 2012), which shows that as the number of Mobile banking users increases, it will create an opportunity to utilize ATM E-banking services effectively without acquiring any additional cost.
**Cardless ATM Banking:** A cardless ATM exchanges electronic financial transactions without the use of an ATM card, employing a portable client device such as a Mobile phone. The Mobile phone communicates with the ATM using the mobile application. The ATM communicates with the portable client device through the ATM application, which may incorporate communication through any remote machine. A portable client device may provide transaction information or verification information to the ATM or to a verification system in communication with the ATM. The transaction may be related to the user's E-banking account or another account. It may produce a dynamic value which may be used as a password, a verification value, an account identifier or a transaction identifier (Varadarajan, 2011). Cardless ATM banking is the second most popular access channel to banking services, behind customers' account management by the bank. It is critical that banks provide quality services through cardless ATMs to stay competitive.
**One Time Password (OTP):** An OTP is a one-time PIN or dynamic password randomly generated by the E-banking system (Basavegowda and Seenappa, 2014) when a user initiates a cardless OTP request from a Mobile phone. It is valid for only one specific transaction and becomes invalid once the cardless cash withdrawal is completed. The OTP is a one-time, more secure alphanumeric password generated by the bank's cardless E-banking system when the user requests cardless transaction services through the Mobile banking application. In cardless transaction services, the OTP serves as a second security layer, because the existing E-banking card system authenticates the customer by PIN only (Hegde and Sharath, 2016). According to Basavegowda and Seenappa (2014), secret codes are widely used in applications such as data transfer, data sharing, email login and electronic banking, so system users should pay close attention to them and adopt authentication mechanisms as strong as possible. In the cardless E-banking system, the Secret Code is entered by the account holder, kept in mind, and shared with the beneficiary on request.
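The single-use, expiring behaviour described above can be sketched in a few lines. This is a minimal illustration, not the bank's implementation: the 6-digit length and 5-minute lifetime are assumptions, and `generate_otp`/`redeem_otp` are hypothetical helper names.

```python
import secrets
import time

def generate_otp(n_digits=6, ttl_seconds=300):
    """Illustrative OTP generator: a random numeric code plus an expiry time.
    The digit count and lifetime are assumed, not Abay bank's parameters."""
    code = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return {"code": code, "expires_at": time.time() + ttl_seconds, "used": False}

def redeem_otp(otp, entered_code, now=None):
    """An OTP is accepted only once, only with the matching code, and only
    before it expires; on success it is marked used (invalid thereafter)."""
    now = time.time() if now is None else now
    if otp["used"] or now >= otp["expires_at"] or entered_code != otp["code"]:
        return False
    otp["used"] = True
    return True
```

Using `secrets` rather than `random` matters here: OTPs are a security credential, so the generator must be cryptographically strong.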
**Traditional ATM Transaction Workflow:** Based on Hegde and Sharath (2016), the traditional card-based ATM services workflow is shown below in Figure 1; it is relevant for identifying and capturing the process challenges of the existing card system. As stated in the statement of the problem and shown in Figure 1, the existing card system has many challenges: in particular, the customer is authenticated by PIN code only. These challenges and security issues need to be resolved by the new system. Moreover, to develop the E-banking transaction services conceptual model and theoretical framework, the traditional and current EATM service workflows must be clearly understood. In a traditional ATM cash withdrawal, the customer needs to insert the ATM card along with the PIN code, but the card is subject to many issues; the security issues in particular would be resolved by the proposed E-banking cardless transaction services framework, which enhances utilization of E-banking services.
Methodology
Study Area: The study was conducted in Ethiopia at the head office of Abay bank S.C., located in Addis Ababa. Abay bank is a privately owned commercial bank established on July 14, 2010 that started operation on November 4, 2010. The study focused on the E-banking area, with specific emphasis on designing a cardless E-banking service.
Research Approach: In this study, the Design Science research approach was applied. Interview, observation and document analysis data were used to identify the problems related to the existing E-banking card system. Design Science research is a method used to create and evaluate new artifacts.
Research method: To design the proposed theoretical E-banking cardless transaction services framework, the Design Science (DS) research method was used to develop the artifact. According to Hevner et al. (2004), DS research seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artifacts. DS research creates and evaluates IT artifacts intended to solve identified organizational problems: design draws on theoretical knowledge from the knowledge base and creates innovative artifacts that did not exist before. DS research in IT often addresses problems associated with some aspect of system design, and constructing a design instantiation that automates a process demonstrates that the method can be automated (Hevner et al., 2004).
The study used the DS research process developed by Offermann (2009). This design process is relevant for achieving the general and specific objectives of this research and for designing the required E-banking cardless transaction services framework. It has three main stages: problem identification, solution design and evaluation. As indicated below in Figure 5, the first step is problem identification, in which the problems of the existing system are clearly identified, analyzed and interpreted; here, the system problems are the challenges related to the E-banking card system. To resolve the problems related to the card system, the researcher designed and proposed the E-banking cardless transaction services framework, which is relevant for enhancing utilization of E-banking services. Finally, the proposed framework was evaluated using different evaluation parameters: based on the evaluation checklist (see Appendix 2 and 3), E-banking and IT experts evaluated the framework and the developed software.
In relation to the DS research process model indicated above in Figure 5, the identified problems were the challenges stated in the statement of the problem, which relate to the existing card system. The DS research methodology focused on problem identification, solution design and evaluation. The design solution of this study is a framework whose objective is to enhance proper utilization of E-banking services. To design the framework and develop the sample cardless E-banking services software, object-oriented prototyping was used. Moreover, to evaluate the system, interview questions and evaluation checklists were prepared and applied.
**Target Population and Sampling:** The target population of the research was Abay bank's external and internal customers, including managers, accountants, cashiers, tellers, E-banking experts and IT experts. The selected respondents are those employees who can give relevant and rich information about the research problem; to select them, purposive sampling, a nonprobability sampling technique, was used, and the research data were stored, analyzed and interpreted using the thematic data analysis method. Because the research data are homogeneous across branches, the population is large, and the main branch at the head office is the bank's first and pioneer branch, the research focused on the Abay bank head office. Based on information obtained from Abay bank, the researcher found internal customers actively and frequently using both ATM and Mobile banking services; from these, 23 respondents were purposively selected on the criterion that they could provide relevant and rich information about the issues. The respondents' distribution is as follows:
<table>
<thead>
<tr>
<th>No</th>
<th>Respondents</th>
<th>Count</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>E-banking experts</td>
<td>5</td>
</tr>
<tr>
<td>2</td>
<td>IT experts</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>Managers</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>Accountants</td>
<td>4</td>
</tr>
<tr>
<td>5</td>
<td>Cashiers</td>
<td>3</td>
</tr>
<tr>
<td>6</td>
<td>Tellers</td>
<td>2</td>
</tr>
<tr>
<td>7</td>
<td>Other external customers</td>
<td>3</td>
</tr>
<tr>
<td></td>
<td><strong>Grand Total</strong></td>
<td><strong>23</strong></td>
</tr>
</tbody>
</table>
Table 1: Respondents’ data distribution
**Data Collection Procedures:** To collect the required research data, semi-structured interviews and personal observations were applied to understand how the existing system works and to identify and analyze its problems. Moreover, published journal articles, websites, theses and dissertations, magazines, newspapers and organizational documents were analyzed. Ryan, Coughlan and Cronin (2009) state that the individual interview can be a valuable method of gaining insight into people’s perceptions, understandings and experiences of a given phenomenon and contributes to in-depth data collection; however, the interview is a conversational interaction between two people and requires considerable knowledge and skill on the part of the interviewer. According to Myers and Newman (2007), the interview is one of the most important data gathering tools in research, yet it has remained an unexamined craft in Information Systems research; it is used in DS research of all kinds for problem identification, whether positivist, interpretive or critical realist, and is frequently used in case study research too. This method is therefore also appropriate for gathering relevant data for this research.
**Instrument and System Evaluation:** According to Yin (2003), validity and reliability apply to quantitative research, while DS research is concerned with trustworthiness, dependability and transferability. Therefore, to address DS research issues related to thematic analysis, the researcher used QDA Miner software to manage the database, triangulated data collection methods, described the research procedures and made the research process transparent. According to Offermann (2009), system evaluation should be performed to check whether the new artifact achieves its development objectives. The main parameters used to evaluate the proposed artifact were ease of use and usefulness to the task. An evaluation checklist was prepared to gather data from the selected respondents, and E-banking and IT experts participated in evaluating the proposed framework and the developed prototype to determine whether the right framework had been designed.
**Data Analysis**
In this study, two categories of semi-structured interview questions were prepared: one was used to collect data from all respondents, and the second to collect data from experts. A total of 23 respondents participated in answering 18 semi-structured interview questions, and all selected respondents use both Mobile and ATM E-banking services. All interview discussions were transcribed using Microsoft Excel 2016 and then imported into QDA Miner Lite for analysis. The bar graph in Figure 3 below shows the distribution of issues related to E-banking service challenges. Based on the graph, card expiration, capture, dispute and forgetting were the most frequent keywords, indicating that these were the main challenges in the existing E-banking services; card cloning and skimming were the least frequent keywords and issues.
Findings: Based on the analysis, the discussion and the QDA Miner results shown in the bar graph above, the major findings can be summarized as follows:
- ATM card expiration, loss, cloning, damage, skimming, capture, dispute, additional cost of issuance, forgetting the wallet at home, accounts debited without payment and long reconciliation times were the challenges of E-banking services.
- ATM card expiration, dispute, forgotten cards, reconciliation time, capture and PIN issues were the most critical challenges in the existing ATM E-banking services.
- There is very low utilization of ATM services due to the above challenges.
- ATM card skimming and cloning were the least critical challenges.
- Security is the most critical challenge in the E-banking card system.
From the above findings, it can be concluded that under the existing card system there is no enhanced and proper utilization of E-banking services because of these challenges. Therefore, to address them and to increase enhanced and proper utilization of E-banking services, a cardless E-banking services framework needs to be designed.
E-banking Cardless Prototype Development
This section discusses the proposed E-banking cardless framework and its evaluation using different metrics such as simplicity, completeness, consistency, integrity, security and usability. According to Creswell (2003), a framework is a foundation for programmers before they start coding the actual system application. The framework needs to be designed to resolve the challenges of the existing E-banking card system and to enhance proper utilization of E-banking services. The research also focused on addressing the basic design requirements gathered from the interviews within Abay bank and on modeling ideas gathered from the literature; the framework was therefore designed based on the requirements collected through the interviews. Finally, based on the framework, a sample prototype was developed for the ATM and Mobile banking modules along with SMS notifications.
Functional Requirements: Based on the current study, E-banking services are affected by different challenges, and there is no enhanced and proper utilization of E-banking services, particularly ATM E-banking services. As stated by E-banking and IT experts during the interviews, a new system needs to be designed that addresses the existing card system challenges, and the experts agreed to the design of the ECTS framework. The main functional requirements of the system are: cardless cash withdrawal from the ATM; entry of the registered Mobile number, PIN, OTP and Secret Code; validation of the registered Mobile number, PIN, OTP and Secret Code; checking the customer account balance; debiting the customer account and handling transactions; and SMS notification for the beneficiary.
**Nonfunctional Requirements**: The nonfunctional requirements of the system include: increased security features for E-banking services so as to earn customer trust; a user-friendly interface and application to ensure ease of use; feedback on wrong entries; strong performance of the frontend and backend applications; and scalability, maintainability and availability of the system.
**Proposed System Mobile Banking Workflow**: To design the actual ECTS framework, it is helpful to know how ECTS works. Figure 4 shows the workflow from the Mobile side, in these steps: the user opens the Mobile App/USSD and enters the PIN; the system checks whether the PIN is correct and, if not, stops generating the OTP. If correct, the user enters the Mobile number; if the Mobile number is invalid, the system stops generating the OTP. If valid, the user selects "Generate OTP" and enters the amount and Secret Code; the system checks the customer's account balance and, if the amount is insufficient, stops generating the OTP. If the balance is sufficient, the system generates the requested OTP, sends an SMS to the Mobile number the user entered, and terminates the process successfully.

On the Mobile banking side, as indicated above in Figure 4, the user is expected to enter the valid 4-digit registered PIN code and Mobile number, the amount, and any 4-digit Secret Code. The system then checks the PIN, Mobile number and amount; if all inputs are correct, it automatically generates the OTP and sends a message to the registered Mobile number. The message reads, for example, "Dear customer OTP generated with OTP number ***, amount *** birr from Mobile No 09***". The OTP can be generated with two options (self or another customer). For security reasons the Secret Code is not sent automatically to the beneficiary, so a phone call or SMS from the sender is required to obtain it; only the OTP number and amount are sent to the beneficiary, and to receive the cardless withdrawal the customer must visit the nearest bank ATM.
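The chain of checks on the Mobile side can be sketched as a series of guards, each of which stops OTP generation on failure. This is an illustrative sketch only: `request_cardless_otp`, `account` and `registry` are hypothetical stand-ins for the bank's backend, not part of the proposed framework's actual code.

```python
def request_cardless_otp(entered_pin, entered_mobile, amount, account, registry):
    """Sketch of the Mobile-side workflow: check PIN, then Mobile number,
    then balance; any failure stops OTP generation (illustrative only)."""
    if entered_pin != account["pin"]:
        return "stopped: incorrect PIN"
    if entered_mobile not in registry:
        return "stopped: invalid Mobile number"
    if account["balance"] < amount:
        return "stopped: insufficient balance"
    # All checks passed: generate the OTP and notify by SMS (not shown here).
    return "OTP generated; SMS sent to " + entered_mobile
```

The guard-clause ordering mirrors Figure 4: each condition is checked in sequence, and the process terminates at the first failure.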
**Proposed System ATM Banking Workflow**: Similar to the Mobile banking module, the steps on the ATM side, indicated in Figure 5, are as follows. The user presses the "Cardless Withdrawal" button on the ATM and enters the PIN; the system checks whether the PIN is correct and, if wrong, stops the withdrawal process. If the PIN is correct, the user enters the Mobile number, OTP and Secret Code; the system checks their validity, and if any of them is wrong it terminates the withdrawal. If all are correct, the system performs the withdrawal transaction, debits the customer account and dispenses cash from the ATM. Finally, the system sends an SMS notification to the account holder's Mobile number to report that the account was debited based on the OTP request. The customer is expected to enter the registered and valid PIN code and Mobile number on the ATM side. Since the beneficiary has already received an SMS from the sender, the OTP is visible to the beneficiary in the SMS text; in the same way, the beneficiary obtains the Secret Code from the sender by phone call. After all the necessary parameters are entered, the system completes the withdrawal by debiting the customer account, and the ATM immediately dispenses the cash to the beneficiary. The system then automatically sends an SMS to the account holder reading, for example, "Dear customer your account **** debited with amount **** ETB".
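The ATM-side validation can be sketched in the same guard style: PIN first, then the three OTP-request parameters, and only then the debit and dispense. As before, this is a hypothetical illustration; `cardless_withdrawal` and its field names are assumptions, not the framework's actual code.

```python
def cardless_withdrawal(entered, request, account):
    """Sketch of the ATM-side checks: `request` holds the values fixed when
    the OTP was generated; all names here are illustrative stand-ins."""
    if entered["pin"] != account["pin"]:
        return "stopped: incorrect PIN"
    for field in ("mobile_no", "otp", "secret_code"):
        if entered[field] != request[field]:
            return "stopped: invalid " + field
    # All parameters valid: debit the account and dispense the cash.
    account["balance"] -= request["amount"]
    return "cash dispensed; SMS notification sent to account holder"
```

Note that the account is only debited after every parameter has been validated, which matches the workflow's requirement that a wrong OTP or Secret Code terminates the process before any transaction occurs.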
Proposed System Framework: A framework provides guidance on all facets of the study, from assessing the general philosophical ideas behind the inquiry to following detailed data collection and analysis procedures and situating plans in ideas well-grounded in the literature (Creswell, 2003). The ECTS framework for the design and implementation of a cardless E-banking services system for Abay bank is shown below (Figure 6).
Prototyping for ECTS System: As stated by Houde (1997), prototypes are widely recognized as a core means of exploring and expressing designs for interactive systems and of delivering an artifact. Prototyping is important for verifying that the right framework was developed. Sample screenshots and code from the designed software for the Mobile and ATM banking modules are discussed in the following section.
Mobile Banking Module Screens: If the user enters a valid and registered Mobile number, the screen shown below in Figure 7 is displayed.
On the screen shown above in Figure 18, the user enters 1 to generate an OTP request for self, or 2 to generate an OTP cardless withdrawal request for another beneficiary.
ATM Banking Module Screens: The existing ATM services are cash withdrawal, balance inquiry, short statement, fund transfer, money send, top-up and bill payment; the new system enables E-banking services to be performed without a card and adds one service called "Cardless". As shown below in Figure 8, the user should enter the active and valid OTP already sent to the beneficiary's Mobile number by SMS, then click the submit button to proceed to the next step.
For security reasons, the developer did not send the Secret Code through SMS; the Secret Code resides only in the mind of the sender. As a result, the beneficiary needs to contact the sender to obtain the Secret Code the sender used for confirmation. As shown below in Figure 9, the system requests the correct 4-digit Secret Code, after which the user presses the submit button.
To complete the whole process, the user should enter the valid and registered beneficiary's Mobile number.
**SMS Notifications Sample SMS**: Figure 10, shown below, presents the sample SMS sent to the beneficiary by the developed cardless E-banking services system.
Moreover, the sample SMS code (Appendix 4) is used to send an SMS notification to the beneficiary's Mobile number when an OTP is generated for the customer and when the customer's account is debited following the system user's request.
**Framework and Prototype Evaluation**: As stated by Petter and Khazanchi (2010), a DS-based framework can be evaluated using different criteria, including plausibility, effectiveness, feasibility, predictiveness, reliability, comprehensiveness, scalability, ease of use and security. To evaluate the framework and prototype, an interview checklist was prepared based on these criteria, and IT and E-banking experts evaluated the proposed framework. For each criterion, "Yes" and "No" options were presented to the respondents: if the framework satisfies the criterion, the respondent ticks (√) for Yes and (X) for No. The researcher then converted the results to an average percentage.
According to Mugisha, Nankabirwa et al. (2019), if the average evaluation checklist value is \( \geq 80\% \), the system is highly usable. The average percentage over the respondents' evaluation checklists was 97.2%, which shows that the right framework was designed and proposed for cardless E-banking services. Similarly, based on the same study, if the average evaluation checklist value is \( \geq 80\% \), the designed software is strongly acceptable; the respondents' average percentage on these criteria was 96.2%, which shows that the developed cardless E-banking services prototype is accurate and can be implemented in the banking sector.
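The conversion from Yes/No ticks to an average percentage is simple arithmetic and can be sketched as follows; the function name and data shape are assumptions made for illustration, not the researcher's actual tooling.

```python
def checklist_average_percent(responses):
    """Average percentage of 'Yes' ticks over all respondents' checklists.
    `responses` is a list of per-respondent lists of booleans (True = Yes).
    Illustrative sketch of the evaluation arithmetic only."""
    yes = sum(sum(r) for r in responses)
    total = sum(len(r) for r in responses)
    return 100.0 * yes / total
```

A result at or above the 80% threshold from Mugisha, Nankabirwa et al. (2019) would then indicate acceptability under that criterion.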
**Conclusion**
Based on the findings of the current study, the following major conclusions can be drawn. In the banking sector, E-banking services play a great role in the development of a bank by providing convenient, effective and efficient banking services, and EATM reduces congestion of customers in the banking hall. The existing card system is challenged by card expiration, loss, cloning, damage, skimming, capture, dispute, the additional cost of issuance and maintenance, forgetting the wallet at home, theft, accounts debited without payment and long reconciliation times. Because of these challenges there is very low utilization of EATM services, even though huge investments were made to acquire E-banking technologies. Therefore, to enhance utilization of EATM services, the challenges of the existing card-based ATM services were identified, analyzed and interpreted in order to propose and evaluate a cardless E-banking service system. The cardless E-banking system allows cash withdrawal from the ATM without a physical ATM card; it needs only the integration of the Mobile and ATM banking system modules. Based on the study, the functional and nonfunctional requirements for developing the E-banking cardless services framework were identified. The main functionality of the proposed prototype is to accept the registered Mobile number, PIN, OTP and Secret Code, check the customer account balance, debit the customer account, handle transactions and send SMS notifications to the beneficiary. The system was also designed to be user friendly, with a graphical interface, feedback on wrong entries, scalability, availability, maintainability and good application performance. Security is the most critical challenge in the existing E-banking card system, because the customer is authenticated by PIN only; the new system uses the registered Mobile number, OTP and Secret Code as additional security mechanisms. From the Mobile banking module, the OTP is generated and an SMS is sent to the beneficiary.
Similarly, in the ATM module the user is requested to enter the OTP number, Secret Code and Mobile number; when the transaction succeeds, the account debit is notified to the account holder through SMS.
**References**
MAP decoding: The BCJR algorithm
- Maximum a posteriori probability (MAP) decoding
- Baum-Welch algorithm (1963?)*
Decoder inputs:
- Received sequence $r$ (soft or hard)
- A priori $L$-values $L_a(u_l) = \ln(P(u_l = 1)/P(u_l = -1))$
Decoder outputs:
- A posteriori probability (APP) $L$-values $L(u_l) = \ln(P(u_l = 1| r)/P(u_l = -1| r))$
- $> 0$: $u_l$ is most likely to be 1
- $< 0$: $u_l$ is most likely to be -1
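The sign convention above can be made concrete with a tiny helper; `app_llr` and `hard_decision` are illustrative names, not part of any particular decoder implementation.

```python
import math

def app_llr(p_plus):
    """APP L-value L(u_l) = ln(P(u_l = +1 | r) / P(u_l = -1 | r)) for a
    binary symbol, given P(u_l = +1 | r). Illustrative helper only."""
    return math.log(p_plus / (1.0 - p_plus))

def hard_decision(llr):
    """The sign of the L-value gives the hard decision on u_l."""
    return +1 if llr > 0 else -1
```

The magnitude of the L-value measures reliability: probabilities near 0.5 give L-values near 0, while confident probabilities give large magnitudes.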
BCJR (cont.)
\[
L(u_l) \equiv \ln \left[ \frac{P(u_l = +1 | r)}{P(u_l = -1 | r)} \right]
\]
\[
P(u_l = +1 | r) = \frac{p(u_l = +1, r)}{P(r)} = \frac{\sum_{u \in U^+_l} p(r|v) P(u)}{\sum_u p(r|v) P(u)}
\]
\[
L(u_l) = \ln \left[ \frac{\sum_{u \in U^+_l} p(r|v) P(u)}{\sum_{u \in U^-_l} p(r|v) P(u)} \right]
\]
\[
P(u_l = +1 | r) = \frac{p(u_l = +1, r)}{P(r)} = \frac{\sum_{(s',s) \in \Sigma^+_l} p(s_l = s', s_{l+1} = s, r)}{P(r)}
\]
\[
L(u_l) = \ln \left\{ \frac{\sum_{(s',s) \in \Sigma_l^+} p(s_l = s', s_{l+1} = s, r)}{\sum_{(s',s) \in \Sigma_l^-} p(s_l = s', s_{l+1} = s, r)} \right\}
\]
\[
p(s', s, r) = p(s', s, r_{t<l}, r_l, r_{t>l}),
\]
\[
\begin{aligned}
p(s', s, r) &= p(r_{t>l} | s', s, r_{t<l}, r_l)\, p(s', s, r_{t<l}, r_l) \\
&= p(r_{t>l} | s', s, r_{t<l}, r_l)\, p(s, r_l | s', r_{t<l})\, p(s', r_{t<l}) \\
&= p(r_{t>l} | s)\, p(s, r_l | s')\, p(s', r_{t<l}),
\end{aligned}
\]
where the last step uses the Markov property of the trellis: given the state, the future received symbols are independent of the past.
BCJR (cont.)
\[\alpha_l(s') \equiv p(s', r_{t<l})\]
\[\gamma_l(s', s) \equiv p(s, r_l | s')\]
\[\beta_{l+1}(s) \equiv p(r_{t>l} | s),\]
\[p(s', s, r) = \beta_{l+1}(s) \gamma_l(s', s) \alpha_l(s').\]
\[\alpha_{l+1}(s) = p(s, r_{t<l+1}) = \sum_{s' \in \sigma_l} p(s', s, r_{t<l+1})\]
\[= \sum_{s' \in \sigma_l} p(s, r_l | s', r_{t<l}) p(s', r_{t<l})\]
\[= \sum_{s' \in \sigma_l} p(s, r_l | s') p(s', r_{t<l})\]
\[= \sum_{s' \in \sigma_l} \gamma_l(s', s) \alpha_l(s'),\]
\[\alpha_0(s) = \begin{cases} 1, & s = 0 \\ 0, & s \neq 0 \end{cases}\]
BCJR (cont.)
\[
\beta_l(s') = \sum_{s \in \sigma_{l+1}} \gamma_l(s', s) \beta_{l+1}(s)
\]
\[
\beta_K(s) = \begin{cases}
1, & s = 0 \\
0, & s \neq 0
\end{cases}
\]
\[
\gamma_l(s', s) = p(s, r_l | s') = \frac{p(s', s, r_l)}{P(s')}
\]
\[
= \left[ \frac{P(s', s)}{P(s')} \right] \left[ \frac{p(s', s, r_l)}{P(s', s)} \right]
\]
\[
= P(s | s') p(r_l | s', s) = P(u_l) p(r_l | v_l)
\]
\[
\gamma_l(s', s) = P(u_l) p(r_l | v_l) = P(u_l) \left( \sqrt{\frac{E_s}{\pi N_0}} \right)^n e^{-\frac{E_s}{N_0} ||r_l - v_l||^2}
\]
**AWGN**
MAP algorithm
- Initialize forward and backward recursions $\alpha_0(s)$ and $\beta_K(s)$
- Compute branch metrics $\{\gamma_l(s', s)\}$
- Carry out forward recursion $\{\alpha_{l+1}(s)\}$ based on $\{\alpha_l(s)\}$
- Carry out backward recursion $\{\beta_{l-1}(s)\}$ based on $\{\beta_l(s)\}$
- Compute APP $L$-values
- Complexity: Approximately $3 \times \text{Viterbi}$
- Requires detailed knowledge of SNR
- Viterbi just maximizes $r \cdot v$, and does not require exact knowledge of SNR
BCJR (cont.)
\[ \gamma_l(s', s) = P(u_l) e^{-(E_s/N_0) \|r_l - v_l\|^2} \]
\[ P(u_l = \pm 1) = \frac{[P(u_l = +1)/P(u_l = -1)]^{\pm 1}}{1 + [P(u_l = +1)/P(u_l = -1)]^{\pm 1}} \]
\[ = \frac{e^{\pm L_a(u_l)}}{1 + e^{\pm L_a(u_l)}} \]
\[ = \frac{e^{-L_a(u_l)/2}}{1 + e^{-L_a(u_l)}} e^{u_l L_a(u_l)/2} \]
\[ = A_l e^{u_l L_a(u_l)/2}, \]
\[ \gamma_l(s', s) = A_l e^{u_l L_a(u_l)/2} e^{-(E_s/N_0)\|r_l - v_l\|^2} \]
\[ = A_l e^{u_l L_a(u_l)/2} e^{(E_s/N_0)(2 r_l \cdot v_l - \|r_l\|^2 - \|v_l\|^2)} \]
\[ = A_l e^{-(E_s/N_0)(\|r_l\|^2 + n)} e^{u_l L_a(u_l)/2} e^{(L_c/2)(r_l \cdot v_l)} \]
\[ = A_l B_l e^{u_l L_a(u_l)/2} e^{(L_c/2)(r_l \cdot v_l)}, \ l = 0, 1, \ldots, h - 1, \]
where $L_c \equiv 4E_s/N_0$ is the channel reliability factor, $B_l \equiv e^{-(E_s/N_0)(\|r_l\|^2 + n)}$, and $\|v_l\|^2 = n$ for antipodal signalling.
\[ \gamma_l(s', s) = P(u_l) e^{-(E_s/N_0)\|r_l - v_l\|^2} \]
\[ = e^{-(E_s/N_0)\|r_l - v_l\|^2} \]
\[ = B_l e^{(L_c/2)(r_l \cdot v_l)}, \ l = h, h + 1, \ldots, K - 1, \]
\[ \max^*(x, y) \equiv \ln(e^x + e^y) = \max(x, y) + \ln(1 + e^{-|x-y|}) \]
\[ \gamma^*_l(s', s) \equiv \ln \gamma_l(s', s) = \begin{cases} \frac{u_l L_a(u_l)}{2} + \frac{L_c}{2} r_l \cdot v_l, & l = 0, 1, \ldots, h - 1, \\ \frac{L_c}{2} r_l \cdot v_l, & l = h, h + 1, \ldots, K - 1. \end{cases} \]
\[ \alpha^*_{l+1}(s) \equiv \ln \alpha_{l+1}(s) = \ln \sum_{s' \in \sigma_l} \gamma_l(s', s) \alpha_l(s') \]
\[ = \ln \sum_{s' \in \sigma_l} e^{[\gamma^*_l(s', s) + \alpha^*_l(s')]}, \quad l = 0, 1, \ldots, K - 1 \]
\[ \alpha^*_0(s) \equiv \ln \alpha_0(s) = \begin{cases} 0, & s = 0 \\ -\infty, & s \neq 0 \end{cases} \]
\[ \beta^*_l(s') \equiv \ln \beta_l(s') = \ln \sum_{s \in \sigma_{l+1}} \gamma_l(s', s) \beta_{l+1}(s) \]
\[ = \ln \sum_{s \in \sigma_{l+1}} e^{[\gamma^*_l(s', s) + \beta^*_{l+1}(s)]}, \quad l = K - 1, K - 2, \ldots, 0 \]
\[ \beta^*_K(s) \equiv \ln \beta_K(s) = \begin{cases} 0, & s = 0 \\ -\infty, & s \neq 0 \end{cases} \]
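The max* operation underpinning the log-domain recursions can be checked numerically; the following sketch (illustrative, not from the lecture) handles the $-\infty$ initialisation values explicitly:

```python
import math

def max_star(x, y):
    # Jacobian logarithm: ln(e^x + e^y), written as
    # max(x, y) + ln(1 + e^{-|x - y|}); the -inf guards cover the
    # alpha*_0 / beta*_K initialisation values
    if x == float("-inf"):
        return y
    if y == float("-inf"):
        return x
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))
```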
BCJR (cont.)
\[ p(s', s, r) = e^{\beta^*_{l+1}(s) + \gamma^*_l(s', s) + \alpha^*_l(s')} \]
\[
L(u_l) = \ln \left\{ \sum_{(s', s) \in \Sigma_l^+} e^{\beta^*_{l+1}(s) + \gamma^*_l(s', s) + \alpha^*_l(s')} \right\} - \ln \left\{ \sum_{(s', s) \in \Sigma_l^-} e^{\beta^*_{l+1}(s) + \gamma^*_l(s', s) + \alpha^*_l(s')} \right\}
\]
\[
\max^*(x, y, z) \equiv \ln(e^x + e^y + e^z) = \max^*[\max^*(x, y), z]
\]
\[
L(u_l) = \max^*_{(s', s) \in \Sigma_l^+} [\beta^*_{l+1}(s) + \gamma^*_l(s', s) + \alpha^*_l(s')] - \max^*_{(s', s) \in \Sigma_l^-} [\beta^*_{l+1}(s) + \gamma^*_l(s', s) + \alpha^*_l(s')]
\]
\[ L(u_l) = \max^* (\beta^*_{l+1} + \gamma^*_l + \alpha^*_l \text{ for solid lines}) - \max^* (\beta^*_{l+1} + \gamma^*_l + \alpha^*_l \text{ for dashed lines}) \]
Log-MAP algorithm
- Initialize forward and backward recursions $\alpha^*_0(s)$ and $\beta^*_K(s)$
- Compute branch metrics $\{\gamma^*_l(s', s)\}$
- Carry out forward recursion $\{\alpha^*_{l+1}(s)\}$ based on $\{\alpha^*_l(s)\}$
- Carry out backward recursion $\{\beta^*_{l-1}(s)\}$ based on $\{\beta^*_l(s)\}$
- Compute APP $L$-values
- Advantages over MAP algorithm:
- Easier to implement
- Numerically more stable
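The five steps above can be sketched generically; the trellis representation below (a list of dicts mapping state transitions to branch metrics) is my own illustration, not the lecture's notation:

```python
import math

def max_star(x, y):
    # ln(e^x + e^y), with -inf-safe handling for initialisation values
    if x == float("-inf"):
        return y
    if y == float("-inf"):
        return x
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def log_map_recursions(gamma, num_states, K):
    """gamma[l][(s_prev, s)] = gamma*_l(s', s) for trellis section l."""
    NEG = float("-inf")
    alpha = [[NEG] * num_states for _ in range(K + 1)]
    alpha[0][0] = 0.0                       # alpha*_0: zero state only
    for l in range(K):                      # forward recursion
        for (sp, s), g in gamma[l].items():
            alpha[l + 1][s] = max_star(alpha[l + 1][s], alpha[l][sp] + g)
    beta = [[NEG] * num_states for _ in range(K + 1)]
    beta[K][0] = 0.0                        # beta*_K: zero state only
    for l in range(K - 1, -1, -1):          # backward recursion
        for (sp, s), g in gamma[l].items():
            beta[l][sp] = max_star(beta[l][sp], beta[l + 1][s] + g)
    return alpha, beta
```

A useful sanity check is that $\alpha^*_K(0)$ and $\beta^*_0(0)$ both equal the log of the total path metric over all terminated paths.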
Max-log-MAP algorithm
\[
\max^*(x, y) \equiv \ln(e^x + e^y) = \max(x, y) + \ln(1 + e^{-|x-y|})
\]
- Replace max* by max, i.e., remove table look-up correction term
- Advantage: Simpler and much faster
- Forward and backward passes are equivalent to a Viterbi decoder
- Disadvantage: Less accurate, but the correction term is limited in size by \(\ln(2)\)
- Can improve accuracy by scaling with an SNR-(in)dependent scaling factor*
Example: log-MAP
\[ G(D) = [1 \quad 1/(1 + D)] \]
Example: log-MAP
- Assume $E_s/N_0 = 1/4 = -6.02$ dB
- $R = 3/8$, so $E_b/N_0 = 2/3 = -1.76$ dB
\[ \gamma_0^*(S_0, S_0) = \frac{-1}{2} L_a(u_0) + \frac{1}{2} r_0 \cdot v_0 = \frac{1}{2} (-0.8 - 0.1) = -0.45 \]
\[ \alpha_1^*(S_0) = [\gamma_0^*(S_0, S_0) + \alpha_0^*(S_0)] = -0.45 + 0 = -0.45 \]
\[ \alpha_2^*(S_0) = \max^*\{[\gamma_1^*(S_0, S_0) + \alpha_1^*(S_0)], [\gamma_1^*(S_1, S_0) + \alpha_1^*(S_1)]\} \]
\[ = \max^*\{[(-0.25) + (-0.45)], [(0.75) + (0.45)]\} \]
\[ = \max^*(-0.70, +1.20) = 1.20 + \ln(1 + e^{-|-1.90|}) = 1.34 \]
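The correction-term arithmetic in the last step can be reproduced directly:

```python
import math

# max*(-0.70, +1.20) from the forward recursion above
x, y = -0.70, 1.20
exact = math.log(math.exp(x) + math.exp(y))
with_correction = max(x, y) + math.log(1 + math.exp(-abs(x - y)))
# both forms agree exactly and round to the slide's value of 1.34
```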
Example: log-MAP
- Assume $E_s/N_0 = 1/4 = -6.02$ dB
- $R = 3/8$, so $E_b/N_0 = 2/3 = -1.76$ dB
\[
L(u_0) = [\beta^*_1(S_1) + \gamma^*_0(S_0, S_1) + \alpha^*_0(S_0)] - [\beta^*_1(S_0) + \gamma^*_0(S_0, S_0) + \alpha^*_0(S_0)]
\]
\[
= (3.47) - (2.99) = +0.48
\]
Example: Max-log-MAP
- Assume $E_s/N_0 = 1/4 = -6.02$ dB
- $R = 3/8$, so $E_b/N_0 = 2/3 = -1.76$ dB
\[
\gamma_0^*(S_0, S_0) = \frac{-1}{2} L_a(u_0) + \frac{1}{2} r_0 \cdot v_0
\]
\[
= \frac{1}{2} (-0.8 - 0.1) = -0.45
\]
\[
\alpha_1^*(S_0) = -0.45 + 0 = -0.45
\]
\[
\alpha_2^*(S_0) = \max(-0.70, +1.20) = 1.20
\]
Example: Max-log-MAP
- Assume $E_s/N_0 = 1/4 = -6.02$ dB
- $R = 3/8$, so $E_b/N_0 = 2/3 = -1.76$ dB
\[ L(u_0) = [\beta_1^*(S_1) + \gamma_0^*(S_0, S_1) + \alpha_0^*(S_0)] - [\beta_1^*(S_0) + \gamma_0^*(S_0, S_0) + \alpha_0^*(S_0)] \]
\[ = (2.79) - (2.86) = -0.07 \]
Punctured convolutional codes
- Recall that an \((n,k)\) convolutional code has a decoder trellis with \(2^k\) branches going into each state
- More complex decoding
- Solutions:
- Bit-level encoders
- Syndrome trellis decoding (Riedel)*
- **Punctured codes**
- Start with low-rate convolutional *mother* code (rate \(1/n\)?)
- Puncture (delete) some code bits according to a predetermined pattern
- Punctured bits are not transmitted. Hence, the code rate is increased, but the free distance of the code could be reduced
- Decoder inserts dummy bits with neutral metrics contribution
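As an illustration (the pattern and function names are my own, not from the slides), puncturing and the decoder-side insertion of neutral dummy metrics can be sketched as:

```python
def puncture(code_bits, pattern):
    # pattern is a flat puncturing pattern over the interleaved output
    # bits: 1 = transmit, 0 = delete; it repeats periodically
    return [b for i, b in enumerate(code_bits) if pattern[i % len(pattern)]]

def depuncture(received, pattern, total_len, neutral=0.0):
    # re-insert dummy values at punctured positions; a neutral value
    # contributes nothing to the branch metrics
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else neutral
            for i in range(total_len)]
```

With a rate-1/2 mother code and the (hypothetical) pattern `[1, 1, 1, 0]`, four mother-code bits (two information bits) yield three transmitted bits, i.e. rate 2/3.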
Example: Rate 2/3 punctured from rate 1/2
The punctured code is also a convolutional code
\( d_{\text{free}} = 3 \)
Example: Rate 3/4 punctured from rate 1/2
\[ d_{\text{free}} = 3 \]
More on punctured convolutional codes
• Rate-compatible punctured convolutional (RCPC) codes:
• Used for applications that need to support several code rates, e.g., adaptive coding or hybrid ARQ
• Sequence of codes is obtained by repeated puncturing
• Advantage: One decoder can decode all codes in the family
• Disadvantage: Resulting codes may be sub-optimum
• Puncturing patterns:
• Usually periodic puncturing patterns
• Found by computer search
• Care must be exercised to avoid catastrophic encoders
Best punctured codes
<table>
<thead>
<tr>
<th>ν</th>
<th>Mother code g(0) (octal)</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>13</td>
</tr>
<tr>
<td>4</td>
<td>31</td>
</tr>
<tr>
<td>5</td>
<td>65</td>
</tr>
<tr>
<td>6</td>
<td>155</td>
</tr>
</tbody>
</table>
Tailbiting convolutional codes
- Purpose: Avoid the terminating tail (rate loss) and maintain a uniform level of protection
- Note: Distance loss cannot be avoided completely if the length is too short. As the length grows, the minimum distance approaches the free distance of the convolutional code
- Codewords can start in any state
- This gives $2^\nu$ times as many codewords
- However, each codeword must end in the same state that it started from. This reduces the number of codewords by a factor of $2^\nu$
- Thus, the code rate is equal to the encoder rate
- Tailbiting codes are increasingly popular for moderate-length applications
- Some of the best known linear block codes are tailbiting codes
- Tables of optimum tailbiting codes are given in the book
- DVB: Turbo codes with tailbiting component codes
Feedforward encoder: Always possible to find an information vector that ends in the proper state (inspect the last $m$ $k$-bit input tuples)
Example: Feedback encoder
- Feedback encoder: Not always possible, for every length, to construct a tailbiting code
- For each u: Must find unique starting state
- \( L^* = 6 \) not OK
- \( L^* = 5 \) OK
- In general, \( L^* \) should not have the length of a zero input-weight cycle as a divisor
Decoding of tailbiting codes:
- Try all possible starting states (multiplies complexity by $2^\nu$), i.e., run the Viterbi algorithm for each of the $2^\nu$ subcodes and compare the best paths from each subcode.
- Suboptimum Viterbi: Initialize an arbitrary state at time 0 with zero metric and find the best ending state. Continue "one round" from there with the best subcode.
- MAP: Similar
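The first (optimal) strategy can be sketched as a wrapper around a per-subcode Viterbi search; `viterbi_subcode` is an assumed callable that decodes the subcode starting and ending in a given state and returns a (metric, path) pair:

```python
def decode_tailbiting(viterbi_subcode, num_states):
    # run the Viterbi algorithm once per subcode (start state == end
    # state) and keep the best survivor over all 2^nu subcodes
    best = None
    for s in range(num_states):
        metric, path = viterbi_subcode(s)
        if best is None or metric > best[0]:
            best = (metric, path)
    return best
```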
1. Status of this memo
This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
To view the list of Internet-Draft Shadow Directories, see http://www.ietf.org/shadow.html.
2. Abstract
The current RSVP-TE specification, "RSVP-TE: Extensions to RSVP for LSP Tunnels" (RFC 3209) and GMPLS extensions to RSVP-TE, "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions" (RFC 3473) allow abstract nodes and resources to be explicitly included in a path setup, but not to be explicitly excluded.
In some systems where precise explicit paths are not computed at the head end it may be useful to specify and signal abstract nodes and resources that are to be explicitly excluded from routes. These exclusions may apply to the whole of a path, or to parts of a path between two abstract nodes specified in an explicit route.
Shared Risk Link Groups (SRLGs) allow the definition of resources or groups of resources that share the same risk of failure. The knowledge of SRLGs may be used to compute diverse paths that can be used for protection. In systems where it is useful to signal exclusions, it may be useful to signal SRLGs to indicate groups of resources that should be excluded on the whole of a path or between two abstract nodes specified in an explicit path.
This document specifies ways to communicate route exclusions during path setup using RSVP-TE.
2.1 Future Work
Future work on this document may include the following.
- Addition of further examples and explanation of the applicability of route exclusion.
- Reduction of the length of the XRO and EXRS subobjects.
- Identification of the scope of relevance of exclusions so that they may be omitted from signaled messages, or at least from path computations, when they are not relevant.
- Exclusion of unnumbered links.
- Convergence of SRLG identification with formats defined in other drafts.
3. Conventions used in this document
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
4. Overview
The current RSVP-TE specification [RSVP-TE] and GMPLS extensions [GMPLS-RSVP-TE] allow abstract nodes and resources to be explicitly included in a path setup, using the Explicit Route Object (ERO).
In some systems it may be useful to specify and signal abstract nodes and resources that are to be explicitly excluded from routes. This may be because loose hops or abstract nodes in the explicit route must be prevented from being expanded into a route through a specific resource. This is a special case of distributing the path computation among the nodes of the system.
Two types of exclusions are required:
i) Do not include any of the abstract nodes in a given set anywhere on the path. This set of abstract nodes to exclude is referred to as the Exclude Route list.
ii) Do not include certain abstract nodes or resources between a specific pair of abstract nodes present in an ERO. Such specific exclusions are referred to as Explicit Exclusion Route.
To convey these constructs within the signaling protocol, a new RSVP object and a new ERO subobject are introduced respectively.
i) A new RSVP-TE object is introduced to convey the Exclude Route list. This object is the Exclude Route Object (XRO).
ii) The second type of exclusion is achieved through a modification to the existing ERO. A new subobject type, the Explicit Exclude Route Subobject (EXRS), is introduced to indicate an exclusion between a pair of included abstract nodes.
SRLGs allow the definition of resources or groups of resources that share the same risk of failure. The knowledge of SRLGs may be used to compute diverse paths that can be used for protection. In systems where it is useful to signal exclusions, it may be useful to signal SRLGs to indicate groups of resources that should be excluded on the whole of a path or between two abstract nodes specified in an explicit path.
This document introduces an ERO subobject to indicate an SRLG to be signaled in either of the two exclusion methods described above. This subobject might also be appropriate for use within Explicit Routes or Record Routes, but that discussion is outside the scope of this document.
4.1 Scope of Excluded Routes
This document does not preclude a route exclusion from listing many nodes or network elements to avoid. The intent is, however, to indicate only the minimal number of subobjects to be avoided. For instance it may be necessary to signal only the SRLGs (or Shared Risk Groups) to avoid.
It is envisaged that most of the conventional inclusion subobjects will be specified in the signaled ERO only for the area where they pertain. The number of subobjects to be avoided, specified in the signaled XRO, may be constant throughout the whole path setup, or the subobjects to be avoided may be removed from the XRO as they become irrelevant in the subsequent hops of the path setup.
For example, consider an LSP that traverses multiple computation domains. A computation domain may be an area in the administrative or IGP sense, or may be an arbitrary division of the network for active management and path computational purposes. Let the primary path be (Ingress, A1, A2, AB1, B1, B2, BC1, C1, C2, Egress) where Xn denotes a node in domain X, and XY1 denotes a node on the border of domain X and domain Y. Ingress is a node in domain A, and Egress is a node in domain C.
Consider the establishment of a node-diverse protection path. The protection path must avoid all nodes on the primary path. The exclusions for area A are handled during CSPF at Ingress, so the ERO and XRO signaled at Ingress are (A3-strict, A4-strict, AB2-strict, Egress-loose) and (B1, B2, BC1, C1, C2) respectively. At AB2 the ERO and XRO could be (B3-strict, B4-strict, BC2-strict, Egress-loose) and (C1, C2) respectively. At BC2 the ERO could be (C3-strict, C4-strict, Egress-strict) and an XRO is not needed from BC2 onwards.
In general, consideration should be given (as with explicit route) to the size of signaled data and the impact on the signaling protocol.
4.2 Relationship to MPLS TE MIB
[MPLS-TE-MIB] defines managed objects for managing and modeling MPLS-based traffic engineering. Included in [MPLS-TE-MIB] is a means to configure explicit routes for use on specific LSPs. This configuration allows the exclusion of certain resources.
In systems where the full explicit path is not computed at the ingress (or at a path computation site for use at the ingress) it may be necessary to signal those exclusions. This document offers a means of doing this signaling.
5. Shared Risk Link Groups
The identifier of a SRLG is defined as a 32 bit quantity in [GMPLS-OSPF]. These 32 bits are divided into an 8 bit type field and a 24 bit identifier in [CCAMP-SRLG].
5.1 SRLG ERO Subobject
The format of the ERO and its subobjects are defined in [RSVP-TE].
The new SRLG subobject is defined by this document as follows.
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|L|    Type     |     Length    |   Tolerance   |    Reserved   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                            SRLG Id                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
The L bit is an attribute of the subobject. The L bit is set if the subobject represents a loose hop in the explicit route. If the bit is not set, the subobject represents a strict hop in the explicit route.
For exclusions, the L bit SHOULD be set to zero on transmission and ignored on receipt.
Type
The type of the subobject [TBD].
Length
The Length contains the total length of the subobject in bytes, including the Type and Length fields. The Length is always 8.
Tolerance
The level to which it is permissible for this SRLG to be included in the path when more than one SRLG is specified. A value of zero indicates that this SRLG MUST be avoided. An SRLG with Tolerance value n MUST be avoided in preference to an SRLG with Tolerance value m whenever n < m.
If only one SRLG is present, then a value other than zero indicates the SRLG SHOULD be avoided.
SRLG Id
The 32 bit identifier of the SRLG.
5.2 Exclusion Tolerance Semantics
The Tolerance field in the SRLG subobject indicates the degree to which the SRLG must be avoided. (The degree to which it is permissible to include it.)
If the Tolerance field has the value zero (0), the LSP MUST NOT traverse or use any resource that is a member of the SRLG.
If the value is non-zero, all path computation elements SHOULD attempt to select routes that avoid all resources that are members of the SRLG.
Where more than one SRLG with non-zero Tolerance value is specified for exclusion and no route can be found that avoids both SRLGs, a route SHOULD be chosen that avoids the SRLG with the lower Tolerance value.
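A byte-level sketch of the 8-byte SRLG subobject may help; since the Type code point is TBD in this draft, `SRLG_TYPE` below is a placeholder, not an assigned value:

```python
import struct

SRLG_TYPE = 34  # placeholder: the real Type value is TBD in this draft

def pack_srlg_subobject(srlg_id, tolerance, l_bit=0):
    # Layout per the draft:
    # |L| Type(7) | Length(8) | Tolerance(8) | Reserved(8) | SRLG Id(32) |
    first_octet = ((l_bit & 0x1) << 7) | (SRLG_TYPE & 0x7F)
    return struct.pack("!BBBBI", first_octet, 8, tolerance, 0, srlg_id)
```

For exclusions the L bit stays zero, Length is always 8, and a Tolerance of 0 marks the SRLG as one that MUST be avoided.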
6. Exclude Route List
The exclude route identifies a list of abstract nodes that MUST NOT be traversed along the path of the LSP being established.
6.1 Exclude Route Object (XRO)
Abstract nodes to be excluded from the path are specified via the EXCLUDE_ROUTE object (XRO). The Exclude Route Class value is [TBD].
Currently one C_Type is defined, Type 1 Exclude Route. The EXCLUDE_ROUTE object has the following format:
Class = TBD, C_Type = 1
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
//                         (Subobjects)                        //
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
Subobjects
The contents of an EXCLUDE_ROUTE object are a series of variable-length data items called subobjects. The subobjects are identical to those defined in [RSVP-TE] and [GMPLS-RSVP-TE] for use in EROs.
The following subobject types are supported.
<table>
<thead>
<tr>
<th>Type</th>
<th>Subobject</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>IPv4 prefix</td>
</tr>
<tr>
<td>2</td>
<td>IPv6 prefix</td>
</tr>
<tr>
<td>32</td>
<td>Autonomous system number</td>
</tr>
<tr>
<td>TBD</td>
<td>SRLG</td>
</tr>
</tbody>
</table>
The defined values for Type above are specified in [RSVP-TE] and in this document.
The concept of loose or strict hops has no meaning in route exclusion. The L bit, defined for ERO subobjects in [RSVP-TE], is re-used here to indicate that an abstract node MUST be avoided (value 0) or SHOULD be avoided (value 1).
An Attribute octet is introduced in the subobjects that define IP addresses to indicate the attribute (e.g. interface, node, SRLG) associated with the IP addresses that can be excluded from the path. For instance, the attribute node allows a whole node to be excluded from the path, in contrast to the attribute interface, which allows specific interfaces to be excluded from the path. The attribute SRLG allows all SRLGs associated with an IP address to be excluded from the path.
6.1.1 Subobject 1: IPv4 prefix
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|L|    Type     |     Length    |    IPv4 address (4 bytes)     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   IPv4 address (continued)    | Prefix Length |   Attribute   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
L
0 indicates that the attribute specified MUST be excluded
1 indicates that the attribute specified SHOULD be avoided
Attribute
interface
0 indicates that the interface or set of interfaces associated with the IP address should be excluded or avoided
node
1 indicates that the node or set of nodes associated with the IP address should be excluded or avoided
SRLG
2 indicates that all the SRLGs associated with the IP address should be excluded or avoided
Resvd
Zero on transmission. Ignored on receipt.
The rest of the fields are as defined in [RSVP-TE].
6.1.2 Subobject 2: IPv6 Prefix
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|L|    Type     |     Length    |   IPv6 address (16 bytes)     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   IPv6 address (continued)                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   IPv6 address (continued)                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   IPv6 address (continued)                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|   IPv6 address (continued)    | Prefix Length |   Attribute   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
L
0 indicates that the abstract node specified MUST be excluded
1 indicates that the abstract node specified SHOULD be avoided
Attribute
interface
0 indicates that the interface or set of interfaces associated with the IP address should be excluded or avoided
node
1 indicates that the node or set of nodes associated with the IP address should be excluded or avoided
SRLG
2 indicates that all the SRLGs associated with the IP address should be excluded or avoided
Resvd
Zero on transmission. Ignored on receipt.
The rest of the fields are as defined in [RSVP-TE].
6.1.3 Subobject 32: Autonomous System Number
The L bit of an Autonomous System Number subobject has meaning in an Exclude Route (contrary to its usage in an Explicit Route as defined in [RSVP-TE]). The meaning is as for the other subobjects described above. That is:
0 indicates that the abstract node specified MUST be excluded
1 indicates that the abstract node specified SHOULD be avoided
The rest of the fields are as defined in [RSVP-TE]. There is no Attribute octet defined.
6.1.4 Subobject TBD: SRLG
The Attribute octet is not present. The rest of the fields are as defined in the "SRLG ERO Subobject" section of this document.
6.2. Semantics and Processing Rules for the Exclude Route Object (XRO)
The exclude route list is encoded as a series of subobjects contained in an EXCLUDE_ROUTE object. Each subobject identifies an abstract node in the exclude route list.
Each abstract node may be a precisely specified IP address identifying a node, an IP address with a prefix identifying the interfaces of a group of nodes, or an Autonomous System.
The Explicit Route and routing processing is unchanged from the description in [RSVP-TE] with the following additions:
a. When a Path message is received at a node, the node must check that it is not a member of any of the abstract nodes in the XRO if it is present in the Path message. If the node is a member of any of the abstract nodes in the XRO it should return a PathErr with the error code "Routing Problem" and error value of "Local node in Exclude Route". If there are SRLGs in the XRO, the node should check that it and the resources it uses are not part of any SRLG that is specified with Tolerance value of zero. If it is, it should return a PathErr with the error code "Routing Problem" and error value of "Local node in Exclude Route". The node may be a member of an SRLG in the XRO that is specified with a non-zero Tolerance value.
b. When choosing a next hop or expanding an explicit route to include additional subobjects, a node:
i) must not introduce an explicit node or an abstract node that equals or is a member of any abstract node that is specified in the Exclude Route Object.
ii) must not (or should not, in the case of a non-zero Tolerance value) introduce links, nodes or resources identified by the SRLG ID specified in the SRLG subobject(s). If these rules preclude further forwarding of the Path message, the node should return a PathErr with the error code "Routing Problem" and error value of "Route blocked by Exclude Route".
c. The subobjects in the ERO and XRO SHOULD NOT contradict each other. If they do, the subobjects with the L bit not set (strict hops in the ERO, MUST-be-excluded entries in the XRO) take precedence. If there is still a conflict, the subobjects in the ERO MUST take precedence.
The XRO Class-Num is of the form 11bbbbbb so that nodes which do not support the XRO will forward it uninspected and will not apply the extensions to ERO processing described above. This makes the XRO a 'best effort' process.
This 'best-effort' approach is chosen to allow route exclusion to traverse parts of the network that are not capable of parsing or handling the new function. Note that Record Route may be used to
allow computing nodes to observe violations of route exclusion and attempt to re-route the LSP accordingly.
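The 'best effort' behaviour follows from RSVP's standard handling of unrecognised classes ([RFC 2205]): the top bits of the Class-Num select the action. A sketch:

```python
def unknown_class_action(class_num):
    # RSVP (RFC 2205) handling of an object with an unrecognised class
    if class_num >> 7 == 0:           # 0bbbbbbb
        return "reject the whole message"
    if class_num >> 6 == 0b10:        # 10bbbbbb
        return "ignore object, do not forward"
    return "ignore object, forward unexamined"   # 11bbbbbb
```

Choosing an XRO Class-Num of the form 11bbbbbb therefore lets XRO-unaware nodes pass the object along untouched.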
7. Explicit Exclude Route
The Explicit Exclude Route defines abstract nodes or resources (such as links, unnumbered interfaces or labels) that must not be used on the path between two inclusive abstract nodes or resources in the explicit route.
7.1. Explicit Exclusion Route Subobject (EXRS)
A new ERO subobject type is defined. The Explicit Exclude Route Subobject (EXRS) has type [TBD]. The EXRS may not be present in an RRO or XRO.
The format of the EXRS is as follows.
```
 0                   1
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- - - - - - - - - -
|L|    Type     |     Length    |  EXRS subobjects
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- - - - - - - - - -
```
L
- ignored and must be zero
- [Note: The L bit in an EXRS subobject is as defined for the XRO subobjects]
Type
- The type of the subobject, i.e. EXRS [TBD]
EXRS subobjects
- An EXRS subobject indicates the abstract node or resource to be excluded. The format of this field is exactly the format of an XRO subobject and may include an SRLG subobject. Both subobjects are as described earlier in this document.
Thus, an EXRS carrying an IPv4 subobject might look as follows:
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|L|    Type     |     Length    |L|    Type     |     Length    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    IPv4 address (4 bytes)                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Prefix Length |   Attribute   |           Reserved            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
Note: The Most Significant Bit in the Type field could be used to indicate exclusion of IPv4/IPv6, AS and SRLG subobjects, eliminating the need to prepend the subobject with an additional TLV header. This would reduce the size of each subobject by 2 bytes. However, this approach would halve the ERO Type field space. This issue needs WG discussion and feedback.
7.2. Semantics and Processing Rules for the EXRS
Each EXRS may carry multiple exclusions. The exclusion is encoded exactly as for XRO subobjects and prefixed by an additional Type and Length.
The scope of the exclusion is the step between the previous ERO subobject that identifies an abstract node, and the subsequent ERO subobject that identifies an abstract node. Multiple exclusions may be present between any pair of abstract nodes.
Exclusions may indicate explicit nodes, abstract nodes or Autonomous Systems that must not be traversed on the path to the next abstract node indicated in the ERO.
Exclusions may also indicate resources (such as unnumbered interfaces, link ids, labels) that must not be used on the path to the next abstract node indicated in the ERO.
SRLGs may also be indicated for exclusion from the path to the next abstract node in the ERO by the inclusion of an EXRS containing an SRLG subobject. If the Tolerance value in the SRLG subobject is zero, the resources (nodes, links, etc.) identified by the SRLG must not be used on the path to the next abstract node indicated in the ERO. If the Tolerance value is non-zero, the resources identified by the SRLG should be avoided, but may be used in preference to resources associated with another SRLG indicated for exclusion if that SRLG has a (numerically) lower Tolerance value.
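The Tolerance ordering described above can be modelled as a simple ranking. The sketch below is illustrative only and is not part of the protocol; representing an SRLG as an (id, tolerance) pair is an assumption for the example:

```python
# Illustrative only: rank SRLGs listed for exclusion by Tolerance.
# The (srlg_id, tolerance) pair is an assumed representation,
# not a protocol structure.

def rank_excluded_srlgs(excluded):
    """Split excluded SRLGs into hard and soft exclusions.

    Tolerance 0 means the SRLG's resources must not be used at all.
    Non-zero tolerances are soft: if an exclusion must be violated,
    prefer the SRLG with the numerically higher Tolerance.
    """
    hard = [sid for sid, tol in excluded if tol == 0]
    soft = sorted(((sid, tol) for sid, tol in excluded if tol != 0),
                  key=lambda pair: pair[1], reverse=True)
    return hard, [sid for sid, _ in soft]
```

For example, with exclusions [(1, 0), (2, 5), (3, 2)], SRLG 1 is a hard exclusion, and SRLG 2 would be violated in preference to SRLG 3.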
The subobjects in the ERO and EXRS SHOULD NOT contradict each other. If they do contradict, the subobjects with the L bit not set ('strict' in the ERO, 'must be excluded' in the EXRS) MUST take precedence. If there is still a conflict, the subobjects in the ERO MUST take precedence.
If a node is called upon to process an EXRS and does not support handling of exclusions it will return a PathErr with a "Bad EXPLICIT_ROUTE object" error.
If the presence of EXRS subobjects precludes further forwarding of the Path message, the node should return a PathErr with the error code "Routing Problem" and error value of "Route blocked by Exclude Route".
8. Security
The new exclude route object poses no security exposures over and above [RSVP-TE] and [GMPLS-RSVP-TE]. Note that any security concerns that exist with Explicit Routes should be considered with regard to route exclusions.
9. IANA Considerations
9.1. New Class Numbers
One new class number is required.
EXCLUDE_ROUTE
Class-Num of the form 11bbbbbb [TBD]
CType: 1
9.2. New Subobject Types
A new subobject type for the Exclude Route Object and Explicit Exclude Route Subobject is required.
SRLG subobject
A new subobject type for the ERO is required.
Explicit Exclude Route subobject
9.3. New Error Codes
New error values are needed for the error code ‘Routing Problem’.
Unsupported Exclude Route Subobject Type [TBD]
Local Node in Exclude Route [TBD]
Route Blocked by Exclude Route [TBD]
10. Acknowledgments
This document reuses text from [RSVP-TE] for the description of EXCLUDE_ROUTE.
The authors would like to express their thanks to Igor Bryskin, Lou Berger and Dimitri Papadimitriou for their considered opinions on this draft. Also thanks to Yakov Rekhter for reminding us about SRLGs!
11. Normative References
[RF2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997
12. Informational References
13. Authors’ Information
Cheng-Yin Lee
Alcatel
600 March Road.
Ottawa, Ontario
Canada K2K 2E6
e-mail: Cheng-Yin.Lee@alcatel.com
Adrian Farrel
Movaz Networks, Inc.
7926 Jones Branch Drive, Suite 615
McLean VA, 22102 USA
Phone: +1-703-847-1867
Email: afarrel@movaz.com
Stefaan De Cnodder
Alcatel
Francis Wellesplein 1
B-2018 Antwerp, Belgium
e-mail: stefaan.de_cnodder@alcatel.be
14. Full Copyright Statement
Copyright (C) The Internet Society (2003). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.
The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns. This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
A-level
COMPUTER SCIENCE
(7517/1A/1B/1C/1D/1E)
Paper 1
Specimen 2015 am/pm Time allowed: 2 hours 30 minutes
Materials
• For this paper you must have access to:
• a computer
• a printer
• appropriate software.
• An electronic version of the Skeleton Program and Data File.
• A hard copy of the Preliminary Material.
Instructions
• Type the information required on the front of your Electronic Answer Document.
• Enter your answers into the Electronic Answer Document.
• Answer all questions.
• Before the start of the examination make sure your centre number, candidate name and candidate number are shown clearly in the footer of every page of your Electronic Answer Document (not the front cover).
• Tie together all your printed Electronic Answer Document pages and hand them to the invigilator.
Information
• The marks for questions are shown in brackets.
• The maximum mark for this paper is 100.
• No extra time is allowed for printing and collating.
• The question paper is divided into four sections.
• You are not allowed to use a calculator.
Advice
• You are advised to spend time on each section as follows:
Section A – 55 minutes
Section B – 20 minutes
Section C – 15 minutes
Section D – 60 minutes.
• Save your work at regular intervals.
The famous detective John Stout was called in to solve a perplexing murder mystery. He determined the following facts.
(a) Nathan, the murdered man, was killed by a blow on the head.
(b) Either Suzanne or Martin was in the dining room at the time of the murder.
(c) If Peter was in the kitchen at the time of the murder, then Ian killed Nathan using poison.
(d) If Suzanne was in the dining room at the time of the murder, then Steve killed Nathan.
(e) If Peter was not in the kitchen at the time of the murder, then Martin was not in the dining room when the murder was committed.
(f) If Martin was in the dining room at the time the murder was committed, then Paul killed Nathan.
(g) If Kevin was in the hall at the time of the murder, then Suzanne killed Nathan by a blow to the neck with a saucepan.
Who murdered Nathan?
A Paul
B Steve
C Suzanne
D Ian
E It is not possible for John Stout to solve the crime.
Write the letter corresponding to the correct answer in the box provided in your Electronic Answer Document.
[1 mark]
Explain how you know your answer to 01 is correct.
[2 marks]
Use the space below for rough working, then write your answer in your Electronic Answer Document.
A finite state machine (FSM) can be used to define a language: a string is allowed in a language if it is accepted by the FSM that represents the rules of the language. Figure 1 shows the state transition diagram for an FSM.
Figure 1
An FSM can be represented as a state transition diagram or as a state transition table. Table 1 is an incomplete state transition table for Figure 1.
Complete Table 1 and copy the table into the Electronic Answer Document.
Table 1
<table>
<thead>
<tr>
<th>Original state</th>
<th>Input</th>
<th>New state</th>
</tr>
</thead>
<tbody>
<tr>
<td>S3</td>
<td></td>
<td></td>
</tr>
<tr>
<td>S3</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
[1 mark]
Any language that can be defined using an FSM can also be defined using a regular expression.
The FSM in Figure 1 defines the language that allows all strings containing at least, either two consecutive 1s or two consecutive 0s.
The strings 0110, 00 and 01011 are all accepted by the FSM and so are valid strings in the language.
The strings 1010 and 01 are not accepted by the FSM and so are not valid strings in the language.
Write a regular expression that is equivalent to the FSM shown in Figure 1.
[3 marks]
Question 2 continues on the next page
Backus-Naur Form (BNF) can be used to define the rules of a language.
**Figure 2** shows an attempt to write a set of BNF production rules to define a language of full names.
**Figure 2**
Note: underscores (_) have been used to denote spaces.
Note: rule numbers have been included but are not part of the BNF rules.
**Rule number**
1 <fullname> ::= <title>_<name>_<endtitle> | <name> | <title>_<name> | <name>_<endtitle>
2 <title> ::= MRS | MS | MISS | MR | DR | SIR
3 <endtitle> ::= ESQUIRE | OBE | CBE
4 <name> ::= <word> | <name>_<word>
5 <word> ::= <char><word>
6 <char> ::= A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
BNF can be used to define languages that are not possible to define using regular expressions. The language defined in **Figure 2** could not have been defined using regular expressions.
Complete **Table 2** below by writing either a 'Y' for Yes or 'N' for No in each row.
**Table 2**
<table>
<thead>
<tr>
<th>Rule number (given in Figure 2)</th>
<th>Could be defined using a regular expression</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td></td>
</tr>
</tbody>
</table>
Copy your answer in **Table 2** into the Electronic Answer Document.
[1 mark]
There is an error in rule 5 in Figure 2 which means that no names are defined by the language.
Explain what is wrong with the production rule and rewrite the production rule so that the language does define some names – the names ‘BEN D JONES’, ‘JO GOLOMBEK’ and ‘ALULIM’ should all be defined.
[2 marks]
The Cat transportation company (CTC) is a business that specialises in preparing cats for cat shows.
They need to take five cats to the AQA cat show. They will transport the cats in their van. CTC owns only one van.
They cannot put all the cats in their van at the same time because some of the cats get stressed when in the company of some of the other cats. The cats would not therefore arrive in top condition for the cat show if they were all in the van at the same time.
The graph in Figure 3 shows the relationships between the five cats (labelled 1 to 5). If there is an edge between two cats in the graph then they cannot travel in the van together at the same time.
**Figure 3**
Explain why the graph in Figure 3 is not a tree.
[1 mark]
Represent the graph shown in Figure 3 as an adjacency list by completing Table 3.
Complete Table 3 and copy the table into the Electronic Answer Document.
[2 marks]
<table>
<thead>
<tr>
<th>Vertex (in Figure 3)</th>
<th>Adjacent vertices</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td></td>
</tr>
</tbody>
</table>
Table 4 shows how the graph in Figure 3 can be represented as an adjacency matrix.
**Table 4**
<table>
<thead>
<tr>
<th>Vertex (in Figure 3)</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
03.3 Explain the circumstances in which it is more appropriate to represent a graph using an adjacency list instead of an adjacency matrix.
[2 marks]
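As an aside, an adjacency list can be derived mechanically from an adjacency matrix. This Python sketch (the names are our own, not part of the paper) uses the matrix from Table 4, where row and column i correspond to vertex i + 1:

```python
# Adjacency matrix from Table 4 (row/column i maps to vertex i + 1).
MATRIX = [
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
]

def to_adjacency_list(matrix):
    """Map each vertex to the list of vertices adjacent to it."""
    return {i + 1: [j + 1 for j, edge in enumerate(row) if edge == 1]
            for i, row in enumerate(matrix)}
```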
**Question 3 continues on the next page**
Figure 4 shows an algorithm, written in pseudo-code, that CTC use.
Figure 4
NoOfCats ← 5
Cat[1] ← 1
FOR A ← 2 TO NoOfCats
  B ← 1
  C ← 1
  WHILE B < A DO
    IF M[A, B] = 1
      THEN
        IF Cat[B] = C
          THEN
            B ← 1
            C ← C + 1
          ELSE B ← B + 1
        ENDIF
      ELSE B ← B + 1
    ENDIF
  ENDWHILE
  Cat[A] ← C
ENDFOR
The two-dimensional array, M, is used to store the adjacency matrix shown in Table 4.
Complete **Table 5** to show the result of tracing the algorithm in **Figure 4**.
[6 marks]
Copy your answer in **Table 5** into the Electronic Answer Document.
<table>
<thead>
<tr>
<th>NoOfCats</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Explain the purpose of the algorithm in **Figure 4**.
[1 mark]
After a cat show, CTC needs to return the cats to their owners. They can have all the cats in the van at the same time because the show is now finished.
CTC likes to plan the return journey so that the shortest possible distance is travelled by the van. This is an example of an intractable problem.
What is meant by an intractable problem?
[2 marks]
What approach might a programmer take if asked to solve an intractable problem?
[2 marks]
Figure 5 shows an incomplete algorithm for a binary search.
**Figure 5**
PROCEDURE BSearch(List, F, L, ItemToFind)
Found ← False
Failed ← (1)..............................
WHILE NOT Failed AND NOT Found
M ← (F + L) DIV 2
IF List[M] = ItemToFind
THEN Found ← True
ELSE
IF F >= L
(2)........................................
ELSE
IF List[M] > ItemToFind
THEN (3).................................
ELSE F ← M + 1
ENDIF
ENDIF
ENDIF
ENDWHILE
IF Found = True
THEN OUTPUT "Item is in list"
ELSE OUTPUT "Item is not in list"
ENDPROCEDURE
The **DIV** operator calculates the whole number result of integer division. For example, 15 DIV 4 = 3 and 17 DIV 4 = 4.
04.1 What code should be added at position (1) in Figure 5? [1 mark]
04.2 What code should be added at position (2) in Figure 5? [1 mark]
04.3 What code should be added at position (3) in Figure 5? [2 marks]
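One way the completed algorithm might look when translated into Python is sketched below. The completions chosen for blanks (1) to (3) are illustrative assumptions, not the official mark scheme:

```python
def binary_search(lst, item_to_find):
    """Binary search over a sorted list, following Figure 5.

    Assumed completions: (1) Failed <- False, (2) THEN Failed <- True,
    (3) L <- M - 1.  These are illustrative, not the mark scheme.
    """
    if not lst:                      # guard not present in the pseudo-code
        return False
    first, last = 0, len(lst) - 1
    found = False
    failed = False                   # blank (1)
    while not failed and not found:
        mid = (first + last) // 2    # DIV: whole-number division
        if lst[mid] == item_to_find:
            found = True
        elif first >= last:
            failed = True            # blank (2): search space exhausted
        elif lst[mid] > item_to_find:
            last = mid - 1           # blank (3): discard the upper half
        else:
            first = mid + 1          # discard the lower half
    return found
```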
Table 6 contains a list of orders of time complexity (in no particular order).
Table 6
<table>
<thead>
<tr>
<th>Order of time complexity</th>
</tr>
</thead>
<tbody>
<tr>
<td>O(1)</td>
</tr>
<tr>
<td>O(n^2)</td>
</tr>
<tr>
<td>O(log n)</td>
</tr>
<tr>
<td>O(k^n)</td>
</tr>
<tr>
<td>O(n)</td>
</tr>
</tbody>
</table>
Which of the orders of time complexity given in Table 6:
1. could be the time complexity of an intractable problem? [1 mark]
2. is the time complexity for a binary search? [1 mark]
3. is the time complexity for getting the first item in a list? [1 mark]
4. is the time complexity for a linear-search algorithm? [1 mark]
5. Explain why a linear-search has the order of time complexity given in your answer to question 4. [2 marks]
Reverse Polish Notation is an alternative to standard infix notation for writing arithmetic expressions.
Convert the following Reverse Polish Notation expressions to their equivalent infix expressions.
05.1  3 4 *
[1 mark]
05.2  1 12 8 * *
[1 mark]
05.3  State one advantage of Reverse Polish Notation over infix notation.
[1 mark]
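The usual way to evaluate an RPN expression is with a stack, which is why no brackets or precedence rules are needed. A minimal sketch (the function name is our own):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish expression given as a token list."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            right = stack.pop()      # top of stack is the right operand
            left = stack.pop()
            stack.append(ops[tok](left, right))
        else:
            stack.append(int(tok))
    return stack.pop()
```

For example, eval_rpn(["3", "4", "*"]) evaluates the infix expression 3 × 4.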
END OF SECTION A
Section B begins on page 14
There are no questions printed on this page
Turn over for Section B
Section B
You are advised to spend no more than 20 minutes on this section.
Enter your answers to Section B in your Electronic Answer Document.
You must save this document at regular intervals.
The question in this section asks you to write program code starting from a new program/project/file.
- Save your program/project/file in its own folder/directory.
- Save your program/project/file at regular intervals.
Create a folder/directory called Question6 for your new program.
One method for converting a decimal number into binary is to repeatedly divide by 2 using integer division. After each division is completed, the remainder is output and the integer result of the division is used as the input to the next iteration of the division process. The process repeats until the result of the division is 0.
Outputting the remainders in the sequence that they are calculated produces the binary digits of the equivalent binary number, but in reverse order.
For example, the decimal number 210 could be converted into binary as shown in Figure 7.
**Figure 7**
210 ÷ 2 = 105 remainder 0
105 ÷ 2 = 52 remainder 1
52 ÷ 2 = 26 remainder 0
26 ÷ 2 = 13 remainder 0
13 ÷ 2 = 6 remainder 1
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
The sequence 0, 1, 0, 0, 1, 0, 1, 1 which would be output by this process is the reverse of the binary equivalent of 210 which is 11010010.
What you need to do
Task 1
Write a program that will perform the conversion process described above. The program should display a suitable prompt asking the user to input a decimal number to convert and then output the bits of the binary equivalent of the decimal number in reverse order.
Task 2
Improve the program so that the bits are output in the correct order, e.g. for 210 the output would be 11010010.
Task 3
Test the program works by entering the value 210.
Save the program in your new Question6 folder/directory.
Evidence that you need to provide
Include the following in your Electronic answer document.
Your PROGRAM SOURCE CODE after you have completed both Task 1 and Task 2.
If you complete Task 1 but do not attempt Task 2 then a maximum of 9 marks will be awarded.
SCREEN CAPTURE(S) for the test showing the output of the program when 210 is entered.
The marks for this test will be awarded whether the binary digits are output in reverse order or in the correct order.
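For reference, the repeated-division method described above can be sketched in Python. This is one possible solution outline, not a model answer:

```python
def to_binary(n):
    """Convert a positive integer to a binary string by repeated
    integer division by 2, as in Figure 7."""
    bits_reversed = []
    while n > 0:
        bits_reversed.append(n % 2)  # remainder is the next bit (Task 1)
        n = n // 2                   # integer result feeds the next step
    # Reversing the remainders gives the conventional order (Task 2).
    return "".join(str(bit) for bit in reversed(bits_reversed))
```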
Section C
You are advised to spend no more than 15 minutes on this section.
Type your answers to Section C in your Electronic Answer Document.
You must save this document at regular intervals.
These questions refer to the Preliminary Material and require you to load the Skeleton Program, but do not require any additional programming.
Refer either to the Preliminary Material issued with this question paper or your electronic copy.
The class diagram in Figure 8 is an attempt to represent the relationships between some of the classes in the MONSTER! Game.
Figure 8
- Trap
- Triggered : Boolean
+GetTriggered()
+ToggleTrap()
- Monster
- Awake : Boolean
+MakeMove(PlayerPosition)
+GetAwake()
+ChangeSleepStatus()
- Character
+MakeMove(Direction)
07.1 Explain what errors have been made in Figure 8. [2 marks]
07.2 Give an example of instantiation from the Skeleton Program. [1 mark]
07.3 State the name of an identifier for an array variable. [1 mark]
07.4 State the name of an identifier for a subclass. [1 mark]
State the name of an identifier for a variable that is used to store a whole number.
[1 mark]
State the name of an identifier for a class that uses composition.
[1 mark]
Look at the `GetNewRandomPosition` subroutine in the `Game` class in the `Skeleton Program`.
[1 mark]
Explain why the generation of a random position needs to be inside a repetition structure.
[1 mark]
Look at the `Game` class in the `Skeleton Program`.
Why has a named constant been used instead of a numeric value?
[2 marks]
Describe the changes that would need to be made to the `Game` class to add a third trap to the cavern. The third trap should have exactly the same functionality as the other two traps. You do not need to describe the changes that would need to be made to the `SetUpGame` subroutine.
[2 marks]
END OF SECTION C
This question refers to the subroutines CheckValidMove and Play in the Game class.
The Skeleton Program currently does not make all the checks needed to ensure that the move entered by a player is an allowed move. It should not be possible to make a move that takes a player outside the 7 × 5 cavern grid.
The Skeleton Program needs to be adapted so that it prevents a player from moving west if they are at the western end of the cavern.
The subroutine CheckValidMove needs to be adapted so that it returns a value of FALSE if a player attempts to move west when they are at the western end of the cavern.
The subroutine Play needs to be adapted so that it displays an error message to the user if an illegal move is entered. The message should state "That is not a valid move, please try again".
Evidence that you need to provide
Include the following in your Electronic Answer Document.
- Your amended PROGRAM SOURCE CODE for the subroutine CheckValidMove.
[3 marks]
- Your amended PROGRAM SOURCE CODE for the subroutine Play.
[2 marks]
- SCREEN CAPTURE(S) for a test run showing a player trying to move west when they are at the western end of the cave.
[1 mark]
This question will extend the functionality of the game.
The game is to be altered so that there is a new type of enemy: a sleepy enemy. A sleepy enemy is exactly the same as a normal enemy, except that after making four moves it falls asleep again.
**Task 1**
Create a new class called `SleepyEnemy` that inherits from the `Enemy` class.
**Task 2**
Create a new integer attribute in the `SleepyEnemy` class called `MovesTillSleep`.
**Task 3**
Create a new public subroutine in the `SleepyEnemy` class called `ChangeSleepStatus`. This subroutine should override the `ChangeSleepStatus` subroutine from the `Enemy` class. The value of `MovesTillSleep` should be set to 4 in this subroutine.
**Task 4**
Create a new public subroutine in the `SleepyEnemy` class called `MakeMove`. This subroutine should override the `MakeMove` subroutine from the `Enemy` class. When called this subroutine should reduce the value of `MovesTillSleep` by 1 and then send the monster to sleep if `MovesTillSleep` has become equal to 0.
**Task 5**
Modify the `Game` class so that the Monster object is of type `SleepyEnemy` (instead of `Enemy`).
**Task 6**
Check that the changes you have made work by conducting the following test:
- play the training game
- move east
- move east
- move south.
---
**Evidence that you need to provide**
Include the following in your Electronic Answer Document.
1. Your PROGRAM SOURCE CODE for the new `SleepyEnemy` class. [8 marks]
2. SCREEN CAPTURE(S) showing the requested test. [2 marks]
This question refers to the Game and Character classes and will extend the functionality of the game.
The game should be altered so that once per game the player can shoot an arrow instead of making a move in the cavern. The arrow travels in a straight line, in a direction of the player’s choice, from the cell the player is in to the edge of the cavern. If the arrow hits the monster then the player wins the game and a message saying that they have shot the monster should be displayed.
For this question you are only required to extend the program so that it checks if the monster is hit by the arrow when the user chooses to shoot an arrow northwards. However, the user should be able to select any of the four possible directions.
In Figure 9, the two shaded cells show the cells which, if the monster is in one of them, would result in the player winning the game, as long as the player is in the cell five to the east and three to the south and chooses to shoot an arrow northwards.
**Figure 9**
[Figure 9: the cavern grid, with the player's cell marked and the two cells directly north of it shaded.]
**Task 1**
Modify the DisplayMoveOptions subroutine in the Game class so that the option to enter A to shoot an arrow is added to the menu.
**Task 2**
Create a new Boolean attribute called HasArrow in the Character class.
The value of HasArrow should be set to True when a new object of class Character is instantiated.
**Task 3**
Create a new public subroutine called GetHasArrow in the Character class that returns the value of the HasArrow attribute to the calling routine.
**Task 4**
Modify the CheckValidMove subroutine in the Game class so that:
- it is a valid move if A is selected and the player does have an arrow
- it is not a valid move if A is selected and the player does not have an arrow.
Task 5
Create a new public subroutine called `GetArrowDirection` in the `Character` class.
This subroutine should return a character to the calling routine.
The user should be asked in which direction they would like to shoot an arrow (N, S, E or W) and the value entered by the user should be returned to the calling routine.
If an invalid direction is entered then the user should be repeatedly asked to enter a new direction, until a valid direction is entered.
The value of `HasArrow` should then be changed to FALSE.
Task 6
Modify the `Play` subroutine in the `Game` class so that if the move chosen by the user is not M it then checks if the move chosen is A.
If the move chosen was A, then there should be a call to the player's `GetArrowDirection` subroutine. If the user chooses a direction of N then the program should check to see if the monster is in one of the squares directly north of the player's current position. If it is then a message saying "You have shot the monster and it cannot stop you finding the flask" should be displayed. The value of `FlaskFound` should then be set to TRUE.
After the arrow has been shot, if the monster is still alive and awake, it is now the monster's turn to move, the player should remain in the same cell as they were in before the arrow was shot.
There is no need to write any code that checks if the monster has been shot when the player chooses to shoot either to the east, to the west or to the south.
**Task 7: test 1**
Test that the changes you have made work by conducting the following test:
- play the training game
- shoot an arrow
- choose a direction of N for the arrow.
**Task 8: test 2**
Test that the changes you have made work by conducting the following test:
- play the training game
- move east
- shoot an arrow
- choose a direction of N for the arrow
- shoot an arrow.
**Evidence that you need to provide**
Include the following in your Electronic Answer Document.
1. Your amended **PROGRAM SOURCE CODE** for the subroutine `DisplayMoveOptions`.
**[1 mark]**
2. Your amended **PROGRAM SOURCE CODE** for the subroutine `CheckValidMove`.
**[2 marks]**
3. Your amended **PROGRAM SOURCE CODE** for the class `Character`.
**[8 marks]**
4. Your amended **PROGRAM SOURCE CODE** for the subroutine `Play`.
**[6 marks]**
5. SCREEN CAPTURE(S) showing the results of **Test 1**.
**[1 mark]**
6. SCREEN CAPTURE(S) showing the results of **Test 2**.
**[1 mark]**
**END OF QUESTIONS**
Acknowledgement of copyright holders and publishers
Permission to reproduce all copyright material has been applied for. In some cases, efforts to contact copyright holders have been unsuccessful and AQA will be happy to rectify any omissions of acknowledgements in future papers if notified.
Copyright © 2014 AQA and its licensors. All rights reserved.
This specification defines semantics for using the XMPP publish-subscribe protocol to broadcast state change events associated with an instant messaging and presence account. This profile of pubsub therefore enables a standard XMPP user account to function as a virtual pubsub service, easing the discovery of syndicated data and event notifications associated with such an account.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
1 Introduction
1.1 Motivation
Personal eventing provides a way for a Jabber/XMPP user to send updates or “events” to other users, who are typically contacts in the user’s roster. An event can be anything that a user wants to make known to other people, such as those described in User Geolocation (XEP-0080) ¹, User Mood (XEP-0107) ², User Activity (XEP-0108) ³, and User Tune (XEP-0118) ⁴. While the XMPP Publish-Subscribe (XEP-0060) ⁵ extension (“pubsub”) can be used to broadcast such events, the full pubsub protocol is often thought of as complicated and therefore has not been widely implemented. ⁶ To make publish-subscribe functionality more accessible (especially to instant messaging and presence applications that conform to XMPP IM ⁷), this document defines a simplified subset of pubsub that can be followed by instant messaging client and server developers to more easily deploy personal eventing services across the Jabber/XMPP network. We label this subset “Personal Eventing Protocol” or PEP.
Note: Any use cases not described herein are described in XEP-0060. Also, this document does not show error flows related to the generic publish-subscribe use cases referenced herein, since they are exhaustively defined in XEP-0060. The reader is referred to XEP-0060 for all relevant protocol details related to the XMPP publish-subscribe extension. This document merely defines a “subset” or “profile” of XMPP publish-subscribe.
1.2 How It Works
This section provides a friendly introduction to personal eventing via pubsub (PEP). Imagine that you are a Shakespearean character named Juliet and that you want to generate events about what music you’re listening to, which anyone may see as long as they are authorized to see your online/offline presence (i.e., a pubsub access model of “presence”). We assume that you have three contacts with the following relationship to you:
1. benvolio@montague.lit, who has no subscription to your presence
2. nurse@capulet.lit, who has a bidirectional subscription to your presence and who is in your "Servants" roster group
3. romeo@montague.lit, who has a bidirectional subscription to your presence and who is in your "Friends" roster group
⁶Instead, many "extended presence" formats are currently sent using the <presence/> stanza type; unfortunately, this overloads presence, results in unnecessary presence traffic, and does not provide fine-grained control over access. The use of publish-subscribe rather than presence is therefore preferable.
We also assume that your server (capulet.lit) supports PEP and that your client discovered that support when you logged in.
Now you start playing a song on your music playing software. Your client captures that "event" and publishes it to your server:
**Listing 1: Publishing an event**
```xml
<iq from='juliet@capulet.lit/balcony' type='set' id='pub1'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love's Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</publish>
</pubsub>
</iq>
```
Note the following about your publish request:
1. It is sent with no 'to' address (see Every Account a Pubsub Service).
2. It specifies a node of "http://jabber.org/protocol/tune" (see One Node per Namespace).
If all goes well (see Publishing Events), everyone who is interested in what you are listening to will receive notification of the event:
**Listing 2: Interested parties receive event notifications**
```xml
<message from='juliet@capulet.lit'
to='romeo@montague.lit/orchard'
type='headline'
id='tunefoo1'>
<event xmlns='http://jabber.org/protocol/pubsub#event'>
<items node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love's Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</items>
</event>
</message>
```
Because PEP services must send notifications to the account owner, you too receive the notification at each of your resources (here "balcony" and "chamber").
Listing 3: Publisher receives event notification
But how do Romeo and the Nurse tell your server that they are interested in knowing what you’re listening to? In generic pubsub they typically need to explicitly subscribe to your “http://jabber.org/protocol/tune” node. But PEP services support two special features:
1. “auto-subscribe” -- because they are subscribed to your presence, they automatically receive your events (see Use Presence).
2. “filtered-notification” -- they can include some special flags in their Entity Capabilities (XEP-0115) information to specify which event types (payloads) they want to receive (see Filtered Notifications).
Listing 4: Romeo sends presence with caps
```xml
<presence from='romeo@montague.lit/orchard'>
<c xmlns='http://jabber.org/protocol/caps'
hash='sha-1'
node='http://www.chatopus.com'
ver='zHyE0gxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
Note: Explicit subscription may still be necessary for open access model nodes in PEP if another user does not send you presence, such as benvolio@montague.lit in our scenario.
Your server knows to send tune information to Romeo because when the server unpacks the value of the 'ver' attribute ("zHyE0gxTrkpSdGcQKH8EFPLsriY=") in accordance with XEP-0115, it discovers that Romeo’s client advertises a service discovery feature of "http://jabber.org/protocol/tune+notify", where the "+notify" suffix indicates interest in receiving notifications of the node whose NodeID precedes the suffix (see XEP-0060 § 9.2). The server can verify this support if needed by sending a service discovery request to Romeo’s full JID, where the response would be as follows:
Listing 5: Disco#info result from extension
```xml
<iq from='romeo@montague.lit/orchard'
to='juliet@capulet.lit'
type='result'
id='disco123'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='client' name='Exodus_0.9.1' type='pc'/>
<feature var='http://jabber.org/protocol/disco#info'/>
<feature var='http://jabber.org/protocol/disco#items'/>
<feature var='http://jabber.org/protocol/geoloc'/>
<feature var='http://jabber.org/protocol/geoloc+notify'/>
<feature var='http://jabber.org/protocol/tune'/>
<feature var='http://jabber.org/protocol/tune+notify'/>
</query>
</iq>
```
Naturally your server doesn’t need to send out a disco#info request every time, since it will quickly create a large cache of 'ver' values.
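The "+notify" convention just described amounts to a simple string transform over a client's advertised disco#info features. A minimal illustrative sketch (Python, not part of the XEP):

```python
# Sketch: derive the set of NodeIDs a client wants notifications for,
# per the "+notify" suffix rule of XEP-0060 section 9.2.
def interested_nodes(features):
    """Return NodeIDs for every advertised feature ending in '+notify',
    with the suffix stripped."""
    suffix = "+notify"
    return {f[: -len(suffix)] for f in features if f.endswith(suffix)}

# Features as advertised in Listing 5:
features = [
    "http://jabber.org/protocol/disco#info",
    "http://jabber.org/protocol/geoloc",
    "http://jabber.org/protocol/geoloc+notify",
    "http://jabber.org/protocol/tune",
    "http://jabber.org/protocol/tune+notify",
]
```

Running `interested_nodes(features)` on Listing 5's feature list yields the geoloc and tune NodeIDs, which is exactly what the server uses when filtering notifications.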
So that’s the general idea.
## 2 Concepts and Approach
Personal eventing via pubsub ("PEP") is based on the following principles:
1. Every account a pubsub service.
2. One publisher per node.
3. Use presence.
4. Filter notifications based on expressed interest.
5. Smart defaults.
These principles are described more fully below.
2.1 Every Account a Pubsub Service
When a user creates an account (or has an account provisioned) at a Jabber/XMPP server that supports PEP, the server associates a virtual pubsub service with the account. This greatly simplifies the task of discovering the account owner’s personal pubsub nodes, since the root pubsub node simply is the account owner’s bare JID (<localpart@domain.tld> or <domain.tld>). This assumption also simplifies publishing and subscribing.
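Because the root pubsub node simply is the bare JID, addressing the virtual service is nothing more than stripping any resource. A trivial illustrative helper (not from the XEP):

```python
# Sketch: the PEP service address for an account is the bare JID,
# i.e. a full JID with its resource part removed.
def pep_service_jid(jid):
    """Return the bare JID (localpart@domain.tld or domain.tld)."""
    return jid.split("/", 1)[0]
```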
2.2 One Publisher Per Node
There is no need for multiple publishers to a PEP service, since by definition the service generates information associated with only one entity. The owner-publisher for every node is the bare JID of the account owner.
2.3 Use Presence
Although generic publish-subscribe services do not necessarily have access to presence information about subscribers, PEP services are integrated with presence in the following ways:
- Each messaging and presence account simply is a virtual publish-subscribe service.
- The default access model is "presence".
- A contact’s subscription to an account owner’s personal eventing data is automatically created because the contact has an XMPP presence subscription (the "auto-subscribe" feature).
- Services take account of subscriber presence in the generation of notifications. ¹⁰
- A service automatically sends notifications to all of the account owner’s connected resources (subject to notification filtering).
These uses of presence simplify the task of developing compliant clients (cf. XMPP Design Guidelines (XEP-0134) ¹¹).
Note: It is strongly NOT RECOMMENDED to use directed presence with Entity Capabilities data that differs from the data included in broadcast presence for the purpose of establishing implicit PEP subscriptions to another entity, because the directed presence information will be overwritten by any subsequent presence broadcast.
¹⁰This works only if the subscription state is "both" (see RFC 3921).
2.4 Filtered Notifications
By default, the existence of an XMPP presence subscription is used to establish a PEP subscription to the account owner’s personal eventing data. In order to filter which notifications are sent by the PEP service, the contact’s client includes extended Entity Capabilities (XEP-0115) information in the presence notifications it sends to the account owner. Because the PEP service supports the "filtered-notifications" feature, it sends only those notifications that match the contact’s expressed notification preferences.
2.5 Smart Defaults
Most pubsub configuration options and metadata are not needed for personal eventing. Instead, PEP services offer smart defaults to simplify node creation and management.
3 Publishing Events
An account owner publishes an item to a node by following the protocol specified in XEP-0060:
Listing 6: Account owner publishes item
```xml
<iq from='juliet@capulet.lit/balcony' type='set' id='pub1'>
<pubsub xmlns='http://jabber.org/protocol/pubsub'>
<publish node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love’s Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</publish>
</pubsub>
</iq>
```
If the node does not already exist, the PEP service MUST create the node. This “auto-create” feature (defined in XEP-0060) MUST be supported by a PEP service. (Naturally, the account owner’s client MAY follow the node creation use case specified in XEP-0060 before attempting to publish an item.)
A PEP service SHOULD also support the "publish-options" feature defined in XEP-0060. If the publication logic dictates that event notifications shall be sent, the account owner’s
server generates notifications and sends them to all appropriate entities as described in the Receiving Event Notifications section of this document, as well as to any of the account owner’s available resources.
Note: PEP ties the receipt of PEP notifications to the subscriber’s presence, but does not tie the generation of PEP notifications to the publisher’s presence. If the publisher wishes to stop generating PEP events (or to generate an “empty” event as can be done for some PEP payloads) before ending its presence session, the publisher MUST direct its client to do so and MUST NOT depend on the PEP service to automatically “zero out” its PEP information when the PEP service receives unavailable presence from the publisher.
4 Receiving Event Notifications
An entity shall receive event notifications if:
1. The node has an open access model and the entity has explicitly or implicitly subscribed to the node as explained in XEP-0060.
2. The entity shares presence with the account owner (see Presence Sharing), is authorized to receive events from the node in accordance with the node access model (see XEP-0060), and advertises an interest in the payload type (see Notification Filtering).
3. The entity is the account owner itself, in which case the PEP service shall send notifications to all of the account owner’s available resources (subject to notification filtering).
4.1 Automatic Subscriptions
A PEP service MUST support the “auto-subscribe” feature defined in Section 9.1 of XEP-0060. This implies that when a user has an XMPP presence subscription to the account owner’s presence, the user automatically also has the right to subscribe to any of the account owner’s PEP nodes (if the access model is the default of “presence”) and to retrieve items from such PEP nodes.
4.2 Notification Filtering
A PEP service MUST support the “filtered-notifications” feature defined in Section 9.2 of XEP-0060. This implies that an automatic subscriber can specify which event payloads it wants to receive by including appropriate feature bundles in the XEP-0115 information it broadcasts.
4.3 Generating Notifications
4.3.1 Addressing
1. The server MUST set the 'from' address on the notification to the bare JID (<localpart@domain.tld> or <domain.tld>) of the account owner (in these examples, "juliet@capulet.lit").
2. Any errors generated by the recipient or the recipient’s server in relation to the notification MUST be directed to the JID of the 'from' address on the notification (i.e., the bare JID) so that bounce processing can be handled by the PEP service rather than by the publishing client.
3. When sending notifications to an entity that has a presence subscription to the account owner, the server SHOULD include an Extended Stanza Addressing (XEP-0033) "replyto" extension specifying the publishing resource (in this example, "juliet@capulet.lit/balcony"); this enables the subscriber’s client to differentiate between information received from each of the account owner’s resources (for example, different resources may be in different places and therefore may need to specify distinct geolocation data). However, a server MUST NOT include the “replyto” address when sending a notification to an entity that does not have a presence subscription to the account owner.
4. If the PEP service has presence information about the intended recipient, it SHOULD direct the notification(s) to the full JID(s) (<localpart@domain.tld/resource> or <domain.tld/resource>) of the recipient; if the PEP service does not have presence information about a subscriber, it MUST address the notification to the subscriber’s bare JID (<localpart@domain.tld> or <domain.tld>).
4.3.2 Number of Notifications
1. If an entity subscribed using a full JID (<localpart@domain.tld/resource> or <domain.tld/resource>) or a bare domain identifier <domain.tld>, a PEP service MUST send one notification only, addressed to the subscribed JID.
2. If a subscriber subscribed using a bare JID <localpart@domain.tld> and a PEP service does not have appropriate presence information about the subscriber, a PEP service MUST send at most one notification, addressed to the bare JID <localpart@domain.tld> of the subscriber, and MAY choose not to send any notification. (By "appropriate presence information" is meant an available presence stanza with XEP-0115 data that indicates interest in the relevant data format.)
3. If a subscriber subscribed using a bare JID <localpart@domain.tld> and a PEP service has appropriate presence information about the subscriber, the PEP service MUST send one notification to the full JID (<localpart@domain.tld/resource> or <domain.tld/resource>) of each of the subscriber’s available resources that have included XEP-0115 information indicating an interest in the data format.
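The three fan-out rules above can be sketched as a single decision function. This is an illustrative Python sketch under simplifying assumptions, not normative logic; `resources_with_interest` is a hypothetical stand-in for the presence-plus-caps bookkeeping described in sections 4.1 and 4.2:

```python
# Sketch of the section 4.3.2 fan-out rules (illustrative, not normative).
def notification_targets(subscribed_jid, resources_with_interest):
    """Return the list of 'to' addresses for one published event.

    - full-JID or bare-domain subscription: exactly one notification;
    - bare-JID subscription, no presence info: at most one, to the bare JID;
    - bare-JID subscription with presence info: one per interested resource.
    """
    if "/" in subscribed_jid or "@" not in subscribed_jid:
        return [subscribed_jid]
    if not resources_with_interest:
        return [subscribed_jid]
    return [f"{subscribed_jid}/{r}" for r in resources_with_interest]
```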
4.3.3 When to Generate Notifications
1. When an account owner publishes an item to a node, a PEP service MUST generate a notification and send it to all appropriate subscribers (where the number of notifications is determined by the foregoing rules).
2. When a PEP service receives initial presence from a subscriber’s resource including XEP-0115 information that indicates an interest in the data format, it MUST generate a notification containing at least the last published item for that node and send it to the newly-available resource; see below under Sending the Last Published Item.
3. As an exception to the foregoing MUST rules, a PEP service MUST NOT send notifications to a subscriber if the user has blocked the subscriber from receiving the kind of stanza used for notifications (typically message stanzas) by means of communications blocking as specified in Privacy Lists (XEP-0016) or Blocking Command (XEP-0191).
4.3.4 Sending the Last Published Item
As mentioned, a PEP service MUST send the last published item to all new subscribers and to all newly-available resources for each subscriber, including the account owner itself. (That is, the default value of the “pubsub#send_last_published_item” node configuration field must be “on_sub_and_presence”; this behavior essentially mimics the functionality of presence as defined in XMPP IM.) When processing a new subscription, the service MAY send not only the last published item but instead all items that are currently associated with the node (i.e., up to the maximum number of items at the node, which might be one if the node is a “singleton node” as described in XEP-0060). If the service has knowledge about the datetime that a subscriber’s newly-available resource last received updated information from the node (e.g.,
as described in Last Activity in Presence (XEP-0256)), then it MAY also send more items than only the last published item to the newly-available resource.
¹⁴By “initial presence” is meant a presence stanza with no ‘type’ attribute that the PEP service receives after the subscriber was previously unavailable; any subsequent presence stanza with no ‘type’ attribute that the PEP service receives after the initial presence notification but before the subscriber again goes offline MUST NOT trigger sending of a new pubsub notification.
Note: The "on_sub_and_presence" setting relates to the subscriber’s presence, not the publisher’s presence.
Listing 7: Subscriber sends presence from newly-available resource
```xml
<presence from='romeo@montague.lit/orchard'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://www.chatopus.com' ver='zHyE0gxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
Listing 8: Subscriber’s server sends presence from newly-available resource to publisher’s bare JID (i.e., PEP service)
```xml
<presence from='romeo@montague.lit/orchard' to='juliet@capulet.lit'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://www.chatopus.com' ver='zHyE0gxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
Listing 9: PEP service sends last published item to newly-available resource
```xml
<message from='juliet@capulet.lit' to='romeo@montague.lit/orchard' type='headline' id='foo'>
<event xmlns='http://jabber.org/protocol/pubsub#event'>
<items node='http://jabber.org/protocol/tune'>
<item>
<tune xmlns='http://jabber.org/protocol/tune'>
<artist>Gerald Finzi</artist>
<length>255</length>
<source>Music for "Love’s Labors Lost" (Suite for small orchestra)</source>
<title>Introduction (Allegro vigoroso)</title>
<track>1</track>
</tune>
</item>
</items>
</event>
<delay xmlns='urn:xmpp:delay' stamp='2003-12-13T23:58:37Z'/>
</message>
```
5 Recommended Defaults
A PEP service MUST:
- Support the node discovery, node creation, node deletion, publish item, subscribe, unsubscribe, and item retrieval use cases specified in XEP-0060.
- Support the "auto-create", "auto-subscribe", and "filtered-notifications" features.
- Support the "owner" and "subscriber" affiliations.
- Support the "presence" access model and set it to the default.
- Support the "open", "roster", and "whitelist" access models.
- Treat the account owner’s bare JID (<localpart@domain.tld> or <domain.tld>) as a collection node (i.e., as the root collection node for the account’s virtual pubsub service).
- Default the 'deliver_notifications' configuration option to true (i.e., deliver payloads by default).
- Default the 'send_last_published_item' configuration option to on_sub_and_presence (i.e., send the last published item on subscription and on receipt of presence).
A PEP service MAY support other use cases, affiliations, access models, and features, but such support is OPTIONAL.
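Assuming a node-configuration map keyed by XEP-0060 field names, the defaults required above might be captured as follows (an illustrative sketch only; the constant name is an assumption):

```python
# Sketch: the default node configuration a PEP service applies, using the
# configuration field names from XEP-0060 and the values from section 5.
PEP_NODE_DEFAULTS = {
    "pubsub#access_model": "presence",
    "pubsub#deliver_notifications": True,
    "pubsub#send_last_published_item": "on_sub_and_presence",
}
```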
6 Determining Support
6.1 Account Owner Service Discovery
Naturally, before an account owner attempts to complete any PEP use cases, its client SHOULD determine whether the account owner’s server supports PEP; to do so, it MUST send a Service Discovery (XEP-0030) information request to its own bare JID:
Listing 10: Account owner queries server regarding protocol support
```xml
<iq from='juliet@capulet.lit/balcony'
to='juliet@capulet.lit'
id='disco1'
type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Note: Because subscriptions are implicit in PEP rather than explicit as in generic pubsub, the on_sub_and_presence setting effectively means sending on presence.
If the account owner’s server supports PEP and the account is provisioned for PEP, the server MUST return an identity of “pubsub/pep” on behalf of the account (as well as a list of the namespaces and other features it supports, including all supported XEP-0060 features):
Listing 11: Server communicates protocol support
```xml
<iq from='juliet@capulet.lit'
to='juliet@capulet.lit/balcony'
id='disco1'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>
<identity category='account' type='registered'/>
<identity category='pubsub' type='pep'/>
<feature var='http://jabber.org/protocol/pubsub#access-presence'/>
<feature var='http://jabber.org/protocol/pubsub#auto-create'/>
<feature var='http://jabber.org/protocol/pubsub#auto-subscribe'/>
<feature var='http://jabber.org/protocol/pubsub#config-node'/>
<feature var='http://jabber.org/protocol/pubsub#create-and-configure'/>
<feature var='http://jabber.org/protocol/pubsub#create-nodes'/>
<feature var='http://jabber.org/protocol/pubsub#filtered-notifications'/>
<feature var='http://jabber.org/protocol/pubsub#persistent-items'/>
<feature var='http://jabber.org/protocol/pubsub#publish'/>
<feature var='http://jabber.org/protocol/pubsub#retrieve-items'/>
<feature var='http://jabber.org/protocol/pubsub#subscribe'/>
...
</query>
</iq>
```
6.2 Contact Service Discovery
A contact MAY send service discovery requests to the account owner’s bare JID (<localpart@domain.tld> or <domain.tld>). If the contact already has a subscription to the account owner’s presence, this is not necessary in order to receive notifications from the account owner via personal eventing. However, a user without a presence subscription needs to do so in order to discover if the account owner is a virtual pubsub service and to discover the account owner’s eventing nodes. The relevant protocol flows are demonstrated in XEP-0060.
Note: When returning disco#items results, the account owner’s server MUST check the access model for each of the account owner’s PEP nodes and MUST return as service discovery items only those nodes to which the contact is allowed to subscribe or from which the contact is allowed to retrieve items without first subscribing.
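A minimal sketch of that disco#items filtering rule, assuming hypothetical data structures for the service's access bookkeeping (the access-model names are those of XEP-0060):

```python
# Sketch: which of the owner's PEP nodes to list in a disco#items reply
# for a given contact. All data structures here are illustrative
# assumptions, not a real server's internals.
def visible_nodes(nodes, contact, presence_subscribers, roster_members, whitelists):
    """nodes maps NodeID -> access model; return only nodes the contact
    may subscribe to or retrieve items from."""
    visible = []
    for node, access in nodes.items():
        if access == "open":
            allowed = True
        elif access == "presence":
            allowed = contact in presence_subscribers
        elif access == "roster":
            allowed = contact in roster_members.get(node, set())
        elif access == "whitelist":
            allowed = contact in whitelists.get(node, set())
        else:
            allowed = False
        if allowed:
            visible.append(node)
    return visible
```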
7 Implementation Notes
7.1 Cancelling Subscriptions
In order to ensure appropriate access to information published at nodes of type "presence" and "roster", a PEP service MUST re-calculate access controls when:
1. A presence subscription state changes (e.g., when a subscription request is approved).
2. A roster item is modified (e.g., when the item is moved to a new roster group).
If the modification results in a loss of access, the service MUST cancel the entity’s subscription. In addition, the service MAY send a message to the (former) subscriber informing it of the cancellation (for information about the format of messages sent to notify subscribers of subscription cancellation, see the "Notification of Subscription Denial or Cancellation" section of XEP-0060).
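As a hedged illustration (the JIDs and node here are hypothetical), the optional cancellation message would follow the format defined in XEP-0060:

```xml
<message from='juliet@capulet.lit' to='mercutio@capulet.lit'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <!-- subscription state 'none' informs the former subscriber
         that the subscription has been cancelled -->
    <subscription node='http://jabber.org/protocol/geoloc'
                  jid='mercutio@capulet.lit'
                  subscription='none'/>
  </pubsub>
</message>
```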
7.2 One Node Per Namespace
An earlier version of this document specified that there could be only one publish-subscribe node associated with any given payload type (XML namespace) for the account owner (e.g., there could be only one pubsub node for geolocation events, one node for tune events, and one node for mood events, etc.). However, this rule is now considered overly restrictive because some data formats can be used to encapsulate many different kinds of information; the usual example is Atom as defined in RFC 4287, for which many extensions exist. Therefore, this document now does not specify that there is a one-to-one relationship between NodeIDs and payload namespaces.
A specification that defines a given payload format for use in PEP MUST specify whether there shall be only one node per namespace, or whether multiple NodeIDs for the same namespace are allowable.
8 Security Considerations
A PEP service MAY enforce additional privacy and security policies when determining whether an entity is allowed to subscribe to a node or retrieve items from a node; however, any such policies shall be considered specific to an implementation or deployment and are out of scope for this document.
9 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA) [21].
10 XMPP Registrar Considerations
10.1 Service Discovery Category/Type
The XMPP Registrar [22] includes a category of "pubsub" in its registry of Service Discovery identities (see <https://xmpp.org/registrar/disco-categories.html>); as a result of this document, the Registrar adds a type of "pep" to that category. The registry submission is as follows:
```xml
<category>
<name>pubsub</name>
<type>
<name>pep</name>
<desc>
A personal eventing service that supports the publish-subscribe subset defined in XEP-0163.
</desc>
<doc>XEP-0163</doc>
</type>
</category>
```
11 XML Schema
Because PEP simply reuses the protocol specified in XEP-0060, a separate schema is not needed.
12 Acknowledgements
The authors wish to thank the participants in the XMPP Interoperability Testing Event held July 24 and 25, 2006, who provided valuable feedback that resulted in radical simplification of the protocol.
Thanks also to the many members of the standards@xmpp.org discussion list who patiently suffered through seemingly endless discussion of the auto-create and publish-and-configure features.
[21] The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
[22] The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
Package ‘DMRMark’
April 21, 2017
Type Package
Title DMR Detection by Non-Homogeneous Hidden Markov Model from Methylation Array Data
Version 1.1.1
Date 2017-02-25
Author Linghao SHEN <sl013@ie.cuhk.edu.hk>
Depends MCMCpack, mvtnorm, ellipse
Maintainer Linghao SHEN <sl013@ie.cuhk.edu.hk>
Description Perform differential analysis for methylation array data. Detect differentially methylated regions (DMRs) from array M-values. The core is a Non-homogeneous Hidden Markov Model for estimating spatial correlation and a novel Constrained Gaussian Mixture Model for modeling the M-value pairs of each individual locus.
License GPL
NeedsCompilation no
Repository CRAN
Date/Publication 2017-04-21 17:58:43 UTC
R topics documented:
DMRMark-package .............................................................. 2
BLCA .............................................................. 3
boundFinder .............................................................. 3
DMRMark .............................................................. 5
DMRViterbi .............................................................. 6
FullSample .............................................................. 8
MakeGSoptions ........................................................... 9
mvScatter .............................................................. 11
reformData .............................................................. 12
Index 13
DMRMark-package
DMR Detection by Non-Homogeneous Hidden Markov Model from Methylation Array Data
Description
Perform differential analysis for methylation array data. DMRMark detects differentially methylated regions (DMRs) from array M-values. Its core is a Non-homogeneous Hidden Markov Model for estimating spatial correlation and a novel Constrained Gaussian Mixture Model for modeling the M-value pair of each individual locus.
DMRMark currently supports only two-group comparisons. We plan to extend the transition and response models to make them suitable for complex experimental designs in the future.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
Examples
```r
# DMR detection performed on chr18 of a small BLCA dataset from TCGA
data(BLCA)
# Use a small subset
nprobe <- 500
# M-values
mv <- BLCA$mv[1:nprobe,]
# Distance between probes, L<0 indicates crossing chromosomes
L = BLCA$distance[1:nprobe]
# Initialize new chain when probe distance too long
# or across different chromosomes
newChains <- which((L > 100000) | L < 0)
# The starting positions of new chains
starting <- c(1, newChains[-length(newChains)]+1)
# Run DMRMark with default options
set.seed(0)
par <- DMRMark(mv, L, starting)
# Get the posterior of being certain states
# Return the result of DMC for plotting by setting 'region=FALSE'
results <- DMRViterbi(mv, par, L, starting, region=FALSE)
# The MAP states being 3 or 4 indicate DMCs
isDMC <- (results$states > 2) + 0
mvScatter(mv, isDMC, nPlot=10000)
```
BLCA
Single paired M-values of BLCA chr18 from TCGA
Description
This data set contains one pair of M-values for BLCA chr18 from The Cancer Genome Atlas (TCGA). In addition, it contains the distance between probes and the gold-standard methylation status obtained from the matched WGBS data (also from TCGA).
Usage
data(BLCA)
Format
A list with the following items; all items have length 5492, corresponding to the Illumina 450K probes on chr18:
- **mv**: A matrix with one pair of M-values
- **distance**: A numeric vector of the probe distances
- **truth**: A binary vector representing the WGBS methylation status (0 = non-DML, 1 = DML)
Source
Data generated by the TCGA Research Network: <http://cancergenome.nih.gov/>
Examples
data(BLCA)
boundFinder
*Find a pair of reasonable distances of group means for hyper- and hypomethylation based on the quantile of two-group difference.*
Description
This function takes the M-values and produces the distance $D$, defined as the maximum value such that the proportion of absolute two-group differences larger than $D$ is at a certain level. Due to precision limitations, the $D$'s for hyper- and hypomethylation are not necessarily the same. If the samples are not totally paired, the user should first call 'reformData' to process the M-values.
Usage
boundFinder(mv, prop = 0.1)
Arguments
mv The input M-values matrix. If the samples are not totally paired, the user MUST first call "reformData" to process the M-values.
prop The proportion of absolute two-group differences that must be larger than $D$. Default value is 0.1
Details
The choice of 'prop' should not be too extreme or too stringent, which would produce a dominating prior. This value should reflect the belief that around a 'prop' proportion of loci are differentially methylated. In general, 0.1 to 0.2 is reasonable and performs well. Users may also freely choose different $D$'s for the two differential methylation statuses; in this situation, values around 1.5 to 3 are recommended.
Users must ensure the M-values come from paired samples or have been processed by 'reformData' according to the experimental design.
Value
A two-value vector containing $D_1$ and $D_2$ for the group-mean differences of hypermethylation and hypomethylation, respectively. Due to precision limitations, $D_1$ does not necessarily equal $D_2$.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
See Also
reformData to tackle unpaired data.
Examples
# Finding the 5% and 95% quantile of normal samples
set.seed(0)
mv <- cbind(rep(0,100000),rnorm(100000))
boundFinder(mv)
# Output matched the normal p-values
# 5.0% 94.9%
#-1.639578 1.639691
**DMRMark**
*Gibbs Sampler to estimate model parameters*
Description
Given the M-values and probe distance, this function calls Gibbs Sampler for estimating the parameters of non-homogeneous hidden Markov model.
Usage
```r
DMRMark(mv, L = rep(1, nrow(mv)), starting = NULL,
pd = NULL, initHeuristic = TRUE,
GSoptions = NULL)
```
Arguments
- **mv**: The input M-values matrix, NA is not allowed.
- **L**: A vector to specify the distance between each probes in bp. $L < 0$ represents change of chromosome. Default is $L = 1$ for all probes.
- **starting**: A vector to specify the positions at which to initiate new chains. We suggest new chains be initiated at least at the start of each new chromosome. When it is NULL, new chains initiate at the beginning and wherever $L > 100000$ or $L < 0$.
- **pd**: A design matrix, which can be generated by `stats::model.matrix`. If the M-values are totally paired or single paired, just leave it to be NULL.
- **initHeuristic**: If set to TRUE, heuristics will be used for faster computation, relying on finding good initial values and then using fewer iterations. This will mask the GS control parameters of `GSoptions`. Recommended for getting quick insight into a new study. Default is TRUE.
- **GSoptions**: List of prior parameters and GS control parameters. See `MakeGSoptions`.
Details
This function is the main functionality of this package. It takes the M-values and probe distances and calls the Gibbs Sampler to estimate the parameters of the non-homogeneous hidden Markov model. New chains will be initiated at the positions specified in 'starting'. Depending on the scale of the M-values, the Gibbs Sampler may take considerable time; in that situation the user may first set `initHeuristic = TRUE` for a quick insight.
Value
The return value depends on 'GSoptions$track'. In default situation (`GSoptions$track = FALSE`), the return value is a list contains:
- **theta**: A vector containing the posterior means of the non-DMC control groups.
- **mu**: A 2-by-2 matrix, each row corresponding to the paired posterior mean of DMCs.
- **sigma12**: A vector containing the posterior means of the variance of the non-DMC control groups.
- **sigmaN**: A single value, the posterior mean of the variance of the non-DMC between-group difference.
- **Sigma34**: An array containing the posterior means of the DMC covariances.
- **charL**: The posterior mean of the characteristic length.
- **init**: The probabilities of the initial states of all chains. Sum to 1.
If `GSoptions$track = TRUE`, an additional dimension will be added to each item of the list, and along this dimension user can retrieve the sample from each iterations.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
See Also
See MakeGSoptions for the different prior parameters and Gibbs Sampler control parameters. See DMRViterbi for interpreting the estimated parameters.
Examples
```r
# DMRMark
# DMR detection performed on chr18 of a small BLCA dataset from TCGA
data(BLCA)
# Use a small subset
nprobe <- 500
# M-values
mv <- BLCA$mv[1:nprobe,]
# Distance between probes, L<0 indicates crossing chromosomes
L = BLCA$distance[1:nprobe]
# Initialize new chain when probe distance too long
# or across different chromosomes
newChains <- which((L > 100000) | L < 0)
# The starting positions of new chains
starting <- c(1, newChains[-length(newChains)]+1)
# Run DMRMark with default options
pars <- DMRMark(mv, L, starting)
pars
```
DMRViterbi Viterbi algorithm to estimate posterior probabilities of DMRs.
Description
This function takes M-values and the parameters estimated by 'DMRMark', then uses the Viterbi algorithm to estimate the states' posterior probabilities for each locus.
Usage
```r
DMRViterbi(mv, pars, L = rep(1, nrow(mv)), starting = NULL,
pd = NULL, region = TRUE,
orderBy = c("max", "mean", "median", "min"), VitP = NULL)
```
Arguments
- **mv**: The input M-values matrix, NA is not allowed.
- **pars**: The list of model parameters. Getting by calling 'DMRMark'.
- **L**: A vector to specify the distance between probes in bp. $L < 0$ represents a change of chromosome. Default is $L = 1$ for all probes.
- **starting**: A vector to specify the positions at which to initiate new chains. We suggest new chains be initiated at least at the start of each new chromosome. When it is NULL, new chains initiate at the beginning and wherever $L > 100000$ or $L < 0$.
- **pd**: A design matrix, which can be generated by 'stats::model.matrix'. If the M-values are totally paired or single paired, just leave it to be NULL.
- **region**: If set to TRUE, this function returns the regions formed by Viterbi posterior states. Otherwise, it returns posterior probabilities and states for individual loci. Default is TRUE.
- **orderBy**: Only enabled when 'region = TRUE'. The statistic by which the regions are ordered. Choices include 'max', 'mean', 'median' and 'min', which order the regions by the maximum, geometric mean, median or minimum of the posterior probabilities in each region, respectively. Default is 'max'.
- **VitP**: Only enabled when 'region = FALSE'. The minimum posterior probability required for the DMC states. A locus whose DMC posterior probability is lower than 'VitP' will be assigned the non-DMC state with the highest probability. When set to NULL, the MAP states are simply returned. Default is NULL.
Value
If 'region = FALSE', the return value is a list contains:
- **states**: The MAP methylation states satisfying 'VitP'.
- **deltas**: A matrix in which each row gives the posterior probabilities of the corresponding locus being in each state.
If 'region = TRUE', the return value is a dataframe with the following fields:
- **begin**: Beginning of each region. In probe index.
- **ending**: Ending of each region. In probe index.
- **MAP_state**: The MAP state of each region.
- **minVP**: The minimum Viterbi posterior probability of the MAP state in each region
- **meanVP**: The geometric mean of the Viterbi posterior probabilities of the MAP state in each region
- **maxVP**: The maximum Viterbi posterior probability of the MAP state in each region
- **midVP**: The median Viterbi posterior probability of the MAP state in each region
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
See Also
See DMRMark about model parameter estimation
Examples
# DMRViterbi
# DMR detection performed on chr18 of a small BLCA dataset from TCGA
data(BLCA)
# Use a small subset
nprobe <- 500
# M-values
mv <- BLCA$mv[1:nprobe,]
# Distance between probes, L<0 indicates crossing chromosomes
L = BLCA$distance[1:nprobe]
# Initialize new chain when probe distance too long
# or across different chromosomes
newChains <- which((L > 100000) | L < 0)
# The starting positions of new chains
starting <- c(1, newChains[-length(newChains)]+1)
# Run DMRMark with default options
pars <- DMRMark(mv, L, starting)
# Get the posterior of being certain states
results <- DMRViterbi(mv, pars, L, starting)
head(results)
FullSample Function implementing Gibbs Sampler with old-version interface
Description
This function implements the Gibbs Sampler for estimating model parameters, but with the old-version interface. It remains callable for backward compatibility and is not intended for new users.
Details
This function is not intended for new users.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
MakeGSoptions
Encapsulate prior parameters and Gibbs Sampler (GS) control parameters
Description
This function encapsulates prior parameters and Gibbs Sampler control parameters. All parameters have default values. The encapsulation allows easy initialization, management and passing of parameters.
Usage
MakeGSoptions(pi0 = c(100, 100, 5, 5),
cmu0 = c(11.5, 11.5, 8, 8),
theta0 = c(-3, 2),
mu0 = matrix(c(-2, 2, 2, -2), 2, byrow = TRUE),
kappa0 = c(50, 50, 5, 5),
nu0 = rep(4, 2),
A0 = array(rep(c(2, 0.8, 0.8, 4), 2),
dim = c(2, 2, 2)),
alpha12N = rep(40, 3),
beta12N = rep(60, 3),
D_mu = rep(-2, 2),
chi_alpha = 0.2, #This and above for priors
burnin = 500, #This and below for Gibbs Sampler Control
nsamples = 100,
sampleSep = 10,
onHMM = TRUE,
track = FALSE,
verbose = FALSE)
Arguments
**pi0** Length-4 vector, the concentration of the Dirichlet distribution. Prior of initial states.
**cmu0** Single value, the mean of a Normal distribution. Prior of characteristic length.
**theta0** Length-2 vector, each value is the mean of a Normal distribution. Priors for the means of the control groups of the two non-differentially methylated CpG site (non-DMC) responses.
**mu0** 2-by-2 matrix, each row is the mean of a bivariate Normal distribution. Priors for the means of the two DMC responses.
**kappa0** Length-4 vector, each value is the prior observation number of Normal-Inverse-Gamma (NIG) or Normal-Inverse-Wishart (NIW) depends on the corresponding state.
**nu0** Length-2 vector, each value is the degree of freedom of an IW distribution. Priors for covariance of DMC responses.
**A0** 2-by-2-by-2 array, each 2-by-2 matrix along the third dimension is the scale matrix of an IW distribution. Priors for covariance of DMC responses.
**alpha12N** Length-3 vector, each value is the shape of an IG distribution. Priors for variance of non-DMC responses.
**beta12N** Length-3 vector, each value is the rate of an IG distribution. Priors for variance of non-DMC responses.
**D_mu** Length-2 vector, each value is the minimum distance between two group means of DMCs. Prior for truncating the means of bivariate normals of DMC’s responses.
**chi_alpha** p-value of the chi-square distribution with 2 degrees of freedom. Prior for truncating the covariance matrices of the bivariate normals of the DMC responses.
**burnin** Number of iterations for burn-in. Gibbs Sampler control parameter. Default is 500.
**nsamples** Number of samples to compute the point estimators. Gibbs Sampler control parameter. Default is 100.
**sampleSep** Only keep every 'sampleSep'-th samples to estimate point estimators. Gibbs Sampler control parameter. Default is 10.
**onHMM** Set to FALSE will disable HMM, and reduce to simple clustering of Mixture Model. Gibbs Sampler control parameter. Default is TRUE.
**track** Set to TRUE to make DMRMark return all samples from the beginning of burn-in to the end of sampling instead of point estimators. Useful for inspecting convergence. Please understand this option well before setting it to TRUE. Gibbs Sampler control parameter. Default is FALSE.
**verbose** Set to TRUE to show the details when running the Gibbs Sampler. Gibbs Sampler control parameter. Default is FALSE.
**Value**
Simply a list with all items the same as the input. Just an encapsulation.
**Author(s)**
Linghao SHEN <sl013@ie.cuhk.edu.hk>
**See Also**
DMRMark
mvScatter
Examples
# MakeGSoptions
opts <- MakeGSoptions()
mvScatter
Visualize the distributions of M-value pairs from differentially methylated CpG sites (DMC) or non-DMCs
Description
Given the M-values, the true DMCs and optionally the experiment design, plot the scatter plot of M-values. DMCs are marked by red daggers and non-DMCs by green circles.
Usage
mvScatter(mv, isDMC, pd = NULL, nPlot = 5000)
Arguments
- **mv**: The input M-values matrix; NA is not allowed.
- **isDMC**: A binary vector corresponding to each row of 'mv'; 0 indicates non-DMC and 1 indicates DMC.
- **pd**: A design matrix, which can be generated by 'stats::model.matrix'. If the M-values are totally paired or single paired, just leave it to be NULL.
- **nPlot**: The maximum number of loci to be plotted. Too large a value will lead to a messy scatter plot and long execution time. Default is 5000.
Value
This function only generates a figure and has no return value.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
Examples
# mvScatter
data(BLCA)
mvScatter(BLCA$mv, BLCA$truth, nPlot = 10000)
reformData
Reform M-values into a two-column matrix.
Description
Reform M-values into a matrix with two columns representing matched control and case groups. It concatenates M-values pair-by-pair based on the design matrix.
Usage
reformData(mv, pd=NULL)
Arguments
- mv: The input M-values matrix, NA is not allowed.
- pd: A design matrix, which can be generated by `stats::model.matrix`. If the M-values are totally paired or single paired, just leave it to be NULL.
Value
A matrix with two columns representing matched control and case groups. If a sample has no paired sample in another group (say group B), then the values in group B will be represented by NA.
Author(s)
Linghao SHEN <sl013@ie.cuhk.edu.hk>
Examples
# Assume the values from Tumour are 10 larger than those from Normal.
# The case with totally paired data
mv1 <- matrix(1:20,5)
reformData(mv1)
# The case with one more sample from the Tumour group
# The second Tumour sample is the extra one
patient <- factor(c(1,3,1:3))
type <- c(rep("Normal",2),rep("Tumour",3))
pd <- model.matrix(~patient + type + 0)
# mv2 was missing in the original example; a hypothetical 5-sample
# matrix (2 Normal, 3 Tumour) consistent with the comments above:
mv2 <- cbind(matrix(1:10, 5), matrix(1:15, 5) + 10)
reformData(mv2, pd)
Index
*Topic package
DMRMark-package, 2
BLCA, 3
boundFinder, 3
DMRMark, 5, 8, 10
DMRMark-package, 2
DMRViterbi, 6, 6
FullSample, 8
MakeGSoptions, 5, 6, 9
mvScatter, 11
reformData, 4, 12
Configuration Management with Windows PowerShell
Desired State Configuration (DSC)
Abstract
Keeping information system baselines consistent with a formal configuration management plan can be a very difficult task. Changes to server based systems and networking must be monitored in order to provide some measure of compliance. A new distributed configuration management platform by Microsoft® called Desired State Configuration (DSC) makes this task easier.
The objective of this paper is to describe in depth how PowerShell 4.0 can help to solve this common problem. DSC uses a declarative syntax that any skilled administrator can utilize to deploy software, monitor configuration drift and even report conformance. DSC is cross-platform compatible with hundreds of useful resources freely available. DSC leverages PowerShell 4.0 and gives administrators a useful way to automate configuration management.
1. Introduction
Every organization serious about information system security must be able to account for configuration changes. Most organizations create a formal configuration management (CM) plan but struggle to control configuration changes. Information systems are constantly changing to make services available to customers while balancing performance with adequate security. A recent Algosec network security survey concluded that poor change management poses the greatest challenge in managing risk due to poor processes and a lack of information system visibility. More than 80% of respondents experienced network or application outages resulting from out-of-process changes (Algosec, 2014). Organizations need more reliable automated mechanisms that help identify information system changes, control unauthorized changes and validate a formal change management process. Microsoft® has a relatively new feature called Desired State Configuration (DSC) released with Windows PowerShell 4.0. Windows PowerShell is also called Windows Management Framework because of its fundamental design. DSC can provide the reliability and extensibility needed to plan, deploy and monitor configuration changes. Any organization with a Microsoft® Windows network and administrators adept with PowerShell could use DSC to make configuration management goals a reality.
2. Why Adopt Desired State Configuration (DSC)
DSC offers some measurable benefits over group policy objects (GPOs). DSC is capable of measuring whether the configuration of specific nodes has drifted from an approved baseline. Measuring and communicating GPO effectiveness is often difficult in large enterprises. To measure GPO effectiveness, administrators frequently resort to the `gpresult` command commonly used to troubleshoot GPO conflicts and analyze the Resultant Set of Policy. Conflicts and errors in applying GPOs are also common with filtering, linking, blocking of inheritance or other GPOs controlling an object due to higher precedence. GPOs can also be relatively easy to defeat if the end user wishes to prevent a given GPO from being applied. GPOs require Windows Active Directory and are cumulative in the application of configuration policy, but neither is a requirement for DSC.
An organization wishing to more effectively monitor and control the configuration of critical nodes may have to consider acquiring a third-party application to accomplish the task. However, what if this configuration management and reporting capability could be included in a free upgrade? This DSC capability comes with the Windows Management Framework (WMF) 4.0. Even greater DSC enhancements will come with WMF 5.0, which is built into Windows 10 (released 29 July 2015) and will be provided to eligible Windows desktop operating systems as a free upgrade. An organization managing a Windows-based network infrastructure could benefit tremendously from this capability to control and fix configuration drift.
A first step for any organization that primarily uses a Windows network is to conduct an inventory of operating system versions and PowerShell versions that may already be installed. DSC also requires the installation of the .NET Framework 4.5 as a pre-requisite to installing WMF 4.0. An organization could use the following information found in Table 1 to assess its current Windows infrastructure.
<table>
<thead>
<tr>
<th>OS Version</th>
<th>Operating System Name</th>
<th>PS/WMF Version</th>
<th>DSC Capable</th>
<th>Requires PS/WMF 4.0 and .NET 4.5</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>Windows 10</td>
<td>5.0</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>6.3</td>
<td>Server 2012 R2</td>
<td>4.0</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>6.2</td>
<td>Server 2012</td>
<td>3.0</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>6.3</td>
<td>Windows 8.1</td>
<td>4.0</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>6.2</td>
<td>Windows 8</td>
<td>3.0</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>6.1</td>
<td>Windows 7 SP1</td>
<td>3.0</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>6.1</td>
<td>Server 2008 R2</td>
<td>2.0</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>6.0</td>
<td>Server 2008</td>
<td>1.0</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>
**Table 1 – Requirements per Operating System and PowerShell Version**
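As a sketch of this inventory step, the installed PowerShell version and the .NET Framework release can be queried directly; the registry path below is the standard .NET 4.x location, and a `Release` value of 378389 or higher indicates .NET 4.5:

```powershell
# Report the PowerShell/WMF version on this node
$PSVersionTable.PSVersion

# Check for .NET Framework 4.5 or later via the standard registry key
$ndp = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue
if ($ndp -and $ndp.Release -ge 378389) {
    "DSC pre-requisite .NET 4.5+ is present (Release $($ndp.Release))"
} else {
    "Install .NET 4.5 before installing WMF 4.0"
}
```

Running this across the fleet (for example via `Invoke-Command`) quickly shows which nodes still need the WMF 4.0 upgrade from Table 1.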
DSC can also be integrated with System Center Operations Manager (SCOM) to receive configuration change alerts in order to validate critical nodes against an approved baseline.
2.1 DSC Is Built on PowerShell 4.0
Windows PowerShell is a task based command line shell and configuration management framework built on the Microsoft® .NET framework. DSC is essentially an extension of the PowerShell language and provides a declarative syntax to express a configuration for information systems. Declarative means that an administrator writing a DSC script does not have to specify how a feature or software package gets installed; the syntax, much like an INI-style expression, states only what should be present on the node (Jones, Siddaway, & Hicks, 2014). A person with basic PowerShell skills can understand the declarative syntax used in a DSC configuration script.
brian@brianequick.com
PowerShell 4.0 introduced a new scripting keyword named "configuration". This keyword enables the declaration of resources with an additional new dynamic keyword named "node". Other new commands introduced are as follows in Table 2.
<table>
<thead>
<tr>
<th>Command Modules</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Get-DscResource</td>
<td>Gets desired state configuration resources present on the computer.</td>
</tr>
<tr>
<td>Start-DscConfiguration</td>
<td>Applies a configuration to a node.</td>
</tr>
<tr>
<td>Stop-DscConfiguration</td>
<td>Stops a currently running configuration job.</td>
</tr>
<tr>
<td>Get-DscConfiguration</td>
<td>Gets the current configuration of the node.</td>
</tr>
<tr>
<td>Test-DscConfiguration</td>
<td>Tests whether an actual configuration on a node matches the desired configuration.</td>
</tr>
<tr>
<td>Restore-DscConfiguration</td>
<td>Restores the previous configuration for a node.</td>
</tr>
<tr>
<td>Update-DscConfiguration</td>
<td>Runs the existing configuration on the computer.</td>
</tr>
<tr>
<td>Get-DscLocalConfigurationManager</td>
<td>Gets the local configuration manager (LCM) setting for a node.</td>
</tr>
<tr>
<td>Set-DscLocalConfigurationManager</td>
<td>Applies LCM settings to a node.</td>
</tr>
<tr>
<td>New-DscCheckSum</td>
<td>Creates checksum files for DSC documents and DSC resources.</td>
</tr>
</tbody>
</table>
Table 2 – DSC Commands Also Called DSC Cmdlets
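A hedged sketch of how a few of these cmdlets fit together on a node that already has a configuration applied:

```powershell
# List the DSC resources available on this machine
Get-DscResource | Select-Object Name, Module

# Show the configuration currently applied to the local node
Get-DscConfiguration

# Returns True/False depending on whether the node matches its MOF
Test-DscConfiguration -Verbose

# Inspect Local Configuration Manager settings (RefreshMode, frequencies)
Get-DscLocalConfigurationManager
```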
2.1.1 DSC Has Cross-Platform Compatibility Standards
PowerShell is an object-based scripting language, and that design is what makes DSC possible. DSC is built on the Common Information Model (CIM) standard developed by the Desktop Management Task Force (DMTF) and provides cross-platform compatibility with its language used to define managed elements in the Managed Object Format (MOF). DSC uses Windows Remote Management (WinRM) technology as a communication mechanism. WinRM is the Microsoft® implementation of web services for management called WS-Management (WSMan) (Chaganti, 2014). The MOF is the primary configuration language in defining how specific nodes should be configured. MOF files can be created by PowerShell to describe the classes and instance definitions of a configuration in textual form. MOF files can also be created by third party tools like Puppet or Chef, and DSC is capable of applying them. MOF files are used by the Local Configuration Manager (LCM) to enforce a precise configuration for each unique node whether the operating system is Windows or Linux as seen in Figure 2 (Greene, 2014).

The LCM is the engine or agent of DSC and is installed when PowerShell 4.0 is installed. How this configuration data is communicated to nodes is explained next.
2.1.2 DSC Has Flexible Modes of Operation
The architecture of DSC can be described as a push or pull mode of operation. Push mode is best described as DSC being initiated manually from a server and the configuration data being pushed out to connected nodes. This paper will show an instance where a MOF file is pushed to another server. In contrast, the pull mode is described as each node requesting its specific MOF configuration file from the pull server at a pre-defined refresh frequency in minutes. Communication between nodes can be configured using Server Message Block (SMB), as in a common file share, or using WSMan. In the interest of security it is recommended as a best practice to configure WinRM communications using HTTPS with a Secure Socket Layer (SSL) certificate when using DSC in a production environment. PowerShell has built-in command modules called “cmdlets” that make it easy to check, validate and configure the necessary WinRM listeners enabling secure communication. Figure 3 provides a conceptual depiction of DSC components in pull mode.

Each node participating in DSC registers itself with the pull server using a global unique ID (GUID). This GUID is sensitive information that correlates to a specific MOF file designed uniquely for a specific node. The primary advantage of implementing DSC in pull mode is scalability. A single pull server can provide DSC configurations to many connected nodes, with the additional benefit of specifying how often the LCM on each node should check back with the pull server to enforce a configuration. Configuration management procedures may dictate that general servers only need configuration drift checked once every 48 hours, but every 15 minutes for critical servers hosting sensitive data, where system changes could result in serious losses to the organization.
Once the organization has agreed upon a planned baseline system, administrators and developers can begin creating DSC scripts with resources necessary to deploy an approved baseline.
2.2 DSC Resources Offer Extensibility
2.2.1 Built-in DSC Resources
DSC comes with built-in resources, also called resource providers, which are the building blocks required to write configuration scripts and deploy configuration management solutions. Twelve DSC resources are immediately available upon installation of the WMF 4.0. Some of these built-in resources are "Archive", "Environment", "File", "Group", "Log", "Registry" and "WindowsFeature". These familiar names provide mechanisms to manage what each title implies. For example, an administrator can use the "WindowsFeature" resource to make certain that the IIS role and the ASP.NET 4.5 feature are installed for multiple nodes. Administrators can open a PowerShell command prompt and type in `PS C:\> Get-WindowsFeature` to see the very same roles or features available in Server Manager that can be installed using DSC. The syntax allows administrators to specify this in a configuration script by setting the “Ensure” property equal to the value of “Present”, demonstrating this easy to use declarative syntax in DSC. If administrators planned to install all the sub features for the "WindowsFeature" resource, they could simply insert the line "IncludeAllSubFeature = $true" under the “Ensure” property. This configuration script would create a MOF file used to enforce the configuration for the node or nodes specified after the “Node” element. A small sample configuration script is shown in Figure 4.
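In case the Figure 4 image does not reproduce here, a minimal sketch of such a configuration script follows; the configuration name, node name and output path are assumptions:

```powershell
configuration WebServerBaseline
{
    Node 'server2012r2'
    {
        # Declare what should be Present; DSC works out how to install it
        WindowsFeature IIS
        {
            Ensure               = 'Present'
            Name                 = 'Web-Server'
            IncludeAllSubFeature = $true
        }
        WindowsFeature AspNet45
        {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }
    }
}

# Invoking the configuration emits server2012r2.mof into .\WebServerBaseline
WebServerBaseline -OutputPath .\WebServerBaseline
```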
The DSC resources that come with WMF 4.0 on a given machine can be displayed by using the `PS C:\> Get-DscResource` cmdlet. The output below in Figure 5 shows the twelve built-in resources with the properties available for each named resource.
```
PS C:\> Get-DscResource

ImplementedAs  Name            Module                       Properties
-------------  ----            ------                       ----------
Binary         File                                         {DestinationPath, Attributes, Checksum, Con...
PowerShell     Archive         PSDesiredStateConfiguration  {Destination, Path, Checksum, Credential...}
PowerShell     Environment     PSDesiredStateConfiguration  {Name, DependsOn, Ensure, Path}
PowerShell     Group           PSDesiredStateConfiguration  {GroupName, Credential, DependsOn, Description...}
PowerShell     Log             PSDesiredStateConfiguration  {Message, DependsOn}
PowerShell     Package         PSDesiredStateConfiguration  {Name, Path, ProductId, Arguments...}
PowerShell     Registry        PSDesiredStateConfiguration  {Key, ValueName, DependsOn, Ensure...}
PowerShell     Script          PSDesiredStateConfiguration  {GetScript, SetScript, TestScript, Credential...}
PowerShell     Service         PSDesiredStateConfiguration  {Name, BuiltInAccount, Credential, DependsOn...}
PowerShell     User            PSDesiredStateConfiguration  {UserName, DependsOn, Description, Disabled...}
PowerShell     WindowsFeature  PSDesiredStateConfiguration  {Name, Credential, DependsOn, Ensure...}
PowerShell     WindowsProcess  PSDesiredStateConfiguration  {Arguments, Path, Credential, DependsOn...}
```
**Figure 5** – Example List of Get-DscResource Output
Once administrators identify a resource they want to utilize in their configuration script, they can further expand and analyze all the properties available. A compound command, like `PS C:\> Get-DscResource WindowsFeature`, can be used to obtain more specific property information available for each resource. Specific property and value information is shown below in Figure 6 for the “WindowsFeature”.
Properties for all DSC resources, such as "WindowsFeature", are available, making it very simple to declare which features an administrator or developer wants installed on each node, with specific help information available in PowerShell.
PS C:\> Get-DscResource WindowsFeature -Syntax
Figure 6 – Properties for the WindowsFeature Resource
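If the Figure 6 image does not render, the `-Syntax` output prints the resource schema in roughly the following shape; treat this listing as an approximation of the cmdlet's output rather than a verbatim copy:

```
WindowsFeature [string] #ResourceName
{
    Name = [string]
    [Credential = [PSCredential]]
    [DependsOn = [string[]]]
    [Ensure = [string] { Absent | Present }]
    [IncludeAllSubFeature = [bool]]
    [LogPath = [string]]
    [Source = [string]]
}
```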
These properties can be shown for each DSC resource aiding developers in creating configuration scripts using the built-in resources provided by PowerShell 4.0. What if administrators need more resources or a unique capability that is not yet available in the built-in resources that came with PowerShell 4.0?
2.2.2 Experimental DSC Resources
Since the 2013 release of WMF 4.0, Microsoft® and the development community have collaborated to create many new DSC resource providers that are being released in waves. Wave 10 is currently available with over fifty resources for creating DSC scripts for configuration management and many other deployment solutions (Microsoft, 2015).
The DSC Resource Kit can be obtained and downloaded here:
https://gallery.technet.microsoft.com/scriptcenter/DSC-Resource-Kit-All-c449312d
Microsoft has released DSC resources as beta versions, and although these resources may not be fully supported by a Microsoft standard support program, organizations such as Amazon Web Services (AWS) and Rackspace are using DSC because it is a powerful tool. Amazon uses DSC to deploy IT infrastructure services with predefined configurations, and Rackspace uses DSC to maintain and manage applications based on customer defined requirements (Barr, 2014). The list of DSC resources continues to grow due to the devops community discovering the benefits of DSC and making contributions of their time and talent to develop new useful resources. There are an estimated two hundred DSC resources when including the WMF 5.0 preview found at https://www.powershellgallery.com, with many seen below in Figure 7.

Figure 7 - Built-in DSC Resources with Experimental Resources Included
2.2.3 Creating New DSC Resources
If currently released resources do not meet the needs of an organization, Microsoft® has made it possible for any developer to create new DSC resources (Murawski, 2014). Developers can create resources with the three mandatory functions named "Get-TargetResource," "Set-TargetResource" and "Test-TargetResource" that enable custom defined properties to be applied. Explaining how to author new custom resources is outside the scope of this paper; however, an excellent article written by Ritesh Modi explains in greater detail how to author your own DSC custom resources (Modi, 2015).
3. Deploying DSC in Pull Server Mode
3.1 Setting Up a Pull Server
A DSC pull server can be set up in the following steps:
• Set up three servers with Windows Server 2012 R2 fully updated.
• Use the MakePullServer.ps1 script in this paper to create a MOF file.
• Use the pull server MOF file to push the configuration.
• Obtain certificates for client/server authentication if using HTTPS.
A rudimentary scenario is used in this paper to explain how to create a simple DSC pull server with a MOF file. The purpose of the DSC pull server is to help keep a WSUS server consistent with an approved baseline documented in a signed system security plan. Three domain-joined servers running Server 2012 R2 are used. The IIS role and DSC service could be installed with the Add Roles and Features wizard built into Server 2012 R2, but DSC will install most of the configurations needed for the new pull server. This paper demonstrates implementing a basic pull server with three PowerShell scripts: the first creates the MOF file for the pull server; the second creates a MOF file for a WSUS server; and the third configures the LCM on the WSUS server to make it a pull client. Server 2012 R2 will need additional resources from the Wave 10 release. The administrator should place these resources in the modules directory at "C:\Program Files\WindowsPowerShell\Modules" on all the servers. These new resources can be seen by typing "Get-DscResource". The "DSCServiceFeature" is a mandatory feature declared in the script along with "xPSDesiredStateConfiguration" and "xDSCWebService". The administrator will need another domain-connected server with the hostname of "server2012r2" as seen on line 5 and line 52 of Figure 9. The script should be executed in a PowerShell console using "Run as Administrator".
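If the Figure 9 image does not reproduce here, the shape of MakePullServer.ps1 can be sketched from the canonical xDscWebService sample; the port, paths and endpoint name below are assumptions taken from that sample, not the paper's exact script:

```powershell
configuration MakePullServer
{
    param ([string[]]$ComputerName = 'localhost')
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node $ComputerName
    {
        # The mandatory DSC-Service feature named in the text
        WindowsFeature DSCServiceFeature
        {
            Ensure = 'Present'
            Name   = 'DSC-Service'
        }

        # The pull server web service endpoint
        xDscWebService PSDSCPullServer
        {
            Ensure                = 'Present'
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
            CertificateThumbPrint = 'AllowUnencryptedTraffic'
            ModulePath            = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            DependsOn             = '[WindowsFeature]DSCServiceFeature'
        }
    }
}

# Emits server2012r2.mof into .\MakePullServer
MakePullServer -ComputerName server2012r2
```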
PS C:\MakePullServer> .\MakePullServer.ps1
The MakePullServer.ps1 (Figure 9) script creates a MOF file. The instance definitions created by this script can be seen in the following MOF file snippet in Figure 8.

Figure 9 - Configuration Script Used by PowerShell to Create a Pull Server MOF File

The "Start-DSCConfiguration" cmdlet can be used to invoke a CIM session and push the MOF file to "server2012r2" as seen in Figure 10 below (Hicks, 2015).
Figure 10 - A Simple Hash Table can now be used to Apply the MOF file to the Node
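Should the Figure 10 image be missing, the hash-table approach it illustrates is PowerShell splatting; a hedged sketch, with the folder path as an assumption:

```powershell
# Splat the cmdlet parameters from a hash table, then push the MOF
$params = @{
    Path         = '.\MakePullServer'   # folder containing server2012r2.mof
    ComputerName = 'server2012r2'
    Wait         = $true
    Verbose      = $true
}
Start-DscConfiguration @params
```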
Figure 11 -Verbose Output as the Configuration is Applied to "server2012r2"
Figure 11 reveals verbose information as the push operation applies the configuration to "Server2012r2". "Server2012r2" should be renamed to "pullserver", and the administrator should now validate that the web services are functioning properly to operate in pull mode. Administrators first need a certificate that will provide server authentication. The certificate must be bound to the web service on the pull server. The administrator can use the Server Certificates feature in IIS Manager, choose “Complete Certificate Request” and, when prompted, browse to the certificate file. When the MOF file configuration was pushed to the new pull server, the following tasks were accomplished on the "pullserver":
- Created a directory at "c:\inetpub\wwwroot\PSDSCPullServer".
- Copied five files (Global.asax, PSDSCPullServer.mof, PSDSCPullServer.svc, PSDSCPullServer.xml and PSDSCPullServer.config) from "$pshome/Modules/PSDesiredStateConfiguration/PullServer" to "c:\inetpub\wwwroot\PSDSCPullServer".
- Renamed PSDSCPullServer.config to web.config.
- Created a new directory named "c:\inetpub\wwwroot\bin".
- Copied Devices.mdb from "$pshome\modules\psdesiredstateconfiguration\pullserver\Devices.mdb" to "$env:programfiles\WindowsPowerShell\DscService\Devices.mdb".
Using the IIS web server manager, the administrator should verify that a new application pool named "PSWS" is running under the local system account. The final step in configuring web services is to add a database provider to the web.config configuration file by adding keys, as seen in Figure 12, to the "appSettings" section of the web.config file at "C:\inetpub\wwwroot\PSDSCPullServer\" (Murawski, 2013).
```
<add key="dbprovider" value="System.Data.OleDb" />
<add key="dbsource" value="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Program Files\WindowsPowerShell\DscService\Devices.mdb;" />
<add key="configpath" value="C:\Program Files\WindowsPowerShell\DscService\Configuration" />
<add key="modulepath" value="C:\Program Files\WindowsPowerShell\DscService\Modules" />
```
Figure 12 - Database Provider Configuration
Finally, the administrator can verify that the pull server service is running by navigating to the "PSDSCPullServer.svc" service using a web browser on the pull server, as seen in Figure 13.
Figure 13 - Verification that the New Pull Server is Functioning
The second configuration script is designed to generate a MOF file for the WSUS server. The "WindowsFeature" and "xFirewall" resource providers are used in this example scenario to install Windows Update Services and a firewall exception adhering to a simple baseline as seen in Figure 14.
Figure 14 - Script to Create the WSUS01 MOF File
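If the Figure 14 image does not reproduce, a minimal sketch of the WSUS.ps1 configuration follows; the firewall rule name and port are assumptions, and xFirewall property names vary across xNetworking releases:

```powershell
configuration WSUS
{
    Import-DscResource -ModuleName xNetworking

    Node 'WSUS01'
    {
        # Install Windows Server Update Services
        WindowsFeature UpdateServices
        {
            Ensure = 'Present'
            Name   = 'UpdateServices'
        }

        # Firewall exception for client check-ins (hypothetical rule name)
        xFirewall WsusInbound
        {
            Name      = 'WSUS-HTTP-In'
            Ensure    = 'Present'
            Direction = 'Inbound'
            Protocol  = 'TCP'
            LocalPort = '8530'   # default WSUS HTTP port
        }
    }
}

WSUS -OutputPath .\WSUS
```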
The WSUS.ps1 script is also executed using "Run as Administrator" credentials.
PS C:\WSUS\> .\WSUS.ps1
The important difference in using pull mode is that the target nodes are identified by a Global Unique ID (GUID) rather than by a name. This method ensures that each target node gets the proper MOF file created for a specific node configuration. The "New-DscCheckSum" cmdlet is also used to generate a checksum of each MOF file to help protect the integrity of the MOF files on the pull server (Technet, 2013). A code sample is provided in Figure 15 showing how to create these items.
```powershell
# Generate a GUID to identify the target node, then copy the MOF to the
# pull server's configuration store under that name and create its checksum
$Guid   = [guid]::NewGuid()
$source = "WSUS01.mof"
$target = "\\pull-server\c$\program files\windowspowershell\dscservice\configuration\$Guid.mof"
Copy-Item $source $target
New-DscChecksum $target
```
**Figure 15 - GUID Creation**
3.2 Setting Up Nodes to Communicate with the Pull Server
Since clients must be able to receive configuration data from the pull server, administrators must validate that WinRM listeners are functioning properly for each node. This validation can be done with "Test-WSMan" and then enabling the HTTP or HTTPS listener as needed with the "Set-WSManQuickConfig" cmdlet (Hicks, 2013). A certificate must be added to the local machine store when using HTTPS. The LCM on "WSUS01" must be configured for pull mode, and the LCM.ps1 script as seen in Figure 16 will generate a special "meta.mof" file used for this purpose.
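If the Figure 16 image is not visible, the LCM.ps1 meta-configuration can be sketched as below; the pull server URL is an assumption, and WMF 4.0 passes the endpoint through the DownloadManagerCustomData hash:

```powershell
configuration LCM
{
    Node 'WSUS01'
    {
        LocalConfigurationManager
        {
            ConfigurationID                = "$Guid"   # the GUID generated in Figure 15
            RefreshMode                    = 'Pull'
            DownloadManagerName            = 'WebDownloadManager'
            DownloadManagerCustomData      = @{
                ServerUrl               = 'https://pullserver:8080/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'false'
            }
            ConfigurationMode              = 'ApplyAndAutoCorrect'
            ConfigurationModeFrequencyMins = 15
            RefreshFrequencyMins           = 30
        }
    }
}

LCM -OutputPath .\LCM   # emits WSUS01.meta.mof
```

Before applying it, `Test-WSMan -ComputerName WSUS01` confirms the WinRM listener responds.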
The "Set-DscLocalConfigurationManager" cmdlet can now be used to configure the LCM on "WSUS01" to use pull server mode, as shown in Figure 17.
The default mode of operation in DSC is push mode, so after applying the LCM meta-configuration, the new mode of operation on "WSUS01" should be pull mode, as seen in Figure 18 by using "Get-DscLocalConfigurationManager". The new LCM meta-configuration shows "Pull" after the "RefreshMode" setting.
As the engine of DSC, the LCM will now check back with the pull server every 15 minutes as seen after the "ConfigurationModeFrequencyMins" setting. Configuration drift for "WSUS01" will now be corrected with the "ConfigurationMode" setting being defined as "ApplyAndAutoCorrect". How to troubleshoot LCM issues or communication problems with DSC logs is next.
3.3 Troubleshooting DSC with Logs
DSC records errors and events like most Windows components in logs that can be viewed in "Event Viewer". These DSC logs are found under "Applications and Services Logs", "Microsoft", "Windows", and then "Desired State Configuration". Writing configuration scripts can be challenging for beginners, and having logs will make problem solving easier if issues arise. DSC creates three primary logs: Operational, Analytic, and Debug. Operational logging is turned on by default, but Analytic and Debug logging must be enabled in order to be utilized for troubleshooting. The wevtutil utility can be used to enable these logs (Technet, 2014).
```
PS C:\Users> wevtutil.exe set-log "Microsoft-Windows-Dsc/Analytic" /q:true /e:true
PS C:\Users> wevtutil.exe set-log "Microsoft-Windows-Dsc/Debug" /q:true /e:true
```
DSC also has an experimental diagnostics module, "xDscDiagnostics", whose functions (such as "Get-xDscOperation") help identify local events from past DSC operations, DSC events on remote nodes, and operations running on one or more nodes (Technet, 2014).
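A hedged sketch of using these diagnostic functions once the xDscDiagnostics module is placed in the modules path:

```powershell
Import-Module xDscDiagnostics

# Summarize the most recent DSC runs (success/failure per operation)
Get-xDscOperation -Newest 5

# Pull every event belonging to a single run for closer inspection
Trace-xDscOperation -SequenceID 2
```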
4. Measuring System Changes with Compliance Server
A formal configuration management plan seldom works well enough that all changes are accounted for in a timely and accurate way as to provide a measure of those changes against the approved baseline. Although relatively new, DSC offers an additional web service alongside the pull server service "PSDSCPullServer.svc": the compliance server "DSCComplianceServer.svc". The compliance server web service was created for the purpose of measuring the status of each node connected to a pull server. The pull operational status, configuration and node information are all stored in the database configured previously (Figure 12) and can be used by administrators to periodically check whether node configurations are in sync with the pull server. This node status querying capability can be enhanced with a lightweight data-interchange format called JavaScript Object Notation (JSON) to output the information to any website. The types of query information that can be obtained from connected nodes are listed below.
- **NodeCompliant** - Information on the compliance of each node or nodes.
- **ServerCheckSum** - The checksum of the MOF on the pull server.
- **TargetCheckSum** - The checksum of the MOF on the target node.
- **LastComplianceTime** - The last successful node configuration.
- **LastHeartbeatTime** - The last successful node connection (Technet, 2013).
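As a sketch, an administrator could query the compliance endpoint and convert the returned JSON to objects; the port, URL shape and selected field names here are assumptions based on the default pull server setup:

```powershell
# Query node status from the compliance web service as JSON
$uri = 'http://pullserver:9080/DSCComplianceServer.svc/Status'
$response = Invoke-WebRequest -Uri $uri -UseBasicParsing `
    -Headers @{ Accept = 'application/json' }

# Project the fields described above for each registered node
($response.Content | ConvertFrom-Json).value |
    Select-Object NodeCompliant, ServerCheckSum, TargetCheckSum,
                  LastComplianceTime, LastHeartbeatTime
```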
5. Conclusion
Configuration management for IT systems has always been a very challenging endeavor. Tracking and accounting for system changes is a daunting task, but PowerShell-driven DSC provides a new and promising built-in capability that any organization using a Windows network infrastructure can utilize to monitor, control and report compliance. Uncontrolled changes to information systems introduce serious threats that often go undiscovered until more serious consequences occur. DSC brings developers and PowerShell-savvy administrators a new capability making it even easier to automate change control and reporting.
References
Table of Contents
1 Preface
1.1 Notation
1.2 Related OVP Documents
2 Introduction
2.1 Prerequisites
3 Debugging Example
3.1 Creating a Debuggable Platform
3.1.1 Specify debug using the Command Line Parser
3.1.1.1 Specifying the debugger connection details
3.1.1.2 Nominating the processor for debug
3.1.2 Specify debug using OP API
3.1.2.1 Specifying the debugger connection details
3.1.2.2 Nominating the processor for debug
3.2 Building the Platform
3.3 Starting Debugging 'gdbconsole'
3.3.1 Running the Platform
3.4 Starting Debugging Manual Attachment
3.4.1 Running the Platform
3.4.2 Running GDB
3.4.2.1 Connecting GDB to OVPsim
3.5 An example debug session
4 Further GDB Connection Information and Features
4.1 RSP Interface
4.2 Environment variables
4.3 Detaching and Reattaching
4.3.1 Modifying simulator behavior on detach
4.3.1.1 Wait for next connection
4.3.1.2 Finish simulation
4.4 Enabling a debug port without initial connection
4.5 Environment Variable Enables Debug Connection
4.6 Debugging RSP Connections
5 Creating a Debuggable SystemC/TLM2.0 Platform
5.1.1 Nominating the debugged processor
1 Preface
This document describes how to debug an application running on the OVP simulator using the Gnu debugger, GDB.
1.1 Notation
Code
1.2 Related OVP Documents
• CpuManager and OVPsim User Guide
2 Introduction
The CpuManager and OVPsim User Guide describes how platforms containing any number of processor models can be constructed. This document describes how to debug an application running on one processor in such a platform while it is simulating using the freely-available OVPsim simulation environment. OVPsim supports single-processor debugging with the Gnu debugger (GDB) via the Remote Serial Protocol (RSP). Advanced multi-processor debug facilities are available in Imperas commercial products.
2.1 Prerequisites
This documentation is supported by C code samples in an Examples directory, available either to download from the www.ovpworld.org website or as part of an Imperas installation.
GCC Compiler Versions
<table>
<thead>
<tr>
<th>Platform</th>
<th>Version</th>
<th>Compiler</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linux32</td>
<td>4.5.2</td>
<td>i686-nptl-linux-gnu (Crosstool-ng)</td>
</tr>
<tr>
<td>Linux64</td>
<td>4.4.3</td>
<td>x86_64-unknown-linux-gnu (Crosstool-ng)</td>
</tr>
<tr>
<td>Windows32</td>
<td>4.4.7</td>
<td>mingw-w32-bin_i686-mingw</td>
</tr>
<tr>
<td>Windows64</td>
<td>4.4.7</td>
<td>mingw-w64-bin_i686-mingw</td>
</tr>
</tbody>
</table>
For Windows environments, Imperas recommends using MinGW (www.mingw.org) and MSYS.
The example given in this document uses the opencores OR1K processor model and tool chain, also available to download from the www.ovpworld.org website or as part of an Imperas installation.
3 Debugging Example
3.1 Creating a Debuggable Platform
A suitable single-processor platform example is available in the directory:
$IMPERAS_HOME/Examples/SimulationControl/debugWithGDB
This uses the freely-available OR1K processor (see http://www.opencores.org/projects.cgi/web/or1k/architecture).
The test hardware definition source is in file module/module.op.tcl:
```tcl
ihwnew -name debugWithGDB -stoponctrlc
ihwaddbus -instancename bus -addresswidth 32
#
# Add a processor to do some reading and writing
#
ihwaddprocessor -instancename cpu1 \
-vendor ovpworld.org -library processor -type or1k -version 1.0 \
-variant generic \
-semihostname or1kNewlib
ihwconnect -bus bus -instancename cpu1 -busmasterport INSTRUCTION
ihwconnect -bus bus -instancename cpu1 -busmasterport DATA
#
# Memory on the main bus
#
ihwaddmemory -instancename ram -type ram
ihwconnect -bus bus -instancename ram -busslaveport sp1 -loaddress 0x00000000 -hiaddress 0xffffffff
```
This creates a definition including an OR1K processor connected to a bus which also contains a memory.
The C OP API code generated by iGen is in the file module.igen.h and looks like:
```c
// instantiate module components
static OP_CONSTRUCT_FN(instantiateComponents) {
    // Bus bus
    optBusP bus_b = opBusNew(mi, "bus", 32, 0, 0);
    // Processor cpu1
    const char *cpu1_path = opVLNVString(
        0, "ovpworld.org", "processor", "or1k", "1.0",
        OP_PROCESSOR,
        1   // report errors
    );
    ...
```
For a full explanation of OVPsim platform construction please see the *iGen Platform and Module Creation User Guide*. This section describes only those aspects of platform construction that relate to debugging.
3.1.1 Specify debug using the Command Line Parser
The platform in this example includes the standard Command Line Parser (CLP). This allows the debugger connection details and the processor to be debugged to be specified on the command line.
3.1.1.1 Specifying the debugger connection details
The debug port is enabled by specifying the argument `--port <port number>` on the command line. A specific port number may be specified, or by setting port number to 0 the next available port is opened.
Alternatively the argument `--gdbconsole` will open a port and connect the default GDB debugger automatically.
3.1.1.2 Nominating the processor for debug
In an OVPsim simulation only a single processor may be connected to a GDB debugger¹. This requires that the processor is selected using the `--debugprocessor <processor name>` argument. In this case the processor name is the instance name in the platform, for example platform/OR1K.
3.1.2 Specify debug using OP API
3.1.2.1 Specifying the debugger connection details
The OP kernel is initialized by calling `opRootModuleNew`:
```c
optModuleP opRootModuleNew (optModuleAttrP attrs, const char *name, optParamP params)
```
The params argument of `opRootModuleNew` is used to initialize the simulator. One of the options available, `OP_FGDBCONSOLE`, is to enable the automatic startup and connection of a GDB to a processor in the simulated platform.
```c
opRootModuleNew(0, 0, OP_PARAMS(OP_PARAM_BOOL_SET(OP_FGDBCONSOLE, 1)));
```
GDB Remote Serial Protocol (RSP) debugging as supported by OVPsim uses standard operating system sockets on the host running OVPsim and GDB.
3.1.2.2 Nominating the processor for debug
If the processor has one core, it is passed to `opProcessorDebug`:
```c
opProcessorDebug(processor);
```
If it is a multicore device the appropriate core must be located first:
```c
optProcessorP sub = opObjectByName(root, MODULE_NAME "/CPU0_P0", OP_PROCESSOR_EN).Processor; // for example
opProcessorDebug(sub);
```
Giving an incorrect name causes an error message which lists all the legal names. This is a useful way to find the core names.

¹ The Imperas Professional products allow a GDB debugger to be attached to any or all of the processors defined in a platform. Imperas also provides alternative debugging solutions.
3.2 Building the Platform
The OVPsim examples are written to work with GCC and MAKE, which are typically available on Linux and can be installed on Windows as part of MinGW and MSYS (see section 2.1). The example commands below assume you are using a Bash shell on Linux or MSYS.
Take a copy of the debugging example:
```
cp -r $IMPERAS_HOME/Examples/SimulationControl/debugWithGDB .
```
The test platform can be compiled to produce an executable, `platform.<IMPERAS_ARCH>.exe`, by using `make` in the example directory:
```
make -C module
```
Cross-compile a simple test application for the OR1K processor:
```
make -C application
```
3.3 Starting Debugging 'gdbconsole'
3.3.1 Running the Platform
Start the OVP simulator with the example platform by running the native platform executable built earlier. This simple platform uses the command line parser to specify the start up of a console in which the correct GDB for the processor type will be invoked and connected to the platform.
```
harness.exe --modulefile module/model.$(IMPERAS_SHRSUF) --gdbconsole --program application/application.OR1K.elf
```
```
OVPsim (32-Bit) v20150205.0 Open Virtual Platform simulator from www.OVPworld.org.
Copyright (C) 2005-2015 Imperas Ltd. Contains Imperas Proprietary Information. Licensed Software, All Rights Reserved.
Visit www.imperas.com for multicore debug, verification and analysis solutions.
OVPsim started: Mon Mar 9 12:28:15 2015
Info (GDBT_PORT) Host: <hostname>, Port: <portnumber>
Info (GDBT_WAIT) Waiting for remote debugger to connect...
Info (GDBT_CONNECTED) Client connected
```
Once the platform has made a call to `opRootModuleSimulate` (or `opProcessorSimulate`), OVPsim will wait for the debugger connection. The output above shows the host and port number being provided in the GDBT_PORT message, which is used to connect the automatically invoked GDB.
The GDB displays the current execution location:
```
0x00000100 in start ()
```
3.4 Starting Debugging Manual Attachment
3.4.1 Running the Platform
Start the OVP simulator with the example platform by running the native platform executable built earlier. This simple platform uses the command line parser to specify the port number to use for the debugger connection.
```
harness.exe --modulefile module/model.$(IMPERAS_SHRSUF) --port 0 --program application/application.OR1K.elf
```
A non-zero numeric value opens the specified port, while the value zero allows OVPsim to choose any free host port.
```
OVPsim (32-Bit) v20150205.0 Open Virtual Platform simulator from www.OVPworld.org.
Copyright (C) 2005-2015 Imperas Ltd. Contains Imperas Proprietary Information. Licensed Software, All Rights Reserved.
Visit www.imperas.com for multicore debug, verification and analysis solutions.
OVPsim started: Mon Mar 9 12:28:15 2015
```
```
Info (GDBT_PORT) Host: <hostname>, Port: <portnumber>
Info (GDBT_WAIT) Waiting for remote debugger to connect...
```
Once the platform has made a call to opRootModuleSimulate (or opProcessorSimulate), OVPsim will wait for the debugger connection. The output above shows the host and port number being provided in the GDBT_PORT message, which will be used to manually connect the GDB remote target to this port.
3.4.2 Running GDB
When the OVPsim platform is waiting for a debugger connection we can start the Gnu debugger. GDB executables for OR1K and other processor model architectures provided by OVP are included with the Gnu toolchains available for download from the www.ovpworld.org website.
Start GDB in another shell/terminal:
```
cd debugWithGDB
"$IMPERAS_HOME/lib/$IMPERAS_ARCH/CrossCompiler/or32-elf/bin/or32-elf-gdb"
```
The GDB startup banner and prompt will be displayed:
```
GNU gdb 5.3
Copyright 2002 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "--host=i686-pc-cygwin --target=or32-elf".
(gdb)
```
Now load the simulated application file into GDB to provide symbolic debugging information:
```
(gdb) file application/application.OR1K.elf
Reading symbols from application/application.OR1K.elf...done.
(gdb)
```
3.4.2.1 Connecting GDB to OVPsim
The GDB target command is used to connect GDB to OVPsim:
```
(gdb) target remote localhost:1438
Remote debugging using localhost:1438
0x00000100 in start ()
(gdb)
```
The port number must match the port on which OVPsim is waiting for a connection. Once the connection is made, OVPsim shows a message:
Info (GDBT_CONNECTED) Client connected
and GDB displays the current execution location:
```
0x00000100 in start ()
```
3.5 An example debug session
We are now able to inspect and control the platform and processor state while simulating the application on OVPsim.
Display a disassembly of the next instruction each time execution stops:
```
(gdb) display /i $pc
1: x/i $pc 0x100 <start>: l.addi r2,r0,0x0
(gdb)
```
Show processor register values:
```
(gdb) info registers
R0: 00000000
R1: 00000000
R2: deadbeef
R3: deadbeef
R4: deadbeef
R5: deadbeef
R6: deadbeef
R7: deadbeef
...
(gdb)
```
Step one instruction:
```plaintext
(gdb) stepi
0x00000104 in start ()
1: x/i $pc 0x104 <start+4>: l.addi r3,r0,0x0
(gdb)
```
Show register values again:
```plaintext
(gdb) info registers
R0 R1 R2 R3 R4 R5 R6 R7
00000000 00000000 00000000 deadbeef deadbeef deadbeef deadbeef deadbeef
...
(gdb)
```
Set a breakpoint on the application’s main function:
```plaintext
(gdb) break main
Breakpoint 1 at 0xf3c: file application/application.c, line 4.
(gdb)
```
Run until we hit a breakpoint:
```plaintext
(gdb) continue
Continuing.
Breakpoint 1, main () at application/application.c:4
4 printf("Hello\n");
1: x/i $pc 0xf3c <main+16>: l.movhi r3,0x0
(gdb)
```
Step over the C printf call:
```plaintext
(gdb) next
5 }
```
(The printf output is shown in the OVPsim window.)
Finally, run the test application to completion
```plaintext
(gdb) continue
Continuing.
Program exited normally.
(gdb)
```
4 Further GDB Connection Information and Features
This section describes some of the other ways in which the simulation platform execution may be started and used.
4.1 RSP Interface
RSP is the gdb (Gnu debugger) Remote Serial Protocol. It allows a debugger to communicate with a simulator on the same host machine or over a network to a simulator on a different host machine. OVPsim and CpuManager support RSP as used by most versions of gdb. They automatically switch to an extended version of RSP to communicate with the Imperas stand-alone multi-core debugger.
4.2 Environment variables
<table>
<thead>
<tr>
<th>Variable</th>
<th>Type</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>IMPERAS_NO_WAIT</td>
<td>boolean</td>
<td>Do not wait for an RSP connection before starting simulation (but keep listening).</td>
</tr>
<tr>
<td>IMPERAS_RSP_PORT</td>
<td>integer</td>
<td>Listen on this port for a debugger (0 means choose a port from the pool)</td>
</tr>
<tr>
<td>IMPERAS_RSP_PORT_FILE</td>
<td>filename</td>
<td>If port is chosen from the pool, write the port number in this file</td>
</tr>
<tr>
<td>IMPERAS_RSP_WAIT_DISCONNECT</td>
<td>boolean</td>
<td>When disconnected, the simulator waits for a new connection, rather than continuing.</td>
</tr>
<tr>
<td>IMPERAS_RSP_FINISH_DISCONNECT</td>
<td>boolean</td>
<td>When disconnected, the simulator finishes rather than waiting.</td>
</tr>
</tbody>
</table>
4.3 Detaching and Reattaching
The stand-alone multi-core debugger can be detached from a simulation. When the detach is performed the simulator may perform one of two operations:
1. finish the simulation
2. continue the execution of the software application until the application completes or makes no further progress. A debugger can then be reattached, causing simulation to stop immediately so that debugging can continue.
The default operation depends upon the simulator runtime; the OVPsim and CpuManager simulators will free-run when the debugger is disconnected.
4.3.1 Modifying simulator behavior on detach
The default behavior of the simulator when a debugger is disconnected can be modified either to wait for a further connection or to run the simulation to completion.
4.3.1.1 Wait for next connection
In wait mode the simulation is suspended when the debugger is detached. No further execution will take place and the simulator will wait for a further debugger connection.
Set the environment variable IMPERAS_RSP_WAIT_DISCONNECT before starting the simulation.
4.3.1.2 Finish simulation
When the debugger is detached, the simulation continues until it finishes or until a further debugger connection is made.
Set the environment variable IMPERAS_RSP_FINISH_DISCONNECT before starting the simulation.
4.4 Enabling a debug port without initial connection
This 'no wait' option allows a simulation platform to be started with a debug port enabled but without the need to connect the debugger before simulation starts.
A debugger can be connected at any time but the simulation will start executing immediately.
The debug port is enabled in the normal way and the no wait mode is enabled by using one of the following:
1. Set the environment variable IMPERAS_NO_WAIT.
2. Add --nowait into a control file.
3. Add OP_FP_RSPNOWAIT into the OP Parameters (opParams) of a call to opRootModuleNew.
4.5 Environment Variable Enables Debug Connection
If you have a platform executable it is not always convenient to re-compile the platform in order to enable debugging. The opening of a debug port can therefore also be accomplished using an environment variable.
Set the environment variable IMPERAS_RSP_PORT to either a specific port number, or to 0 so that the next available port is selected.
4.6 Debugging RSP Connections
When there is an error in the RSP connection additional information can be obtained by enabling logging of the connection.
This log file should be provided to Imperas when reporting a problem with other information about the platform used.
Set the environment variable IMPERAS_RSP_LOG_FILE to a file into which transactions over the RSP connection will be written.
5 Creating a Debuggable SystemC/TLM2.0 Platform
When an OVP model is used within a SystemC TLM2.0 platform it may still be debugged using the RSP connection.
A suitable single-processor platform example is available in the directory:
$IMPERAS_HOME/Examples/SimulationControl/debugSystemC_TLM2.0WithGDB
This uses the freely-available OR1K processor (see http://www.opencores.org/projects.cgi/web/or1k/architecture).
The test platform source is in file platform/platform.cpp:
```c++
class TLM2Platform : public sc_core::sc_module {
  public:
    TLM2Platform (sc_core::sc_module_name name);

    tlmModule  Platform;
    tlmDecoder bus1;
    tlmRam     ram1;
    tlmRam     ram2;
    or1k       cpu1;
    extension  semihostlib;

    params platformParams() {
        params p;
        p.set("remotedebugport", (Uns32)0);
        return p;
    }
}; /* TLM2Platform */

TLM2Platform::TLM2Platform (sc_core::sc_module_name name)
    : sc_module (name)
    , Platform ("", platformParams())
    , bus1 (Platform, "bus1", 2, 2)
    , ram1 (Platform, "ram1", 0x0000FFFFFF)
    , ram2 (Platform, "ram2", 0x0000FFFFFF)
    , cpu1 (Platform, "cpu1")
    , semihostlib (cpu1, opVLNVString (NULL, "ovpworld.org", "semihosting", "or1kNewlib", "1.0", OP_EXTENSION, 1), "semihostlib")
    ...
```
For a full explanation of OVPsim platform construction please see the iGen Platform and Module Creation User Guide. This section describes only those aspects of platform construction that relate to debugging.
5.1.1 Nominating the debugged processor
The processor object's `debug` method is called from `sc_main`:
```c++
int sc_main (int argc, char *argv[]) {

    session s;
    ...
    TLM2Platform top("top");   // instantiate example top module
    ...
    // Specify the debug processor.
    top.cpu1.debug();
    ...
```
POSIX threads parallelization for example of Particle-In-Cell density calculations in plasma computer simulations
Anna Sasak*, Marcin Brzuszek
Institute of Computer Science, Maria Curie Skłodowska University, pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.
Abstract – The TRQR program [1–4] simulates trajectories of charged particles (electrons or ions) in the electromagnetic field. TRQR is based on the Particle-In-Cell method, whose basic guideline is the use of computational particles (called macro particles) that represent a large number of real particles of the same kind moving in the same direction. The program calculates the particle charge density distribution and potential distribution for chosen ion sources, analyses particle behaviour in the electromagnetic field, and describes the process of extracting beams from the source. A number of factors influence the simulation results. In order to improve efficiency the program has been parallelized. This paper presents the process of converting chosen parts of the TRQR program into a multi-thread version. In the first step the program was moved from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API contained in the POSIX IEEE 1003.1c standard. Each thread has its own stack, set of registers, program counter, individual data, local variables and state information. All threads of a particular process share one address space, general signal operations, virtual memory, data, input and output. The Mutex functions were used as a synchronization mechanism. This paper presents the analysis of a particular piece of the main program that implements computation of the particle density distribution. The paper presents execution time dependencies for different simulation parameters such as: the number of macro particles, the size of the simulation mesh and the number of threads used.
1 Introduction
Due to the complexity of physical processes, computer simulations of plasma behaviour in ion sources are still a great challenge for programmers. One of the methods of computing the trajectories of charged particles in the electromagnetic field is the Particle-In-Cell method. In the PiC method a large number of particles such as ions or electrons in plasma or beam is represented by a smaller, numerically tractable number of so called 'macro-particles'. Each macro-particle behaves like a single particle of a certain kind, but carries a charge large enough to represent all real particles.
This paper presents the results from migration of one piece of TRQR program to parallel mode. First, the program was moved from Fortran 77 to C++ and then parallelized using the Pthread library. The paper presents the results of simulations for different parameters such as a number of used threads, a number of macro particles, mesh size.
## 2 TRQR - principle of operation
The TRQR program was developed in order to study plasma behaviour as well as the process of extraction and formation of the ion beams emitted from the plasma ion sources. The method implemented for computer simulation consists of the following steps:
1. Setting the system geometry (the number of particles, etc.) and generating initial distributions for all kinds of particles.
2. Calculations of particles density distributions for chosen ion sources using the PiC method.
3. Solving the Poisson equation for the charge density obtained in the previous step and the boundary conditions imposed by electrodes.
4. Calculation of electrical field in the grid points.
5. Solving the Lorentz equations of motion for each particle.
6. Generating new particles if it is needed due to hits on electrodes and plasma chamber walls.
This procedure, steps 2 to 6, continues until a final state is achieved [3].
The special subject of interest for this paper is the particle-in-cell (PiC) method, on which the second step of the simulation is based. The simulation space is divided into small regions creating a spatial mesh. The method weights particles to grid points using a particle shape factor to obtain the charge on the grid. This distribution process is carried out with one of two possible schemes. The first method, called nearest grid point (NGP), assigns the macro-particle charge to the grid point that is nearest to the particle's position. In the second one, called cloud-in-cell (CiC), fractions of the macro-particle charge are assigned to the 8 nearest grid points (in the case of 3D calculations). An even better charge distribution is obtained if, in the CiC method, the macro-particle charge is distributed among the 27 nearest grid points [4].
Fig. 1. Block scheme for the TRQR program.
3 POSIX threads API
In architectures with shared memory, threads can be used to implement parallelism. For Unix systems, a standardized C language threads programming interface has been specified by the IEEE POSIX 1003.1c standard. This POSIX standard from 1995 is included in Unix system distributions.
Technically, a thread is defined as an independent stream of instructions that can be scheduled to run as such by the operating system. The comparison between threads and processes is presented in Table 1.
What needs to be emphasized is that in the case of threads - reading and writing to the same memory locations is possible, and therefore requires explicit synchronization by the programmer.
The subroutines which comprise the Pthreads API can be informally grouped into three major classes:
1. Thread management – the group of functions that work directly on threads - creating, detaching, joining, etc. Here are also included the functions that set thread attributes.
2. Mutexes (abbreviation for 'mutual exclusion') – the functions that deal with synchronization. The mutex functions provide for creating, destroying, locking and unlocking mutexes, and also for setting or modifying mutex attributes.
3. Condition variables – the functions that address communications between threads that share a mutex. They are based upon programmer-specified conditions. This class includes the functions to create, destroy, wait and signal based upon specified variable values. Condition variables are only mentioned here without further analysis, as they were not implemented in the pthread parallelization presented in this paper.
Table 1. Process and thread features comparison.
<table>
<thead>
<tr>
<th>PROCESS</th>
<th>THREAD</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Created by the operating system</td>
<td>• Use and exist within the process-creator resources</td>
</tr>
<tr>
<td>• Requires a fair amount of overhead</td>
<td>• Duplicate only the bare essential resources that enable them to exist as executable code</td>
</tr>
<tr>
<td>• Contains information about program resources and program execution state that include:</td>
<td>• Share with other threads in the same process:</td>
</tr>
<tr>
<td>– Process, process group, user and group IDs,</td>
<td>– Global and static variables,</td>
</tr>
<tr>
<td>– environment,</td>
<td>– heap and dynamic variables (Two pointers having the same value point to the same data),</td>
</tr>
<tr>
<td>– working directory,</td>
<td>– operating system resources (files),</td>
</tr>
<tr>
<td>– program instructions,</td>
<td>– process instructions.</td>
</tr>
<tr>
<td>– registers,</td>
<td>• Each thread has a unique:</td>
</tr>
<tr>
<td>– stack,</td>
<td>– Set of registers, stack pointer,</td>
</tr>
<tr>
<td>– heap,</td>
<td>– automatic variables,</td>
</tr>
<tr>
<td>– file descriptors,</td>
<td>– Stack for local variables,</td>
</tr>
<tr>
<td>– signal actions,</td>
<td>– priority,</td>
</tr>
<tr>
<td>– shared libraries,</td>
<td>– thread ID.</td>
</tr>
<tr>
<td>– inter-process communication tools.</td>
<td></td>
</tr>
</tbody>
</table>
4 Thread creation
Initially the main() program comprises a single thread. All other threads must be created explicitly by the programmer. Once created, threads are peers and may create other threads; there is no implied hierarchy or dependency between them. A new thread is created by calling the int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) subroutine. The arguments of this function, in order of appearance, stand for: the unique identifier for the new thread, returned by the subroutine; an attribute object that may be used to set thread attributes; the C routine that the thread will execute once it is created; and a single argument that may be passed to start_routine. An attribute parameter set to NULL means that default attributes are used; otherwise it points to a pthread_attr_t object that defines the detached state, scheduling policy, stack address and size, etc. As mentioned before, the pthread_create() routine permits a programmer to pass only one argument to the thread start routine. To overcome this limitation, a structure should be created which contains all of the arguments to be passed; then just a pointer to that structure should be passed to pthread_create().
Presented below is a fragment of code which creates NTH threads with a default set of parameters; each thread will execute the routine \texttt{thread\_func\_dens} with the parameters from the proper cell of the array \texttt{tab\_th\_data}.
\begin{verbatim}
struct th_data {
    long idoms;   // starting cell of global density matrix
    long idome;   // ending cell of global density matrix
    long NNion;   // number of ions per thread
};

pthread_t th_ids[NTH];       // array that contains thread ids
th_data tab_th_data[NTH];    // array of thread-specific data, passed as a
                             // structure pointer to the executed routine

void *thread_func_dens(void *ptr) {
    ...
    pthread_exit(NULL);
}

void main(...) {
    ...
    for (int w = 0; w < NTH; w++)
        pthread_create(&th_ids[w], NULL, thread_func_dens,
                       (void *) &tab_th_data[w]);
    ...
}
\end{verbatim}
5 Threads synchronization and termination
There are several ways in which a thread may be terminated. The most common is either when the thread returns from its starting routine or when the thread makes call to the \texttt{pthread\_exit()} subroutine. Typically, the \texttt{pthread\_exit()} routine is called after a thread has completed its work and is no longer required to exist. If main() finishes before the threads it has created, and exits with \texttt{pthread\_exit()}, the other threads will continue to execute. Otherwise, they will be automatically terminated when \texttt{main()} finishes. The programmer may optionally specify a termination status, which is stored as a void pointer for any thread that may join the calling thread.
One way to accomplish synchronization between threads is so-called 'joining'. The \texttt{int pthread\_join(pthread\_t th, void **thread\_return)} subroutine blocks the calling thread until the thread specified by the \textit{th} argument terminates. The programmer is able to obtain, via the second argument, the target thread's termination status, though only if it was explicitly specified in the target thread's call to the \texttt{pthread\_exit} routine. A thread can be the target of only one \texttt{pthread\_join()} call; it is a logical error to attempt multiple joins on the same thread. The following figure presents the scheme of a program which, after creating two worker threads, waits for them to exit and then resumes its execution.

Fig. 2. Threads synchronization.
The fragment of main function that stops program execution until all created threads exit would have the following form:
```c
void main(...) {
...
for (int ii = 0; ii < NTH; ii++)
    pthread_join(th_ids[ii], NULL);  // execute as many pthread_join calls
                                     // as pthread_create calls were executed before
...
}
```
6 Mutual exclusion
Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. A mutex variable acts as a 'lock' or a semaphore protecting access to a shared data resource – the critical section. With the basic mutex concept only one thread can own – which means lock – a mutex variable at any given time. Thus, even if several threads try to lock a certain mutex, only one of them will succeed, booking access to the protected resource for itself. The shared data resource becomes available again only when the mutex owner unlocks that mutex. This operation is a safe way to ensure that when several threads update the same variable, the final value is the same as it would be if only one thread performed the update.
The typical sequence of steps in the use of a mutex is as follows:
1. a mutex variable is created and initialized,
2. several threads attempt to lock the mutex,
3. only one of them succeeds and that thread owns the mutex,
4. the owner thread performs a set of actions,
5. the owner unlocks the mutex,
6. another thread acquires the mutex and repeats the process,
7. finally the mutex is destroyed.
The mutex variable must be declared with the type `pthread_mutex_t` and initialized before it can be used. Initialization can take two forms:
1. static with the instruction
```c
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
```
2. dynamic with `int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr)` routine.
Initially the mutex is unlocked. To establish properties different from the default ones (specified as NULL), the second argument of the `pthread_mutex_init` routine should be used. A mutex that is no longer needed should be released with the `pthread_mutex_destroy(pthread_mutex_t *mutex)` routine.
Three standard routines are used to manage mutex access. The `pthread_mutex_lock(pthread_mutex_t *mutex)` routine is used to acquire a lock on the specified mutex variable. If the mutex is already locked by another thread, this call will block the calling thread until the mutex is unlocked. The `pthread_mutex_trylock(pthread_mutex_t *mutex)` routine will attempt to lock a mutex; however, if the mutex is already locked, the routine will return immediately with a 'busy' error code. The `pthread_mutex_unlock(pthread_mutex_t *mutex)` routine will unlock a mutex if called by the owning thread. An error will be returned if the mutex has already been unlocked or is owned by another thread [5].
The following example presents the way mutexes were used in our simulation.
```c
pthread_mutex_t ***tab_mutex;
...
for (int x=1; x<=Nxx; x++)
    for (int y=1; y<=Nyy; y++)
        for (int z=1; z<=Nzz; z++) {
            int res = pthread_mutex_init(&tab_mutex[x][y][z], NULL);
        }
...
// creating threads with pthread_init routine
...
// a piece of code somewhere in the thread start routine
int err = pthread_mutex_lock(&tab_mutex[Nx][Ny][Nz]);
density_q[Nx][Ny][Nz][kj] += is;
int err2 = pthread_mutex_unlock(&tab_mutex[Nx][Ny][Nz]);
...
// releasing the mutexes when they are no longer needed
for (int x=1; x<=Nxx; x++)
    for (int y=1; y<=Nyy; y++)
        for (int z=1; z<=Nzz; z++) {
            int res = pthread_mutex_destroy(&tab_mutex[x][y][z]);
        }
```
7 Parallel mode calculations
The environment for the simulations was a machine with two 4-core Intel Xeon processors, 16 GB RAM, the Mandriva operating system and the gcc 4.1.2 compiler. In the first step the program was ported from Fortran 77 to C++. Then it was parallelized using the Pthread library with the standard API for C++ contained in the POSIX IEEE 1003.1c standard.
During the simulation process the measure that was analysed was the simulation time. It is a formal but very relative measure, as sometimes the process of creating a parallel version may not be cost effective relative to the gained reduction in simulation time. The second performance criterion adopted for the plasma density thread parallelization is speedup, described by the formula $S(p) = \frac{T(1)}{T(p)}$, where $p$ stands for the number of threads and $T(1)$ and $T(p)$ for the simulation times with one and with $p$ threads, respectively [6].
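The speedup criterion can be computed directly from measured wall-clock times. A trivial helper follows, with illustrative numbers; the parallel efficiency $S(p)/p$ is added here as a standard companion metric, not as one of the criteria used in the paper:

```cpp
#include <cassert>

// Speedup S(p) = T(1) / T(p): T(1) and T(p) are the measured simulation
// times with one and with p threads.
double speedup(double t1, double tp) { return t1 / tp; }

// Parallel efficiency S(p) / p (illustrative companion metric).
double efficiency(double t1, double tp, int p) { return speedup(t1, tp) / p; }
```

With illustrative timings T(1) = 100 s and T(8) = 25 s this gives S(8) = 4 and an efficiency of 0.5.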
8 Results of simulations
As was presented in paper [7], using the simplest charge density distribution technique and a large number of macro particles is the best solution as far as charge density calculations are concerned. For example, using NGP and 100 million macro particles gives better results (i.e. more homogeneous distributions) in less time than using the CiC method and 20 million macro particles. That is why all results presented in this paper are calculated for the NGP method, with different numbers of macro particles, different sizes of the spatial mesh and different numbers of threads used in the parallelization process.
Fig. 3 presents the simulation time for the NGP method with different numbers of macro particles and a mesh of size 100x100x100. The red line in each picture stands for the execution time of the sequential version of the algorithm.
Analyzing the above graphs one can conclude that using only two threads gives an execution time close to the sequential version, and that using eight threads, which equals the number of available processor cores, gives the best reduction of execution time. A further increase in the number of threads, to nine and above, does not give any further reduction of execution time.
As the graphs obtained for simulations with different numbers of macro particles show similar results, Fig. 4 presents the speedup calculated only for one of them, the one with 200 million macro particles. It confirms that a speedup close to 1 (which means close to the sequential execution time) is obtained for 2 threads, and that the highest speedup is gained for 8 threads.
In the next step the size of the mesh was changed to 50x50x50 and two simulations were done: the first for 200 million macro particles (Fig. 5(a)); in the second one (Fig. 5(b)) the number of particles was changed proportionally to the change in mesh size, which gave approximately 25 million macro particles. For both simulations speedup factors were calculated and are presented in Fig. 6(a) and 6(b), respectively.
Analyzing Figs. 5 and 6 it can be noticed that the maximum speedup gained with the parallelization dropped by about 40% compared to the previous simulation. Also, the number of threads required to reach an execution time close to the sequential one changed from 2 to 4.
Further tests were carried out for different sizes of mesh, from 200x200x200 down to 15x15x15. For each of them the parallel version was run with 200 million macro particles and 8 threads executing the calculations. The red line stands for the execution time of the sequential version of the algorithm.
Fig. 4. Speedup for the NGP parallel run, for 200 million macro particles and a mesh of size 100x100x100.
Fig. 5. Time of charge density calculations versus the number of threads used for the parallel run, using the NGP method, a mesh of size 50x50x50 and different numbers of macro particles: a) 200 million, b) 25 million.
Fig. 7 shows that meshes of size 80x80x80 and bigger give quite good execution time reduction when parallelized. In the case of meshes of size 40x40x40 and smaller, running the parallel version of the algorithm gives no benefit in terms of reduction of execution time.
Final tests were carried out for the asymmetrical mesh of dimensions 128x64x128 and 100 million macro particles. The aim of this test was to examine whether the geometry of the mesh has any influence on the algorithm performance. Fig. 8 presents the results of that simulation – both the simulation time and the speedup. The setup of this simulation is similar to the one presented in Fig. 3(c). The results of both simulations are very close, which leads to the conclusion that only the number of mesh cells influences the simulation time, whereas the mesh geometry has no influence on the POSIX thread parallelization performance.
Fig. 6. Speedup for the NGP parallel run, for a) 200 million, b) 25 million macro particles, mesh size 50x50x50.
Fig. 7. Time of charge density calculations versus the mesh size, with 8 threads used for the parallel run, the NGP method and 200 million macro particles.
9 Conclusion
A direct advantage of program parallelization is a more effective use of the time assigned to the simulation process. This paper presents the POSIX Pthread library as one of the available methods of parallelization. So far the Pthread parallelization is implemented only for one part of the TRQR program, the charge density calculations, but it gives quite acceptable results, encouraging further research.
References
Reinforcement Learning
In the previous note, we discussed Markov decision processes, which we solved using techniques such as value iteration and policy iteration to compute the optimal values of states and extract optimal policies. Solving Markov decision processes is an example of offline planning, where agents have full knowledge of both the transition function and the reward function, all the information they need to precompute optimal actions in the world encoded by the MDP without ever actually taking any actions. In this note, we'll discuss online planning, during which an agent has no prior knowledge of rewards or transitions in the world (still represented as an MDP). In online planning, an agent must try exploration, during which it performs actions and receives feedback in the form of the successor states it arrives in and the corresponding rewards it reaps. The agent uses this feedback to estimate an optimal policy through a process known as reinforcement learning before using this estimated policy for exploitation, or reward maximization.
Let’s start with some basic terminology. At each timestep during online planning, an agent starts in a state $s$, then takes an action $a$ and ends up in a successor state $s'$, attaining some reward $r$. Each $(s, a, s', r)$ tuple is known as a sample. Often, an agent continues to take actions and collect samples in succession until arriving at a terminal state. Such a collection of samples is known as an episode. Agents typically go through many episodes during exploration in order to collect sufficient data needed for learning.
There are two types of reinforcement learning, model-based learning and model-free learning. Model-based learning attempts to estimate the transition and reward functions with the samples attained during exploration before using these estimates to solve the MDP normally with value or policy iteration. Model-free learning, on the other hand, attempts to estimate the values or $q$-values of states directly, without ever using any memory to construct a model of the rewards and transitions in the MDP.
Model-Based Learning
In model-based learning an agent generates an approximation of the transition function, $\hat{T}(s, a, s')$, by keeping counts of the number of times it arrives in each state $s'$ after entering each q-state $(s, a)$. The agent can
then generate the approximate transition function $\hat{T}$ upon request by normalizing the counts it has collected - dividing the count for each observed tuple $(s, a, s')$ by the sum over the counts for all instances where the agent was in q-state $(s, a)$. Normalization of counts scales them such that they sum to one, allowing them to be interpreted as probabilities. Consider the following example MDP with states $S = \{A, B, C, D, E, x\}$, with $x$ representing the terminal state, and discount factor $\gamma = 1$:
Assume we allow our agent to explore the MDP for four episodes under the policy $\pi_{\text{explore}}$ delineated above (a directional triangle indicates motion in the direction the triangle points, and a blue square represents taking exit as the action of choice), yielding the following results:
We now have a collective 12 samples, 3 from each episode with counts as follows:
<table>
<thead>
<tr>
<th>$s$</th>
<th>$a$</th>
<th>$s'$</th>
<th>count</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>exit</td>
<td>$x$</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>east</td>
<td>$C$</td>
<td>2</td>
</tr>
<tr>
<td>C</td>
<td>east</td>
<td>$A$</td>
<td>1</td>
</tr>
<tr>
<td>C</td>
<td>east</td>
<td>$D$</td>
<td>3</td>
</tr>
<tr>
<td>$D$</td>
<td>exit</td>
<td>$x$</td>
<td>3</td>
</tr>
<tr>
<td>$E$</td>
<td>north</td>
<td>$C$</td>
<td>2</td>
</tr>
</tbody>
</table>
Recalling that $T(s, a, s') = P(s'|a, s)$, we can estimate the transition function with these counts by dividing the counts for each tuple $(s, a, s')$ by the total number of times we were in q-state $(s, a)$ and the reward function directly from the rewards we reaped during exploration:
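The count-and-normalize estimate of $\hat{T}$ can be sketched in a few lines, using the counts from the table above; states and actions are plain strings here, and all identifiers are illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <tuple>

// A (state, action, next-state) tuple used as the key for visit counts.
typedef std::tuple<std::string, std::string, std::string> Key;

// T_hat(s, a, s') = count(s, a, s') / sum over s'' of count(s, a, s'').
double t_hat(const std::map<Key, int>& counts,
             const std::string& s, const std::string& a,
             const std::string& s2) {
    int total = 0;  // total number of times q-state (s, a) was visited
    for (const auto& kv : counts)
        if (std::get<0>(kv.first) == s && std::get<1>(kv.first) == a)
            total += kv.second;
    auto hit = counts.find(std::make_tuple(s, a, s2));
    int n = (hit == counts.end()) ? 0 : hit->second;
    return total > 0 ? (double)n / total : 0.0;
}
```

With the counts from the table, this normalization reproduces the $\hat{T}$ values listed in this section.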
**Transition Function:** \( \hat{T}(s,a,s') \)
- \( \hat{T}(A, \text{exit}, x) = \frac{\#(A, \text{exit}, x)}{\#(A, \text{exit})} = \frac{1}{1} = 1 \)
- \( \hat{T}(B, \text{east}, C) = \frac{\#(B, \text{east}, C)}{\#(B, \text{east})} = \frac{2}{2} = 1 \)
- \( \hat{T}(C, \text{east}, A) = \frac{\#(C, \text{east}, A)}{\#(C, \text{east})} = \frac{1}{4} = 0.25 \)
- \( \hat{T}(C, \text{east}, D) = \frac{\#(C, \text{east}, D)}{\#(C, \text{east})} = \frac{3}{4} = 0.75 \)
- \( \hat{T}(D, \text{exit}, x) = \frac{\#(D, \text{exit}, x)}{\#(D, \text{exit})} = \frac{3}{3} = 1 \)
- \( \hat{T}(E, \text{north}, C) = \frac{\#(E, \text{north}, C)}{\#(E, \text{north})} = \frac{2}{2} = 1 \)
**Reward Function:** \( \hat{R}(s,a,s') \)
- \( \hat{R}(A, \text{exit}, x) = -10 \)
- \( \hat{R}(B, \text{east}, C) = -1 \)
- \( \hat{R}(C, \text{east}, A) = -1 \)
- \( \hat{R}(C, \text{east}, D) = -1 \)
- \( \hat{R}(D, \text{exit}, x) = +10 \)
- \( \hat{R}(E, \text{north}, C) = -1 \)
By the law of large numbers, as we collect more and more samples by having our agent experience more episodes, our models of \( \hat{T} \) and \( \hat{R} \) will improve, with \( \hat{T} \) converging towards \( T \) and \( \hat{R} \) acquiring knowledge of previously undiscovered rewards as we discover new \((s,a,s')\) tuples. Whenever we see fit, we can end our agent's training to generate a policy \( \pi_{\text{exploit}} \) by running value or policy iteration with our current models for \( \hat{T} \) and \( \hat{R} \), and use \( \pi_{\text{exploit}} \) for exploitation, having our agent traverse the MDP taking actions seeking reward maximization rather than seeking learning. We'll soon discuss methods for how to allocate time between exploration and exploitation effectively. Model-based learning is very simple and intuitive yet remarkably effective, generating \( \hat{T} \) and \( \hat{R} \) with nothing more than counting and normalization. The first two model-free learning techniques covered below, direct evaluation and temporal difference learning, fall under a class of algorithms known as passive reinforcement learning. In passive reinforcement learning, an agent is given a policy to follow and learns the value of states under that policy as it experiences episodes, which is exactly what is done by policy evaluation for MDPs when \( T \) and \( R \) are known. Q-learning falls under a second class of model-free learning algorithms known as active reinforcement learning, during which the learning agent can use the feedback it receives to iteratively update its policy while learning until eventually determining the optimal policy after sufficient exploration.
### Direct Evaluation
The first passive reinforcement learning technique we'll cover is known as direct evaluation, a method that's as boring and simple as the name makes it sound. All direct evaluation does is fix some policy \( \pi \) and have the agent that's learning experience several episodes while following \( \pi \). As the agent collects samples through these episodes it maintains counts of the total utility obtained from each state and the number of times it visited each state. At any point, we can compute the estimated value of any state \( s \) by dividing the total utility obtained from \( s \) by the number of times \( s \) was visited. Let's run direct evaluation on our example from earlier, recalling that \( \gamma = 1 \).
Walking through the first episode, we can see that from state $D$ to termination we acquired a total reward of 10, from state $C$ we acquired a total reward of $(-1) + 10 = 9$, and from state $B$ we acquired a total reward of $(-1) + (-1) + 10 = 8$. Completing this process yields the total reward across episodes for each state and the resulting estimated values as follows:
<table>
<thead>
<tr>
<th>s</th>
<th>Total Reward</th>
<th>Times Visited</th>
<th>$V^\pi(s)$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$A$</td>
<td>$-10$</td>
<td>1</td>
<td>$-10$</td>
</tr>
<tr>
<td>$B$</td>
<td>16</td>
<td>2</td>
<td>8</td>
</tr>
<tr>
<td>$C$</td>
<td>16</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>$D$</td>
<td>30</td>
<td>3</td>
<td>10</td>
</tr>
<tr>
<td>$E$</td>
<td>$-4$</td>
<td>2</td>
<td>$-2$</td>
</tr>
</tbody>
</table>
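The values in the table above can be reproduced by a short sketch of direct evaluation; the four episodes are reconstructed from the counts and rewards listed earlier, and all identifiers are illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One step of an episode: the state occupied and the reward received on
// leaving it.
struct Step {
    std::string s;
    double r;
};

// Direct evaluation with gamma = 1: for every visit of a state, accumulate
// the total reward obtained from that visit to the end of the episode, then
// divide by the number of visits.
std::map<std::string, double>
direct_eval(const std::vector<std::vector<Step> >& episodes) {
    std::map<std::string, double> total;
    std::map<std::string, int> visits;
    for (const auto& ep : episodes) {
        double to_go = 0.0;
        // Walk the episode backwards, so to_go is the reward-to-go.
        for (auto it = ep.rbegin(); it != ep.rend(); ++it) {
            to_go += it->r;
            total[it->s] += to_go;
            visits[it->s] += 1;
        }
    }
    for (auto& kv : total) kv.second /= visits[kv.first];
    return total;
}
```

Feeding in the four reconstructed episodes yields exactly the estimates in the table: $V^\pi(A) = -10$, $V^\pi(B) = 8$, $V^\pi(C) = 4$, $V^\pi(D) = 10$ and $V^\pi(E) = -2$.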
Though direct evaluation eventually learns state values for each state, it’s often unnecessarily slow to converge because it wastes information about transitions between states.
In our example, we computed $V^\pi(E) = -2$ and $V^\pi(B) = 8$, though based on the feedback we received both states only have $C$ as a successor state and incur the same reward of $-1$ when transitioning to $C$. According to the Bellman equation, this means that both $B$ and $E$ should have the same value under $\pi$. However, of the 4 times our agent was in state $C$, it transitioned to $D$ and reaped a reward of 10 three times and transitioned to $A$ and reaped a reward of $-10$ once. It was purely by chance that the single time it received the $-10$ reward it started in state $E$ rather than $B$, but this severely skewed the estimated value for $E$. With enough episodes, the values for $B$ and $E$ will converge to their true values, but cases like this cause the process to take longer than we’d like. This issue can be mitigated by choosing to use our second passive reinforcement learning algorithm, temporal difference learning.
Temporal Difference Learning
Temporal difference learning (TD learning) uses the idea of learning from every experience, rather than simply keeping track of total rewards and number of times states are visited and learning at the end as direct evaluation does. In policy evaluation, we used the system of equations generated by our fixed policy and the Bellman equation to determine the values of states under that policy (or used iterative updates like with value iteration).
\[ V^\pi(s) = \sum_{s'} T(s, \pi(s), s')[R(s, \pi(s), s') + \gamma V^\pi(s')] \]
Each of these equations equates the value of one state to the weighted average over the discounted values of that state’s successors plus the rewards reaped in transitioning to them. TD learning tries to answer the question of how to compute this weighted average without the weights, cleverly doing so with an exponential moving average. We begin by initializing \( \forall s, V^\pi(s) = 0 \). At each timestep, an agent takes an action \( \pi(s) \) from a state \( s \), transitions to a state \( s' \), and receives a reward \( R(s, \pi(s), s') \). We can obtain a sample value by summing the received reward with the discounted current value of \( s' \) under \( \pi \):
\[ \text{sample} = R(s, \pi(s), s') + \gamma V^\pi(s') \]
This sample is a new estimate for \( V^\pi(s) \). The next step is to incorporate this sampled estimate into our existing model for \( V^\pi(s) \) with the exponential moving average, which adheres to the following update rule:
\[ V^\pi(s) \leftarrow (1 - \alpha)V^\pi(s) + \alpha \cdot \text{sample} \]
Above, \( \alpha \) is a parameter constrained by \( 0 \leq \alpha \leq 1 \) known as the learning rate that specifies the weight we want to assign our existing model for \( V^\pi(s) \), \( 1 - \alpha \), and the weight we want to assign our new sampled estimate, \( \alpha \). It’s typical to start out with a learning rate of \( \alpha = 1 \), accordingly assigning \( V^\pi(s) \) to whatever the first sample happens to be, and to slowly shrink it towards 0, at which point all subsequent samples will be zeroed out and stop affecting our model of \( V^\pi(s) \).
Let’s stop and analyze the update rule for a minute. Annotating the state of our model at different points in time by defining \( V_k^\pi(s) \) and \( \text{sample}_k \) as the estimated value of state \( s \) after the \( k^{th} \) update and the \( k^{th} \) sample respectively, we can reexpress our update rule:
\[ V_k^\pi(s) \leftarrow (1 - \alpha)V_{k-1}^\pi(s) + \alpha \cdot \text{sample}_k \]
This recursive definition for \( V_k^\pi(s) \) happens to be very interesting to expand:
\[
\begin{align*}
V_k^\pi(s) & \leftarrow (1 - \alpha)V_{k-1}^\pi(s) + \alpha \cdot \text{sample}_k \\
V_k^\pi(s) & \leftarrow (1 - \alpha)[(1 - \alpha)V_{k-2}^\pi(s) + \alpha \cdot \text{sample}_{k-1}] + \alpha \cdot \text{sample}_k \\
V_k^\pi(s) & \leftarrow (1 - \alpha)^2V_{k-2}^\pi(s) + (1 - \alpha)\cdot \alpha \cdot \text{sample}_{k-1} + \alpha \cdot \text{sample}_k \\
& \quad \vdots \\
V_k^\pi(s) & \leftarrow (1 - \alpha)^kV_0^\pi(s) + \alpha \cdot [(1 - \alpha)^{k-1} \cdot \text{sample}_1 + \ldots + (1 - \alpha) \cdot \text{sample}_{k-1} + \text{sample}_k] \\
V_k^\pi(s) & \leftarrow \alpha \cdot [(1 - \alpha)^{k-1} \cdot \text{sample}_1 + \ldots + (1 - \alpha) \cdot \text{sample}_{k-1} + \text{sample}_k]
\end{align*}
\]
Because \( 0 \leq (1 - \alpha) \leq 1 \), as we raise the quantity \((1 - \alpha)\) to increasingly larger powers, it grows closer and closer to 0. By the update rule expansion we derived, this means that older samples are given exponentially less weight, exactly what we want since these older samples are computed using older (and hence worse) versions of our model for \( V^\pi(s) \)! This is the beauty of temporal difference learning - with a single straightforward update rule, we are able to:
• learn at every timestep, using information about state transitions as soon as we get them, since our samples are computed with iteratively updated versions of $V^\pi(s')$ rather than waiting until the end of the episode to perform any computation.
• give exponentially less weight to older, potentially less accurate samples.
• converge to learning true state values much faster with fewer episodes than direct evaluation.
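Concretely, the TD update loop can be sketched in a few lines of Python. This is an illustrative sketch, not code from these notes: the `td_evaluate` helper name and the `(s, r, s')` episode format are assumptions.

```python
def td_evaluate(episodes, gamma=1.0, alpha=0.5):
    """Estimate V^pi under a fixed policy pi from recorded experience.

    `episodes` is a list of episodes, each a sequence of (s, r, s')
    transitions generated by following pi. A fixed alpha is used here
    for simplicity; in practice alpha is decayed toward 0 over time.
    """
    V = {}  # states default to an initial value of 0
    for episode in episodes:
        for s, r, s_next in episode:
            # one-step sample: reward plus discounted value of successor
            sample = r + gamma * V.get(s_next, 0.0)
            # exponential moving average toward the new sample
            V[s] = (1 - alpha) * V.get(s, 0.0) + alpha * sample
    return V
```

Unlike direct evaluation, each transition immediately nudges the running estimate, so information about state transitions is used as soon as it arrives.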
Q-Learning
Both direct evaluation and TD learning will eventually learn the true value of all states under the policy they follow. However, they both have a major inherent issue - we want to find an optimal policy for our agent, which requires knowledge of the q-values of states. To compute q-values from the values we have, we require a transition function and reward function as dictated by the Bellman equation.
$$Q^*(s, a) = \sum_{s'} T(s, a, s')[R(s, a, s') + \gamma V^*(s')]$$
Consequently, TD learning and direct evaluation are typically used in tandem with some model-based learning to acquire estimates of $T$ and $R$ in order to effectively update the policy followed by the learning agent. This became avoidable with a revolutionary new idea known as Q-learning, which proposed learning the q-values of states directly, bypassing the need to ever know any values, transition functions, or reward functions. As a result, Q-learning is entirely model-free. Q-learning uses the following update rule to perform what’s known as q-value iteration:
$$Q_{k+1}(s, a) \leftarrow \sum_{s'} T(s, a, s')[R(s, a, s') + \gamma \max_{a'} Q_k(s', a')]$$
Note that this update is only a slight modification of the update rule for value iteration. Indeed, the only real difference is that the position of the max operator over actions has changed, since we select an action before transitioning when we’re in a state, but transition before selecting a new action when we’re in a q-state.
With this new update rule under our belt, Q-learning is derived essentially the same way as TD learning, by acquiring q-value samples:
$$\text{sample} = R(s, a, s') + \gamma \max_{a'} Q(s', a')$$
and incorporating them into an exponential moving average.
$$Q(s, a) \leftarrow (1 - \alpha)Q(s, a) + \alpha \cdot \text{sample}$$
As long as we spend enough time in exploration and decrease the learning rate $\alpha$ at an appropriate pace, Q-learning learns the optimal q-values for every q-state. This is what makes Q-learning so revolutionary - while TD learning and direct evaluation learn the values of states under a policy by following the policy before determining policy optimality via other techniques, Q-learning can learn the optimal policy directly even by taking suboptimal or random actions. This is called off-policy learning (contrary to direct evaluation and TD learning, which are examples of on-policy learning).
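The Q-learning update can be sketched in a few lines of Python. The dict-based q-table and the `actions(s)` helper (listing the legal actions in a state) are hypothetical conveniences, not part of the notes:

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=1.0):
    """Apply one Q-learning update from an observed transition (s, a, r, s').

    Q maps (state, action) pairs to values; unseen pairs default to 0.
    """
    # max over the successor state's actions -- no T or R needed
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions(s_next)),
                    default=0.0)
    sample = r + gamma * best_next
    # blend the sample into the running estimate (exponential moving average)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample
    return Q[(s, a)]
```

Note that the transition may come from any behavior, even a random one; the update still moves the q-table toward the optimal q-values, which is exactly what makes Q-learning off-policy.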
Approximate Q-Learning
Q-learning is an incredible learning technique that continues to sit at the center of developments in the field of reinforcement learning. Yet, it still has some room for improvement. As it stands, Q-learning stores all q-values in tabular form, which is not particularly efficient given that most applications of reinforcement learning have several thousands or even millions of states. This means we can’t visit all states during training, and even if we could, we couldn’t store all of their q-values for lack of memory.
Above, if Pacman learned that Figure 1 is unfavorable after running vanilla Q-learning, it would still have no idea that Figure 2 or even Figure 3 are unfavorable as well. Approximate Q-learning tries to account for this by learning about a few general situations and extrapolating to many similar situations. The key to generalizing learning experiences is the feature-based representation of states, which represents each state as a vector known as a feature vector. For example, a feature vector for Pacman may encode
- the distance to the closest ghost.
- the distance to the closest food pellet.
- the number of ghosts.
- whether Pacman is trapped (0 or 1).
With feature vectors, we can treat values of states and q-states as linear value functions:
\[
V(s) = w_1 \cdot f_1(s) + w_2 \cdot f_2(s) + \ldots + w_n \cdot f_n(s) = \vec{w} \cdot \vec{f}(s)
\]
\[
Q(s,a) = w_1 \cdot f_1(s,a) + w_2 \cdot f_2(s,a) + \ldots + w_n \cdot f_n(s,a) = \vec{w} \cdot \vec{f}(s,a)
\]
where \( \vec{f}(s) = [f_1(s) f_2(s) \ldots f_n(s)]^T \) and \( \vec{f}(s,a) = [f_1(s,a) f_2(s,a) \ldots f_n(s,a)]^T \) represent the feature vectors for state s and q-state \((s,a)\) respectively and \( \vec{w} = \begin{bmatrix} w_1 & w_2 & \ldots & w_n \end{bmatrix} \) represents a weight vector. Defining difference as
\[
\text{difference} = [R(s,a,s') + \gamma \max_{a'} Q(s',a')] - Q(s,a)
\]
approximate Q-learning works almost identically to Q-learning, using the following update rule:
\[
w_i \leftarrow w_i + \alpha \cdot \text{difference} \cdot f_i(s,a)
\]
Rather than storing Q-values for each and every state, with approximate Q-learning we only need to store a single weight vector and can compute Q-values on-demand as needed. As a result, this gives us not only a more generalized version of Q-learning, but a significantly more memory-efficient one as well.
As a final note on Q-learning, we can reexpress the update rule for exact Q-learning using difference as follows:
\[
Q(s,a) \leftarrow Q(s,a) + \alpha \cdot \text{difference}
\]
This second notation gives us a slightly different but equally valuable interpretation of the update: it's computing the difference between the sampled estimate and the current model of $Q(s,a)$, and shifting the model in the direction of the estimate, with the magnitude of the shift proportional to the magnitude of the difference.
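The weight update above is also easy to sketch. Here `f_sa` stands for the feature vector $\vec{f}(s,a)$ and `best_next_q` stands in for $\max_{a'} Q(s',a')$; these names and the helper structure are illustrative assumptions:

```python
def approx_q(w, f):
    """Q(s,a) = w . f(s,a): dot product of weights and features."""
    return sum(wi * fi for wi, fi in zip(w, f))

def approx_q_update(w, f_sa, r, best_next_q, alpha=0.5, gamma=1.0):
    """Update the weight vector in place from one observed transition,
    using difference = [r + gamma * max_a' Q(s',a')] - Q(s,a)."""
    difference = (r + gamma * best_next_q) - approx_q(w, f_sa)
    for i, fi in enumerate(f_sa):
        # each weight shifts proportionally to its feature's activation
        w[i] += alpha * difference * fi
    return difference
```

Only the weight vector is stored; q-values for any state, visited or not, are computed on demand from its features.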
**Exploration and Exploitation**
We’ve now covered several different methods for an agent to learn an optimal policy, and harped on the fact that "sufficient exploration" is necessary for this without elaborating on what’s really meant by "sufficient". In the upcoming two sections, we’ll discuss two methods for distributing time between exploration and exploitation: $\varepsilon$-greedy policies and exploration functions.
### $\varepsilon$-Greedy Policies
Agents following an $\varepsilon$-greedy policy define some probability $0 \leq \varepsilon \leq 1$, and act randomly and explore with probability $\varepsilon$. Accordingly, they follow their current established policy and exploit with probability $(1 - \varepsilon)$. This is a very simple policy to implement, yet can still be quite difficult to handle. If a large value for $\varepsilon$ is selected, then even after learning the optimal policy, the agent will still behave mostly randomly. Similarly, selecting a small value for $\varepsilon$ means the agent will explore infrequently, leading Q-learning (or any other selected learning algorithm) to learn the optimal policy very slowly. To get around this, $\varepsilon$ must be manually tuned and lowered over time to see results.
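An $\varepsilon$-greedy action selector takes only a few lines. As before, the dict-based `Q` table and the `actions(s)` helper are assumed conveniences:

```python
import random

def epsilon_greedy(s, Q, actions, epsilon):
    """With probability epsilon act randomly (explore); otherwise take
    the action with the highest current q-value (exploit)."""
    if random.random() < epsilon:
        return random.choice(actions(s))
    return max(actions(s), key=lambda a: Q.get((s, a), 0.0))
```

Tuning then amounts to shrinking `epsilon` across episodes, e.g. multiplying it by a decay factor after each one.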
### Exploration Functions
This issue of manually tuning $\varepsilon$ is avoided by exploration functions, which use a modified q-value iteration update to give some preference to visiting less-visited states. The modified update is as follows:
$$Q(s,a) \leftarrow (1 - \alpha)Q(s,a) + \alpha \cdot [R(s,a,s') + \gamma \max_{a'} f(s',a')]$$
where $f$ denotes an exploration function. There exists some degree of flexibility in designing an exploration function, but a common choice is to use
$$f(s,a) = Q(s,a) + \frac{k}{N(s,a)}$$
with $k$ being some predetermined value, and $N(s,a)$ denoting the number of times q-state $(s,a)$ has been visited. Agents in a state $s$ always select the action with the highest $f(s,a)$, and hence never have to make a probabilistic decision between exploration and exploitation. Instead, exploration is automatically encoded by the exploration function, since the term $\frac{k}{N(s,a)}$ can give enough of a "bonus" to some infrequently-taken action that it is selected over actions with higher q-values. As time goes on and states are visited more frequently, this bonus decreases towards 0 for each q-state and $f(s,a)$ regresses towards $Q(s,a)$, making exploitation more and more dominant.
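Action selection with this exploration function can be sketched as below. One caveat: the formula leaves the unvisited case $N(s,a) = 0$ unspecified, so treating an unvisited q-state as maximally attractive here is an assumption on our part:

```python
def f_explore(Q, N, s, a, k=1.0):
    """f(s,a) = Q(s,a) + k / N(s,a).

    N maps (state, action) pairs to visit counts. An unvisited q-state
    is treated as maximally attractive (an assumed convention), so every
    action gets tried at least once.
    """
    n = N.get((s, a), 0)
    if n == 0:
        return float("inf")
    return Q.get((s, a), 0.0) + k / n

def pick_action(Q, N, s, actions, k=1.0):
    """Deterministically pick the action maximizing f(s,a)."""
    return max(actions(s), key=lambda a: f_explore(Q, N, s, a, k))
```

As visit counts grow, the bonus term vanishes and `pick_action` converges to plain greedy selection on `Q`.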
Summary
It’s very important to remember that reinforcement learning has an underlying MDP, and the goal of reinforcement learning is to solve this MDP by deriving an optimal policy. The difference between using reinforcement learning and using methods like value iteration and policy iteration is the lack of knowledge of the transition function $T$ and the reward function $R$ for the underlying MDP. As a result, agents must learn the optimal policy through online trial-by-error rather than pure offline computation. There are many ways to do this:
- Model-based learning - Runs computation to estimate the values of the transition function $T$ and the reward function $R$ and uses MDP-solving methods like value or policy iteration with these estimates.
- Model-free learning - Avoids estimation of $T$ and $R$, instead using other methods to directly estimate the values or q-values of states.
- Direct evaluation - follows a policy $\pi$ and simply counts total rewards reaped from each state and the total number of times each state is visited. If enough samples are taken, this converges to the true values of states under $\pi$, albeit slowly and while wasting information about the transitions between states.
- Temporal difference learning - follows a policy $\pi$ and uses an exponential moving average with sampled values until convergence to the true values of states under $\pi$. TD learning and direct evaluation are examples of on-policy learning, which learn the values for a specific policy before deciding whether that policy is suboptimal and needs to be updated.
- Q-Learning - learns the optimal policy directly through trial and error with q-value iteration updates. This is an example of off-policy learning, which learns an optimal policy even when taking suboptimal actions.
- Approximate Q-Learning - does the same thing as Q-learning but uses a feature-based representation for states to generalize learning.
Lean & Agile Project Management for Large Programs & Projects
Dr. David F. Rico, PMP, CSM
Website: http://davidfrico.com
LinkedIn: http://www.linkedin.com/in/davidfrico
Facebook: http://www.facebook.com/profile.php?id=1540017424
Agenda
🌟 Overview of Agile Project Mgt.
- Intro to Agile Project Mgt.
- Types of Agile Project Mgt.
- Phases of Agile Project Mgt.
- Scaling Agile Project Mgt.
- Metrics for Agile Project Mgt.
- Cases of Agile Project Mgt.
- Summary of Agile Project Mgt.
Author
- DoD contractor with 27+ years of IT experience
- Large gov’t projects in U.S., Far/Mid-East, & Europe
- Published six books & numerous journal articles
- Adjunct at George Washington, UMUC, & Argosy
- Agile Program Management & Lean Development
- Expertise in metrics, models, & cost engineering
- Six Sigma, CMMI, ISO 9001, DoDAF & DoD 5000
What is Agility?
- **A-gil-i-ty** (ə-ˈji-lə-tē): Quickness, lightness, and ease of movement; To be very nimble:
- The ability to create and respond to change in order to profit in a turbulent global business environment
- The ability to quickly reprioritize use of resources when requirements, technology, and knowledge shift
- A very fast response to sudden market changes and emerging threats by intensive customer interaction
- Use of evolutionary, incremental, and iterative delivery to converge on an optimal customer solution
- Maximizing the business value with right-sized, just-enough, and just-in-time processes and documentation
What are Agile Methods?
- **Adaptable** system development methodologies
- **Human-centric** method for creating business value
- **Alternative** to large document-based methodologies
### Agile Methods ‘Values’
<table>
<thead>
<tr>
<th>Agile Methods ‘Values’</th>
<th>Agile Methods ‘Principles’</th>
<th>Traditional Methods ‘Values’</th>
</tr>
</thead>
<tbody>
<tr>
<td>Customer Collaboration</td>
<td>Customer Interaction</td>
<td>Contract Negotiation</td>
</tr>
<tr>
<td>Individuals & Interactions</td>
<td>High-Performance Teams</td>
<td>Processes & Tools</td>
</tr>
<tr>
<td>Working System</td>
<td>Iterative Development</td>
<td>Comprehensive Documentation</td>
</tr>
<tr>
<td>Responding to Change</td>
<td>Adaptability or Flexibility</td>
<td>Following a Plan</td>
</tr>
</tbody>
</table>
How do Lean & Agile Intersect?
- Lean thinking provides the **what** (requirements)
- Agile thinking provides the **how** (implementation)
- Agile Methods are lean, light, adaptable, and flexible
<table>
<thead>
<tr>
<th>Agile Pillars</th>
<th>Agile Principles</th>
<th>Lean Pillars</th>
<th>Lean Principles</th>
<th>Other Principles</th>
</tr>
</thead>
<tbody>
<tr>
<td>Customer collaboration</td>
<td>Intensive customer collaboration and interaction</td>
<td>Respect for people</td>
<td>Customer defines value</td>
<td>Economic view</td>
</tr>
<tr>
<td>Individuals and interactions</td>
<td>Small empowered high-performance multi-disciplinary teams</td>
<td></td>
<td>Customer pulls value</td>
<td>Fast feedback</td>
</tr>
<tr>
<td>Working systems and software</td>
<td>Iterative development of working operational systems and software</td>
<td>Continuous improvement</td>
<td>Continuous flow</td>
<td>Reduce batch size</td>
</tr>
<tr>
<td>Responding to change</td>
<td>Responding to change with flexible culture, process, and product</td>
<td></td>
<td>Continuous improvement</td>
<td>Control cadence</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Map value stream (eliminate waste)</td>
<td>Manage queue size</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Exploit variability</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Manage work-in-process</td>
</tr>
</tbody>
</table>
Essence of Agile Methods
- High degree of customer & developer interaction
- Highly-skilled teams producing frequent iterations
- Right-sized, just-enough, and just-in-time process
When to use Agile Methods
- On exploratory or research/development projects
- When fast customer responsiveness is paramount
- In organizations that are highly-innovative & creative
Agenda
Overview of Agile Project Mgt.
Intro to Agile Project Mgt.
Types of Agile Project Mgt.
Phases of Agile Project Mgt.
Scaling Agile Project Mgt.
Metrics for Agile Project Mgt.
Cases of Agile Project Mgt.
Summary of Agile Project Mgt.
“Agility” has many dimensions other than software.
Ranges from organizational to technological agility.
The focus of this brief is project management agility.
Today’s Environment
- Highly-unstable global and domestic markets
- Technology is evolving at an exponential speed
- Project plans cannot cope with this level of volatility
Need for a New Model
- Need for a **new model** of project management
- Cope with high-level of **uncertainty** and **ambiguity**
- With just the right balance of **flexibility** and **discipline**
Agile Project Management
- **APM** (ā-pē-ēm): Lightweight, flexible, adaptive, and collaborative; To be market or customer-responsive:
- Rapidly and reliably creating value by engaging customers, continuously learning, and adapting
- Sound, yet flexible process to manage projects under uncertainty, urgency, and a need for unique expertise
- Managing the flow of human thoughts, emotions, and interactions in a way that produces business value
- Values, principles, and practices to help project teams in coming to grips with a challenging environment
Values of APM
- Agile Manifesto (2001) focuses on collaboration
- DOI (2005) focuses on creating business value
- APM Values (2010) focus on all-around agility
**Agile Manifesto**
- Individuals and interactions
- Working software
- Customer collaboration
- Responding to change
**Declaration of Interdependence**
- Increase return on investment
- Deliver reliable results
- Expect uncertainty
- Unleash creativity and innovation
- Boost performance
- Improve effectiveness and reliability
**APM Values**
- Delivering value over meeting constraints
- Leading the team over managing tasks
- Adapting to change over conforming to plans
Agenda
Overview of Agile Project Mgt.
Intro to Agile Project Mgt.
» Types of Agile Project Mgt.
Phases of Agile Project Mgt.
Scaling Agile Project Mgt.
Metrics for Agile Project Mgt.
Cases of Agile Project Mgt.
Summary of Agile Project Mgt.
Scrum Project Management
- Created by Jeff Sutherland at Easel in 1993
- Product backlog comprised of customer needs
- Barely-sufficient project management framework
XP Project Management
- Created by Kent Beck at Chrysler in 1998
- Release plan is comprised of customer needs
- Lightweight, rigorous near-term planning element
**Release Planning**
- **Exploration Phase**
- Build a Team
- Write User Stories
- Estimate User Stories
- Split User Stories
- Spike User Stories
- Write User Tests
- **Commitment Phase**
- Sort by Value
- Sort by Risk
- Set Velocity
- Choose a Scope
- Set Iteration Length
- Develop Release Plan
- Accept Tasks
- Set Individual Velocity
- Estimate Tasks
- Analyze Schedules
- Set Load Factors
- Balance Tasks
- New Release Plan
- Select Tools
- Adjust Teams
- **Steering Phase**
- Select Iteration
- Adjust Velocity
- Insert New Stories
- Select Partner
- Write Unit Tests
- Design and Code
- Unit/Integration Test
- User Acceptance Test
- Record Progress
Flexible Project Management
- Created by Doug DeCarlo at Cutter in 2004
- Focus is on collaboration, scoping, and speed
- Thinner traditional project management approach
Adaptive Project Management
- Created by Sanjiv Augustine at CC Pace in 2005
- Builds agile cultures, mind-sets, and environments
- Leadership model for managing agile project teams
Agile Project Management
- Created by Jim Highsmith at Cutter in 2003
- Focus on strategic plans and capability analysis
- Most holistic agile project management framework
Innovation Lifecycle
- **Envision**
- Product Vision
- Product Architecture
- Project Objectives
- Project Community
- Delivery Approach
- **Speculate**
- Gather Requirements
- Product Backlog
- Release Planning
- Risk Planning
- Cost Estimation
- **Explore**
- Iteration Management
- Technical Practices
- Team Development
- Team Decisions
- Collaboration
- **Launch**
- Final Review
- Final Acceptance
- Final QA
- Final Documentation
- Final Deployment
- **Close**
- Clean Up Open Items
- Support Material
- Final Retrospective
- Final Reports
- Project Celebration
Iterative Delivery
- **Technical Planning**
- Story Analysis
- Task Development
- Task Estimation
- Task Splitting
- Task Planning
- **Development, Test, and Evaluation**
- Development Pairing
- Unit Test Development
- Simple Designs
- Coding and Refactoring
- Unit and Component Testing
- **Operational Testing**
- Integration Testing
- System Testing
- Operational Testing
- Usability Testing
- Acceptance Testing
- **Adapt**
- Focus Groups
- Technical Reviews
- Team Evaluations
- Project Reporting
- Adaptive Action
- **Continuous**
- Standups, Architecture, Design, Build, Integration, Documentation, Change, Migration, and Integration
- **Story Deployment**
Agenda
Overview of Agile Project Mgt.
Intro to Agile Project Mgt.
Types of Agile Project Mgt.
Phases of Agile Project Mgt.
Scaling Agile Project Mgt.
Metrics for Agile Project Mgt.
Cases of Agile Project Mgt.
Summary of Agile Project Mgt.
Envision Phase
- Determine product vision and project objectives
- Identifies project community and project team
- The major output is a “Product Vision Box”
Diagram:
- **Delivery Approach**
- Self-Organization Strategy
- Collaboration Strategy
- Communication Strategy
- Process Framework Tailoring
- Practice Selection and Tailoring
- **Product Vision**
- Product Vision Box
- Elevator Test Statement
- Product Roadmap
- Product Features
- Product Vision Document
- **Product Architecture**
- Product Skeleton Architecture
- Hardware Feature Breakdown
- Software Feature Breakdown
- Organizational Structure
- Guiding Principles
- **Project Community**
- Get the Right People
- Participant Identification
- Types of Stakeholders
- List of Stakeholders
- Customer-Developer Interaction
- **Project Objectives**
- Project Data Sheet
- Key Business Objectives
- Tradeoff Matrix
- Exploration Factor
- Requirements Variability
---
Speculate Phase
- Determine organizational capability/mission needs
- Identifies feature-sets and system requirements
- The major output is a “System Release Plan”
Explore Phase
- Determine technical iteration objectives/approaches
- Identifies technical tasks and technical practices
- The major output is an “Operational Element”
Adapt Phase
- Determine the effectiveness of operational elements
- Identifies customer feedback and corrective actions
- The major output is a “Process Improvement Plan”
Close Phase
- Determine project outcome and effectiveness
- Identifies strengths, weaknesses, and rewards
- The major output is a “Lessons-Learned Report”
Overview of Agile Project Mgt.
Intro to Agile Project Mgt.
Types of Agile Project Mgt.
Phases of Agile Project Mgt.
Scaling Agile Project Mgt.
Metrics for Agile Project Mgt.
Cases of Agile Project Mgt.
Summary of Agile Project Mgt.
Multi-Level Teams
- Enables projects to plan for the future and present
- Decomposes capabilities into implementable pieces
- Unclogs the drainpipes to let the execution flow freely
Multi-Level Planning
- Enables multiple level enterprise plans to co-exist
- Allows stakeholders to build viewpoint-specific plans
- Ensures capabilities are delivered at regular intervals
Multi-Level Backlog
- Enables multiple levels of abstraction to co-exist
- Allows customers and developers to communicate
- Makes optimum use of people’s time and resources
**Capability**
- Mission goal or objective level
- High-level business or product function
- Also called an Epic, i.e., multiple feature sets
- Comprises 18-90 days worth of work
**Feature Set**
- Cross-functional mission threads
- Related user stories that are grouped together
- Also called a Theme, i.e., implemented as an entity
- Comprises 6 to 30 days worth of work
**User Story**
- Functional, system-level requirements
- Simple requirement written by customer or user
- A small unit of functionality having business value
- Comprises 2 to 10 days worth of work
Multi-Level Coordination
- Enables lean and agile methods to scale-up
- Allows enterprises to create large-scale programs
- Unleashes optimum productivity and overall control
Multi-Level Governance
- Enables enterprises to achieve functional needs
- Allows programs to coordinate functional activities
- Ensures optimal technical performance is achieved
Agenda
Overview of Agile Project Mgt.
Intro to Agile Project Mgt.
Types of Agile Project Mgt.
Phases of Agile Project Mgt.
Scaling Agile Project Mgt.
Metrics for Agile Project Mgt.
Cases of Agile Project Mgt.
Summary of Agile Project Mgt.
Basic Agile Metrics
- Agile methods are based on traditional measures
- Size, effort, and velocity metrics are most common
- Top-notch shops use complexity and testing metrics
<table>
<thead>
<tr>
<th>Type</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Size</td>
<td>Story, Story Point, Task, Function Point, LOC, etc.</td>
</tr>
<tr>
<td>Effort</td>
<td>Ideal or Actual Hours, Days, Weeks, Months, Years, etc.</td>
</tr>
<tr>
<td>Velocity</td>
<td>Story, Story Points, Function Points, or LOC per Iteration/Sprint</td>
</tr>
<tr>
<td>Complexity</td>
<td>McCabe, Halstead, Object-Oriented, Relational Database, etc.</td>
</tr>
<tr>
<td>Quality</td>
<td>Defect Density, Defect Removal Efficiency, Rayleigh, etc.</td>
</tr>
<tr>
<td>Testing</td>
<td>Tests Passed/Failed/Broken, Running Tested Features, etc.</td>
</tr>
<tr>
<td>Reliability</td>
<td>Mean Time to Failure, Mean Time between Failure, etc.</td>
</tr>
</tbody>
</table>
Burndown/Burnup Metrics
- Time expended is used for project tracking
- Tracked on a per-iteration or per-sprint basis
- Often described as a basic earned-value metric
<table>
<thead>
<tr>
<th>Type</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ideal Days</td>
<td>How many days something takes without interruptions</td>
</tr>
<tr>
<td>Actual Days</td>
<td>How many days something takes with interruptions</td>
</tr>
<tr>
<td>Ideal Hours</td>
<td>How many hours something takes without interruptions</td>
</tr>
<tr>
<td>Actual Hours</td>
<td>How many hours something takes with interruptions</td>
</tr>
<tr>
<td>User Stories</td>
<td>How many customer requirements have been satisfied</td>
</tr>
<tr>
<td>Story Points</td>
<td>How many units of software size have been satisfied</td>
</tr>
<tr>
<td>Technical Tasks</td>
<td>How many technical tasks have been completed</td>
</tr>
</tbody>
</table>
## Agile Cost Models
- **Costs** based on **productivity** and **quality** models
- Development costs based on $\text{LOC} \div \text{productivity rate}$
- Maintenance costs based on $\text{defects} \times 100 \times \text{KLOC} \times \text{MH}$
<table>
<thead>
<tr>
<th>Type</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Basic Form</td>
<td>$(\text{LOC} \div \text{Productivity} + \text{Quality} \times 100) \times \text{Hourly Rate}$</td>
</tr>
<tr>
<td>XP</td>
<td>$(\text{LOC} \div 16.1575 + 0.7466 \times 100) \times \text{Hourly Rate}$</td>
</tr>
<tr>
<td>TDD</td>
<td>$(\text{LOC} \div 29.2800 + 2.1550 \times 100) \times \text{Hourly Rate}$</td>
</tr>
<tr>
<td>PP</td>
<td>$(\text{LOC} \div 33.4044 + 2.3550 \times 100) \times \text{Hourly Rate}$</td>
</tr>
<tr>
<td>Scrum</td>
<td>$(\text{LOC} \div 05.4436 + 3.9450 \times 100) \times \text{Hourly Rate}$</td>
</tr>
<tr>
<td>Agile</td>
<td>$(\text{LOC} \div 21.2374 + 1.7972 \times 100) \times \text{Hourly Rate}$</td>
</tr>
</tbody>
</table>
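As a sketch, the Basic Form row reads as a one-line function. The parameter names below are my reading of the slide's terms (productivity as LOC per hour, quality as a defect-density figure), not Rico's exact model:

```python
def agile_dev_cost(loc, productivity, defect_density, hourly_rate):
    """Basic Form from the table:
    (LOC / Productivity + Quality * 100) * Hourly Rate."""
    return (loc / productivity + defect_density * 100) * hourly_rate
```

Substituting a method's row, e.g. XP's productivity 16.1575 and quality 0.7466, yields that method's estimated cost for a given LOC and rate.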
A major principle of Agile Methods is creating value. ROI is the measure of value within Agile Methods. There are seven closely related ROI measures:
<table>
<thead>
<tr>
<th>Type</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Costs</td>
<td>Total amount of money spent on agile methods</td>
</tr>
<tr>
<td>Benefits</td>
<td>Total amount of money gained from using agile methods</td>
</tr>
<tr>
<td>Breakeven</td>
<td>Point when the benefits of using agile methods exceed the costs</td>
</tr>
<tr>
<td>B/CR</td>
<td>Ratio of agile methods benefits to costs of using agile methods</td>
</tr>
<tr>
<td>ROI</td>
<td>Ratio of adjusted agile methods benefits to costs of using them</td>
</tr>
<tr>
<td>NPV</td>
<td>Present value of agile methods benefits that result from their use</td>
</tr>
<tr>
<td>Real Options</td>
<td>Value gained from incremental investments in high-risk projects</td>
</tr>
</tbody>
</table>
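A hedged sketch of how the core measures above relate arithmetically. B/CR, ROI, and NPV follow their standard financial definitions; the discount rate and the example cash flows are invented for illustration, not taken from the slides.

```python
# Hedged sketch of the core ROI measures above. B/CR, ROI, and NPV follow
# their standard financial definitions; the example figures are invented.

def bcr(benefits: float, costs: float) -> float:
    """Benefit/cost ratio (B/CR)."""
    return benefits / costs

def roi_pct(benefits: float, costs: float) -> float:
    """ROI: net (adjusted) benefits over costs, as a percentage."""
    return (benefits - costs) / costs * 100

def npv(benefit_per_year: float, years: int, rate: float, costs: float) -> float:
    """Present value of future benefits minus up-front costs."""
    pv = sum(benefit_per_year / (1 + rate) ** t for t in range(1, years + 1))
    return pv - costs

# e.g. $100k spent on adoption, $60k/year of benefits for 3 years at 10%
print(bcr(180_000, 100_000))       # 1.8
print(roi_pct(180_000, 100_000))   # 80.0
print(round(npv(60_000, 3, 0.10, 100_000), 2))
```

Breakeven is simply the point in time where cumulative discounted benefits first exceed costs, i.e. where the NPV turns positive.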
## Agile EVM
- EVM has been adapted to Agile Methods
- EVM based on notion that total scope is known
- EVM is “not” well-suited for large-scale agile projects
<table>
<thead>
<tr>
<th>Measure</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>PMB</td>
<td>Total number of story points planned for a release</td>
</tr>
<tr>
<td>SBL</td>
<td>Total number of iterations multiplied by iteration length</td>
</tr>
<tr>
<td>BAC</td>
<td>The planned budget for the release</td>
</tr>
<tr>
<td>PPC</td>
<td>Number of current iterations divided by planned iterations</td>
</tr>
<tr>
<td>APC</td>
<td>Total story points completed divided by story points planned</td>
</tr>
<tr>
<td>SPC</td>
<td>Story points of work completed from backlog during iteration</td>
</tr>
<tr>
<td>SPA</td>
<td>Story points added/subtracted from backlog during iteration</td>
</tr>
</tbody>
</table>
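One common way to compute schedule and cost indices from the terms in the table follows the AgileEVM formulation, where EV = APC × BAC and PV = PPC × BAC. That derivation is an assumption here, since the slide only defines the inputs; all example values are hypothetical.

```python
# Minimal sketch of the agile EVM terms in the table. The derived values
# follow the common AgileEVM formulation (EV = APC * BAC, PV = PPC * BAC),
# which is an assumption here; all input values below are hypothetical.

def agile_evm(pmb_points: int, done_points: int, bac: float,
              current_iter: int, planned_iters: int, actual_cost: float) -> dict:
    ppc = current_iter / planned_iters   # Planned Percent Complete
    apc = done_points / pmb_points       # Actual Percent Complete
    pv = bac * ppc                       # Planned Value
    ev = bac * apc                       # Earned Value
    return {"PPC": ppc, "APC": apc, "SPI": ev / pv, "CPI": ev / actual_cost}

# e.g. a 200-point release (PMB) with a $400k budget (BAC) and 10 planned
# iterations: after 4 iterations, 90 points are done for $150k spent
m = agile_evm(200, 90, 400_000, 4, 10, 150_000)
print(m["SPI"], m["CPI"])  # >1 means ahead of schedule / under budget
```

Because scope is tracked in story points rather than tasks, backlog churn (SPA in the table) changes PMB between iterations, which is one reason classic EVM's fixed-scope assumption fits agile projects poorly.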
## Overview of Agile Project Mgt.
- Intro to Agile Project Mgt.
- Types of Agile Project Mgt.
- Phases of Agile Project Mgt.
- Scaling Agile Project Mgt.
- Metrics for Agile Project Mgt.
- Cases of Agile Project Mgt.
- Summary of Agile Project Mgt.
## E-Commerce—Google
- Google started using agile methods in 2005
- Used it on one of their most profitable products
- Incrementally adopted agile one practice at a time
<table>
<thead>
<tr>
<th>Project Name</th>
<th>AdWords</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Type</td>
<td>Pay-per-Click (PPC) Internet Advertising Mechanism</td>
</tr>
<tr>
<td>Project Size</td>
<td>20 teams of 140 people distributed over 5 countries</td>
</tr>
<tr>
<td>Product Size</td>
<td>1,838 user stories, 6,250 function points, 500,000 lines of code</td>
</tr>
<tr>
<td>Environment</td>
<td>Entrepreneurial, egalitarian, dynamic, unpredictable, informal, unstructured</td>
</tr>
<tr>
<td>Before APM</td>
<td>Chronic schedule delays, poor quality, unpredictability, poor estimation</td>
</tr>
<tr>
<td>APM Practices</td>
<td>Release planning, wikis for APM support, early testing and continuous integration</td>
</tr>
<tr>
<td>After APM</td>
<td>Better planning and estimates, earlier testing, better quality, large-scale adoption</td>
</tr>
<tr>
<td>Lessons Learned</td>
<td>Agile fit like a hand-in-glove, introduce agile methods slowly and then scale-up</td>
</tr>
</tbody>
</table>
## Shrink-Wrapped—Primavera
- Primavera started using agile methods in 2004
- Used it on their flagship project management tools
- Adopted agile all-at-once with top-down mgt. support
<table>
<thead>
<tr>
<th>Project Name</th>
<th>Primavera</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Type</td>
<td>Enterprise Project Management Tool</td>
</tr>
<tr>
<td>Project Size</td>
<td>15 teams of 90 people collocated at one site</td>
</tr>
<tr>
<td>Product Size</td>
<td>26,809 user stories, 91,146 function points, 7,291,666 lines of code</td>
</tr>
<tr>
<td>Environment</td>
<td>Top-down, hierarchical, command and control, traditional, waterfall approach</td>
</tr>
<tr>
<td>Before APM</td>
<td>Poor relationships, quality, usability, and customer satisfaction, functional silos, 18-hour days, 7-day work weeks, frustration, disappointment, apathy, exhaustion</td>
</tr>
<tr>
<td>APM Practices</td>
<td>Release planning, agile project management tools, automated testing tools</td>
</tr>
<tr>
<td>After APM</td>
<td>75% quality and 40% cycle time improvement, 40-hour work week, 0% attrition</td>
</tr>
<tr>
<td>Lessons Learned</td>
<td>Agile results in better communication, motivation, and empowerment</td>
</tr>
</tbody>
</table>
- FDA suppliers started using agile methods in 2008
- Used it on most stringent Class 3 certified products
- Used to modernize 1990s era products & processes
<table>
<thead>
<tr>
<th>Project Name</th>
<th>m2000 Real-time PCR Diagnostics System</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Type</td>
<td>Human Blood Analysis Tool (i.e., HIV-1, HBV, HCV, CT, NG, etc.)</td>
</tr>
<tr>
<td>Project Size</td>
<td>4 teams of 20 people collocated at one site</td>
</tr>
<tr>
<td>Product Size</td>
<td>1,659 user stories, 5,640 function points, 451,235 lines of code</td>
</tr>
<tr>
<td>Environment</td>
<td>FDA-regulated medical devices, real-time, safety-critical, Class III–most stringent</td>
</tr>
<tr>
<td>Before APM</td>
<td>Cumbersome process, poor quality, long cycle time, slow big-bang integration, obsolete, hard-to-staff tools and methods, inability to keep pace with changing requirements, intense market competition, exponential rate of technological change, fewer resources</td>
</tr>
<tr>
<td>APM Practices</td>
<td>Release planning, lighter-weight agile testing techniques, continuous integration</td>
</tr>
<tr>
<td>After APM</td>
<td>25% cycle time and staff-size reduction, 43% cost reduction, fewer defects</td>
</tr>
<tr>
<td>Lessons Learned</td>
<td>Agile enables the ability to balance fast cycle time with high-quality safety-critical solutions</td>
</tr>
</tbody>
</table>
## Law Enforcement—FBI
- Intelligence community (IC) started using agile methods following 9/11
- Used it on billion dollar transformation initiatives
- Goal is to catch bad guys better, faster, and cheaper
<table>
<thead>
<tr>
<th>Project Name</th>
<th>Inter-Agency Intelligence Sharing System</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Type</td>
<td>Domestic Terrorist Database/Data Warehouse</td>
</tr>
<tr>
<td>Project Size</td>
<td>3 teams of 12 people collocated at one site</td>
</tr>
<tr>
<td>Product Size</td>
<td>643 user stories, 2,188 function points, 175,000 lines of code</td>
</tr>
<tr>
<td>Environment</td>
<td>CMMI Level 3, ISO 9001, government-mandated document-driven waterfall life cycle, emerging federal directives for more information sharing and integration among intelligence community partners, rapidly changing customer requirements</td>
</tr>
<tr>
<td>Before APM</td>
<td>Unresponsive waterfall life cycles, chronic schedule delays, anxious customers, unhappy developers, resource focus on becoming CMMI Level 3 certified caused everyone to lose track of the real goal, which was to “catch bad guys”</td>
</tr>
<tr>
<td>APM Practices</td>
<td>Release planning, user stories, test-driven development, continuous integration</td>
</tr>
<tr>
<td>After APM</td>
<td>50% quality improvement, 200% productivity increase, FBI created policy for agile methods</td>
</tr>
<tr>
<td>Lessons Learned</td>
<td>Agile enables fast response times, customer satisfaction, and ability to “catch bad guys”</td>
</tr>
</tbody>
</table>
## U.S. DoD—STRATCOM
- U.S. DoD started using agile methods following 9/11
- Used it on billion-dollar software-intensive systems
- Goals are to respond to rapidly emerging threats
<table>
<thead>
<tr>
<th>Project Name</th>
<th>Strategic Knowledge Integration Website (SKIweb)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Type</td>
<td>Knowledge Management System (KMS)—Advanced Search Capability</td>
</tr>
<tr>
<td>Project Size</td>
<td>3 teams of 12 people collocated at one site</td>
</tr>
<tr>
<td>Product Size</td>
<td>390 user stories, 1,324 function points, 105,958 lines of code</td>
</tr>
<tr>
<td>Environment</td>
<td>Traditional linear documentation-based development, contract-oriented, hierarchical communication, rapidly changing operational requirements, need for leaner U.S. military force, seeking better and faster ways of getting critical information to decision makers, decentralization, migration to net-centric service oriented architectures, egalitarian decisions</td>
</tr>
<tr>
<td>Before APM</td>
<td>Long cycle times, dissatisfied customers, unresponsive life cycles, poor quality</td>
</tr>
<tr>
<td>APM Practices</td>
<td>Release planning, frequent customer collaboration, continuous integration</td>
</tr>
<tr>
<td>After APM</td>
<td>Good teamwork, 200% productivity increase, improved quality, fewer defects</td>
</tr>
<tr>
<td>Lessons Learned</td>
<td>Agile improves customer satisfaction/communication, and overall product quality</td>
</tr>
</tbody>
</table>
## Advanced Agile Measures
- Agile Methods are a fundamentally new paradigm
- Agile Methods are “not” lighter Traditional Methods
- They should not be viewed through a traditional lens
## Benefits of Agile Methods
- Analysis of 23 agile vs. 7,500 traditional projects
- Agile projects are 54% better than traditional ones
- Agile has lower costs (61%) and fewer defects (93%)
## Myths about Agile Methods
- Common myths abound, although agile methods have been around for ~20 years:
- Agile methods are only for software development
- Agile methods are only for small co-located teams
- Agile methods have no documentation
- Agile methods have no requirements
- Agile methods need traditional system architectures
- Agile methods have no project management
- Agile methods are undisciplined and unmeasurable
- Systems built using agile methods are unmaintainable and insecure
## Conclusions
- Traditional methods are well-suited for predictability
- Agile Methods are well-suited for high uncertainty
- It comes down to efficiency versus effectiveness
## Traditional Project Management
- Predictable situations
- Low-technology projects
- Stable, slow-moving industries
- Low-levels of technological change
- Repeatable operations
- Low-rates of changing project performance
- Long-term, fixed-price production contracts
- Achieving concise economic efficiency goals
- Highly-administrative contracts
- Mass production and high-volume manufacturing
- Highly-predictable and stable market conditions
- Low-margin industries such as commodities
- Delivering value at the point-of-plan
## Agile Project Management
- High-levels of uncertainty and unpredictability
- High-technology projects
- Fast-paced, highly-competitive industries
- Rapid pace of technological change
- Research-oriented, discovery projects
- Large-fluctuations in project performance
- Shorter-term, performance-based RDT&E contracts
- Achieving high-impact product/service effectiveness
- Highly-creative new product development contracts
- Customer-intensive, one-off product/service solutions
- Highly-volatile and unstable market conditions
- High-margin, intellectually-intensive industries
- Delivering value at the point-of-sale
## New Book on Agile Methods
- Guide to Agile Methods for business leaders
- Communicates business value of Agile Methods
- Rosetta stone to Agile Methods for traditional folks
## Table of Contents
1. Introduction to Agile Methods
2. Values of Agile Methods
3. History of Agile Methods
4. Antecedents of Agile Methods
5. Types of Agile Methods
6. Practices of Agile Methods
7. Agile Project Management
8. Agile Software Engineering
9. Agile Support Processes
10. Agile Tools and Technologies
11. Comparison of Agile Methods
12. Agile Metrics and Models
13. Surveys of Agile Methods
15. ROI Metrics of Agile Methods
16. Measures of Agile Methods
17. Costs of Agile Methods
18. Benefits of Agile Methods
19. ROI of Agile Methods
20. NPV of Agile Methods
21. Real Options of Agile Methods
22. Business Value of Agile Methods
23. Agile vs. Traditional Methods
24. Future of Agile Methods
http://davidfrico.com/agile-book.htm (Description)
http://www.amazon.com/dp/1604270314 (Amazon)
A Hamming distance based VLIW/EPIC code compression technique
Montserrat Ros
University of Queensland, montse@uow.edu.au
Peter Sutton
University of Queensland
Recommended Citation
Ros, Montserrat and Sutton, Peter, "A hamming distance based VLIW/EPIC code compression technique" (2004). Faculty of Engineering and Information Sciences - Papers: Part A. 438.
Publication Details
This conference paper is available at Research Online: https://ro.uow.edu.au/eispapers/438
A Hamming Distance Based VLIW/EPIC
Code Compression Technique
Montserrat Ros, Peter Sutton
School of Information Technology and Electrical Engineering
The University of Queensland
Brisbane Australia 4072
{ros, p.sutton}@itee.uq.edu.au
ABSTRACT
This paper presents and reports on a VLIW code compression technique based on vector Hamming distances [19]. It investigates the appropriate selection of dictionary vectors such that all program vectors are at most a specified maximum Hamming distance from a dictionary vector. Bit toggling information is used to restore the original vector.
A dictionary vector selection method which considered both vector frequency as well as maximum coverage achieved better results than just considering vector frequency or vector coverage independently. This method was found to outperform standard dictionary compression on TI TMS320C6x program code by an average of 8%, giving compression ratios of 72.1% to 80.3% when applied to the smallest compiler builds. The most favorable results were achieved with a Hamming distance upper limit of 3.
An investigation into parallel compression showed that dividing the program into 32-bit parallel streams returned an average compression ratio of 79.4% for files larger than 200kb. This approach enables parallel decompression of instruction streams within a VLIW instruction word. Suggestions for further work include compiler/compression integration, more sophisticated dictionary selection methods and better codeword allocation.
Categories and Subject Descriptors
E.4 [Coding and Information Theory]
General Terms
Algorithms, Performance.
Keywords
Code Compression, VLIW, Hamming distance.
1. INTRODUCTION
Code size management is a significant issue for embedded system design. As consumers require more functionality, applications for embedded devices become more and more complex. Furthermore, abstract programming languages are being chosen for the development of embedded applications such that the development can be steered away from the hardware level and more towards a platform-independent design philosophy. As a result of both of these considerations, embedded application code sizes are increasing and this can pose a problem for designers.
Several methods for compressing or compacting code size have been presented in the literature to date, though most algorithms have focused mainly on RISC processors. Lately, however, VLIW (Very Long Instruction Word) processors have begun to be considered as prime candidates for code compression, given not only their inherent large instruction words but also their appeal to the embedded DSP market.
One example of code compression reaching the VLIW industry is Atmel’s Diopsis dual-core DSP, whose mAgic VLIW DSP core uses built-in dynamic program decompression [3, 18]. Compressed program code is fed to dynamic program decompression devices (dyprodes), which produce the uncompressed code for seamless execution. A further advantage of code compression is that the program bus can be made narrower to match the smaller instruction word size, which the Diopsis exploits.
Code compression efficiency is widely defined [4, 12, 15, 19] as the ratio between the compressed program size and the original program size. That is, the smaller the compression ratio, the better the compression. Compression ratio can depend on the size of the original compiler output. Our previous work has found that the smallest overall sizes after compression are obtained when the smallest possible compiler build is used, even though other builds give better compression ratios [20].
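The definition above can be stated concretely. This tiny helper (the name is illustrative) mirrors how ratios such as 72.1% are reported later in the paper; smaller is better.

```python
# Compression ratio as defined in the paper: compressed size divided by
# original size, reported as a percentage (smaller is better).

def compression_ratio(compressed_bytes: int, original_bytes: int) -> float:
    return compressed_bytes / original_bytes * 100

# a 1,000-byte program compressed to 721 bytes
print(round(compression_ratio(721, 1_000), 1))  # 72.1
```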
In this paper, we present a new compression scheme and investigate its performance. We have taken selected benchmarks from the Spec2000 [2] and the Mediabench [1] benchmark suites, and built them for the Texas Instruments TMS320c6x [21] and the Intel Itanium [9] as representatives of the VLIW/EPIC processor range.
The remainder of this paper is organized as follows. Section 2 presents background and related work in this field. Section 3 describes the compression scheme used and Section 4 outlines results from applying the compression scheme. Section 5 includes a discussion and comparison of results and Section 6 contains conclusions and further work.
2. RELATED WORK
The area of text or data compression is a mature one, but code compression dates from 1992, when Wolfe and Channin first published a paper on a Compressed Code RISC Processor (CCRP) [22]. VLIW code compression is an even more recent field with papers published in only the last few years. Code compression is a separate field of study given that many data compression based schemes are inapplicable to program code, where branch targets and function entry points need to be decompressed on demand.
2.1 Code Compression on RISC processors
The paper by Wolfe and Channin [22] suggested a CCRP to compress code and used a 'code-expanding instruction cache', such that the decompression could be transparent to the processor. By using a compression technique that did not give consideration to branch targets and function beginnings, extra hardware was required to fetch addresses. Their design used a Line Address Table (LAT) to map original addresses into compressed code addresses.
Lefurgy et al presented dictionary compression in [13] where all unique instructions are recorded in an ‘instruction table’ and each instruction is replaced by an index into the table. They also present a selective version in [14]. Liao et al offered a dictionary compression scheme based on set-covering in [16] which looks at substrings that occur frequently. Lekatsas presented a semi-adaptive dictionary compression scheme in [15] which generated new opcodes for instructions appearing frequently. Some software/compiler methods have also been presented in [5, 6, 14].
2.2 Code Compression on VLIW processors
Code compression techniques have also been applied to VLIW processors. Nam et al [17] achieved average compression ratios of 63%-71% using a dictionary compression method and compared the difference in performance of “identical” (whole instructions words) and “isomorphic” (split into opcode/operator fields) instruction word encoding schemes. Ishiura and Yamaguchi [10] investigated code compression based on Automatic Field Partitioning, achieving compression ratios of 46-60%. They reduced the problem of compressing code to the problem of finding the field partitioning that yields the smallest compression ratio. Larin and Conte [11] compared code compression methods and a tailored encoding of the Instruction Set Architecture. The tailored ISA method produced new code at 64% of the original code size, though at a much smaller cost to decoding hardware than standard compression.
Xie et al. [23, 25] used a reduced-precision arithmetic coding technique combined with a Markov model and applied it to similar systems with different sized sub-blocks. The 16-byte subblock scheme yields the best compression rates at 67.3% – 69.7%. Xie et al. also present a Tunstall-based memory-less variable-to-fixed encoding scheme and an improved Markov variable-to-fixed algorithm in [24]. The use of variable-to-fixed encoding means that codewords are arbitrarily assigned and this assignment can be used to advantage to reduce the number of bit toggles on the instruction bus.
Prakash et al [19] present a dictionary based encoding scheme that divides instructions into two 16-bit halves. For each half, a dictionary is constructed that contains a choice set of vectors such that a majority of the vectors used throughout the program in that half of the instruction differ from one of the dictionary vectors by a Hamming distance of at most 1 (the Hamming distance between two vectors is the number of bits that are different). Each compressed instruction is then replaced by two codewords representing each half-instruction. These codewords are a combination of the indexes into the relevant dictionaries as well as information about which bits are toggled.
This method means that two vectors differing by only one bit do not both need to be stored in the dictionary. One of the two vectors is stored, and the other merely references the stored vector and indicates which bit needs to be toggled. Average compression ratios of 78.6%, including the Line Address Table, are reported. Although some attempt is made to investigate 32-bit vectors, the dictionary selection method used did not appear to give compression ratios as good as the 16-bit scheme. Their scheme also uses a different dictionary for each 2048-byte sub-block rather than one dictionary for the whole program.
2.3 Previous Implementations of Code Compression
One successful encoding scheme, commercially used in the PowerPC 405, is the CodePack scheme [7]. The CodePack encoding scheme follows an algorithm analogous to a piece-wise Huffman scheme [8] where the most frequent symbols are assigned smaller codewords. Here, the 16-bit half-words are assigned a two or three bit tag which denotes which ‘class’ they belong to, differentiated by the tag and then how long the codeword is. CodePack has a reported performance of an overall program size “reduction” of 35-40% [7] (i.e. a compression ratio of 60-65%). CodePack uses variable-length encoding and requires the use of a mapping table to calculate the new address of a given instruction. Lefurgy et al provide further optimisation and enhancement suggestions for a machine with CodePack in [12].
A second example of the implementation of code compression is the Atmel Diopsis example mentioned earlier [3, 18]. This VLIW code compression architecture claims a 2X to 3X compression of code (33 to 50% compression ratio) whereby 128-bit instruction words are compressed to an average of 50 bits per instruction word. This shows the advantage of an integrated code compression and instruction set architecture if designed together from the start.
In most cases, designing a totally new processor complete with integrated code compression and instruction set architecture is beyond the scope (not to mention budget!) of many embedded applications. Instead, research has tended to concentrate on code compression systems that are software-based or where hardware need only be altered slightly in order to achieve a saving of program size (moderate, but a saving nonetheless). An example of where a slight alteration of hardware is possible would be the inclusion of a decompression engine next to a processor core in an ASIC embedded design. In this case, the program to be run on the processor of choice can be compiled and compressed before loading.
3. ENCODING SCHEME
The encoding scheme presented in this paper is based on the appropriate selection of dictionary vectors such that all program vectors are at most a specified Hamming distance from a dictionary vector. Bit toggling information is used to accurately restore original code. This scheme is similar to the 16-bit version from [19] where only vectors differing by one bit were considered. Instead, our scheme considers 32-bit vectors and was trialed with Hamming distance upper limits from 1 to 8. Furthermore, we consider multiple dictionary selection methods and offer a stream-based compression method for parallel decompression.
The algorithm is divided into the four steps described in the following subsections. A decoder is required in the hardware to decode the uncompressed instructions and is outlined in Section 3.5.
3.1 File Input and Dictionary Construction (First Input Pass)
The first pass in the encoding scheme is equivalent to that of most dictionary compression schemes. The benchmark to be compressed is read in, one 32-bit vector at a time, and a frequency distribution of the used vector space is constructed. This histogram-like structure forms the dictionary used in the subsequent compression steps.
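The first input pass can be sketched as follows. File handling, zero-padding, and little-endian word order are assumptions for illustration, since the paper targets multiple ISAs.

```python
# Sketch of the first input pass: read the program image as 32-bit vectors
# and build the frequency histogram that serves as the initial dictionary.
# Little-endian word order and zero-padding are illustrative assumptions.

import struct
from collections import Counter

def build_dictionary(code: bytes) -> Counter:
    """Histogram of 32-bit words; input is padded to a multiple of 4 bytes."""
    padded = code.ljust((len(code) + 3) // 4 * 4, b"\x00")
    words = struct.unpack(f"<{len(padded) // 4}I", padded)
    return Counter(words)

hist = build_dictionary(b"\x01\x00\x00\x00" * 3 + b"\x02\x00\x00\x00")
print(hist.most_common())  # [(1, 3), (2, 1)]
```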
3.2 Reduced Dictionary Selection (First Dictionary Pass)
The purpose of this pass is to select from the dictionary a subset of vectors (called the reduced dictionary) such that every original dictionary vector is at most a set Hamming distance from at least one of the reduced dictionary vectors. This dictionary-subset selection allows for a smaller dictionary, with bit-toggle information included in the replacement codewords wherever the vectors differ.
The benchmark programs were profiled for 32-bit vector space usage, and three reduced dictionary selection methods, described below, were applied. They were tested with Hamming distance upper limits ranging from 1 to 7.
3.2.1 Frequency Selection Method
This method selects vectors for inclusion in the reduced dictionary based on their frequencies: the most frequent vectors are added, one at a time, until every vector in the original dictionary is ‘covered’, that is, at most the set maximum Hamming distance from one of the reduced dictionary vectors. The aim of this method is to include vectors into the reduced dictionary that are very frequent in the original program, thus incorporating a higher number of “zero Hamming distance” entries. This means that fewer bit toggle location fields will be required during compression (see Section 3.4).
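A minimal sketch of this greedy frequency selection, assuming the dictionary is a vector-to-frequency map (all names here are illustrative):

```python
def hamming(a, b):
    """Hamming distance between two integer bit vectors."""
    return bin(a ^ b).count("1")

def frequency_select(freq, max_dist):
    """Add vectors in descending frequency order until every original
    dictionary vector is within max_dist of a reduced-dictionary entry."""
    reduced = []
    uncovered = set(freq)
    for v, _ in sorted(freq.items(), key=lambda kv: -kv[1]):
        if not uncovered:
            break
        reduced.append(v)
        # drop everything this vector now covers
        uncovered = {u for u in uncovered if hamming(u, v) > max_dist}
    return reduced
```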
3.2.2 Maximum Span Selection Method
This method finds, for each vector in the dictionary, the total number of other dictionary vectors that are up to a set maximum Hamming distance from it. The vector that spans the most other vectors is chosen and placed in the reduced dictionary. Then, all vectors in the dictionary that are the set Hamming distance or less from the chosen vector are discarded. Of the undiscarded vectors, the one that spans the most of the remaining vectors is chosen, and the process repeats until all vectors have been discarded from the original dictionary. The aim of this method is to reduce the number of vectors needed in the reduced dictionary.
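A hedged sketch of the spanning selection; as described above, spans are recomputed over the remaining (undiscarded) vectors each round:

```python
def hamming(a, b):
    """Hamming distance between two integer bit vectors."""
    return bin(a ^ b).count("1")

def max_span_select(vectors, max_dist):
    """Repeatedly pick the vector covering the most remaining vectors,
    then discard everything within max_dist of it."""
    remaining = set(vectors)
    reduced = []
    while remaining:
        best = max(remaining,
                   key=lambda v: sum(hamming(v, u) <= max_dist
                                     for u in remaining))
        reduced.append(best)
        remaining = {u for u in remaining if hamming(best, u) > max_dist}
    return reduced
```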
3.2.3 Combination of Frequency and Spanning Method
This dictionary selection method attempts to combine the best of both previous algorithms. It chooses the most frequent vector in the dictionary and places it in the reduced dictionary. Then, it discards all vectors in the dictionary that are the set Hamming distance or less from the chosen vector. The most frequent of the remaining vectors is then chosen, and the process repeats until all dictionary vectors are covered within the given Hamming distance.
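The combined method can be sketched as follows (illustrative names; the dictionary is again assumed to be a vector-to-frequency map):

```python
def hamming(a, b):
    """Hamming distance between two integer bit vectors."""
    return bin(a ^ b).count("1")

def combined_select(freq, max_dist):
    """Pick the most frequent uncovered vector, discard everything
    within max_dist of it, and repeat until all vectors are covered."""
    remaining = set(freq)
    reduced = []
    while remaining:
        best = max(remaining, key=lambda v: freq[v])
        reduced.append(best)
        remaining = {u for u in remaining if hamming(best, u) > max_dist}
    return reduced
```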
3.3 Reduced Dictionary Fill and Codeword Assignment (Second Dictionary Pass)
The reduced dictionary is analyzed and filled with further vectors such that the number of bits required for indexing the reduced dictionary is unchanged. Essentially, it is filled with vectors from the original dictionary that are not already in the reduced dictionary, up to the next power of 2, so that no indexing space is wasted. In all three dictionary selection methods, this extra filling stage takes the most frequent vectors not already in the reduced dictionary, as this reduces the number of toggle locations the most. The indices into the reduced dictionary serve as codewords for the compression step.
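A sketch of the fill stage, under the assumption that the reduced dictionary is a list and leftover vectors are ranked by frequency:

```python
def fill_dictionary(reduced, freq):
    """Pad the reduced dictionary with the most frequent leftover
    vectors up to the next power of two, so that the codeword width
    stays the same and no index space is wasted."""
    target = 1 << (len(reduced) - 1).bit_length()   # next power of 2
    chosen = set(reduced)
    leftovers = sorted((v for v in freq if v not in chosen),
                       key=lambda v: -freq[v])
    return list(reduced) + leftovers[:target - len(reduced)]
```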
3.4 Compression Application (Final Input Pass)
The compression scheme is applied by converting each 32-bit vector into compressed code. The compressed code comprises a codeword (determined in the last step), a set number of bits to denote the number of toggles and up to 7 sets of 5-bit toggle locations. An example of this is shown in Figure 1.
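The per-vector encoding can be sketched as a bit string; the field widths used here (a 2-bit toggle count for a Hamming distance limit of 3) are one possible layout consistent with the description, not necessarily the exact one in Figure 1:

```python
def hamming(a, b):
    """Hamming distance between two integer bit vectors."""
    return bin(a ^ b).count("1")

def compress_vector(vec, reduced, max_dist=3, count_bits=2):
    """Encode one 32-bit vector as: codeword | toggle count | a 5-bit
    position per toggled bit. Assumes the reduced dictionary covers vec."""
    index_bits = (len(reduced) - 1).bit_length()
    # nearest reduced-dictionary entry
    idx, entry = min(enumerate(reduced), key=lambda ie: hamming(ie[1], vec))
    diff = entry ^ vec
    toggles = [i for i in range(32) if diff >> i & 1]
    assert len(toggles) <= max_dist
    bits = format(idx, f"0{index_bits}b")
    bits += format(len(toggles), f"0{count_bits}b")
    for pos in toggles:
        bits += format(pos, "05b")
    return bits
```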
3.5 Decompression Engine Design
A decompression unit is required to decompress the instructions ‘on the fly’ and feed them to the CPU. The standard dictionary scheme uses a dictionary as a lookup table, where the compressed instruction acts as an index into the lookup table and the output of the table is the uncompressed instruction.
Our scheme works in a similar fashion, with the codeword from the compressed instruction acting as an index into the reduced dictionary lookup table, and the extra bits in the compressed instruction determining which bits (if any) to toggle in the lookup table output. A block diagram of the dictionary and the bit toggling hardware required for a code compression scheme with a Hamming distance upper limit of 3 is given in Figure 2.
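In software terms, the engine's behavior can be sketched as follows; the bit-string layout mirrors the codeword / toggle-count / toggle-position fields of the compressed code, with illustrative field widths:

```python
def decompress(bits, reduced, count_bits=2):
    """Invert the encoding: look up the codeword in the reduced
    dictionary, then XOR in each 5-bit toggle position to restore
    the original 32-bit vector."""
    index_bits = (len(reduced) - 1).bit_length()
    idx = int(bits[:index_bits], 2)
    n = int(bits[index_bits:index_bits + count_bits], 2)
    vec = reduced[idx]
    pos = index_bits + count_bits
    for _ in range(n):
        vec ^= 1 << int(bits[pos:pos + 5], 2)   # toggle one bit
        pos += 5
    return vec
```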

Because our scheme is a variable-length one, we must consider the need for a referencing table of some sort such that instruction locations (such as branch targets) can be retrieved. For this, we have used a LAT similar to [19]; however, only branch targets are included in the table. The block diagram of this LAT hardware is given in Figure 3. Furthermore, to ensure that branch targets were byte aligned, padding was required at the end of the instruction preceding every target.

3.6 Stream Encoding
The main problem with the serial decompression of variable-length codes is that performance is affected. In particular, one fetch packet (which consisted of four and eight 32-bit vectors in the two processors investigated) can consist of many vectors that are normally fetched simultaneously. If eight such 32-bit vectors are to be serially decompressed, then the latency associated with eight sets of dictionary retrievals and bit togglings could be detrimental to performance.
In a bid to parallelize the decompression of the compressed code and avoid the serial decompression latency, the option of compressing the information into streams was trialed. This implementation divides the instruction fetch packet into 32-bit streams, and decompression is applied to the program code in a given stream rather than to the whole program code. Smaller, individual tables and separate decompressors are required for each stream.
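One way to realize the stream division is a round-robin split of the program's 32-bit vectors, so that vector i of each fetch packet always lands in the same stream (assuming the stream count equals the fetch packet width; this mapping is an assumption, not spelled out above):

```python
def split_streams(vectors, n_streams=8):
    """Distribute 32-bit vectors round-robin into n_streams streams,
    each of which is compressed and decompressed independently."""
    streams = [[] for _ in range(n_streams)]
    for i, v in enumerate(vectors):
        streams[i % n_streams].append(v)
    return streams
```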
4. RESULTS
Benchmarks were taken from both the Spec2000 [2] and the Mediabench [1] benchmark suites. These were built for two processors: the Texas Instruments TMS320C6x [21], using the TI compiler, and the Intel Itanium, using gcc.
Benchmarks taken from the Mediabench suite included adpcm (rawc- and rawd-audio), g721 (g721enc and g721dec), epic (and unepic), mpeg (mpeg2enc and mpeg2dec) and jpeg (cjpeg and djpeg). Benchmarks taken from the Spec2000 suite included mcf, art, equake, parser, ammp, twolf and mesa.
In both processor cases, the benchmarks were built with every optimization level, and the smallest possible build was used. In most cases, this corresponded to the -ms3 and -o3 flags for the TI compiler, and the -Os flag for all gcc builds.
Compression ratio (compressed size divided by original size; lower is better) is a fair measurement for comparing the different versions of this compression scheme, because they are all applied to the same original files (hence the starting size is the same for any given benchmark).
The first issue investigated was that of the dictionary selection methods. Compression ratio was found to be very dependent on the selection method, thus results are presented for each selection technique in comparison to a standard dictionary compression. The standard scheme places all unique vectors found in the program code in the dictionary, and an index is used instead of the original vector. An example of its application is given in [13]. In essence, the ‘normal’ dictionary compression method is a method that tolerates no bit toggles (and as a result requires no extra information) and can be likened to our method with a Hamming distance upper limit of 0, where the ‘reduced’ dictionary is identical to the original dictionary.
Compression ratios in the following sections include the compressed code, dictionary and LAT sizes. Dictionary sizes are taken from the number of reduced unique entries required to cover the entire code, and the LAT sizes are derived from the number of branch target locations. Average compression ratios across all benchmarks tested are reported.
4.1 Frequency Selection Results
The Frequency Selection method returned compression ratios worse than the standard dictionary compression (left column in Figure 4) for Hamming distance limits of 7 and under, although compression ratios improved as the Hamming distance limit was raised. This prompted the investigation of larger Hamming distance upper limits, and limits of up to 16 were investigated. In fact, the results suggested that a Hamming distance upper limit of 10 would give the best results. The results at this Hamming distance returned average compression ratios of 73.1%. This compression scheme exploits the fact that although Hamming distances of up to 10 may be allowed, a large portion of the program code is a small Hamming distance from a dictionary vector, because more frequent vectors are added first.
To examine the relative frequencies of different Hamming distances, an example benchmark is profiled. Here, the djpeg benchmark, built for the TI TMS320c6700, has been broken down into how many instructions are a given Hamming distance from a dictionary entry, with the upper limit set to 10. The reason compression is achieved is due to just over half of the program’s vectors being found in the dictionary even though the number of dictionary entries is low. This is because this algorithm greedily includes the most frequent vectors first.
Table 1 – Hamming Distance Frequencies for Frequency Method Example
<table>
<thead>
<tr>
<th>Hamming Distance</th>
<th>Number of Program Instructions (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>15772 (54.7%)</td>
</tr>
<tr>
<td>1</td>
<td>2909 (10.1%)</td>
</tr>
<tr>
<td>2</td>
<td>3166 (11.0%)</td>
</tr>
<tr>
<td>3</td>
<td>2548 (8.8%)</td>
</tr>
<tr>
<td>4</td>
<td>1787 (6.2%)</td>
</tr>
<tr>
<td>5</td>
<td>1184 (4.1%)</td>
</tr>
<tr>
<td>6</td>
<td>796 (2.8%)</td>
</tr>
<tr>
<td>7</td>
<td>470 (1.6%)</td>
</tr>
<tr>
<td>8</td>
<td>179 (0.6%)</td>
</tr>
<tr>
<td>9</td>
<td>30 (0.1%)</td>
</tr>
<tr>
<td>10</td>
<td>15 (0.1%)</td>
</tr>
<tr>
<td>Total Instructions:</td>
<td>28856</td>
</tr>
<tr>
<td>Unique Instructions:</td>
<td>11805</td>
</tr>
<tr>
<td>Dictionary Entries:</td>
<td>2048</td>
</tr>
</tbody>
</table>
The main issue arising from this frequency-based scheme is that the length of the compressed instruction could escalate out of hand. In the example case, the codeword length was $\log_2(2048) = 11$ bits. For a Hamming distance upper limit of 10, 4 ‘bit-toggle’ bits would be required (see Figure 1) and, furthermore, up to 10 sets of 5-bit toggle locations could be required (as in the case of the 15 instructions shown in Table 1 to be a Hamming distance of 10 from a dictionary entry). This means the “compressed” representation, at 65 bits long, would actually be an expansion. The codeword length would only increase with larger programs. Such a large “compressed” instruction (instead of the 32-bit vector without compression) could require significant changes to the instruction fetching, retrieving and decoding hardware.
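The worst-case length arithmetic in this example can be checked directly:

```python
import math

codeword_bits = math.ceil(math.log2(2048))        # 11 bits to index 2048 entries
count_bits = math.ceil(math.log2(10 + 1))         # 4 bits for toggle counts 0..10
worst_case = codeword_bits + count_bits + 10 * 5  # plus ten 5-bit toggle positions
assert worst_case == 65                           # longer than the 32-bit original
```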
4.2 Maximum Span Selection Results
In order to keep the Hamming distance upper limit at a more manageable level, the maximum spanning method was trialed. The aim of this method was to include in the reduced dictionary vectors that covered more of the rest of the vectors in the program code, so that with the same number of dictionary vectors, a larger set of program vectors was covered. The best results were obtained at a Hamming distance upper limit of 3, as shown in Figure 5. This was due to the toggle-count bits being fully utilized.
Unfortunately, this method did not take into account any information about how frequent the chosen vectors were, and as a result, none of the Hamming distance upper limits investigated achieved compression ratios better than standard dictionary compression. Compression ratios for this method were around 82%.
4.3 Combined Frequency and Spanning Results
The combined frequency and spanning selection method was investigated in order to combine the higher frequencies of smaller compressed instructions from the first selection method and the larger set of program vectors covered by vectors in the reduced dictionary from the second selection method.
The results in Figures 6 and 7 showed that, similar to the maximum span method, selecting the Hamming distance upper limit of 3 yielded the best results in this combined dictionary selection method. In the compression for the TI TMS320C6x program code, the compression scheme using the Hamming distance upper limit of 3 outperformed the normal dictionary compression method by an average of 8%, though for some benchmarks, this was as high as 13%. Compression ratios ranged from 72.1% to 80.3%.
The main contributing factor found in experiments concerning the Hamming distance upper limit of 3 was that the reduced dictionary needed was about one eighth the size of the original dictionary. This meant that, on average, 3 bits were saved from every instruction, with only some of the instructions requiring extra bit-toggling information. Furthermore, as the dictionary itself was much reduced, this contributed to an overall reduction.
Experimental results for the Intel Itanium program code were not as successful. The Hamming distance limit of 3 once again gave the best compression ratio obtained; however, this was on average less than 1% better than standard dictionary compression. In some cases, the compression ratio was worse. Possible reasons for this are discussed below.
Once again, the number of vectors that were lower Hamming distances from a dictionary entry determined how good the compression would be. The same example benchmark from Section 4.1 (djpeg) was profiled under the combined dictionary selection method, with the results in Table 2. Although the number of instructions found in the dictionary was less than in the Frequency method (54.7% - 35.6% = 19.1% less), the Hamming distance upper limit ensured that not as many toggle fields were needed.
Table 2 – Hamming Distance Frequencies for Combined Method Example
<table>
<thead>
<tr>
<th>Hamming Distance</th>
<th>Number of Program Instructions (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>10278 (35.6 %)</td>
</tr>
<tr>
<td>1</td>
<td>6992 (24.2 %)</td>
</tr>
<tr>
<td>2</td>
<td>9109 (31.6 %)</td>
</tr>
<tr>
<td>3</td>
<td>2477 (8.6 %)</td>
</tr>
<tr>
<td>Total Instructions:</td>
<td>28856</td>
</tr>
<tr>
<td>Unique Instructions:</td>
<td>11805</td>
</tr>
<tr>
<td>Dictionary Entries:</td>
<td>4096</td>
</tr>
</tbody>
</table>
Figure 8 shows a subset of benchmarks with their original size (white), normal dictionary compressed size (light grey) and reduced dictionary compressed size (dark). For each benchmark, the first group of three bars corresponds to the TI TMS320C6x program code, and the second 3 bars (with diagonal hatching) correspond to the Intel Itanium program code.
4.4 Stream Encoding Results
The idea of stream encoding was trialed in order to decompress multiple streams of program code at once, limiting the added delay attributed to the decompression unit. Our study focused on the TI TMS320C6x program code, as results from the previous section showed that Intel Itanium program code did not seem to compress well under 32-bit vectors.
The results obtained in this investigation suggested that compression in streams suited the larger benchmarks. As the program code was divided into 8 smaller streams, each one eighth the size of the original code, the sizes of these streams for some of the smaller benchmarks were too small to give good compression results. However, the larger benchmarks responded well, with benchmarks larger than 200kb adding, on average, only 4% to the reduced dictionary results to give compression ratios around 79.4%. Figure 9 shows the selected benchmarks with their original code size, reduced dictionary compressed size and the same compression algorithm applied to streams. For the smaller benchmarks, the overhead in the streamed version almost negated the compression; however, the larger files still returned good compression results.
5. DISCUSSION
For the Hamming-distance based reduced-dictionary compression scheme presented in this paper, the compression ratio has been found to be very dependent on the dictionary selection method. A vector selection method which considers both the frequency of vectors and the codeword-space coverage of vectors outperformed either method considered independently. This combined dictionary selection method achieved its best results with a Hamming distance upper limit of 3: it outperformed standard dictionary compression on TI TMS320C6x program code by an average of 8%, giving an average compression ratio of 76.2% when applied to the smallest compiler builds. Like all code-compression schemes, this comes at the cost of additional decoding hardware.
When applied to the Intel Itanium program code, our scheme resulted in only a negligible change, and in some cases led to a worse compression ratio than normal dictionary compression. This is likely to be because our approach considered fixed-size code vectors of 32 bits. TI TMS320C6x program code is made up of 32-bit instructions, which corresponded to the code vectors considered; however, the 128-bit Itanium code bundles contain three 41-bit instructions, which did not align well with the 32-bit vectors. It is suggested that other vector lengths be examined for the Itanium program code to determine whether this type of compression scheme is applicable under different vector lengths.
An investigation into parallel compression showed that dividing the program into 32-bit parallel streams returned an average compression ratio of 79.4% for programs larger than 200kb. This approach enables parallel decompression of instruction streams within a VLIW instruction word with only a small overhead in compression performance. For small programs, however, there is little advantage to this approach.
6. CONCLUSIONS AND FURTHER WORK
This paper has presented a VLIW code compression technique based on vector Hamming distances. Dictionary vectors are selected such that all program vectors are at most a specified maximum Hamming distance from a dictionary vector. Bit toggling information is used to restore the original vector.
A dictionary vector selection method which considered both vector frequency as well as maximum coverage achieved better results than just considering vector frequency or vector coverage independently. This method, with a Hamming distance upper-limit of 3, was found to outperform standard dictionary compression on TI TMS320C6x program code by an average of 8%, giving compression ratios of 72.1% to 80.3% when applied to the smallest compiler builds.
An investigation into parallel compression showed that dividing the program into 32-bit parallel streams returned an average compression ratio of 79.4% for files larger than 200kb.
Further work is suggested in a number of areas. First, compiler techniques such as register renaming could be used to select registers whose binary representations are small Hamming distances from one another. If the compiler was aware of the Hamming distance upper limit of the subsequent code compression applied, it would be possible to output program code such that the 32-bit instructions used as vectors could be grouped more efficiently and separated by Hamming distances within the compression scheme’s upper limit.
Second, it is proposed to consider other dictionary selection methods that are not greedy (all methods presented in this paper selected reduced dictionary entries based on the maximum current gain only). Other options could be investigated, such as the use of dictionary vectors that are not limited to the vectors found in the program.
Third, the selection of codewords associated with each reduced dictionary entry could be investigated. In this paper, the codewords used were a fixed length, with a variable length tail appended to denote how many and which bits to toggle. A variable scheme could also be applied to the codeword field such that codewords would be smaller for more frequently accessed dictionary entries and longer for infrequent vectors. This could be achieved by applying either a Huffman [8]-like or CodePack [7]-like scheme.
7. REFERENCES
Stepwise Construction and Refinement of Dependability Models
Cláudia Betous-Almeida and Karama Kanoun
LAAS-CNRS
7, Avenue du Colonel Roche
31077 Toulouse Cedex 4 - France
E-mail: {almeida,kanoun}@laas.fr
Abstract
This paper presents a stepwise approach for dependability modeling, based on Generalized Stochastic Petri Nets (GSPNs). The first-step model, called the functional-level model, can be built as early as the system functional specifications and then completed by the structural model as soon as the system architecture is known, even at a very high level. The latter can be refined according to three different aspects: component decomposition, state and event fine-tuning, and distribution adjustment to take into account increasing event rates. We define specific rules to make the successive transformations as easy and systematic as possible. This approach allows the various dependencies to be taken into account at the right level of abstraction: functional dependency, structural dependency and those induced by non-exponential distributions. A part of the approach is applied to an instrumentation and control (I&C) system in power plants.
1. Introduction
Dependability evaluation plays an important role in critical systems’ definition, design and development. Modeling can start as early as system functional specifications, from which a high-level model can be derived to help in analyzing dependencies between the various functions. However the information that can be obtained from dependability modeling and evaluation becomes more accurate as more knowledge about the system’s implementation is incorporated into the models.
The starting point of our work was to help (based on dependability evaluation) a stakeholder of an I&C system in selecting and refining systems proposed by various contractors in response to a Call for Tenders. To this end, we have defined a stepwise modeling approach that can be easily used to select an appropriate system and to model it thoroughly. This modeling approach is general and can be applied to any system, to model its dependability in a progressive way. Thus, it can be used by any system’s developer.
The process of defining and implementing an I&C system can be viewed as a multi-phase process starting from the issue of a call for tenders by the stakeholder. The call for tenders gives the functional and non-functional (e.g., dependability) requirements of the system and asks candidate contractors to make offers for possible systems/architectures satisfying the specified requirements. A preliminary analysis of the numerous responses by the stakeholder, according to specific criteria, allows the pre-selection of two or three candidate systems. At this stage, the candidate systems are defined at a high level and the application software is not entirely written. The comparative analysis of the pre-selected candidate systems, in a second step, allows the selection of the most appropriate one. Finally, the retained system is refined and thoroughly analyzed to go through the qualification process. This process is illustrated in Figure 1. Even though this process is specific to a given company, the various phases are similar to those of a large category of critical systems.
Dependability modeling and evaluation constitute an efficient support for the selection and refinement processes, thorough analysis and preparation for the system’s qualification. Our modeling approach follows the same steps as the development process. It is performed in three steps as described in Figures 1 and 2:
Step 1. Construction of a functional-level model based on the system’s specifications;
Step 2. Transformation of the functional-level model into a high-level dependability model, based on the knowledge of the system’s structure. A model is generated for each pre-selected candidate system;
Step 3. For the retained system, refinement of the high-level model into a detailed dependability model.
The remainder of the paper is organized as follows. Section 2 describes the functional-level model. The high-level dependability model is presented in Section 3. Section 4 deals with the structural model’s refinement and Section 5 presents a small example of application of the proposed approach to an I&C system. Finally, Section 6 concludes the paper.
2. Functional-level model
The derivation of the system’s functional-level model is the first step of our method. This model is independent of the underlying system’s structure. Hence, it can be built even before the call for tenders, by the stakeholder. It is formed by places representing possible states of functions. For each function, the minimal number of places is two (Fig. 3): One represents the function’s nominal state (F) and the other its failure state (F̅).
In the following, we assume only one failure mode, but the approach applies in the same manner when there are several failure modes per function. Between states F and F̅ there are events that manage changes from F to F̅ and vice-versa. These events are inherent to the system’s structure, which is not specified at this step, as it is not yet known. The model containing these events and the corresponding places is called the link model (M_L). Note that the set \{F, M_L, F̅\}, which constitutes the system’s GSPN model, will be completed once the system’s structure is known.
However, systems generally perform more than one function. In this case we have to look for dependencies between these functions due to the communication between them. We distinguish two degrees of dependency: Total dependency and partial dependency. Figure 4 illustrates examples of the two degrees of functional dependency between two functions F_1 and F_2. F_3 is independent from both F_1 and F_2.
Case (a) Total dependency – F_2 depends totally on F_1 (noted F_2 → F_1): If F_1 fails, F_2 also fails;
Case (b) Partial dependency – F_2 depends partially on F_1 (noted F_2 ↔ F_1): F_1’s failure does not induce F_2’s failure. In fact, F_1’s failure puts F_2 in a degraded state, represented by place F_2d. This state is marked whenever F_1 is in its failure state and F_2 in its nominal one. In Figure 4(b), the token is removed from F_2d as soon as F_1 returns to its nominal state; however, other scenarios might be considered.
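As an illustrative (non-GSPN) sketch, the two degrees of functional dependency can be expressed as a simple state-propagation rule over function states; all names here are hypothetical:

```python
def propagate(states, total_deps, partial_deps):
    """states: function -> 'ok' | 'failed' | 'degraded'.
    total_deps[f] lists functions f depends totally on;
    partial_deps[f] lists functions f depends partially on."""
    changed = True
    while changed:
        changed = False
        for f in states:
            if states[f] == 'failed':
                continue
            # total dependency: a supplier's failure induces failure
            if any(states[g] == 'failed' for g in total_deps.get(f, [])):
                states[f], changed = 'failed', True
            # partial dependency: a supplier's failure only degrades
            elif states[f] == 'ok' and any(
                    states[g] == 'failed' for g in partial_deps.get(f, [])):
                states[f], changed = 'degraded', True
    return states
```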
3. High level dependability model
The high level dependability model is formed by the function’s state places and the link model, which gathers the set of states and events related to the system’s structural behavior. This behavior is modeled by the so-called structural model, which is then connected to the F and F̅ places through an interface model. The link model is thus made up of the structural model and the interface model.
The structural model represents the behavior of the hardware and software components taking into account fault-tolerance mechanisms, maintenance policies as well as dependencies due to the interactions between different components.
The interface model connects the structural model with its functional state places by a set of immediate transitions.
In this section we concentrate mainly on the interface model. In particular, we assume that the structural model can be built by applying one of the many existing modular modeling approaches (see e.g., [5, 9, 10, 11]), and we focus on its refinement in section 4. Note that the structural models presented in this section are not complete. We present simple examples to help understand the notion of interface model before presenting the general interfacing rules.
3.1 Examples of interface models
For the sake of simplicity, we first consider the case of a single function, then the case of multiple functions.
Single Function: Several situations may be taken into account. Since the two most important cases are series components and series-parallel combinations, we limit the illustrations to these two basic cases, which allow modeling of any system. More details are given in [3].
Series case: Suppose function F is carried out by a software component S and a hardware component H. Then, the markings of the F and F̅ places depend upon the markings of the hardware and software component models (Fig. 5).
Multiple Functions: Consider two functions (the generalization is straightforward) and let \{C₁\} (resp. \{C₂\}) be the set of components associated to F₁ (resp. F₂). We distinguish the case where functions do not share resources (such as components or repairmen), from the case where they share some. Examples of these two cases are presented hereafter.
3.2. Interfacing rules
The interface model \( M_I \) connects the system’s components with their functions by a set of transitions. This model is a key element in our approach. Particular examples of interface models have been given in Figures 5 to 7. In this section the general organization of the interface model is presented. Interfacing rules have been defined in formal terms. However, the main rules are stated here in an informal manner.
Upstream and downstream \( M_I \) have the same number of immediate transitions and the arcs that are connected to these transitions are built in a systematic way:
- **Upstream \( M_I \):** It contains one function transition \( t_F \) for each series (set of) component(s), to mark the function’s up state place, and one component transition \( t_CX \) for each series, distinct component that has a direct impact on the functional model, to unmark the function’s up state place.
- Each \( t_F \) is linked by an inhibitor arc to the function’s up state place, by an arc to the function’s up state place and by one bidirectional arc to each initial (ok) component’s place;
- Each \( t_CX \) is linked by an arc to the function’s up state place and by one bidirectional arc to each failure component’s place.
- **Downstream \( M_I \):** It contains one function transition \( t'_F \) for each series (set of) component(s), to unmark the function’s failure state place, and one component transition \( t'_CX \) for each series, distinct component that has a direct impact on the functional model, to mark the function’s failure state place.
- Each \( t'_F \) is linked by an arc to the function’s failure state place and by one bidirectional arc to each initial (ok) component’s place;
- Each \( t'_CX \) is linked by an inhibitor arc to the function’s failure state place, by an arc from the function’s failure state place and by one bidirectional arc to each component’s failure place.
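The upstream rules above lend themselves to mechanical construction. The following sketch is our illustration, not tooling from the paper; place names such as `F_up`, `H_ok` and `H_ko` are assumed conventions for the function's up-state place and the components' ok/failure places.

```python
# Sketch of the upstream interface rules (our illustration, not from the
# paper's tooling).  Place names like "F_up", "H_ok", "H_ko" are assumed.
def build_upstream_interface(function, series_groups):
    """Return the upstream immediate transitions of M_I.

    `series_groups` lists the series (sets of) components: each inner
    list is one group whose ok-places must all be marked for F to be up.
    """
    transitions = []
    # One t_F per function: marks F_up, guarded by an inhibitor arc on
    # F_up and test arcs on every component's ok place.
    transitions.append({
        "name": f"t_{function}",
        "inhibitor": [f"{function}_up"],
        "marks": [f"{function}_up"],
        "tests": [f"{c}_ok" for group in series_groups for c in group],
    })
    # One t_Cx per distinct series component: unmarks F_up when that
    # component's failure place is marked.
    seen = set()
    for group in series_groups:
        for c in group:
            if c not in seen:
                seen.add(c)
                transitions.append({
                    "name": f"t_{c}",
                    "unmarks": [f"{function}_up"],
                    "tests": [f"{c}_ko"],
                })
    return transitions

# Series case of Fig. 5: function F over hardware H and software S.
trs = build_upstream_interface("F", [["H"], ["S"]])
names = [t["name"] for t in trs]
```

The downstream half would follow the mirrored rules on the function's failure state place.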
4. Refinement of the structural model
We assume that the structural model is organized in a modular manner, i.e., it is composed of sub-models representing the behavior of the system's components and their interactions. For several reasons, the first model that is built, starting from the functional-level model, may not be very detailed. One of these reasons could be the lack of information in the early system selection and development phases. Another could be the complexity of the system to be modeled. To master this complexity, a high level model is built and then refined progressively.

As soon as more detailed information is available concerning the system's composition and the events governing component evolution, the structural model can be refined.
Another refinement may be done regarding event distributions. Indeed, an assumption is made that all events governing the system’s behavior are exponentially distributed, which, in some cases, is not a good assumption. In particular, failure rates of some components may increase over time.
Model refinement allows detailed behavior to be taken into account and leads to more detailed results compared to those obtained from a high level model. In turn, these detailed results may help in selecting alternative solutions for a given structure. For our purpose, we consider three types of refinement: Component, state/event and distribution. Given the fact that the system’s model is modular, refinement of a component’s behavior is undertaken within the component sub-model and special attention should be paid to its interactions with the other sub-models. However, in this paper due to the lack of space, we will mainly address the new dependencies created by the refinement, without discussing those already existing.
Component refinement consists in replacing a component by two or more components. From a modeling point of view, such a refinement leads to the transformation of the component’s sub-model into another sub-model. Our approach is to use the same transformation rules as those used for the interface model presented in section 3.
State/event fine-tuning consists in replacing, by a subnet, the place/transition corresponding to this state/event. We define basic refinement cases, whose combination covers most usual possibilities of state/event refinement.
For distribution adjustment, we use the method of stages. Consider an event whose distribution is to be transformed into a non-exponential one. This method consists in replacing the transition associated with this event by a subnet. We have adapted already published work to take into account dependencies between the component under consideration and the components with which it interacts. This is done without changing the sub-models of the latter.
A section is devoted to each refinement type.
4.1. Component decomposition
Consider a single function achieved by a single software component on a single hardware component. Suppose that the software is itself composed of N components. Three basic possibilities are taken into account (combinations of these three cases model any kind of system):
- The N components are redundant, which means that they are structurally in parallel;
- The N components are in series;
- There are Q components in parallel and R+1 components in series (with Q+R=N).
Our goal is to use refinement rules identical, as far as possible, to the ones used in Section 3.
In the following we explain how a single component is replaced by its N components. These decompositions are respectively called parallel, series and mixed.
4.1.1. Parallel decomposition. Consider software S’s decomposition into two redundant components S1 and S2. Thus, S’s up state is the result of S1 or S2’s up states, and S’s failure state is the combined result of S1 and S2’s failure states.
4.1.2. Series decomposition. Consider the decomposition of software S into two series components S1 and S2. Hence, this case is identical to the one presented in Fig. 5 when replacing F by S, H by S1 and S by S2.
Figure 8 gives a GSPN model of this case. The generalization to N components is straightforward. It is worth mentioning that the interface model between the system and its components is built exactly in the same manner as the interface model between a function and its associated components.
4.1.3. Mixed decomposition. Suppose S is composed of three components: S1, S2 and S3, where S3 is in series with S1 and S2, that are redundant. This case is identical to the example presented in Fig. 6 when replacing F by S and H by S3.
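The three decompositions correspond to simple Boolean structure functions on component states. A minimal sketch (our illustration, assuming each component is simply up or failed):

```python
# Structure functions for the three decompositions of S (our
# illustration): a component is either up (True) or failed (False).

def series_up(states):        # S up iff every Si is up
    return all(states)

def parallel_up(states):      # S up iff at least one redundant Si is up
    return any(states)

def mixed_up(q_states, r_states):
    # Q redundant components in parallel, in series with R components.
    return parallel_up(q_states) and series_up(r_states)

# The mixed case above: S3 in series with redundant S1 and S2.
# S1 up, S2 failed, S3 up  ->  S is still up.
ok = mixed_up([True, False], [True])
```

The interface-model rules of section 3 encode exactly these Boolean conditions as immediate transitions.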
4.1.4. Conclusion. In all the cases illustrated above, we have considered only one token in each initial place. Identical components can be modeled by a simple model with K tokens in each initial place. When refining the behavior of such components, a dissymmetry in their behavior may appear. Indeed, this is due to the fact that some components that have the same behavior at a given abstraction level, may exhibit a slightly different behavior when more details are taken into account. If this is the case, one has to modify the model of the current abstraction level before refinement. This may lead to changing the interface model either between the functional-level and the structural model, or between two successive structural models. This is the only case where refinement leads to changing the model at the higher level.
4.2. State/Event fine-tuning
In GSPNs, places correspond to system’s states and timed transitions to events that guide state changes. The fine-tuning of places/transitions allows more detailed behavior to be modeled. Refinement has been studied in Petri nets ([13, 12]) and more recently in Time Petri Nets [8].
Our goal is to detail the system's behavior by refining the underlying GSPN. Our sole constraint is to ensure that the net's dynamic properties (liveness, boundedness and safeness) are preserved at each refinement step. The main motivation for model refinement is to obtain more detailed results about system behavior, that better reflect reality.
We define three basic refinement cases. Combinations of these three cases cover most usual situations for dependability models’ refinement. They are given in Table 1.
<table>
<tbody>
<tr>
<td>TR1: Separation into two events</td>
<td>Two competing events</td>
</tr>
<tr>
<td>TR2: Sequence of events</td>
<td>Refinement of the action represented by transition T</td>
</tr>
<tr>
<td>TR3: State refinement</td>
<td>\( p_1 \equiv \) prob. of firing \( t_1 \); \( p_2 \equiv \) prob. of firing \( t_2 \); \( p_1 + p_2 = 1 \)</td>
</tr>
</tbody>
</table>
Table 1. State/Event refinement
Fig. 9(b). The resulting model is presented in Fig. 9(c).
Finally, we model the error detection efficiency by applying TR3. Detected errors allow immediate system repair. We then add a perception latency (transition T_{1/2}), Fig. 9(d). It is important to model this latency because, as long as the non-detected error is not perceived, the system is in a non-safe state. Repair can be performed only after perception of the effects of such errors.
This is a small example of a state/event refinement application. Other details can be added to the model using the cases presented in this section.
4.3. Distribution adjustment
It is well known that the exponential distribution assumption is not appropriate for all event rates. For example, due to error conditions accumulating with time and use, the failure rate of a software component might increase.
The possibility of including timed transitions with non-exponential firing time is provided by the method of stages [7]. This method transforms a non Markovian process into a Markovian one, by decomposing a state (with a non exponential firing time distribution) into a series of k successive states. Each of these k states will then have a negative exponential firing time distribution, to simulate an increasing rate. In GSPNs, a transition, referred to as extended transition, is replaced by a subnet to model the k stages.
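As a sanity check on the method of stages, replacing one exponential transition of rate λ by k stages of rate kλ each yields an Erlang-k firing time with the same mean 1/λ but an increasing hazard rate. A small Monte Carlo sketch (our illustration):

```python
# Monte Carlo sketch (our illustration): the method of stages keeps the
# mean firing time while changing the distribution's shape.
import random

def sample_exponential(rate, rng):
    return rng.expovariate(rate)

def sample_staged(rate, k, rng):
    # k successive exponential stages of rate k*rate each: the total
    # firing time is Erlang-k with the same mean 1/rate.
    return sum(rng.expovariate(k * rate) for _ in range(k))

rng = random.Random(0)
n = 100_000
mean_exp = sum(sample_exponential(0.5, rng) for _ in range(n)) / n
mean_erl = sum(sample_staged(0.5, 4, rng) for _ in range(n)) / n
# Both sample means are close to 1/0.5 = 2, but the Erlang-4 variance
# is 4 times smaller and its hazard rate increases instead of being
# constant, which models wear-out-like failure behavior.
```

In the GSPN itself the k stages appear as a subnet of k places and transitions replacing the extended transition.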
The transformation of an exponential distribution into a non-exponential one might create new timing dependencies. Indeed, the occurrence of some events in other components might affect the extended transition. For example, the restart of a software component might lead to the restart of
the component under consideration (that has an increasing failure rate) and thus stop the accumulation of error conditions, bringing back the software under consideration to its initial state.
In previously published work [1, 2], the dependency between events is modeled only by concurrent transitions enabled by the same place. This is not very convenient when several components interact with the component under consideration, as it could lead to changing their models. We have adapted this extension method to allow more flexibility and take into account this type of dependency.
The salient idea behind our approach is to refine the event’s distribution without changing the sub-models of the components, whose behavior may affect the component under consideration (when assuming a non-exponential distribution).
In the rest of this section, we first recall the extension method presented in [2] and then present our adapted extension method.
4.3.1. Previous work. Concerning the transitions’ timers, three memory policies have been identified and studied in the literature, namely, resampling, age memory and enabling memory. The latter is well adapted to model the kind of dependency that is created when modeling system’s dependability as mentioned above. It is defined as follows: At each transition firing, the timers of all the timed transitions that are disabled by this transition are restarted, whereas the timers of all the timed transitions that are not disabled hold their present values.
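The enabling memory policy can be paraphrased as: restart a timer when its transition is disabled, keep its remaining time otherwise. A toy sketch (our illustration, not a GSPN engine):

```python
# Toy timer illustrating the enabling memory policy (our illustration,
# not a GSPN engine): a disabled transition's timer is restarted; an
# undisturbed one keeps its remaining time.
class EnablingMemoryTimer:
    def __init__(self, duration):
        self.duration = duration
        self.remaining = duration

    def advance(self, dt):
        # Time elapses while the transition stays enabled.
        self.remaining -= dt

    def on_disabled(self):
        # Enabling memory: restart the timer when the transition is
        # disabled by another transition's firing.
        self.remaining = self.duration

t = EnablingMemoryTimer(10.0)
t.advance(4.0)
left_before = t.remaining      # timer held its value while enabled
t.on_disabled()                # some other firing disabled the transition
restarted = t.remaining        # back to the full duration
```

Under resampling the timer would restart at every firing, and under age memory it would never restart; enabling memory sits between the two.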
In [1] and [2] an application of the enabling memory policy in structural conflict situations has been given. It concerns the initial model of Fig. 10, in which transition $T_1$ to be extended is in structural conflict with transition $T_{res}$.
When applying the enabling memory policy as given in [2] to transition $T_1$ of Fig. 10, the resulting model is presented in Fig. 11. In this figure, the $k$ series stages are modeled by transitions $t_c^i$, $i=1,2,3$ and $T_1$, and places $P_1$, $P_2$ and $P_3$. Token movement among these places is controlled by the control places $P_{c1}$, $P_{c2}$ and $P_{c3}$.
After removal of the token from $S$ by firing of transition $T_{res}$, the clearing of places $P_1, P_2$ and $P_3$ is accomplished in two steps. As soon as $S$ becomes empty, immediate transitions $t_1, t_2$ and $t_3$ are fired as many times as needed to
remove all tokens from these three places. At the end of this step, places $P_{c3}$ and $P_3$ are marked with one token each. The return to the initial state is then performed by immediate transition $t_4$ that puts one token in place $P_{c1}$, after places $P_1$, $P_2$ and $P_3$ are empty.
4.3.2. Enabling memory with external dependencies. Our approach replaces the transition to be extended by two subnets: One internal to the component, to model its internal evolution, and a dependency subnet, that models its interaction with other components. The initial model is given in Figure 12(a). In this model we assume that $T_1$, $T_{dis1}$ and $T_{dis2}$ are exponentially distributed. Suppose that in refining $T_1$'s distribution, its timer becomes dependent on $T_{dis1}$ and $T_{dis2}$. The transformed model is given in Fig. 12(b). A token is put in $P_{dep}$ each time the timer of transition $T_1$ has to be restarted, due to the occurrence of an event that disables the event modeled by $T_1$ (firing of $T_{dis1}$ or $T_{dis2}$ in other components' models). As in the previous case, this is done in two steps. As soon as place $P_{dep}$ is marked, $t_1$, $t_2$ and $t_3$ are fired as many times as needed to remove all tokens from these three places. The return to the initial state is performed by transition $t_4$, which removes a token from place $P_{dep}$ and puts one token in place $P_{c1}$, after places $P_1$, $P_2$ and $P_3$ are empty.
Note that transitions $t_1$, $t_2$ and $t_3$ of Fig. 12(b) replace respectively transitions $t_1$, $t_2$ and $t_3$ of Fig. 11. Also, we simplified Fig. 12(b) by replacing place $P_3$ by an inhibitor arc between $t_4$ and $P_{c1}$. Thus, the two major differences between Figures 11 and 12(b) are: 1) Place $P_3$ of Fig. 11 is replaced by an inhibitor arc going from place $P_{c1}$ to immediate transition $t_4$; 2) Place $P_{dep}$, that manages dependencies between this net and the rest of the model, is added.
5. Application to I&C systems
In this section we illustrate our modeling approach. Due to space limitations we only present a small part of it.
We start by presenting the functional-level model for a general I&C system. Then we describe how the high-level dependability model is built for one of the I&C systems. Finally, we show some results concerning a small part of the dependability model built for one of the I&C systems.
An I&C system performs five main functions: Human-machine interface (HMI), processing (PR), archiving (AR), management of configuration data (MD), and interface with other parts of the I&C system (IP). The functions are linked by the partial dependencies: HMI $\leftarrow$ {AR, MD}, PR $\leftarrow$ MD, AR $\leftarrow$ MD and IP $\leftarrow$ MD. These relations are modeled by the functional-level model depicted in Fig. 13.
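These partial dependencies form a small graph, and the functions affected by a failure can be found by propagating over reversed dependency edges. A sketch (our illustration; the `DEPS` encoding is an assumption):

```python
# Partial dependencies of Fig. 13, encoded (our assumption) as
# "function -> functions it needs":
DEPS = {"HMI": {"AR", "MD"}, "PR": {"MD"},
        "AR": {"MD"}, "IP": {"MD"}, "MD": set()}

def impacted_by(failed):
    """Functions transitively affected when `failed` is down."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for f, needs in DEPS.items():
            if f not in hit and (failed in needs or needs & hit):
                hit.add(f)
                changed = True
    return hit

# MD failing degrades every other function, directly or via AR:
affected = impacted_by("MD")
```

Because the dependencies are only partial, in the GSPN model a failed dependency degrades rather than necessarily fails the dependent function.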
To illustrate the second step of our modeling approach, we consider the example of an I&C system composed of five nodes connected by a Local Area Network (LAN). The mapping between the various nodes and their functions is given in Fig. 14. Note that while HMI is executed on four nodes, Node 5 runs three functions. Nodes 1 to 4 are composed of one computer each. Node 5 is fault-tolerant: It is composed of two redundant computers. The initial structural model of this I&C is built as follows:
- Nodes 1 to 3 – in each node, a single function is achieved by one software component on a hardware component. Their model is similar to the one presented in Figures 5 and 15 (which will be explained later);
- Node 4 – has two functions that are partially dependent. Its functional-level model will be similar to $F_1$ and $F_2$’s functional-level model given in Fig. 4(b). Its structural model will be similar to the one depicted in Fig. 7, followed by a model slightly more complex than the one of Figure 15;
- Node 5 – is composed of two hardware components with three independent functions each. Its structural model is more complex than the one given in Figure 15 due to the redundancy.
- LAN – the LAN is modeled at the structural level by the structural dependencies that it creates.

Figure 13. Functional-level model for I&C systems
The complete high level dependability model for this system is composed of 41 places and 19 tokens. The other two I&C systems of our case study are composed of 76 places and 38 tokens, and of 27 places and 13 tokens. It is worth mentioning that these model sizes correspond to the high-level models. After refinement, the models are much larger, as illustrated in the following example.
Let us consider the simple case of Fig. 5. The associated detailed structural model is given in Fig. 15 in which the \( S_{k.o} \) place of Fig. 5, corresponds to either place \( S_{r.d} \) or \( S_{s.r} \). The detailed GSPNs presented are obtained using the rules described in section 4.2. The following assumptions and notations are used:
- The activation rate of a hardware fault is \( \lambda_{h} \) (Tr\(_{1}\)) and of a software fault is \( \lambda_{s} \) (Tr\(_{3}\));
- A permanent hardware fault (resp. software) is detected by the fault-tolerance mechanisms with probability \( d_{h} \) (resp. \( d_{s} \) for software faults). The detection rate is \( \delta_{h} \) (Tr\(_{5}\)) for the hardware, and \( \delta_{s} \) (Tr\(_{7}\)) for the software;
- The effects of a non detected error are perceived with rate \( \pi_{h} \) (Tr\(_{4}\)) for the hardware, and rate \( \pi_{s} \) (Tr\(_{8}\)) for the software;
- Errors detected in the hardware component require its repair: repair rate is \( \mu \) (Tr\(_{5}\));
- Permanent errors in the software may necessitate only a reset. The reset rate is \( \rho \) (Tr\(_{6}\)) and the probability that an error induced by the activation of a permanent software fault disappears with a reset is \( r \) (Tr\(_{7}\));
- If the error does not disappear with the software reset, a re-installation of the software is done. The software’s re-installation rate is \( \sigma \) (Tr\(_{10}\)).
Note that a temporary fault in the hardware may propagate to the software (Tr\(_{11}\)) with probability \( p \). We stress that when the software component is in place \( S_{r.d} \) or \( S_{s.r} \), it is in fact not available, i.e., in a failure state.
Also, when the hardware is in the repair state, the software is on hold. The software will be reset or re-installed as soon as the hardware repair is finished. Due to the size of the subsequent model, this case is not represented here.
Thus, the original 4-place model becomes a 15-place model after refinement.
6. Conclusions
Our modeling approach follows in the footsteps of most of the existing work on dependability modeling. What makes this approach unique is the inclusion of the system's functional specifications in the dependability model, by means of a functional-level model. It also allows modeling of a system from its functional specification up to its implementation. The existing refinement techniques are conceived to preserve the result values; ours, on the contrary, provides more accurate models and associated results.
Thus, the modeling approach presented in this paper gives a generally applicable process for system analysis, based on generalized stochastic Petri nets. This process involves a stepwise refinement in which dependencies are introduced at the appropriate level of refinement. A careful and precise definition of the constructs and of the refinement process is given. Indeed, we have shown how, starting from functional specifications, a functional-level model can be transformed progressively into a dependability model taking into account the system's structure. We have also shown how the structural model can be refined to incorporate more detailed information about the system's behavior. Refinement is a very powerful tool for mastering model construction progressively. It allows experienced, but not necessarily specially-trained, modelers to analyze the dependability of one or several systems and compare their dependability at the same level of modeling abstraction, if required.

Figure 15. Structural model of software and hardware components
The approach was illustrated here on simple examples related to a specific structure of an instrumentation and control system in power plants. However, we have applied this approach to three different I&C systems to identify their strong and weak points, in order to select the most appropriate one.
Acknowledgements
The authors wish to thank Mohamed Kaâniche for his helpful comments on an earlier version of this paper. We also wish to thank the anonymous reviewers for their useful suggestions for improvement.
References
Semantic-element-based Defining Approach for Model Transformation Rules
Lei Wang and Yuyan Zhang
School of Computer Engineering, Weifang University, Weifang, China
Abstract
In model-driven software development, the transformation from platform independent models at a higher abstract level to platform specific models at a lower level is a key technology. The approach to creating mapping rules is profoundly impacted by the gap between the source and the target model. By abstractly analyzing the syntactic and semantic characteristics of modeling languages, an approach to defining model transformation rules is proposed on the basis of semantic consistency. Firstly, the user constructs an abstract semantic model through an in-depth analysis of the target platform. Secondly, the user builds mapping relations from the source model to the target model via the abstract target semantic model. This work is based on the idea of elements in the source semantic domain being reconstructed in the target semantic domain. The approach can provide effective support for validating mapping rules between models at different abstract levels.
Keywords: Model-driven software development, Model transformation, Semantic consistency, Abstract level
1. Introduction
Recently, model-driven development has become a hot topic and the main trend in software engineering, of which OMG's MDA may be the most representative. Numerous research institutions and enterprises have invested large amounts of money and manpower in this field. MDA has proved that many benefits can be obtained from it, such as rapid development, architectural advantages, enhanced code quality and maintainability, and system portability across middleware vendors, and it also shows great potential in these areas [2].
On the whole, the provided approaches can be classified into five categories [3-6]: (1) Template-Based Approaches. In this approach, templates consisting of text in the target language include meta-code tags to access information from the source model. In the transformation process, these tags are interpreted and eventually replaced by code representing the corresponding parts of the source. (2) Target-structure-Driven Approaches. These approaches first create the structure of the target model and then fill in its attributes and references. (3) Graph-Transformation-Based Approaches. This category draws on the theoretical work on graph transformations; in particular, these approaches operate on typed, attributed, labeled graphs, a kind of graph specifically designed to represent UML-like models, and are effective for applications such as generating EJB implementations and database schemas from UML models. These approaches are among the most powerful
and declarative, but also the most complex ones. (4) Relational Approaches. This kind of approach uses the mathematical concept of relations to specify how source and target models are linked. Relations are declarative but may be given execution semantics; they seem to strike a good balance between flexibility and declarative expression. (5) Transformations Implemented using XSLT. Models can be serialized as XML using XMI, and model transformations can then be implemented using XSLT.
Most of the approaches given above focus on providing a concrete solution for the transformation from platform independent models (PIMs) to platform specific models (PSMs); there is little research on the definition principles for mapping rules, or on a basic theory to validate the mapping rules between such models. Research on machine translation of natural languages shows that the prerequisite of correct transformation between different languages is the same or similar semantic expression characteristics in the source and the target [7]. The same holds for transformation between models at different abstract levels in MDA. A model mapping approach based on semantic consistency is proposed by abstractly analyzing the syntactic and semantic characteristics of modeling languages. An abstract target semantic model must first be constructed through an in-depth analysis of the target platform. Then, based on the idea of elements in the source semantic domain being reconstructed in the target semantic domain, mapping relations from the source model to the target model are created via the abstract target semantic model. This approach may serve not only as theoretical guidance for model transformation, but also as a measurement for validating the mapping rules between models at different abstract levels of the same system.
2. The Semantic Consistency Requirements of Model Transformation
2.1. Model Gap and Transformation
In MDA, a model is a representation of the function, structure and behavior of an application or system in a given formalism [8]. Any formal language reflects a viewpoint that determines a set of modeling primitives and their semantics [9-11]. There is often a great difference between models at different abstract levels of the same system; this situation is called isomeric features between different modeling descriptions in this paper. The isomeric features between different modeling descriptions appear at three levels: syntax, semantics and structure. The syntactic gap refers to the difference among the data types and styles in different models. There also exist differences within the data structures, link ports and patterns of different models, which are called isomeric features at the structure level. The semantic gap means that the meanings of the terminologies used in different domains are not the same; the distance between them is more significant. The gap between modeling languages can be narrowed using formalism extensions [11], but cannot be completely eliminated. The fundamental solution to the gap problem seems to be the creation of an effective semantic mapping mechanism, so as to ensure that equivalent representations of the system can be obtained [7, 12].
From the operational view, a transformation is a terminating algorithm that applies structural and/or semantic changes to a model or a set of models. From the functional view, a transformation is a function that maps a tuple of models from one or more domains onto another tuple of models in the same or different domains [4]. Transformations are required to maintain not only semantic properties of the models but also certain syntactic properties.
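The functional view can be made concrete with a toy transformation; the model encoding and the entity-to-table rule below are purely illustrative, not rules from the paper:

```python
# Toy "function view" of a transformation (our illustration): a function
# mapping a source model onto a target model.  The dict-based models and
# the entity-to-table naming rule are assumptions.
def transform(pim):
    """Map a toy PIM (entity names) onto a toy PSM (table names)."""
    return {"tables": [f"tbl_{entity}" for entity in pim["entities"]]}

pim = {"entities": ["Customer", "Order"]}
psm = transform(pim)
```

The requirement in the text is that such a function preserve the semantic properties of `pim`, not merely produce syntactically valid output.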
2.2. The Semantics Consistency Relations between Different Models
Semantics is the meaning of information, which depends on its context. Some definitions, following References [12] and [13], are given below:
**Definition 1:** Semantic consistency refers to the following case: let U and V be two different sets of elements and APP be an application system, and take U and V as inputs to APP respectively. After the application runs, two outputs APP(U) and APP(V) are obtained. The meaning and function of the two outputs are fully equivalent (or very similar), which is noted as \( \text{APP(U)} \equiv \text{APP(V)} \).
**Definition 2:** Semantic consistency of model mapping refers to the following case: let MAP be a mapping from syntactic concepts to a semantic domain. When MAP is applied to two concept patterns (named X and Y respectively) in different models, two sets of primitives with equivalent semantics are obtained as output, which is noted as \( \text{MAP(X)} \equiv \text{MAP(Y)} \).
A consistency condition can be defined on the syntactic expressions based on a common semantic domain. In general, two kinds of semantic consistency are distinguished. Horizontal consistency problems exist for a set of models that describe the same aspect of a system from different points of view, potentially using different languages; it has to be ensured that these models do not contain contradictory concepts. Vertical consistency problems exist for models describing the same concept at different levels of abstraction; if a model is refined, it has to be guaranteed that the refined model does not contradict the specifications of the more abstract model.
2.3. The Requirements for Semantic Consistency of Model Transformation in MDA
In model driven software development such as MDA, the source models are platform independent models of the system, and the target models (or target codes) are their further refinement with specific technologies on a given platform. The target codes are compiled into executable components, and these components exhibit the target semantic model while running on the target platform. Semantic consistency between the source models and the target semantic model is the fundamental requirement of model transformation, and it is also a basic measure for judging the validity of mapping rules.
3. Model Mapping based on Semantic Consistency
The similarity degree between models refers to the size of the gap between the source and target models, taking into account syntactic concepts, organizational structure, semantic primitives and features. Its value varies in the range \([0, 1]\): the greater the value, the higher the similarity.
3.1. Similar Degree between Models
As the source and the target models may be represented in different ways, it is hard to compute the similarity degree between them directly. However, both include many patterns in their respective descriptions [14], so we can approximate the similarity degree between models using the definition of pattern matching.
**Definition 3:** A pattern is a combination of a set of conceptual variables and the relevant constraints which modeling elements bound to the pattern must satisfy [15].
A pattern can be defined as a 3-tuple \( P = \langle C, A, SR \rangle \), where \( C = \{ c \mid c \) is a conceptual element in the PIM or PSM\( \} \) and \( A = \{ a \mid a \) is a relevant attribute of the conceptual modeling elements\( \} \). Each attribute \( a \in A \) is defined as a unary relation \( a(c) \), where \( c \in C \) is the conceptual element that \( a \) relates to. \( SR = \{ \text{kind-of, contain, associate, ...} \} \) is the set of semantic relations between the conceptual modeling elements. Each semantic relation \( sr \in SR \) is defined as a binary relation \( sr(c, c') \), where \( c, c' \in C \) and \( c \) relates to \( c' \) through \( sr \). Thereby, the similarity degree between models at different levels can be approximated by the matching degree of the patterns involved in the models, although some semantic information in the models (such as constraints) is lost. The calculation process is rather simple, and the lost information does not have a severe impact on the result. The degree of pattern matching can be computed by adding the degree of concept matching and the degree of context matching according to their weights. The concept matching degree shows the size of the gap between the meanings carried by the concepts, while the context matching degree represents the similarity of the organizational structures and the relationships between the concepts [16].
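The weighted combination of concept matching and context matching described above can be sketched as follows. This is an illustrative sketch only: the paper does not fix the weights or the similarity measures, so the Jaccard-overlap functions and the 0.6/0.4 weights below are assumptions.

```python
# Hedged sketch of Definition 3 and the weighted pattern-matching degree.
# The weights and the concept/context similarity measures are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Pattern:
    concepts: set     # C: conceptual modeling elements
    attributes: dict  # A: attribute name -> concept it belongs to
    relations: set    # SR: (kind, concept, concept') triples

def concept_match(p: Pattern, q: Pattern) -> float:
    """Jaccard overlap of concept names (illustrative measure)."""
    union = p.concepts | q.concepts
    return len(p.concepts & q.concepts) / len(union) if union else 0.0

def context_match(p: Pattern, q: Pattern) -> float:
    """Jaccard overlap of semantic relations (illustrative measure)."""
    union = p.relations | q.relations
    return len(p.relations & q.relations) / len(union) if union else 0.0

def pattern_match_degree(p, q, w_concept=0.6, w_context=0.4):
    """Weighted sum of concept and context matching, in [0, 1]."""
    return w_concept * concept_match(p, q) + w_context * context_match(p, q)
```

Identical patterns yield a degree of 1.0 and disjoint patterns yield 0.0, matching the \([0, 1]\) range stated for the similarity degree.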
3.2. Semantic Consistency based Model Mapping Approach
The size of the gap between the source and the target modeling language has a profound impact on the efforts to create mapping rules. The mapping relations are easy to define when the equivalent elements between the source and the target modeling languages can be determined. If the distance between two models is more significant, an intermediary model may be necessary to facilitate the mapping.
Under the guidance of the semantic consistency principle given in Section 2, the approach used in this paper to define mapping relations is as follows: first, an abstract target semantic model is constructed through an in-depth analysis of the target platform within the limits of the semantic constraints. Then the mapping relations from the source models to the target semantic model, and from the target semantic model to the target models (or target codes), are defined respectively. Thus, mapping relations from source model to target model can be built easily by taking the abstract target semantic model as an intermediary, as shown in Figure 1.
To construct the abstract target semantic model, the relevant concepts are first gathered by abstract analysis, and then semantic information is added to these concepts using constraints. In order to facilitate the automatic calculation of semantics, the constraint conditions are restricted to the intersection of attributes, so as to ensure that the semantic transfer can be determined.

The semantic mapping between models can be considered as a process of reconstructing the elements of the source semantic domain in the target semantic domain. That is to say, starting from the source models, the values of the relevant attributes are obtained through observation and deduction on the source elements; it is then ascertained whether these values meet the requirements of the target model definitions, and they are arranged accordingly [17].
Let \( C \) be the set of concepts of patterns in the target, i.e., \( C = \{c_1, c_2, ..., c_n\} \). Let \( O_1 \) be the set of attributes that can be observed directly, and \( O_2 \) the set of attributes observed implicitly, that is, attributes of the source models that need to be deduced using the context of the concepts in the pattern, i.e., the necessary conditions of these concepts, noted as \( \{N_c \mid c \in C\} \). Let \( R \) be the classification rules for the attribute values of the target semantic domain, i.e., the sufficient conditions for classification in the target semantic domain, noted as \( \{S_c \mid c \in C\} \). Let \( M \) be the mapping relations between patterns. The mapping problem can then be described as: find a conceptual set \( c_T \) in the target semantic domain for each conceptual element \( c_S \) in the source domain that satisfies the following equation.
\[
O_1 \land O_2 \land R \Rightarrow M(c_S, c_T) \tag{1}
\]
The equation above describes the mapping problem formally: given a set of concepts of a pattern, noted as \( CS = \{c_{S1}, ..., c_{Sm}\} \), it can be mapped into the target domain using the classification rules for the attribute values in the target semantic domain, yielding a concept set \( CT = \{c_{T1}, ..., c_{Tn}\} \). The mapping process depends on the semantic features at both ends. The source provides the observations of the source pattern \( (O = \{N_c \mid c \in CS\}) \). The target provides the target pattern and its classification rules \( (C = CT, R = \{S_c \mid c \in CT\}) \). In this way, a conceptual element of the source pattern can be mapped into the target semantic domain by finding the target conceptual set \( c_{Ti} \) that satisfies equation (1). The mapping relations from the target semantic model to the target models (or target codes) are straightforward and can be easily defined; we do not elaborate on them here for reasons of space.
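The search implied by equation (1) can be sketched as follows: each source concept is mapped to every target concept whose sufficient condition \( S_c \) holds for the attributes observed directly (\( O_1 \)) or deduced from context (\( O_2 \)). The concrete attribute names and rules below are invented for illustration; the paper leaves them platform-specific.

```python
# Hedged sketch of the mapping search in equation (1). Attribute names and
# classification rules are illustrative assumptions, not from the paper.

def observe(source_concept, direct_obs, deduced_obs):
    """Union of directly observed (O1) and context-deduced (O2) attributes."""
    return direct_obs.get(source_concept, set()) | deduced_obs.get(source_concept, set())

def map_concept(source_concept, direct_obs, deduced_obs, rules):
    """Return the target concepts c_T whose sufficient condition S_c holds."""
    attrs = observe(source_concept, direct_obs, deduced_obs)
    return {target for target, sufficient in rules.items() if sufficient(attrs)}

# Illustrative run: classify a PIM entity object into a JSF+EJB-style domain.
direct = {"Student": {"persistent", "has-identity"}}          # O1
deduced = {"Student": {"no-ui"}}                              # O2, from pattern context
rules = {                                                     # R: sufficient conditions
    "EntityBean":  lambda a: {"persistent", "has-identity"} <= a,
    "SessionBean": lambda a: "stateless-logic" in a,
}
```

With these assumed observations, `map_concept("Student", direct, deduced, rules)` selects only `EntityBean`, i.e. the rule set plays the role of \( R \) in equation (1).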
4. A Case Study
A case study is presented to illustrate the semantic-consistency-based model mapping approach, using the UML-based approach in [18] as the source and JavaServer Faces assisted with Enterprise JavaBeans as the target platform.
4.1. A Modeling Approach for Platform Independent Models
The modeling approach proposed in [18] is based on extending UML and introduces user-interface presentation views. In this approach, UI component data and behavior elements are described abstractly rather than as a list of interface elements and their attributes. At the same time, the binding relations between UI elements and the corresponding objects are given, which makes both the data objects and the behavior elements independent of concrete UI components and widgets. The FMP approach can be used to build platform independent models for Web applications as the source in model transformation. Its contents are composed of two layers: architecture modeling and component modeling.
System represents the architecture and constraints of a software system, and is defined as a 4-tuple \(<\text{Style}, \text{Description}, \text{ComponentSet}, \text{Relations}>\). Style represents the architecture style. Description represents the functional description of the system. ComponentSet represents the set of components and connectors. Relations is a list of relations among components and connectors [19]. A component is the foundation of function design and realization in a software system.
Function View, Workflow View, Static View, Action View and UI Presentation View are used in the FMP approach to build component models. Each view represents an aspect of the application system.
Function Views describe the functions of the components in the architecture model, the information exchanged between the system and the outside, and the interactions among the function modules of the system. A Function View uses the UML Use-Case Diagram for its description and is defined as a 3-tuple <RoleSet, UCSet, AssocSet>. UCSet is the set of use cases, used to describe the system's functions. RoleSet is the set of roles, used to describe the users of the use cases. AssocSet is the set of using relations between roles and use cases.
Workflow Views are used to model the actions of each individual entity and to define the interactive and cooperative relations among these entities. A Workflow View uses state-machine-based activity diagrams for its description and is defined as a 4-tuple <InitState, ActivitySet, CondiSet, FinalStateSet>. InitState is the state machine's initial state. ActivitySet is the set of activities. CondiSet is the set of transition conditions between states. FinalStateSet is the set of final states of the state machine.
A Static View is an integration of the Package Diagram and Class Diagram in UML. It describes the analytical classes of the use cases in the Function View and the relations among these classes. A Static View also includes information about the structural features of a sub-system, and is defined as a 3-tuple <ClassSet, PackageSet, AssociSet>. ClassSet is the set of classes. PackageSet is the set of packages. AssociSet is the set of relations among classes, among packages, and between classes and packages.
An Action View uses the extended Collaboration Diagram in UML to describe the actions of objects in more detail. It is defined as a 4-tuple <RoleSet, ANSet, ObjectSet, AssociSet>. RoleSet is the set of roles. ANSet is the set of Action-Nodes, which are abstract representation symbols for the connecting points of system actions; the association from a Role to an Action-Node represents the using relation between them. ObjectSet is the set of objects, and AssociSet is the set of relations between these modeling elements.
Action-Nodes are represented by ellipses. An Object is represented as a rectangle, and a Data Collection as overlapped rectangles. Data Objects and Data Collections have a data-source property, shown as an additional cylinder inside the rectangle. A rectangle with two vertical bars is the symbol for another UI Presentation View; a dotted arrow directed to it denotes a UI navigation relation. A rectangle with one or more small circles connected to it represents an external entity or component, where the circles are its Entry Points. Most symbols have a Visible or Non-Visible property on the UI: visible objects are drawn as solid-line rectangles, non-visible objects as dashed-line rectangles.
The meaning of an Action View is as follows: a role uses a system function by touching an Action-Node, which causes messages to be transmitted along these objects. The next UI page is selected according to the results after the function execution is completed.
UI Presentation View provides an intuitive presentation of the boundary objects and the interaction points between users and the system in the Action View. It also provides the binding relations between UI modeling elements and the visible objects in the Action View. A UI Presentation View is defined as a 2-tuple <AreaNodeSet, LayOutStrategy>, with AreaNode = <UIComponentSet, UCActionSet, UCLayout>. A presentation page is divided into several presentation areas (AreaNodes), and each area has a layout strategy (LayOutStrategy); an area can also be divided into several sub-areas. UIComponentSet is a set of UI presentation components, such as Data-grid, Form, Graphics, etc. UCActionSet is the set of interaction points in the Action View corresponding to the presentation area.
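The 2-tuple and 3-tuple above translate directly into a nested data structure. The sketch below follows the field names of the paper; the example page content (a form area and a grid area with a nested sub-area) is invented for illustration.

```python
# Data-structure sketch of UI Presentation View:
#   UIPresentationView = <AreaNodeSet, LayOutStrategy>
#   AreaNode           = <UIComponentSet, UCActionSet, UCLayout>
# Field names follow the paper; the example page is illustrative.
from dataclasses import dataclass, field

@dataclass
class AreaNode:
    ui_components: list   # UIComponentSet: e.g. "Data-grid", "Form", "Graphics"
    uc_actions: list      # UCActionSet: interaction points in the Action View
    uc_layout: str        # UCLayout: layout within this area
    sub_areas: list = field(default_factory=list)  # areas can be nested

@dataclass
class UIPresentationView:
    area_nodes: list      # AreaNodeSet
    layout_strategy: str  # LayOutStrategy for the whole page

page = UIPresentationView(
    area_nodes=[
        AreaNode(["Form"], ["submitQuery"], "vertical"),
        AreaNode(["Data-grid"], ["selectRow"], "grid",
                 sub_areas=[AreaNode(["Graphics"], [], "flow")]),
    ],
    layout_strategy="two-column",
)
```

The nesting of `sub_areas` mirrors the statement that an area can be divided into several sub-areas.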
4.2. JSF+EJB: the Target of Model Mapping
JavaServer Faces (JSF) [20] is a new standard Java framework for building Web applications, developed through the Java Community Process (JCP). It simplifies development by providing a component-centric approach to developing Java Web user interfaces. JSF also ensures that applications are well designed and more maintainable by integrating the well-established Model-View-Controller (MVC) design pattern into its architecture. This makes JSF applications much more manageable, because the user-interface code (View) is cleanly separated from the application data and logic (Model).
JavaServer Faces assisted with Enterprise JavaBeans (EJB) strikes a good balance between development efficiency and maintenance costs; it can be used to develop comprehensive Web applications that support various data types and clients (such as HTML and WML browsers) while meeting stringent safety and transaction-processing requirements. In this paper, JSF+EJB is used as the target platform for model transformation.
As shown in Figure 2, an abstract target semantic model for JSF+EJB was defined based on the MVC design pattern, and its components are divided into three kinds: Static Component, Action Component and Presentation Component.
The Model Layer (Static Component) contains detailed semantic information about Java application programs and the EJB specification, such as Package, Java-Interface, Java-Class, Attribute, Method and the relationships between Java-Classes.
The Controller Layer (Action Component) describes the system from the dynamic aspect; its elements are organized around the solution of system tasks. The action component model is constructed from the solution process of the user's requests, with reference to the interaction relations between users and the system.
There are two kinds of action elements (WebActions) in the Action Component model. The first kind represents entry points for interactions between users and the system and can be triggered directly. The second kind represents action elements within the system, which can be triggered via the first kind. A Navigation is the target of the next step after a request is resolved. An ActionPara represents a parameter object used in the solution process of a system action. A DataObject represents the kind of object that is the target of an operation. Invoke is an invocation relation from a WebAction to a DataObject.
The essential semantics carried by the Action Component model is as follows: a user touches a WebAction, and the application system receives the user's request, which may include ActionParas; it then analyses and dispatches the request to the corresponding actions. The WebAction invokes the methods of the invoked objects to resolve the task. After completion, the action forwards the request to the next page according to the result and the conditions of the Navigations.
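The request/invoke/navigate cycle just described can be sketched as a small dispatch function. All names, the invocation callable, and the condition-based navigation rules below are illustrative assumptions, not part of the JSF+EJB model itself.

```python
# Hedged sketch of the Action Component semantics: a touched WebAction
# receives the request (with ActionParas), invokes a DataObject's method,
# and picks the next page via Navigation conditions. All names and the
# resolution logic are illustrative assumptions.

def handle_request(web_action, action_paras, invoke_target, navigations):
    """Resolve a user request and return the next page."""
    result = invoke_target(action_paras)           # Invoke relation to a DataObject
    for condition, next_page in navigations:       # Navigation: condition -> target
        if condition(result):
            return next_page
    return "error.jsp"                             # illustrative fallback page

# Illustrative run: a login-style action with success/failure navigation.
navigations = [
    (lambda r: r == "ok", "main.jsp"),
    (lambda r: r != "ok", "retry.jsp"),
]
page = handle_request("doLogin", {"user": "a"}, lambda p: "ok", navigations)
```

Here `page` resolves to the success target because the invoked object returns `"ok"`, mirroring how a Navigation condition selects the next page after the action completes.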
The View Layer (Presentation Component) is organized as a hierarchical tree-like form to represent the specific relationships of UI elements for Web applications. Each UI page is represented as a WebAreaTree, which contains several WebAreaNodes and a layout strategy (Layout). Each WebAreaNode may include some WebUIComponents, such as WebForm, WebGrid, WebTree, etc.
4.3. The Mapping Relations
According to the semantic consistency based modeling approach presented in Section 3 and taking the abstract target semantic model for JSF+EJB (JSFATSM) as an intermediary, we define mapping relations from source model (PIM) to target model (JSFTM) according to the syntax and semantic features of modeling elements. Complex rules can be constructed by simple mapping rules. The holistic mapping relations are shown in Figure 8: Entity-objects in PIM’s Static View are mapped to Entity-beans of JSFATSM. Control-objects are mapped to Session-beans. Boundary-objects are mapped to ActionPara or DataObject in Action Component and UI presentation elements in Presentation Component. The information brought by UI Presentation View should be mapped into the Presentation Component of JSFATSM.
The mapping relations from the abstract target semantic model to the target models are more obvious and easy to build. The main work is the analysis, restructuring and integration of the information within the target semantic model, together with the addition of the corresponding information about the target platform. The Static Component model in JSFATSM is mapped to the corresponding EJB components. The information within the Action Component model is mapped into the business logic module, the navigation processing module and the mapping relations of the configuration files. The information within each WebAreaTree in the UI presentation model is mapped into the corresponding active server page files, which mainly cover UI layout, presentation components and UI widgets.

As can be seen from Figure 3, the introduction of the abstract target semantic model simplifies the definition of mapping relations from PIM to PSM. The Template-Based Approach [3] can be used to generate the target codes; it is widely used in MDA-supported tools such as AndroMDA, OptimalJ and ArcStyler, and is not repeated in this paper. The static view, the action view and the UI presentation view are all integral parts of a PIM model for a student information management system. After two transformation steps (from PIM to PSM, and from PSM to target codes), the actual running page based on the JSF framework is shown in Figure 4.
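The template-based generation step mentioned above can be sketched as filling a platform template with information from the target semantic model. The template text, the model fields, and the EJB 3-style output below are invented for illustration; real MDA tools use far richer template engines.

```python
# Template-Based Approach sketch: target code is produced by filling a
# platform template with data from the target semantic model. The template
# and the model fields are illustrative assumptions.
from string import Template

ENTITY_BEAN_TEMPLATE = Template(
    "@Entity\n"
    "public class $name {\n"
    "$fields"
    "}\n"
)

def generate_entity_bean(static_component):
    """Render one Static Component class as (sketchy) EJB 3-style source."""
    fields = "".join(
        f"    private {ftype} {fname};\n"
        for fname, ftype in static_component["attributes"].items()
    )
    return ENTITY_BEAN_TEMPLATE.substitute(
        name=static_component["name"], fields=fields
    )

# Illustrative model element from the student information management example.
student = {"name": "Student", "attributes": {"id": "Long", "name": "String"}}
code = generate_entity_bean(student)
```

The same pattern, with one template per target artifact (EJB class, configuration file, server page), is the essence of the template-based PSM-to-code step.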
5. Conclusion and Future Work
Starting from an analysis of the semantic consistency requirements of model transformation, a model mapping approach based on semantic consistency was proposed in this paper. Based on the idea of reconstructing the elements of the source semantic domain in the target semantic domain, this approach can be used to build mapping relations from source model to target model. The target semantic model is taken as a reference for disambiguation, and it provides a good basis for the semantic comparison between modeling languages at different abstract levels (such as UML and target codes). Therefore, by using this approach, semantic consistency between different descriptions of the same component can be ensured. At the same time, the model transformation process is accompanied by a process of model validation, which provides effective support for model driven development.
Future work includes: (1) further study of the formal description of target semantic models, to strengthen semantic expressiveness and the consistency verification between models; (2) further formalization of the model mapping process to enhance its accuracy; (3) a complete abstraction of the UI presentation description in the target semantic model, together with enhancement of the visual attractiveness of the generated pages; (4) diversification of target platforms in order to verify the practicality of this approach.
Acknowledgments
The authors are most grateful to the anonymous referees for their constructive and helpful comments on the earlier version of the manuscript that helped to improve the presentation of the paper considerably. This research is supported by the Foundation of Science-technology Development Project of Shandong Province of China under Grant No. 2011YD01042.
References
Author
Lei Wang
He is currently working at Weifang College as a lecturer. He received his M.S. degree in the school of Software at the Shandong University, China, in 2006 and his Ph.D. degree in the School of Computer Science and Technology at the Shandong University, China, in 2010. His research interests are in the areas of graphics, vision and human-computer interaction. He is a member of CCF.
Master Guide
SAP NetWeaver Composition Environment 7.1
Target Audience
- System administrators
- Technology consultants
PUBLIC
Document version: 1.10 – 02/17/2009
Material number: 50084393
## Document History
<table>
<thead>
<tr>
<th>Version</th>
<th>Date</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.10</td>
<td>2/17/2009</td>
<td>SR5 Chapter 3.2 contains the new section <em>Integrating Applications into an SAP NetWeaver Portal</em>. Chapters 2.1 and 3.2 make it clear that only the producer capabilities of the Portal are supported in SAP NetWeaver CE and not the consumer capabilities.</td>
</tr>
<tr>
<td>1.00</td>
<td>5/8/2008</td>
<td>SR5</td>
</tr>
</tbody>
</table>
# Table of Contents
**Chapter 1** About This Document
**Chapter 2** Introduction
**Chapter 3** System Landscape
3.1 Planning Your Landscape
3.2 Use Cases
3.3 Implementation of SAP NetWeaver Composition Environment
1 About This Document
This document explains how to plan an SAP NetWeaver Composition Environment system. For more information about SAP NetWeaver Composition Environment, see http://sdn.sap.com/irj/sdn/nw-ce.
2 Introduction
Overview
SAP NetWeaver Composition Environment is a platform for building and running applications based on Service-Oriented Architecture (SOA) principles. It offers a set of capabilities for integrating new and existing services (from SAP as well as proprietary services), into business-specific solutions. You can develop portable, standard-compliant applications based on the latest Java Enterprise Edition (Java EE) 5 technologies and integrate them in existing SAP and third-party solutions using a central enterprise service registry. SAP NetWeaver CE increases development productivity by providing model-driven composition tools for creating services and user interfaces and orchestrating them into collaborative user-centric workflows.
Enabling SOA
To enable SOA development, SAP NetWeaver CE provides the following key capabilities:
- **A lean and robust application server based on the latest Java EE 5 technology**
With the Java EE 5 certified application server that SAP provides, you can develop Java EE applications based on the latest Java EE standard as well as migrate existing Java EE applications. The application server offers full support of the latest Java EE 5 features, updates, and adjustments for simplifying the development of enterprise applications, such as EJB 3.0, the new JSF 1.2, the new Java Persistence API 1.0, and the updated Web services stack, among others. It provides an implementation of the Service Data Objects (SDO) 2.1 standard simplifying data programming for applications and frameworks, support for development of standard-based portlets, and a job scheduler implementation. With the Java Connector Architecture (JCA) 1.5 and full Java EE 5 Web Services support, it enables connectivity to SAP and non-SAP back ends and services. In addition to being standard-based, the application server in SAP NetWeaver CE comprises features for ensuring its robustness, scalability, and supportability, such as configurable session failover support, built-in load balancing support, fast and robust shared-memory-based request handling, and robust monitoring and unique supportability of nonfunctional problems based on SAP's own Java VM features.
- **An integrated environment for Java application development**
The SAP NetWeaver Developer Studio is SAP’s Integrated Development Environment (IDE) for Java and is based on the open-source tools framework Eclipse 3.3. With the SAP NetWeaver Developer Studio, you can develop Java EE 5 applications from scratch using the built-in support for new technologies such as EJB 3.0 and JSF 1.2. In addition, the integration with the service registry in SAP NetWeaver CE enables you to browse and consume services in the applications you create.
- **Model-driven tools for increased development productivity**
SAP NetWeaver CE provides a set of model-driven tools for creating user interfaces and composing services that simplify development and increase productivity significantly. With Visual Composer you can model transactional and analytical user interfaces that can easily be integrated into the user interaction layer of a composite. The tool offers a graphical interface that is suitable for business users as well. Using Web Dynpro in SAP NetWeaver CE, you can build complex user interfaces and data-driven applications while benefiting from graphical tools and code generation that speeds up the development process. Web Dynpro clearly separates business and display logic, and allows user interaction with back-end systems using enterprise services. The Composite Application Framework (CAF) design time integrated into the SAP NetWeaver Developer Studio enables model-driven development of composite applications on top of existing enterprise services.
- **Service orchestration into user-centric collaborative workflows by means of reusable building blocks**
The services and applications that you create are typically transactional and apply to certain use cases. You can add more flexibility and innovation to your solutions by integrating them into collaborative workflows that address enterprise-specific business processes. SAP NetWeaver CE provides Guided Procedures as a framework for designing and running user-centric lightweight processes. It enables you to create reusable workflow building blocks that can be integrated in multiple custom solutions.
- **UDDI-based service registry for service provisioning and discovery**
To enable end-to-end SOA development, SAP NetWeaver CE offers a UDDI v3-based service registry where providers can publish service endpoints, definitions and associated metadata, and consumers can discover the appropriate services for their scenarios. The registry provides capabilities for classifying and browsing services using semantic-rich classification systems.
- **User interaction by a lightweight portal**
All Java and composite applications that you develop on top of SAP NetWeaver CE can be integrated and made available in the lightweight portal provided with the stack. It offers a unified user experience and a single access point for end users.
### Interoperability with SAP Products
SAP NetWeaver CE is a platform specifically designed to enable application development on top of other solutions such as SAP ERP 6.0. Using the services that this solution provides, you can leverage all existing business logic and data while modeling new solutions to meet the specific requirements of your business.
If you have an SAP NetWeaver 7.0 environment set up, you can also leverage other capabilities offered with it. For example:
- You can connect to an SAP NetWeaver Developer Infrastructure (NWDI) and utilize it for the lifecycle management of the applications you build on SAP NetWeaver CE.
- Using the federated portal network or application integration capabilities in SAP NetWeaver CE, you can integrate your composite applications into an existing SAP NetWeaver 7.0 runtime environment. Note that a portal running on SAP NetWeaver CE can function as a producer portal only; hence, consumer capabilities are not supported.
3 System Landscape
3.1 Planning Your Landscape
This section gives you an overview of the steps required to identify your technical system landscape for SAP NetWeaver CE:
1. You determine the use case of SAP NetWeaver CE you want to implement.
2. You determine the components of SAP NetWeaver CE you want to install.
3. You determine your system landscape; that is, you decide how many systems you require and how you want to use each of these systems.
4. Considering the hardware requirements, you map the required SAP NetWeaver CE systems to hosts.
3.2 Use Cases
You can set up your SAP NetWeaver CE differently according to the use case you wish to enable. For example, to implement a landscape for development, testing, or production purposes, you need to fulfill different requirements and the system landscape is specific to each of these cases.
The following graphic provides an overview of possible SAP NetWeaver CE system landscapes.
You can install SAP NetWeaver CE as:
- **Development Edition**
With the development edition, you can set up a system that has at least one Application Server Java (AS Java) and an SAP NetWeaver Developer Studio. The AS Java can be installed as a dedicated server or together with the Developer Studio on one workstation.
- **Productive Edition**
The productive edition is designed to provide optimal performance to run applications and offers more security, scalability, and availability features than a development system.
### Components
You can choose between the following installation components:
- **Application Server Java**
You use this component to develop applications based on Java Platform, Enterprise Edition (Java EE) technology. With this component, you can:
- Develop a Java EE compliant application
Focus on developing open standards-based Java EE applications in the SAP NetWeaver Developer Studio and running them on the Java EE 5-certified SAP NetWeaver Application Server.
- Develop user interfaces with Web Dynpro for Java
Focus on developing professional user interfaces using SAP’s highly productive, model-driven Web Dynpro technology.
- **Composition Platform**
You use this component for fast and easy composite application development. SAP provides a tool set for model-driven user interface development, service composition, and process orchestration. These tools comprise the design-time tools, methodologies, and runtime environment required for building and executing composites. By using these capabilities, you can:
- Create services that can use data from legacy or third-party systems with the Composite Application Framework (CAF).
- Implement service orchestration with Guided Procedures (GP) as collaborative business processes.
- Model user interfaces and integrate them into composites using Visual Composer.
The Composition Platform component contains the Application Server Java component.
- **Adobe Document Services**
You use this component to develop applications that use Adobe forms, online or offline. Adobe Document Services requires the Application Server Java component.
- **Voice**
You use this component to develop applications that allow customers and employees to interactively access SAP or non-SAP solutions from a telephone. Internet connectivity or special mobile devices are not required. Voice is a development and runtime environment for creating and deploying these voice applications.
Voice requires the Application Server Java and Composition Platform component.
- **IDE Update Site**
You use this component when you use SAP NetWeaver CE AS Java in development mode with several Developer Studio installations. An update site contains all features for the Developer Studio. You can initiate a check for updates or additional features in the Developer Studio and install them when available. The update site component mirrors the SAP Developer Studio update site on the Service Market Place. You can have several AS Java systems in your landscape but only one AS Java can contain an update site.
The update site requires the Application Server Java component.
### Planning Development Systems
To implement a development system, you have the following options:
- You set up a *developer workplace* on each host.
  - You install the Application Server Java (AS Java) in development mode together with the SAP NetWeaver Developer Studio on a single host.
  - Setting up the AS Java in development mode does not require specific infrastructure settings (such as setting up special users or shares) and saves hardware resources. It includes installation of a single server instance (with multiple server nodes possible).
  - You have to implement a developer workplace installation on Windows 32-bit operating systems, since the Developer Studio is available only for this platform. To implement this scenario, use the installation option *Development Edition*.
- You install an AS Java centrally and Developer Studio instances on each developer host. This option is recommended for large development projects, as it offers better scalability and requires fewer hardware resources per developer host. In this landscape scenario, you can set up an AS Java in development or productive mode centrally (either on a 64-bit Windows or Linux operating system) and connect to it from the other hosts in the landscape using the Developer Studio. We recommend installing the IDE Update Site component on the AS Java.
For each option, you can install the additional components Composition Platform, Adobe Document Services, or Voice.
### Planning Productive Systems
To implement a productive system, you install an AS Java server on a 64-bit Windows or Linux operating system by using the installation option *Productive Edition*. Compared to a development system, the productive system offers the following enhancements:
- **Clustering**
You can scale your system both by installing additional application server instances and by adding more server nodes to each instance.
In a cluster environment, the installation creates additional SAP system users and shares. The \sapmnt share, which holds global and local (instance-specific) data, is available on the global server host. At server startup all instances synchronize their binaries with the ones available on the global share. Local data for each individual instance is stored in the \saploc share on the relevant local host.
- **Enhanced security**
In a productive system, the number of unsuccessful logon attempts per user is limited to six; after the sixth failed attempt, the user is locked. In addition, password expiry is enabled.
- **Resource consumption**
The focus of the productive system is on the system runtime performance, so the default memory settings for certain Java Virtual Machine (JVM) parameters, such as permanent size and heap size, are higher than those for a development system. In addition, some design-time applications in the portal are disabled to save resources for those required for the runtime.
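The logon-lockout behavior described under *Enhanced security* can be sketched as a simple failed-attempt counter. This is an illustrative model only, not SAP's implementation; the class and attribute names are hypothetical:

```python
MAX_FAILED_LOGONS = 6  # productive-system limit described above


class UserAccount:
    """Illustrative model of the lockout policy; not SAP code."""

    def __init__(self):
        self.failed_attempts = 0
        self.locked = False

    def logon(self, password_correct: bool) -> bool:
        # A locked user cannot log on until the account is unlocked again.
        if self.locked:
            return False
        if password_correct:
            self.failed_attempts = 0  # a successful logon resets the counter
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_FAILED_LOGONS:
            self.locked = True  # the sixth unsuccessful attempt locks the user
        return False
```

The sketch shows why the limit is a security feature: after the sixth failure, even the correct password no longer grants access until an administrator intervenes.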
### Using a Development Infrastructure
For team development and version control, you can use a development infrastructure together with your SAP NetWeaver CE systems. SAP NetWeaver CE supports the following scenarios:
- You use an existing SAP NetWeaver Development Infrastructure (NWDI) installed as a part of SAP NetWeaver 7.0. Using NWDI ensures seamless integration with the SAP NetWeaver CE capabilities.
- You use a non-SAP development environment and connect to it using the Developer Studio in SAP NetWeaver CE. You are flexible to choose a development and production infrastructure of your preference and use the CE development capabilities to implement your projects.
When you use a development infrastructure you have to install the Developer Studio feature SAP NetWeaver Developer Studio Development Infrastructure Client.
### Connecting to Back-end Systems
With SAP NetWeaver CE, you can integrate and use a back-end system in the following scenarios:
- **You access data residing on a back-end system.**
You can reuse existing data in the applications that you build on top of SAP NetWeaver CE. For example, if you wish to use data residing in an SAP ERP system, you can use the enterprise SOA capabilities (in SAP ERP 2005 systems based on SAP NetWeaver 7.0 Support Package Stack 9 or higher) or you can connect via Remote Function Calls (RFC) to older systems using the Java Connector (JCo) that is offered as a part of SAP NetWeaver CE.
- **You use enterprise services on SAP or non-SAP back ends.**
You can leverage the SOA capabilities of the SAP NetWeaver CE stack by consuming services provided by an SAP back-end system, such as SAP ERP 2005 (on SAP NetWeaver 7.0 SPS9 or higher), or the ES Workspace that you can access via the SAP Developer Network (SDN). In addition, you can consume services from a third-party back-end system using the standard-based Web service capabilities of the stack. The SAP NetWeaver CE installation includes an ES Registry that enables you to browse the registered service definitions.
- **You integrate your applications into an SAP NetWeaver Portal.**
Once you create and run your applications on the SAP NetWeaver CE system, you can also enable their access from an SAP NetWeaver 7.0 Portal.
### Integrating Applications into an SAP NetWeaver Portal
Once you create and run your applications on the SAP NetWeaver CE system, you can use the standard portal capabilities for integrating a Java application in an iView.
- For back-end connectivity to BI composite and SAP transaction iViews, use the portal system landscape or portal APIs only.
- To enable back-end connectivity for other application types, such as composite views and processes, use Remote Function Calls (RFCs) and Web services, configured in SAP NetWeaver Administrator (NWA).
Optionally, once your applications are available in your local SAP NetWeaver CE system, you can enable their runtime access from a remote SAP NetWeaver 7.0 portal. You benefit by taking advantage of the advanced composition capabilities offered in SAP NetWeaver CE, while keeping your corporate portal in a stable and less frequently updated environment, ensuring a consistent end-user experience. To implement this scenario, do either of the following:
- Use the SAP Web Dynpro Java iView (Remote) template in the iView Wizard on the SAP NetWeaver 7.0 portal to integrate Web Dynpro Java applications running on the remote SAP NetWeaver CE system into local iViews.
- Set up a federated portal network between the SAP NetWeaver CE portal and the SAP NetWeaver 7.0 portal. This allows you to share content between distributed portal installations, both SAP and non-SAP, thus providing a single portal access point per user to portal information, services, and applications distributed on portals throughout the entire organizational network. Note that a portal running on SAP NetWeaver CE can function as a producer portal only; hence, consumer capabilities are not supported.
### 3.3 Implementation of SAP NetWeaver Composition Environment
To install the different use cases of SAP NetWeaver Composition Environment, refer to the corresponding documentation:
<table>
<thead>
<tr>
<th>Use Case</th>
<th>Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Development Edition</td>
<td></td>
</tr>
<tr>
<td>Productive Edition</td>
<td></td>
</tr>
</tbody>
</table>
## Typographic Conventions
<table>
<thead>
<tr>
<th>Example</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code><Example></code></td>
<td>Angle brackets indicate that you replace these words or characters with appropriate entries to make entries in the system, for example, “Enter your <code><User Name></code>”.</td>
</tr>
<tr>
<td>Example → Example ↓</td>
<td>Arrows separating the parts of a navigation path, for example, menu options</td>
</tr>
<tr>
<td>Example</td>
<td>Emphasized words or expressions</td>
</tr>
<tr>
<td>Example</td>
<td>Words or characters that you enter in the system exactly as they appear in the documentation</td>
</tr>
<tr>
<td><code>http://www.sap.com</code></td>
<td>Textual cross-references to an internet address</td>
</tr>
<tr>
<td><code>/example</code></td>
<td>Quicklinks added to the internet address of a homepage to enable quick access to specific content on the Web</td>
</tr>
<tr>
<td><strong>123456</strong></td>
<td>Hyperlink to an SAP Note, for example, SAP Note <strong>123456</strong></td>
</tr>
<tr>
<td>Example</td>
<td>Words or characters quoted from the screen. These include field labels, screen titles, pushbutton labels, menu names, and menu options. Also cross-references to other documentation or published works.</td>
</tr>
<tr>
<td>Example</td>
<td>Output on the screen following a user action, for example, messages. Also source code or syntax quoted directly from a program, as well as file and directory names and their paths, names of variables and parameters, and names of installation, upgrade, and database tools.</td>
</tr>
<tr>
<td>EXAMPLE</td>
<td>Technical names of system objects. These include report names, program names, transaction codes, database table names, and key concepts of a programming language when they are surrounded by body text, for example, <strong>SELECT</strong> and <strong>INCLUDE</strong>.</td>
</tr>
<tr>
<td>EXAMPLE</td>
<td>Keys on the keyboard</td>
</tr>
</tbody>
</table>
© Copyright 2009 SAP AG. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.
Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.
Microsoft, Windows, Excel, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation.
IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, System z9, z10, z9, iSeries, pSeries, xSeries, zSeries, eServer, z/VM, z/OS, i5/OS, S/390, OS/390, OS/400, AS/400, S/390 Parallel Enterprise Server, PowerVM, Power Architecture, POWER6+, POWER6, POWER5+, POWER5, POWER, OpenPower, PowerPC, BatchPipes, BladeCenter, System Storage, GPFS, HACMP, RETAIN, DB2 Connect, RACE, Redbooks, OS/2, Parallel Sysplex, MVS/ESA, AIX, Intelligent Miner, WebSphere, Netfinity, Tivoli and Informix are trademarks or registered trademarks of IBM Corporation.
Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
Adobe, the Adobe logo, Acrobat, PostScript, and Reader are either trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries.
Oracle is a registered trademark of Oracle Corporation.
UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group.
Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc.
HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.
Java is a registered trademark of Sun Microsystems, Inc.
JavaScript is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape.
SAP, R/3, xApps, xApp, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP Business ByDesign, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.
These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.
This document was created using stylesheet 2007-12-10 (V7.2) / XSL-FO: V5.1 Gamma and XSLT processor SAXON 6.5.2 from Michael Kay (http://saxon.sf.net/), XSLT version 1.
Disclaimer
Some components of this product are based on Java™. Any code change in these components may cause unpredictable and severe malfunctions and is therefore expressly prohibited, as is any decompilation of these components. Any Java™ Source Code delivered with this product is only to be used by SAP’s Support Services and may not be modified or altered in any way.
Documentation in the SAP Service Marketplace
You can find this document at the following address: https://service.sap.com/instguides
Abstract. Real Time Systems (RTS) interact with their environments using time constrained input/output signals. A functional misbehavior or a deviation from the specified time constraints may have catastrophic consequences. Hence, ensuring the correctness of such systems is extremely important and necessary. The increasing complexity of today's ubiquitous real time systems requires an adequate modeling language. Unified Modeling Language (UML), a widely used visual object oriented modeling language, has proved to be effective and suitable for real time systems. The paper discusses the ability of UML and its profiles to determine the schedulability of a fixed priority real time system. This paper emphasizes the occurrence of deadlock when using the Priority Inheritance Protocol and its prevention using the Priority Ceiling Protocol. Using UML 2.0 Sequence and Timing Diagrams, we model these two protocols and further analyze and compare these models.
Keywords: Real Time Systems, UML, Priority Ceiling Protocol, Priority Inheritance Protocol, Deadlock
1 Introduction
Real time systems are now omnipresent in modern societies in several domains such as avionics, control of nuclear power stations, multimedia communications, robotics, systems on chip, air-traffic control, and process control, as well as numerous embedded systems. Developing a real time embedded system is a sophisticated and complex task.
A real time system is one in which failure can occur in the time domain as well as in the more familiar value domain. These systems can have a mixture of timing constraints, broadly categorised as hard and soft. A hard time constraint requires that a result must be produced within a bounded interval; otherwise a serious fault is said to occur. In a soft real time system, occasional timing faults may be permitted. Examples of soft real time systems are video playback systems, online transaction systems, and telephone switches, as well as electronic games.
The real world is inherently concurrent, and a real time system which is linked to the behaviour of the real world must behave in a concurrent manner. Real time systems are therefore usually engineered using a number of concurrently running tasks, with timing constraints placed on them. Because of this concurrency there is contention for resources, requiring scheduling (i.e. how tasks are granted access to a given resource). The processor is an example of a resource which must be scheduled, but other resources (such as network or disk drive bandwidth) may also need to be scheduled.
In RTS, scheduling of tasks with hard deadlines has been an important area of research. An important problem that arises in the context of such real time systems is the effect of blocking. Blocking occurs due to the need for synchronization of tasks that share common logical or physical resources.
UML, which is the de facto standard, has become one of the most widely used modeling languages for industrial software systems, essentially because it is a semi-formal notation, relatively easy to use and well supported by tools. It encourages the use of automated tools that facilitate the development process from analysis through coding. This is particularly true for real time embedded systems, whose behavioural aspects can often be described via UML. It is therefore interesting to consider how well UML is adapted to the real time context. One important feature of UML stems from its built-in extensibility mechanisms: stereotypes, tagged values and profiles. These allow adapting UML to fit the specificities of particular domains or to support a specific analysis.
The main contribution of this paper is to model, analyze and compare two existing protocols (Priority Inheritance and Priority Ceiling) using UML 2.0 Sequence and Timing diagrams. We could not find any related work where this type of comparative analysis has been done using UML models.
The paper is structured as follows. In section 2 we provide the background of related work on which the models are built. Section 3 describes the actual scope of this work. Section 4 summarises the two real time scheduling protocols considered here. In Section 5, we focus on some of the new built-in features of UML 2.0 that fit the requirements of real time systems. Section 6 gives details of our proposed work and finally section 7 concludes the paper.
2 Related Works
The vast majority of research work on real time systems is centered on the concept of task. Real time theory does not focus on the problem of generating the task set and assigning non-functional properties to tasks. Generally, task sets are assumed to be given by the designer, using some ad hoc software design methodology [11]. The concept of a task is central to both the design and analysis of real time systems.
In particular, formal studies of real time systems frequently represent the time-constrained processing requirements of the system as a set of periodic or sporadic tasks with deadlines [13, 14, 15]. Both preemptive and non preemptive scheduling algorithms have been studied in the literature [12, 13, 14].
Exclusive access to shared resources is typically ensured by having a semaphore [4] guard. In priority inversion [23] higher priority jobs may be blocked by lower-priority tasks. In one of the earlier attempts at tackling blocking in the abstract (as opposed to with respect to a particular environment) Mok [15] proposed that critical sections execute non-preemptively; while this approach restricts blocking to the length of the largest critical section, it has the drawback that even those tasks that do not ever access shared resources are subject to blocking.
Lampson and Redell studied priority inversion and blocking with respect to concurrent programming in the Mesa environment [9, 10], and proposed a number of solutions. These were generalized by Sha, Rajkumar, and Lehoczky [23], and incorporated into the rate-monotonic (RM) scheduling framework [7].
UML (Unified Modeling Language) [17] has become one of the most widely used standards for modeling and designing industrial software systems, essentially because it is a semi-formal notation, relatively easy to use and well supported by tools. UML provides a variety of instruments to describe the characteristics of a generic system in corresponding models. However, it is not complete, in the sense that the basic elements of the language cannot cover all potential needs for describing specific systems from any domain. Hence in some cases the definition of domain-specific variants of the UML may be required. The UML however has already been conceived for extensibility, for which purpose it provides a built-in profiling mechanism to extend the language with elements and constructs apt to describe specialized features, though remaining compliant with its standard definition [15].
An extension to UML, called UML-RT [22], has been defined on the basis of ROOM language [21], which is a useful architectural definition language specifically developed for modeling complex real time systems (RTS), and one which is becoming a standard in the industry for RTS development. UML-RT extends UML with stereotyped active objects, called capsules to represent system components, where the internal behavior of a capsule is defined using state machines. The interaction with other capsules takes place by means of protocols that define the sequence of signals exchanged through stereotyped objects called ports and specify the
UML has been also used in a large number of time-critical and resource-critical systems. Despite its real time capabilities, UML has some limitations as well, because it lacks in notations and semantics to represent several aspects that are of particular concern to real time system developers.
The UML Profile for Schedulability, Performance and Time (SPT profile) has been proposed by a working consortium of OMG member companies. A different profile proposed in the literature focuses specifically on the Scheduling sub-profile of the SPT. SPT is still based on version 1.5 of the UML, now superseded by the new UML 2.0 superstructure. Indeed, to respond to the changes introduced by UML 2.0 and also to address several requested improvements to better specify the properties of real time embedded systems, the OMG has now issued a new RFP for a UML Profile for Modeling and Analysis of Real-time and Embedded Systems (MARTE).
Maria et al. examined the capabilities of UML for task scheduling in RTS. They identified a task set and showed, using a UML model, whether the task set is schedulable or not. They used the Priority Inheritance Protocol to share critical resources, but the Priority Inheritance Protocol does not prevent deadlock, and their work did not highlight this issue. We, in this work, have considered the occurrence of deadlock using the Priority Inheritance Protocol and propose a better approach to overcome it.
3 Scope of Work
This paper concentrates on the occurrence of deadlock when the Priority Inheritance Protocol is used and its prevention using the Priority Ceiling Protocol. The main objective of this paper is to compare these two protocols using UML 2.0 Sequence and Timing Diagrams.
The Priority Inheritance Protocol is used for sharing critical resources, but it does not prevent deadlock when nested critical sections are used. The shortcomings of the Priority Inheritance Protocol are represented using one UML model; the Priority Ceiling Protocol is then used to overcome these difficulties in an improved model.
We therefore analyze the schedulability of an application with the following characteristics:
- Task set composed of three dependent periodic tasks T1, T2 and T3.
- Tasks T1, T2 and T3 share a critical resource (R1), and tasks T2 and T3 share a critical resource (R2).
- A task in a critical section can be preempted by a higher priority task which does not need the same resource.
- Deadlines are equal to periods.
4 Real Time Scheduling
The scheduler is the part of the operating system that responds to the requests sent by programs; it interrupts running processes and gives control of the processor to others. A scheduler implements an algorithm or policy that determines the order in which processes get the processor according to some pre-defined criteria. In a conventional multitasking operating system, processes are interleaved, with higher-importance (higher-priority) processes receiving preference; little or no account is taken of deadlines. This is clearly inadequate for real time systems, which require scheduling policies that reflect the timeliness constraints of real time processes.
Schedulers produce a schedule for a given set of processes. If a process set can be scheduled to meet given pre-conditions, the process set is termed feasible. A typical pre-condition for hard real time periodic processes is that they should always meet their deadlines. An optimal scheduler is able to produce a feasible schedule for all feasible process sets conforming to a given pre-condition. For a particular process set, an optimal schedule is the best possible schedule according to some pre-defined criteria. Typically, a scheduler is optimal if it can schedule all process sets that other schedulers can.
Schedulers may be preemptive or non-preemptive. The former can arbitrarily suspend a process’s execution and restart it later without affecting the behaviour of that process (except by increasing its elapsed time). Preemption typically occurs when a higher priority process becomes runnable. The effect of preemption is that a process may be suspended involuntarily.
A non-preemptive scheduler does not suspend a process in this way. Non-preemption is sometimes used as a mechanism for concurrency control for processes executing inside a resource whose access is controlled by mutual exclusion. Many real time application systems are composed of several independent tasks, each commonly executed in a priority-based manner: when a task is released, a unique priority is assigned to it, and the highest-priority active task is selected for execution at each instant in time. To ensure efficient response times, processes are prioritized so that more important processes always receive processor attention first when they need it. Independent tasks in an RTS execute on a shared computing platform comprised of a preemptable processor and serially reusable, non-preemptable resources. Each task requires the processor in order to execute; in addition, some tasks may need exclusive access to one or more of the resources during part or all of their execution. Exclusive access to these shared resources is typically ensured only within critical sections.
The notion of priority is commonly used to order access to the processor and other shared resources such as communication channels. In priority scheduling, each task is assigned a priority via some policy. Contention for resources is resolved in favour of the task with the highest priority that is ready to run.
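As a minimal illustration of priority-driven dispatch, the following sketch (our own encoding, not from the paper) selects the highest-priority ready task; it follows Table 1's numeric convention, where 1 is the highest priority:

```c
/* Pick the ready task with the highest priority.
 * Priorities follow Table 1's convention: 1 is highest.
 * ready[i] is nonzero if task i is ready to run.
 * Returns the index of the chosen task, or -1 if none is ready. */
int pick_task(const int ready[], const int priority[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (ready[i] && (best < 0 || priority[i] < priority[best]))
            best = i;
    }
    return best;
}
```

Contention is thus always resolved in favour of the numerically smallest (highest) priority among ready tasks.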
### 4.1 Priority Inversion
A priority inversion occurs if the highest priority active task cannot execute because some of the resources needed for its execution are held by some other tasks. At that point of time the higher priority task is blocked while the lower-priority tasks execute.
In order to overcome the priority inversion the Priority Inheritance Protocol can be used.
### 4.2 Priority Inheritance Protocol
**Assigned Priority:** When a task releases, this priority is assigned to the task. It is a unique priority. **Current Priority:** It is the priority at which a ready task is scheduled and executed. It may vary with time.
#### 4.2.1 Rules of the Priority Inheritance Protocol
1. **Scheduling rule:** A ready task is scheduled preemptably in a priority driven manner according to current priority.
2. **Allocation rule:** When a task $T$ requests a resource $R$,
a) If $R$ is free it is allocated to $T$ and is held by $T$ until $T$ releases $R$.
b) If $R$ is not free then the task is blocked.
3. **Priority Inheritance rule:** When the requesting task $T$ becomes blocked, the task $T_i$ which blocks $T$ inherits the current priority $\Pi(t)$ of $T$. When $T_i$ releases $R$, its priority reverts to $\Pi(t')$, where $t'$ is the time at which it acquired the resource $R$.
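The inheritance rule (rule 3) can be sketched as below. This is a simplified model with our own field names (a real implementation must also propagate inheritance along nested blocking chains); priorities use Table 1's convention, where 1 is highest:

```c
/* Simplified task record: current priority only (1 = highest). */
typedef struct { int current_prio; } Task;

/* Rule 3 of the Priority Inheritance Protocol: when `requester`
 * blocks on a resource held by `holder`, the holder inherits the
 * requester's current priority if that priority is higher
 * (i.e., numerically smaller). */
void inherit_priority(Task *holder, const Task *requester) {
    if (requester->current_prio < holder->current_prio)
        holder->current_prio = requester->current_prio;
}
```

For example, when T1 (priority 1) blocks on R1 held by T3 (priority 3), T3's current priority is raised to 1 until it releases R1.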
The Priority Inheritance Protocol does not prevent deadlock, which is explained in the next section using an example. The Priority Ceiling Protocol can be used to overcome deadlock.
### 4.3 Basic Priority Ceiling Protocol
The Priority Ceiling Protocol extends the Priority Inheritance Protocol to prevent deadlocks. This protocol makes two key assumptions:
i) The assigned priority of all tasks is fixed.
ii) The resources required by all tasks are known a priori before the execution of any task begins.
The priority ceiling of a critical resource $R$ is the highest priority among all the tasks that use $R$. The current priority ceiling of the system, $\Pi'(t)$, at any time $t$ is the highest priority ceiling of all the resources in use at that time. If all resources are free, then $\Pi'(t)=\Omega$, where $\Omega$ is a non-existent priority level lower than the lowest priority of all tasks.
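With the numeric convention of Table 1 (1 = highest priority), the two ceilings can be computed as below; `OMEGA` is our own encoding of the non-existent lowest level Ω:

```c
#define OMEGA 99  /* Omega: below the lowest task priority (our encoding) */

/* Priority ceiling of resource R: the highest (numerically smallest)
 * priority among the n tasks that use R. */
int resource_ceiling(const int user_prios[], int n) {
    int c = OMEGA;
    for (int i = 0; i < n; i++)
        if (user_prios[i] < c) c = user_prios[i];
    return c;
}

/* Current system ceiling Pi'(t): the highest ceiling among resources
 * currently in use, or Omega when every resource is free. */
int system_ceiling(const int ceilings[], const int in_use[], int n) {
    int c = OMEGA;
    for (int i = 0; i < n; i++)
        if (in_use[i] && ceilings[i] < c) c = ceilings[i];
    return c;
}
```

For the example task set, R1 is used by T1, T2 and T3, so its ceiling is 1; R2 is used by T2 and T3, so its ceiling is 2.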
#### 4.3.1 Rules of the Basic Priority Ceiling Protocol
1. **Scheduling rule:** At its release time $t$, the current priority $\Pi(t)$ of every task equals its assigned priority. The task remains at that priority level except when rule 3 applies.
2. **Allocation rule:** When a task $T$ requests $R$, one of the following conditions occur:
a) If $R$ is not free then $T$ becomes blocked
b) If $R$ is free then one of the following conditions occur:
i) If $T$’s priority is higher than the current priority ceiling $\Pi'(t)$, $R$ is allocated to $T$.
ii) If $T$’s priority is not higher than the priority ceiling $\Pi'(t)$, $R$ is allocated to $T$ only if $T$ is the task holding the resource whose priority ceiling is $\Pi'(t)$. Otherwise $T$’s request is denied.
3. **Priority Inheritance rule:** When $T$ becomes blocked, the task $T_i$ which blocks $T$ inherits the current priority $\Pi(t)$ of $T$. $T_i$ executes at its inherited priority until it releases every resource whose priority ceiling is equal to or higher than $\Pi(t)$. At that time the priority of $T_i$ reverts to its priority $\Pi(t')$ at the time $t'$ when it was granted the resource.
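The allocation rule (rule 2) reduces to the following decision, sketched with our own parameter names. Reproducing the situation at time 3 of Section 6.3: T2 (priority 2) requests the free resource R2 while the system ceiling is 1, so the request is denied and T2 blocks:

```c
/* Rule 2 of the Basic Priority Ceiling Protocol.
 * Returns 1 if the request may be granted, 0 if the task must block.
 *   requester_prio : current priority of the requester (1 = highest)
 *   resource_free  : 1 if the requested resource is unallocated
 *   sys_ceiling    : current priority ceiling of the system, Pi'(t)
 *   holds_ceiling_resource : 1 if the requester itself holds the
 *                            resource whose ceiling equals Pi'(t) */
int pcp_may_grant(int requester_prio, int resource_free,
                  int sys_ceiling, int holds_ceiling_resource) {
    if (!resource_free) return 0;                 /* rule 2(a)     */
    if (requester_prio < sys_ceiling) return 1;   /* rule 2(b)(i)  */
    return holds_ceiling_resource;                /* rule 2(b)(ii) */
}
```

Conversely, at time 5 T3's request for the free R2 is granted even though its priority does not exceed the ceiling, because T3 itself holds R1, whose ceiling equals Π'(t)=1.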
### 5 UML 2.0 for Real Time System
The Unified Modelling Language (UML) is a graphical modeling language for visualizing, specifying, constructing and documenting the artifacts of software systems. UML is widely used to express the general-purpose software design models.
UML as a real time modeling language has some limitations. It basically provides a lot of syntax, but not enough semantics. The UML profile for real time modeling, formally called the UML profile for Schedulability, Performance and Time (UML/SPT), was adopted by the OMG in 2002 [20].
Core of the SPT profile is the general resource modeling framework, itself consisting of three sub-profiles dealing respectively with resource modeling, concurrency and time-specific concepts. Based on this common framework, more specific sub-profiles are defined. The profile is intended to overcome the limitations noted above and make UML suitable for modeling real time systems.
The SPT profile does not invent any new techniques, but offers the possibility to exchange timeliness properties between UML modeling tools and schedulability analysis tools. The profile defines a number of stereotypes, tagged values and constraints, and the user can add more features depending on requirements.
UML 2.0 provides some concepts, including active objects, concurrent composite states and concurrent operations. In order to express timing constraints, UML 2.0 provides two data types: Time and TimeExpression. These timing statements can be used either in State diagrams or in Sequence diagrams. Moreover, UML 2.0 introduces a new diagram, the Timing diagram, to visualize conditions or state changes over time and support reasoning about timing. UML can thus better model real time software systems through its extended features, and in this work we use some of them.
6 PROPOSED WORK
In order to describe the task set (mentioned in section 3) and the critical sections of tasks T1, T2 and T3, new parameters are added that specify the three components of any critical section.
- Ca: task duration before entering the critical section
- Cb: task duration within the critical section
- Cc: task duration after the critical section
Then, the computation time becomes C = Ca + Cb + Cc
In [24] the Priority Inheritance Protocol is used for sharing a critical resource but this protocol does not prevent deadlock. In this paper the occurrence of deadlock is first highlighted by considering the following task set which is described by the classical parameters given in Table 1. Further, deadlock avoidance is discussed using the Priority Ceiling Protocol by considering the same task set.
Table 1: A Task Set Sharing Critical Resources
<table>
<thead>
<tr>
<th>Task</th>
<th>R<sub>i</sub></th>
<th>C<sub>a</sub></th>
<th>C<sub>b</sub></th>
<th>C<sub>β</sub></th>
<th>C<sub>c</sub></th>
<th>priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>T<sub>1</sub></td>
<td>4</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>T<sub>2</sub></td>
<td>2</td>
<td>5</td>
<td>1</td>
<td>4</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>T<sub>3</sub></td>
<td>0</td>
<td>5</td>
<td>1</td>
<td>4</td>
<td>0</td>
<td>3</td>
</tr>
</tbody>
</table>
R<sub>i</sub> represents the release time of task T<sub>i</sub>, the C columns give the components of its computation time C<sub>i</sub>, and the priority column gives the assigned priority P<sub>i</sub> of task T<sub>i</sub>.
6.1 Drawback of Priority Inheritance Protocol
Deadlock occurrence is illustrated in Figures 1 and 2 using a UML 2.0 sequence diagram and a UML 2.0 timing diagram, respectively.
6.1.1 Description of UML 2.0 Sequence Diagram
In UML 2.0, the notation for an interaction in a sequence diagram is a solid-outline rectangle (a rectangular frame). The five sided box at the upper left hand corner names the sequence diagram; keyword sd followed by the interaction name, "Priority Inheritance Protocol". Each lifeline in the diagram represents an individual participant in the scenario.
s1: Scheduler. A scheduler (in our domain, a processor) is responsible for processing the acquisition requests from the clients of a service and based on the appropriate access control policy for that service, it dispenses access to the service. If a service instance is busy, then the reply may remain pending until the access is possible. The scheduler determines a schedule that allocates a set of scheduling tasks to its set of execution engines.
r1, r2: Resource. The stereotype <<SAresource>> of the UML Profile for Schedulability, Performance and Time (schedulability modeling) represents a kind of protected resource (e.g., a semaphore) that is accessed during the execution of a scheduling task. It may be shared by multiple concurrent actions and must be protected by a locking mechanism. The tag "SAccessControl" represents the access control policy for handling requests from scheduling tasks (in our model, 'Priority Inheritance').
T1, T2, T3: Task. The stereotype <<SAschedRes>> of the UML Profile for Schedulability, Performance and Time (schedulability modeling) represents a unit of concurrent execution (in our domain, a task), which is capable of executing a single scenario concurrently with other concurrent units. In the general resource modeling of the UML Profile for Schedulability, Performance and Time, an action is defined as a kind of scenario. Therefore, the stereotype <<SAaction>> of this profile (schedulability modeling) is used to characterize the behaviour of each task in the proposed model.
Figure 1: Sequence diagram showing deadlock occurrence using the example task set given in Table 1.
Figure 2: Timing diagram showing deadlock occurrence using the example task set given in Table 1.
The new metaclass in UML 2.0, TimeObservationAction, is used to know when a task awakes. A time observation triggers an action that, when executed, returns the current value of time in the context in which it is executing. It is depicted with the keyword "now".
Another new metaclass in UML 2.0, StateInvariant, is used to show the different states associated with each lifeline as restrictions. A state invariant is a constraint on the state of a lifeline. If the constraint is true, the trace is a valid trace.
Finally, notes are used to display the textual information.
6.1.2 Observation from Sequence Diagram
The sequence diagram in Figure 1 shows that deadlock occurs. T1 is blocked by T3. T3 is waiting for a resource that is held by T2. T2 is waiting for a resource that is held by T3. As a result all of the three tasks are in the blocking state.
6.1.3 Description of UML 2.0 Timing Diagram
The Timing diagram can be stereotyped as <<SAsituation>> to use it in the context of schedulability analysis, representing a real time situation.
The notations of the rectangular frame and the five sided box are the same as in the previous Sequence diagram, but the model now has different elements. Five lifelines are shown: one for each of the two resources (r1, r2) and the three tasks (T1, T2 and T3). The scheduler (s1) can be ignored in this case, because it is not necessary for understanding the scheduling. Since the changes in the states of the different lifelines can be represented over linear time, there is no need to show message passing.
The task states used in the Timing diagram are explained in Table 2. There are two simple states for the resource lifelines: idle and busy. The Timing diagram shows how the states change over time for each lifeline, so it is not necessary to use the metaclass StateInvariant as a restriction on lifelines to know the state value at a particular time.
The time axis is linear, so it clarifies the absolute timing of events and state changes and the relative timing between the different lifelines. Therefore, it is not necessary to use notes indicating when a task awakes (when the state of a task changes to "Ready") [24].
6.1.4 Result and discussion
Using Timing diagram it can be explained how deadlock occurs in Priority Inheritance Protocol.
At time 0, T3 is released and executes at its assigned priority 3. At time 1, resource R1 is assigned to T3.
At time 2, T2 is released. It preempts T3 (as the priority of T2 is greater than that of T3) and starts to execute.
At time 3, T2 requests resource R2. R2, being free, is assigned to T2, and T2 continues to execute.
At time 4, T1 is released and preempts T2 (as the priority of T1 is greater than that of T2).
At time 5, T1 requests R1, but R1 is already assigned to T3. So T1 is directly blocked by T3 even though the priority of T1 is greater than that of T3. According to rule 3, T3 inherits T1’s priority (i.e., 1) and continues execution.
At time 6, T3 requests R2, but R2 is already assigned to T2. So T3 is blocked by T2 even though the current priority of T3 is greater than that of T2. According to rule 3, T2 inherits T3’s priority (i.e., 1) and continues execution.
At time 8, T2 requests R1, but R1 is already assigned to T3. So T2 is blocked by T3. As T3 is already blocked by T2, deadlock occurs.
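The blocking chain at time 8 forms a cycle in the wait-for graph, which is exactly the deadlock condition. A minimal checker (our own encoding, not from the paper: `waits_for[i] = j` means task i waits for task j, and -1 means not blocked):

```c
/* waits_for[i] = index of the task that blocks task i, or -1 if
 * task i is not blocked.  Walking the chain from `start` for more
 * than n steps without reaching -1 implies the chain has entered a
 * cycle, i.e. `start` is deadlocked. */
int deadlocked(const int waits_for[], int n, int start) {
    int cur = start;
    for (int steps = 0; steps <= n; steps++) {
        if (waits_for[cur] < 0) return 0;  /* chain ends: no deadlock */
        cur = waits_for[cur];
    }
    return 1;  /* more than n hops in an n-task system: cycle */
}
```

With indices 0, 1, 2 for T1, T2, T3, the situation at time 8 is T1→T3, T2→T3, T3→T2, and every task in the chain is reported deadlocked.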
<table>
<thead>
<tr>
<th>State</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dormant</td>
<td>The task is set up</td>
</tr>
<tr>
<td>Ready</td>
<td>The task awakes</td>
</tr>
<tr>
<td>Preempted</td>
<td>When running, the task is preempted</td>
</tr>
<tr>
<td>Blocked</td>
<td>The task is waiting for a signal or a resource</td>
</tr>
<tr>
<td>Running</td>
<td>Assignment of processor to task</td>
</tr>
</tbody>
</table>
Table 2: Task states
6.2 Sequence diagram and Timing diagram
Timing diagrams and Sequence diagrams are the two kinds of interaction diagram best suited to modeling task scheduling. UML allows modeling the traces of interactions among many objects working together, and the information required for schedulability analysis is captured in Sequence or Timing diagrams. A Timing diagram is similar to a Sequence diagram in that both show scenarios of collaborations, but they are not the same at all. In this work both Sequence and Timing diagrams are used, because neither alone depicts the scenario completely.
6.2.1 Advantage of Sequence diagram over Timing diagram
UML Sequence diagrams are used to model the flow of messages, events and actions between the objects or components of a system.
Sequence diagrams are often used to design the interactions between components of a system that need to work together to accomplish a task.
It focuses on when the individual objects interact with each other during execution. It is particularly useful for modeling usage scenarios such as the logic of methods and the logic of services.
Sequence diagrams emphasize message sequence: the next message in time is the message following the current one on the diagram. Timing diagrams do not represent this message flow.
6.2.2 Advantage of Timing diagram over Sequence diagram
A Timing diagram is a simple representation with time along the horizontal axis and objects state or attribute value along the vertical axis.
Although Timing diagrams do not show any information beyond that available in annotated Sequence diagrams, the absolute timing of events, state changes and the relative timing among the lifelines is clearer and more readable than on Sequence diagrams, even when explicit timing constraints are added. Messages on Sequence diagrams are only partially ordered, so in many cases the relative timing between two messages is not specified.
When messages on Sequence diagrams begin or finish on different lifelines, it is not possible to compare which one starts or terminates first.
Time goes down the page on Sequence diagrams, but linearity is usually not implied: further down means later in time, but the same distance at different places in the diagram does not imply the same amount of time. Each diagram thus provides a different point of view on the same scenario, and both can be very useful.
6.3 Deadlock Avoidance
Deadlock avoidance is illustrated in Figures 3 and 4 using a UML 2.0 Sequence diagram and a UML 2.0 Timing diagram, respectively. Priority Ceiling Protocol is used to prevent deadlock.
6.3.1 Observation from Sequence Diagram
From the Sequence diagram it can be easily seen that deadlock can be prevented. All the three tasks complete their executions.
6.3.2 Result and discussion
The Timing diagram shows how deadlock can be prevented using Priority Ceiling Protocol.
T3 is released at time 0; the ceiling of the system is then \( \Omega \). At time 1, when T3 requests R1, it is allocated to T3 according to (i) in part (b) of rule 2. After the allocation of R1, the ceiling of the system is raised to 1, the priority ceiling of R1.
At time 2, T2 is released and it pre-empts T3 (as priority of T2 is greater than priority of T3). At time 3, T2 requests resource R2. R2 is free; however because the ceiling \( \Pi'(3)=1 \) of the system is higher than priority of T2, T2’s request is denied according to (ii) in part (b) of rule 2. T2 is blocked and T3 inherits T2’s priority.
At time 4, T1 is released and it pre-empts T3 (as priority of T1 is greater than priority of T3).
At time 5, T1 requests resource R1 and becomes directly blocked by T3, and T3 inherits T1’s priority. Also at time 5, T3 requests resource R2. R2 is free and is allocated to T3, because T3 holds the resource R1, whose priority ceiling is equal to \( \Pi'(t)=1 \).
At time 6, T3 releases R2, and at time 7, T3 releases R1. T3 executes at its inherited priority \( \Pi(t)=1 \) until it releases every resource whose priority ceiling is equal to or higher than that inherited priority. T3 completes its execution at time 7.
At time 7, T1 and T2 are both ready. Since T1 has the higher priority (i.e., 1), it resumes.
At time 8, T1 completes its execution and T2 resumes.
7 Conclusions
The behavior of real time software systems does not depend only on the values of input and output signals, but also on their times of occurrence. Ensuring the correctness of such systems within the specified time constraints is a difficult and complex task, and the complexity of real time systems is continuously increasing, which makes their design very challenging. The Unified Modeling Language (UML), the standard visual object-oriented modeling language, is suitable to deal with this complexity.
In the last few years, real time processing has become an essential part of operating systems, and the scheduling of real time systems is an important area of research. In this paper, we consider fixed priority scheduling. A model (using a UML 2.0 Sequence diagram and a UML 2.0 Timing diagram) has been developed to represent deadlock occurrence as a drawback of the Priority Inheritance Protocol. Further, the Priority Ceiling Protocol is used in an improved model (again using UML 2.0 Sequence and Timing diagrams) to overcome this difficulty.
Figure 3: Sequence diagram showing deadlock avoidance using the task set of Table 1.
Figure 4: Timing diagram showing deadlock avoidance using the task set of Table 1.
As the UML Profile for Schedulability, Performance and Time is clearly biased towards fixed priority scheduling (such as Rate Monotonic), we would like to extend the specification to cover dynamic scheduling. In future work we plan to develop a model of dynamic priority scheduling (such as Earliest Deadline First) for the prevention of deadlock in RTS.
References
Assembly Language: Part 1
Context of this Lecture
First half of the semester: “Programming in the large”
Second half: “Under the hood”
Starting Now
Afterward
C Language
Assembly Language
Machine Language
Application Program
Operating System
Hardware
Von Neumann Architecture
Instructions are fetched from RAM
- (encoded as bits)
Control unit interprets instructions
- to shuffle data between registers and RAM
- to move data from registers through ALU (arithmetic+logic unit) where operations are performed
CPU
Control Unit
ALU
RAM
Registers
Data bus
Agenda
Language Levels
Instruction-Set Architecture (ISA)
Assembly Language: Performing Arithmetic
Assembly Language: Control-flow instructions
High-Level Languages
Characteristics
- Portable
- To varying degrees
- Complex
- One statement can do much work
- Structured
- while (...) {...}, if (...) ... else ...
- Human readable
count = 0;
while (n>1)
{
count++;
if (n&1)
n = n*3+1;
else
n = n/2;
}
Machine Languages
Characteristics
- Not portable
- Specific to hardware
- Simple
- Each instruction does a simple task
- Unstructured
- Not human readable
- Requires lots of effort!
- Requires tool support
Assembly Languages
Characteristics
- Not portable
- Each assembly language instruction maps to one machine language instruction
- Simple
- Each instruction does a simple task
- Unstructured
- Human readable!!!
(well, in the same sense that Hungarian is human readable, if you know Hungarian).
```
        movl    $0, %r10d        # count = 0
loop:
        cmpl    $1, %r11d        # while (n > 1)
        jle     endloop
        addl    $1, %r10d        # count++
        movl    %r11d, %eax
        andl    $1, %eax         # if (n & 1)
        je      else
        movl    %r11d, %eax      # n = n*3 + 1
        addl    %r11d, %eax
        addl    %r11d, %eax
        addl    $1, %eax
        movl    %eax, %r11d
        jmp     endif
else:
        sarl    $1, %r11d        # n = n / 2
endif:
        jmp     loop
endloop:
```
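For reference, the loop from the earlier C slide wrapped as a complete C function (the assembly keeps count in %r10d and n in %r11d):

```c
/* Count the steps the loop takes until n reaches 1
 * (the same computation the assembly fragment performs). */
int collatz_count(int n) {
    int count = 0;          /* movl $0, %r10d */
    while (n > 1) {         /* cmpl $1, %r11d / jle endloop */
        count++;            /* addl $1, %r10d */
        if (n & 1)          /* andl $1, %eax */
            n = n * 3 + 1;
        else
            n = n / 2;      /* sarl $1, %r11d (n is positive here) */
    }
    return count;
}
```

For example, starting from n = 6 the loop takes 8 steps (6, 3, 10, 5, 16, 8, 4, 2, 1).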
RAM
RAM (Random Access Memory)
Conceptually: large array of bytes
- Contains data
(program variables, structs, arrays)
- and the program!
John Von Neumann (1903-1957)
In computing
- Stored program computers
- Cellular automata
- Self-replication
Other interests
- Mathematics
- Inventor of game theory
- Nuclear physics (hydrogen bomb)
Princeton connection
- Princeton Univ & IAS, 1930-1957
Known for “Von Neumann architecture (1950)”
- In which programs are just data in the memory
- Contrast to the now-obsolete “Harvard architecture”
Von Neumann Architecture
RAM (Random Access Memory)
Conceptually: large array of bytes
- Instructions are fetched from RAM
- Registers
- Small amount of storage on the CPU
- Much faster than RAM
- Top of the storage hierarchy
- Above RAM, disk, ...
Registers
Registers
- Small amount of storage on the CPU
- Much faster than RAM
- Top of the storage hierarchy
- Above RAM, disk, ...
Registers (x86-64 architecture)
General purpose registers:
[Register-layout diagram: RAX, RBX, RCX and RDX shown as 64-bit registers, with bit positions 63, 31, 15, 7 and 0 marking their 32-bit, 16-bit and 8-bit sub-registers.]
RSP is unique; see upcoming slide
Registers (x86-64 architecture)
General purpose registers (cont.):
[Register-layout diagram: RSI, RDI, RBP and RSP shown as 64-bit registers, with bit positions 63, 31, 15, 7 and 0 marking their sub-registers.]
RSP is unique; see upcoming slide
### Registers (x86-64 architecture)
#### General purpose registers (cont.):
- **R8**
- `R8D`
- `R8W`
- `R8B`
- **R9**
- `R9D`
- `R9W`
- `R9B`
- **R10**
- `R10D`
- `R10W`
- `R10B`
- **R11**
- `R11D`
- `R11W`
- `R11B`
- **R12**
- `R12D`
- `R12W`
- `R12B`
- **R13**
- `R13D`
- `R13W`
- `R13B`
- **R14**
- `R14D`
- `R14W`
- `R14B`
- **R15**
- `R15D`
- `R15W`
- `R15B`
#### Registers summary
16 general-purpose 64-bit pointer/long-integer registers, many with stupid names:
- `rax`, `rbx`, `rcx`, `rdx`, `rsi`, `rdi`, `rbp`, `rsp`, `r8`, `r9`, `r10`, `r11`, `r12`, `r13`, `r14`, `r15`
Each has a 32-bit name for its low half:
- `eax`, `ebx`, `ecx`, `edx`, `esi`, `edi`, `ebp`, `esp`, `r8d`, `r9d`, `r10d`, `r11d`, `r12d`, `r13d`, `r14d`, `r15d`
**RSP Register**
- **RSP (Stack Pointer) register**
- Contains address of top (low address) of current function’s stack frame
- (RBP, in contrast, is sometimes used as a “frame pointer” or “base pointer”)
**EFLAGS Register**
- **EFLAGS (Flags) register**
- Contains CC (Condition Code) bits
- Affected by compare (cmp) instruction
- And many others
- Used by conditional jump instructions
- je, jne, jl, jg, jle, jge, jb, jbe, ja, jae
**RIP Register**
- **RIP (Instruction Pointer) register**
- Stores the location of the next instruction
- Address (in TEXT section) of machine-language instructions to be executed next
- Value changed:
- Automatically to implement sequential control flow
- By jump instructions to implement selection, repetition
**Registers summary**
2 special-purpose registers:
- **EFLAGS**
- **RIP**
If you’re operating on 32-bit “int” data, use these stupid names instead:
- `r8d`, `r9d`, `r10d`, `r11d`, `r12d`, `r13d`, `r14d`, `r15d`
It doesn’t really make sense to put 32-bit ints in the stack pointer.
Registers and RAM
Typical pattern:
- Load data from RAM to registers
- Manipulate data in registers
- Store data from registers to RAM
Many instructions combine steps
Control Unit
- Fetches and decodes each machine-language instruction
- Sends proper data to ALU
CPU (Central Processing Unit)
- Control unit
- Fetch, decode, and execute
- ALU
- Execute low-level operations
- Registers
- High-speed temporary storage
Agenda
Language Levels
Architecture
Assembly Language: Performing Arithmetic
Assembly Language: Control-flow instructions
Instruction Format
Many instructions have this format:
\[ \text{name}\{b,w,l,q\} \ src, \ dest \]
- **name**: name of the instruction (mov, add, sub, and, etc.)
- **byte** ⇒ operands are one-byte entities
- **word** ⇒ operands are two-byte entities
- **long** ⇒ operands are four-byte entities
- **quad** ⇒ operands are eight-byte entities
Instruction Format
Many instructions have this format:
\[ \text{name}\{b,w,l,q\} \ src, \ dest \]
- **src**: source operand
- The source of data
- Can be
- Register operand: %rax, %ebx, etc.
- Memory operand: 5 (legal but silly), someLabel
- Immediate operand: $5, $someLabel
- **dest**: destination operand
- The destination of data
- Can be
- Register operand: %rax, %ebx, etc.
- Memory operand: 5 (legal but silly), someLabel
- Cannot be
- Immediate operand
Performing Arithmetic: Long Data
```
static int length;
static int width;
static int perim;
perim = (length + width) * 2;
```
Note:
- movl instruction
- addl instruction
- sall instruction
- Register operand
- Immediate operand
- Memory operand
- (to announce TEXT section)
<table>
<thead>
<tr>
<th>Registers</th>
<th>Memory</th>
</tr>
</thead>
<tbody>
<tr>
<td>EAX 14</td>
<td>length 5</td>
</tr>
<tr>
<td>R10</td>
<td>width 2</td>
</tr>
<tr>
<td></td>
<td>perim 14</td>
</tr>
</tbody>
</table>
```
.name "bss"
length: .skip 4
width: .skip 4
perim: .skip 4
.name "text"
movl length, %eax
addl width, %eax
sall $1, %eax
movl %eax, perim
```
```
# Option 1
movb grade, %al
subb $1, %al
movb %al, grade
# Option 2
subb $1, grade
# Option 3
decb grade
```
What would happen if we use movl instead of movb?
Performing Arithmetic: Byte Data
```
static char grade = 'B';
grade--;
```
```
.name "data"
grade: .byte 'B'
.byte 'A'
.byte 'D'
.byte 0
.name "text"
```
<table>
<thead>
<tr>
<th>Registers</th>
<th>Memory</th>
</tr>
</thead>
<tbody>
<tr>
<td>EAX A</td>
<td>grade AAD0</td>
</tr>
</tbody>
</table>
Note:
- movb instruction
- subb instruction
- decb instruction
- Comment
```
# Option 1
movb grade, %al
subb $1, %al
movb %al, grade
# Option 2
subb $1, grade
# Option 3
decb grade
```
Operands
### Immediate operands
- `$5` ⇒ use the number 5 (i.e., the number that is available immediately within the instruction)
- `$i` ⇒ use the address denoted by i (i.e., the address that is available immediately within the instruction)
- Can be source operand; cannot be destination operand
### Register operands
- `%rax` ⇒ read from (or write to) register RAX
- Can be source or destination operand
### Memory operands
- `5` ⇒ load from (or store to) memory at address 5 (silly; seg fault)
- `i` ⇒ load from (or store to) memory at the address denoted by i
- Can be source or destination operand (but not both)
- There’s more to memory operands; see next lecture
Notation
### Instruction notation:
- q ⇒ quad (8 bytes); l ⇒ long (4 bytes)
- w ⇒ word (2 bytes); b ⇒ byte (1 byte)
### Operand notation:
- src ⇒ source; dest ⇒ destination
- R ⇒ register; I ⇒ immediate; M ⇒ memory
Generalization: Data Transfer
Data transfer instructions
- `mov(q,l,w,b) srcRM, destRM`: dest = src
- `movsb(q,l,w) srcRM, destR`: dest = src (sign extend)
- `movsw(q,l) srcRM, destR`: dest = src (sign extend)
- `movslq srcRM, destR`: dest = src (sign extend)
- `movzb(q,l,w) srcRM, destR`: dest = src (zero fill)
- `movzw(q,l) srcRM, destR`: dest = src (zero fill)
- `movzlq srcRM, destR`: dest = src (zero fill)
- `cwtl`: reg[EAX] = reg[AX] (sign extend)
- `cbtw`: reg[AX] = reg[AL] (sign extend)
`mov` is used often; others less so.
Generalization: Arithmetic
Arithmetic instructions
- `add(q,l,w,b) srcIRM, destRM`: dest += src
- `sub(q,l,w,b) srcIRM, destRM`: dest -= src
- `inc(q,l,w,b) destRM`: dest++
- `dec(q,l,w,b) destRM`: dest--
- `neg(q,l,w,b) destRM`: dest = -dest
- `mulq srcRM`: reg[RDX:RAX] = reg[RAX]*src
- `divl srcRM`: reg[EAX] = reg[EDX:EAX]/src
- `divb srcRM`: reg[AL] = reg[AX]/src
Q: Is this adding signed numbers or unsigned? A: Yes! [remember properties of 2's complement]
See Bryant & O’Hallaron book for description of signed vs. unsigned multiplication and division.
Generalization: Bit Manipulation
Bitwise instructions
- `and(q,l,w,b) srcIRM, destRM`: dest = src & dest
- `or(q,l,w,b) srcIRM, destRM`: dest = src | dest
- `xor(q,l,w,b) srcIRM, destRM`: dest = src ^ dest
- `not(q,l,w,b) destRM`: dest = ~dest
- `sal(q,l,w,b) srcIR, destRM`: dest = dest << src
- `shl(q,l,w,b) srcIR, destRM`: (Same as sal)
- `sar(q,l,w,b) srcIR, destRM`: dest = dest >> src (sign extend)
- `shr(q,l,w,b) srcIR, destRM`: dest = dest >> src (zero fill)
Signed (arithmetic right shift)
- 44 >> 2: 000101100 → 000001011 = 11
- -44 >> 2: 111010100 → 111110101 = -11
Unsigned (logical right shift)
- 44 >> 2: 000101100 → 000001011 = 11
- 468 >> 2: 111010100 → 001110101 = 117
Translation: C to x86-64
```assembly
    movl $0, %r10d
loop:
    cmpl $1, %r11d
    jle endloop
    addl $1, %r10d
    movl %r11d, %eax
    andl $1, %eax
    je else
    movl %r11d, %eax
    addl %eax, %r11d
    addl %eax, %r11d
    addl $1, %r11d
    jmp endif
else:
    sarl $1, %r11d
endif:
    jmp loop
endloop:
```
Agenda
Language Levels
Architecture
Assembly Language: Performing Arithmetic
Assembly Language: Control-flow instructions
Control Flow with Signed Integers
Comparing (signed or unsigned) integers
Sets condition-code bits in the EFLAGS register
• Beware: operands are in counterintuitive order
• Beware: many other instructions set condition-code bits
• Conditional jump should immediately follow \texttt{cmp}
```
    movl $0, %r10d
loop:
    cmpl $1, %r11d
    jle endloop
    addl $1, %r10d
    movl %r11d, %eax
    andl $1, %eax
    je else
    movl %r11d, %eax
    addl %eax, %r11d
    addl %eax, %r11d
    addl $1, %r11d
    jmp endif
else:
    sarl $1, %r11d
endif:
    jmp loop
endloop:
```
Unconditional jump
\texttt{jmp X} Jump to address X
Conditional jumps after comparing signed integers
\texttt{je X} Jump to X if equal
\texttt{jne X} Jump to X if not equal
\texttt{jl X} Jump to X if less
\texttt{jle X} Jump to X if less or equal
\texttt{jg X} Jump to X if greater
\texttt{jge X} Jump to X if greater or equal
• Examine condition-code bits in EFLAGS register
Summary
Language levels
• The basics of computer architecture
• Enough to understand x86-64 assembly language
The basics of x86-64 assembly language
• Registers
• Arithmetic
• Control flow
To learn more
• Study more assembly language examples
• Chapter 3 of Bryant and O’Hallaron book
• Study compiler-generated assembly language code
• \texttt{gcc217 -S somefile.c}
Investigating the Adoption of Agile Practices in Mobile Application Development
Alan Santos¹, Josiane Kroll¹, Afonso Sales¹, Paulo Fernandes¹ and Daniel Wildt²
¹Computer Science Department, Pontifical University Catholic of Rio Grande do Sul (PUCRS), Porto Alegre, RS, Brazil
²WildTech, Porto Alegre, RS, Brazil
Keywords: Software Engineering, Mobile Application, Software Development, Agile Practices, Challenges, Benefits.
Abstract: The mobile application development market has grown dramatically in the last few years, as have the complexity of its applications and the speed of the software development process. These changes in the mobile development market require a rethinking of the way software development is performed by teams. In order to better understand how agile practices support mobile application development, we applied a questionnaire to 20 undergraduate students. These students had been trained in an iOS development course combined with agile practices. Our study aims to identify challenges and to report the students’ experience with the adoption of agile practices to develop mobile applications. Our findings reveal that agile practices help mobile software development mainly in terms of project management and control and development speed. However, aspects of user interface and user experience, different development platforms, and users’ expectations still pose challenges in developing mobile applications.
1 INTRODUCTION
Mobile application development is a new trend in the software industry. It also plays an important role in the economic development of a country as well as in teaching and learning (Zhang, 2015). The combination of devices such as cameras, sensors, touch and GPS with mobile platforms increase the possibilities for developing new mobile applications (apps). Additionally, devices have become more complex and mission critical (Lewis et al., 2013) due to the sudden wave of mobile device use.
According to Wasserman (Wasserman, 2010), mobile devices have been adopted in ways different from desktop or laptop computers. Mobile application development can be similar to software engineering for other embedded applications. However, mobile application development presents some additional requirements that are less commonly found in traditional software applications. The relevance of mobile software products has reached a point at which mobile devices have become one of the most popular platforms for the distribution and use of user-oriented software (Corral, 2012). Development speed has become a key factor in mobile software development due to developers’ ability to submit applications (apps) directly to the market. Thus, it is necessary to identify agile practices to implement mobile applications as well as to provide a good learning experience.
In this paper, we investigate challenges in mobile application development and the students’ experience on the adoption of agile practices for developing mobile applications. In order to achieve this goal, we applied a questionnaire to 20 undergraduate students who have been attending an iOS development course. This course adopted agile practices to develop different types of mobile applications. Our results describe the participants’ perception on the use of agile practices, challenges, and perceived benefits. The main contribution of this paper is to provide a further discussion about the adoption of agile practices for mobile application development.
The remainder of this paper is organized as follows: Section 2 introduces a brief background about mobile application development while Section 3 presents a background on agile software development. In Section 4, we describe the research methodology adopted in this study and, in Section 5, we present the results. Section 6 discusses our results. Finally, we draw our conclusion and future work in Section 7.
2 MOBILE APPLICATION DEVELOPMENT
Since 2008, when Apple and Google opened their application stores for the iOS and Android platforms, mobile apps have evolved quickly. Mobile application development is a process in which applications are developed for small handheld devices, being either pre-installed on devices during manufacture or downloaded from application stores or other software distribution platforms (Flora and Chande, 2013). Following the evolution of mobile application development, the traditional software development life cycle is no longer the only approach, because long project planning phases and long development cycles can result in outdated mobile applications.
There are different programming environments available for the major mobile platforms (Wasserman, 2010): for Windows Phone there is Microsoft’s Visual Studio environment, for the Android platform there is the Android development tools plug-in for Eclipse, and the Apple iOS Dev Center has the Xcode package. According to Xanthopoulos and Xinogalos (Xanthopoulos and Xinogalos, 2013), with the currently increasing number of mobile platforms, developing mobile applications has become difficult for companies, as they need to develop the same applications for each target platform. The typical process for developing native applications is the most appropriate way of deploying mobile apps, but it has one major disadvantage: it is not possible to reuse the source code for another platform; the same app must be redeveloped from the beginning.
Mobile web applications are mainly based on technologies such as HTML and JavaScript and do not require installation or device upgrades, enabling information processing functions to be initiated remotely on a Web server (Huy and van Thanh, 2012). Some of the drawbacks of web applications are limited access to the underlying device, such as hardware and data, and the extra time needed to render web content (Xanthopoulos and Xinogalos, 2013). Hybrid development is another approach to developing a mobile application which tries to combine the advantages of web and native apps: applications are primarily built using HTML5 and JavaScript, and a deep knowledge of the target platform is not required (Xanthopoulos and Xinogalos, 2013). According to Alston (Alston, 2012), many mobile applications that are developed are considered to be alternative applications. These applications are developed for a specific platform and have access to the hardware of a device through the use of Application Programming Interfaces (APIs).
The adoption of a suitable software development methodology is very important in mobile software engineering, since software applications are changing and evolving all the time based on immediate user requirements (Kaleel and Harishankar, 2013). The authors describe Scrum practices as best suiting the requirements of Android software development and applied them in designing a mobile software development methodology, with which they were able to successfully develop a secure backup application using important features of the Scrum methodology such as adaptability to evolving requirements, technically strong development teams, and effective communication through daily meetings (Kaleel and Harishankar, 2013).
3 AGILE DEVELOPMENT
Agile development, or adaptive development, aims to rapidly adapt to changing reality. An agile method emphasizes communication and collaboration in an iterative process (Smite et al., 2010).
The adoption of agile development makes software processes more flexible, supports continuous learning and incremental delivery, and allows teams to adapt quickly and easily to changes in requirements and technologies. Moreover, agile development focuses more on the human aspects of software engineering than on processes, valuing human interaction over tools and processes (Flora and Chande, 2013). The authors also performed a review and analysis of the mobile application development process using agile methodologies and concluded that agile development is a good fit for mobile application development. In this context, there are studies which recommend agile practices as a good choice across the different phases of the software development life cycle to solve mobile application development issues (Flora and Chande, 2013). They evaluated the following mobile development processes: Mobile-D, RaPiD7, Hybrid Methodology Design, MASAM, and SLeSS, and found that work related to mobile software confirms agile practices to be a natural fit for the development of mobile applications; an appropriate agile method can be selected for a given project and tailored to specific requirements based on the project’s complexity and team size.
Agile development is recommended for small-to-medium-sized projects, and software development organizations are increasingly recognizing the need for agility.
In literature, Extreme Programming (XP) and Scrum are the most common agile methods for mobile application development. According to Paasivaara et
al. (Paasivaara et al., 2008) these methods can be easily customized by software companies. We describe these agile methods and others in the following subsections.
### 3.1 Extreme Programming
Extreme Programming (XP) is a discipline of software development which emphasizes productivity, flexibility, informality, teamwork, and the limited use of technology outside of programming, working in short cycles and every cycle starts by choosing a subset of requirements from a larger set (Macias et al., 2003).
According to Moore and Flannery (Moore and Flannery, 2007), XP implements a groupware style development where feedback is obtained by daily testing the software where developers deliver the system to the customers as early as possible, allowing for rapid response for requirements and technologies changes. Beck (Beck, 2000) present XP as a light-weight methodology for small-to-medium-sized teams developing software in the face of vague or rapidly-changing requirements.
### 3.2 Scrum
Scrum is an iterative and incremental agile software development approach. It offers a framework and a set of practices that keep everything visible, allowing practitioners to know exactly what is going on and to make adjustments in order to keep the project moving towards desired goals. The adoption of Scrum practices is a main factor in successfully developing software projects (Scharff and Verma, 2010).
The Scrum workflow is a sequence of iterations called sprints, each lasting between one and four weeks. The team’s work is drawn from a product backlog, which is a prioritized list of requirements.
Each sprint has daily meetings in which each team member reports what he/she did on the previous day, what is going to be done in the current day, and whether there is any roadblock to moving forward on development activities. At the end of each sprint there is a product demo called the Sprint Review, followed by a lessons-learned session called the Sprint Retrospective (Reichlmayr, 2011).
### 4 RESEARCH METHODOLOGY
We applied a questionnaire to a group of 20 students from the iOS development training course. This course is provided by a large software company in order to train undergraduate students in mobile application development for iOS. The course lasts four months.
In this study, we selected 20 of the 87 students who were attending the course. We adopted random selection to obtain a pool of participants.
During the course, each student had his/her own equipment to use as part of the class meetings and projects and worked in teams of two to five individuals. The course curriculum includes the following subjects: Object-Oriented Programming, User Interface (UI) components, Model View Controller, Data sources, Navigation, Animations, and Frameworks. The course also covered an introduction to the Scrum framework. After taking theoretical lessons, all students work for four months to develop real mobile applications using agile practices to support it.
The participants are on average in their 5th semester, and the majority of those who answered the questionnaire are from an IT-related field: 30% from Computer Science, 35% from Information Systems, 10% from Computer Engineering, 10% from Systems Analysis, and 15% from other courses.
Additional profile information for the overall 87 students attending the training: 35% of the students had already taken software development courses using Java and C#. In this context, 68% had up to 3 years of experience in development, 18% had between 3 and 5 years, and 14% had more than 5 years of software development experience. Only 10% of the students had previous contact with mobile application development. Most of the students’ previous experience came from other courses, as well as from industry. In the analysis of software development methodologies, 65% had no previous contact with software development methodologies, 20% had previous contact with some agile development practices, and 15% had contact with a traditional software development approach. Table 1 presents the participants’ information.
The course is facilitated by 6 instructors with experience in iOS development and academic and project management backgrounds. Four of them have more than five years of experience as software developers. The course combines elements of Challenge-Based Learning (CBL) and Scrum in order to help the students develop their apps (Santos et al., 2015).
At the end of the training course, we applied a questionnaire with eight research questions. Six questions to collect the background information of the participants (Name, Age, Undergraduate course, Semester, Previous working/study experience in agile practices, Previous working/study experience in mo-
Table 1: Participants information.
<table>
<thead>
<tr>
<th>Participant</th>
<th>Age</th>
<th>Course</th>
<th>Semester</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>24</td>
<td>Computer Science</td>
<td>8</td>
</tr>
<tr>
<td>B</td>
<td>23</td>
<td>Information Systems</td>
<td>6</td>
</tr>
<tr>
<td>C</td>
<td>21</td>
<td>Computer Engineering</td>
<td>5</td>
</tr>
<tr>
<td>D</td>
<td>22</td>
<td>Information Systems</td>
<td>7</td>
</tr>
<tr>
<td>E</td>
<td>27</td>
<td>Information Systems</td>
<td>5</td>
</tr>
<tr>
<td>F</td>
<td>21</td>
<td>Information Systems</td>
<td>4</td>
</tr>
<tr>
<td>G</td>
<td>24</td>
<td>Computer Science</td>
<td>3</td>
</tr>
<tr>
<td>H</td>
<td>22</td>
<td>Information Systems</td>
<td>5</td>
</tr>
<tr>
<td>I</td>
<td>19</td>
<td>Information Systems</td>
<td>4</td>
</tr>
<tr>
<td>J</td>
<td>21</td>
<td>Computer Engineering</td>
<td>5</td>
</tr>
<tr>
<td>L</td>
<td>34</td>
<td>Systems Analysis</td>
<td>3</td>
</tr>
<tr>
<td>M</td>
<td>19</td>
<td>Information Systems</td>
<td>5</td>
</tr>
<tr>
<td>N</td>
<td>20</td>
<td>Computer Science</td>
<td>4</td>
</tr>
<tr>
<td>O</td>
<td>20</td>
<td>Computer Science</td>
<td>3</td>
</tr>
<tr>
<td>P</td>
<td>26</td>
<td>Computer Science</td>
<td>4</td>
</tr>
<tr>
<td>Q</td>
<td>20</td>
<td>Business</td>
<td>5</td>
</tr>
<tr>
<td>R</td>
<td>24</td>
<td>Systems Analysis</td>
<td>3</td>
</tr>
<tr>
<td>S</td>
<td>21</td>
<td>Engineering</td>
<td>9</td>
</tr>
<tr>
<td>T</td>
<td>22</td>
<td>Systems Analysis</td>
<td>7</td>
</tr>
<tr>
<td>U</td>
<td>24</td>
<td>Computer Science</td>
<td>5</td>
</tr>
</tbody>
</table>
Table 2: Challenges for mobile application development.
<table>
<thead>
<tr>
<th>Challenges</th>
<th>Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Define UI/UX</td>
<td>50%</td>
</tr>
<tr>
<td>Different users’ expectations</td>
<td>30%</td>
</tr>
<tr>
<td>Different development platforms</td>
<td>20%</td>
</tr>
<tr>
<td>Continuous update</td>
<td>10%</td>
</tr>
<tr>
<td>Devices and applications performance</td>
<td>10%</td>
</tr>
</tbody>
</table>
5 RESULTS
The following subsections outline the results for the research questions related to the adoption of agile methods for developing mobile applications. We adopted content analysis as a qualitative research technique to identify the challenges and perceived benefits of adopting agile methods for developing mobile applications.
5.1 Challenges in Mobile Application Development
In mobile application development, regardless of whether an agile or a traditional approach is adopted, developers face many challenges. Based on our data collection, we identified five main challenges related to the adoption of agile practices in mobile application development. Table 2 shows these challenges.
- **Define UI/UX (User Interface/User Experience Design):** UX was cited as one of the factors that differentiate developing mobile applications from traditional applications. It is pointed out as a challenge because of the diversity of devices, sensors, and features that may be utilized on a mobile device. UI has also been cited as one of the differentiating factors, due to the diversity of devices, the different screen sizes, and the development platforms that can be used to develop applications. Participants explain this challenge:
*I think the main difference is about UI, not for the huge amount of different screen sizes, but the way applications are used on a desktop computer was always using a keyboard and mouse as input. We have a keyboard when using mobile devices, but instead of the mouse we have touch screen, that has numerous other representations to click, not to mention the use of all other sensors available, which makes creating the interface to integrate harmoniously challenging. (Participant B)*
*In my opinion, it’s different because you need to think much more in the user experience. Usually the applications are for a general audience, then you should pay attention to all aspects (accessibility, design). (Participant E)*
- **Different Users’ Expectations:** the diversity of users and their expectations is identified as a challenge in mobile application development. First, a single application may have millions of users, and a large number of users also corresponds to a large diversity of users, with different expectations, demands, and devices. Another point raised is the speed with which mobile solutions need to be released.
*I think the main difference to develop mobile (applications) over other platforms is the proximity to the user. It is common for a mobile application to be used by millions of people, while a desktop system is different. In my point of view, mobile applications can help change the lives of people in a more direct and fast way, compared to some other systems. The biggest challenge is to promote solutions that really make a difference in people’s lives. (Participant J)*
• Different Development Platforms: differences between hardware and software platforms have also been identified as one of the challenges in developing applications for mobile devices. This is due to the number of application programming interfaces (APIs) in each of the development platforms, as well as different features and differences in hardware.
The great diversity of types and capabilities of these devices also creates a challenge for developers, because they need to develop the system in such a way that it is able to run satisfactorily in a wide range of devices. (Participant K)
• Continuous Update: the constant updating of technologies is also cited as one of the main challenges, due to the frequent updating of development platforms as well as the frequent launch of devices with different sizes and features. A participant describes it:
As challenges, I believe the fact that you have to keep up to date because the mobile development is always emerging innovations, new frameworks, new languages. (Participant C)
• Devices and Application Performance: another issue reported by the participants is the performance of data access, which arises because of hardware limitations. We can also observe the importance of this aspect in mobile development in the following answers from the participants:
It’s different, because we have to think of something practical that fits in a relatively small screen and that is attractive. I think the biggest challenge is to be always updated and seek the best performance for the application, or it will become obsolete very quickly. (Participant B)
5.2 Perceived Benefits of Agile Practices for Developing Mobile Applications
Agile development as well as mobile application development are research areas with many important aspects to be investigated. Despite its challenges, we also identified a set of eight benefits of the adoption of agile practices for developing mobile applications. Table 3 lists the benefits.
• Improves the Management and Control: agile processes address the inherent problems of traditional development using product demand and delivery, as well as control of ongoing projects. Thus, agile processes implement control through frequent inspection and adaptation, and support project management.
I believe it is extremely important, it enables better organization and control of tasks as better ways to follow the team. (Participant H)
• Improves Development Speed: agile practices help to attain development velocity, especially because they focus on short development cycles. Agile development teams are tasked to deliver high-value features quickly.
Agile practices positively influence the mobile development, because they are usually solutions that require immediate and rapid development. With many interactions agile is fundamental because with this the team is able to design and prototype a product with more speed, unlike other methodologies. For example, Waterfall approach validates the implementation only at the end of the cycle. (Participant C)
• Continuous Improvement: agile principles, practices, and methods support continuous improvement. Through constant iterations, iterative planning and review, agile development brings the expected results.
The use of agile practices helps to make application development safety because it is possible to identify and eliminate failures or unwanted behaviors quickly and accurately. (Participant M)
• Promotes a Life-cycle Delivery: one of the great advantages of agile software development is the wealth of practices, techniques, and strategies that promote a delivery life-cycle. Agile teams will adopt a life-cycle that is the most appropriate for their situation. The delivery life-cycle is goal-driven.
I believe that the use of agile methods help one mobile team to organize and deliver, especially if the project is very long. This requires collecting metrics during the iterations. (Participant C)

Table 3: Perceived benefits of agile development for developing mobile applications.

<table>
<thead>
<tr>
<th>Benefits</th>
<th>Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Improves the management and control</td>
<td>45%</td>
</tr>
<tr>
<td>Improves development speed</td>
<td>35%</td>
</tr>
<tr>
<td>Continuous improvement</td>
<td>15%</td>
</tr>
<tr>
<td>Promotes a life-cycle delivery</td>
<td>15%</td>
</tr>
<tr>
<td>Support multiple interactions</td>
<td>10%</td>
</tr>
<tr>
<td>Improves communication</td>
<td>10%</td>
</tr>
<tr>
<td>Improves performance</td>
<td>5%</td>
</tr>
<tr>
<td>Allows transparency</td>
<td>5%</td>
</tr>
</tbody>
</table>
- **Support Multiple Interactions:** The product life-cycle goes from the initial idea for the product, through delivery, to operations and support and often has many iterations of the delivery life-cycle. Multiple iterations allow teams not only to plan at the iteration level but also to conduct long-term release planning (Smite et al., 2010).
I believe it is fundamental for development, mainly by constant reviews that facilitate troubleshooting and redefinition of the scope of the project (if needed). The various existing iterations on agile methods are extremely important for the application’s success. Agile methods facilitate and assist mobile development. (Participant E)
- **Improves Communication:** Agile development in general uses a set of values, principles, and practices to guide teams in being as agile as possible. This includes the adoption of models to support communication and understanding. Their adoption facilitates communication within the group and makes team members more critical.
It improves the communication and teamwork among team members providing a realistic view about project progress. (Participant R)
- **Improves Performance:** Agile practices can help to improve project performance in mobile development environments, because they support higher developer performance. Agile teams maintain an agile plan with progress updated every day.
I think that helps a lot in performance improvement. Perhaps even more than other areas of development. It fits very well with mobile development. (Participant A)
- **Allows Transparency:** participants also raised the clarity and objectivity generated by the use of agile development. Software projects only succeed with effective planning, visibility, and coordination. Agile practices promote disciplined project management.
My experience with agile development was great, in my opinion it is essential to use this methodology because it makes the development process more objective and clearer. (Participant B)
Another important aspect to be observed is which agile practices are used. Figure 1 presents the agile practices used by the participants to develop mobile applications during the course.

The majority of the participants adopt the daily scrum meeting practice, because it helps them to keep track of project activities and communication. The second most adopted practice is Kanban (a system for visualising work to be done, in progress, or completed). Through Kanban, participants can see the progress of each activity: what still needs to be done, what is in progress, and what is completed. Iterative planning was also reported by participants; this practice helps to organize development and deliverables across different iterations and supports continuous improvement. Small releases are also used by participants in order to organize different deliverables in accordance with the iterative planning. Pair programming was also reported as very useful, especially when participants need to learn something new or need to work on something critical. Burndown, continuous integration, and refactoring were reported by less than 20% of participants. Automated builds and TDD (Test Driven Development) were reported by less than 10% of participants.
6 DISCUSSION
Developing mobile applications can be hard for many reasons. In this study, we found five main challenges. These challenges are faced by both beginner practitioners and more experienced developers. We also identified eight benefits of the adoption of agile practices for mobile application development.
The majority of the answers given by interviewees (50%) reported UI/UX as a challenge for mobile application development. According to Dalmasso et al. (Dalmasso et al., 2013), most developers would like to release apps for the major mobile platforms (e.g., iOS and Android) and provide a consistent UI and UX across the platforms. However, developing an app for separate mobile platforms requires in-depth knowledge of their SDKs (Software Development Kits). The developer can control all aspects of the user experience, but a mobile application must share common elements of the user interface with other applications and must adhere to externally developed user interface guidelines (Wasserman, 2010). The diversity of mobile platforms, as well as the variety of SDKs and other tools, contributes to this challenge.
Different users’ expectations and different development platforms are reported in 30% and 20% of the answers, respectively. This result shows that the main elements of mobile applications, users and technology, can pose challenges in mobile development, as well as in teaching and learning mobile software development. We believe that this challenge will increase over the years, due to the increasing number of new users and technologies. A single mobile application can reach millions of users with different devices and age groups, supported by different platforms.
Continuous update and devices and application performance are reported in 10% of the answers given by interviewees. These challenges have a lower percentage when compared to the UI/UX challenge. However, they are not less important; in fact, mobile applications are becoming more complex and users require high-quality mobile apps (Wasserman, 2010).
We also identified the benefits of the adoption of agile practices for mobile application development. The greatest benefit according to our findings is improved management and control. This makes sense, since agile approaches are focused on project management (Scharff and Verma, 2010). At the same time, agile practices help to increase development speed, which is very important in the mobile market, since new applications become available in the app stores every day.
The benefits of agile practices adoption described in this study are not necessarily restricted to this specific type of software development and can also extend to other software application domains. On the other hand, we identified challenges specific to the mobile application development domain. A further investigation should be conducted to explore the relationship between the challenges and the achieved benefits.
6.1 Limitations of this Study
Our study was conducted with a limited number of respondents, all from the same iOS development course. In addition, our results are drawn from the viewpoint of students (development teams). It is also important to notice that part of the project participants were attending a training course without previous experience with other approaches or software practices. These features highlight the fact that participants may have become comfortable with the environment and accepted its challenges and limitations.
However, our results on using agile practices as part of a mobile application development environment are similar to those of previous literature studies. Our results have also shown that short development cycles and small releases are important features in mobile application development environments. We have found indications in our study that agile practices are well suited for mobile software development environments.
7 FINAL REMARKS
This study explores the adoption of agile practices for mobile application development. In other words, we investigate challenges and the students’ experience on the adoption of agile practices. We identified five main challenges in mobile application development and eight benefits of agile practices for developing mobile applications.
Our results show that the main challenge in developing mobile applications is to define the UI and UX, followed by meeting different users’ expectations. Regarding the benefits, we found improvements in management and control as well as in development speed. All teams finished their application projects (apps), delivering more than five different applications covering areas such as games, public transportation, services, and productivity. Their apps presented high quality and used advanced resources such as data persistence, web services, etc.
Results from our study can be used to support developers, project managers, decision makers, and practitioners in order to choose the software development methodology to develop a mobile application project.
For further work, we will use the findings of this study to design an approach for teaching and learning mobile application development. The adoption of agile practices for mobile application development will be further investigated in order to propose new practices and processes to support software development.
ACKNOWLEDGMENT
Afonso Sales is funded by CNPq-Brazil (Universal 470096/2013-6) and Paulo Fernandes is also funded by CNPq-Brazil (PQ 307602/2013-3).
REFERENCES
1 Introduction
Computers can be programmed to perform an impressive variety of tasks, ranging from numerical computations to natural language processing. The computer is surely one of the most versatile tools ever devised, playing a critical role in solving all manner of “real world” problems. Some would argue that computers can solve any problem that a human can solve; some would argue the opposite; and some regard the question as irrelevant. Whatever view one adopts, it is still interesting to consider whether there are any limits to what can be solved by a computer.
Any given computer has only a finite memory capacity, so certainly there are problems that are too large for it to solve. We abstract away from these limitations, and consider the question of what can be solved by an ideal computing device, one which is not limited in its memory capacity (but is still required to produce an answer in a finite amount of time). Computability theory is concerned with exploring the limitations of such idealized computing devices. Complexity theory (which we shall not study here) is concerned with calibrating the resources (both time and space) required to solve a problem.
Our treatment of computability theory is based on problems pertaining to ML programs. We begin by considering questions about ML functions such as “does the function $f$ yield a value when applied to an input $x$?” or “are functions $f$ and $g$ equal for all inputs?” It would be handy to build a debugging package that included ML programs to answer these (and related) questions for us. Can such a package be built? You may well suspect that it cannot, but how does one prove that this suspicion is well-founded?
We then go on to consider questions about ML programs (presented as values of a datatype representing the abstract syntax of ML). The difference lies in the fact that in ML, functions are “black boxes”: we can apply them to arguments, but we can’t look inside the box. “Of course,” you might think, “one can’t test convergence of a function on a given input, but what if it were possible to look at the code of the function? What then? Maybe then one can decide convergence on a given input.” Unfortunately (or fortunately, depending on your point of view), the problem remains undecidable.
*Modified from a draft by Robert Harper, 1997.*
2 Properties of Functions
A decision problem is a well-defined question about well-specified data (called instances of the problem) that has a “yes” or “no” answer. For example, the primality problem is a decision problem, namely to decide whether or not a given natural number \( n \) is prime. A decision problem is decidable (or solvable or computable) iff there is an ML function that, when applied to an instance of the problem, evaluates to either true or false according to whether the answer to the instance is “yes” or “no”. The primality problem is decidable: there is an ML function \( \text{is\_prime} \) of type \( \text{int} \rightarrow \text{bool} \) such that for every natural number \( n \), \( \text{is\_prime} \ n \) evaluates to true iff \( n \) is prime, and evaluates to false otherwise. We will show that there are undecidable problems: ones for which no ML program can decide every instance.
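The notes work in ML; purely as an illustration, a decision procedure for primality can be sketched in Python (this `is_prime` is a hypothetical stand-in for the ML function of the same name, not the document's code):

```python
def is_prime(n: int) -> bool:
    """Decide the primality problem: always halts, answering True or False."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # trial division up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True
```

The essential property of a decision procedure is totality: for every instance it halts with a definite answer.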
It is important to stress that the whole question of decidability can only be considered for well-posed problems. In particular it must be absolutely clear what are the problem instances and how they are to be represented as input to an ML program. For example, in the case of the primality problem we are representing a natural number as an ML value of type \( \text{int} \). (We could also represent it as a string, or as a list of booleans corresponding to its binary representation!) Questions such as “is sentence \( s \) grammatical according to the rules of English grammar?” are not well-posed in this sense because it is not clear what is the grammar of English, and so it is never clear whether a given sentence is grammatical or not. Many fallacious arguments hinge on the fact that the “problem” under consideration is so ill-defined as to render it meaningless to ask whether or not a computer may be used to solve it!
The fundamental result of computability theory is the unsolvability of the halting problem: given a function \( f \) of type \( \text{int} \rightarrow \text{int} \) and an input \( x \) of type \( \text{int} \), does \( f \) evaluate to a value on input \( x \)? That is, does \( f \) halt on input \( x \)? (If \( f \) on input \( x \) raises an uncaught exception, then we do not regard it as halting.) The halting problem for functions is undecidable:
**Theorem 1** There is no ML function \( H \) of type \( (\text{int} \rightarrow \text{int}) * \text{int} \rightarrow \text{bool} \) such that for every \( f \) of type \( \text{int} \rightarrow \text{int} \) and every \( x \) of type \( \text{int} \),
1. \( H(f, x) \) evaluates to either true or false.
2. \( H(f, x) \) evaluates to true iff \( f \ x \) evaluates to a value.
**Proof:** Suppose, for a contradiction, that there were such an ML function \( H \). Consider the following function of type \( \text{int} \rightarrow \text{int} \):
\[
\text{fun diag}(x:\text{int}):\text{int} = \text{if } H(\text{diag}, x) \text{ then loop () else 0}.
\]
Here \( \text{loop} \) is the function defined by the declaration
\[
\text{fun loop () = loop ()}.
\]
(Of course \( \text{loop} () \) runs forever.)
Now consider the behavior of \( H(\text{diag}, 0) \). (There is nothing special about 0; we could as well choose any number.) By our first assumption, either \( H(\text{diag}, 0) \) evaluates to true or \( H(\text{diag}, 0) \) evaluates to false. We show that in either case we arrive at a contradiction. It follows that there is no such ML function \( H \).
1. Suppose that \( H(\text{diag}, 0) \) evaluates to true. Then by the second assumption we know that \( \text{diag} \ 0 \) halts. But by inspecting the definition of \( \text{diag} \) we see that \( \text{diag} \ 0 \) halts only if \( H(\text{diag}, 0) \) evaluates to false! Since true is not equal to false, we have a contradiction.
2. Suppose that $H(\text{diag}, 0)$ evaluates to false. Then by the second assumption we know that $\text{diag} 0$ does not halt. But by inspecting the definition of $\text{diag}$ we see that this happens only if $H(\text{diag}, 0)$ evaluates to true, again a contradiction.
It is worthwhile to contemplate this theorem and its proof very carefully. The function \(\text{diag}\) used in the proof is said to be defined by diagonalization, a technique introduced by Georg Cantor in his proof of the uncountability of the real numbers. The idea is that \(\text{diag}\) calls \( H \) on itself, and then “does the opposite”: if \( H(\text{diag}, x) \) evaluates to true, \(\text{diag}\) goes into an infinite loop on \( x \), and otherwise it terminates immediately. Thus \(\text{diag}\) is a demonic adversary that tries (and succeeds!) to refute the existence of a function \( H \) satisfying the conditions of the theorem.
Note that the proof relies on both assumptions about $H$. Dropping the second assumption renders the theorem pointless: of course there are ML functions that always yield either true or false! But suppose we drop the first assumption instead. Is there an ML program $H$ satisfying only the second assumption? Of course! It is defined as follows:
\[
\text{fun } H(f, x) = (f x; \text{true})
\]
Clearly $H(f, x)$ evaluates to true iff $f x$ halts, and that is all that is required. The function $H$ so defined is sometimes called a semi-decision procedure for the halting problem because it yields true iff the given function halts when called with argument $x$, but may give no answer otherwise.
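The same semi-decision procedure can be transliterated into Python for illustration (the shape mirrors the ML definition above; this is a sketch, not the document's code):

```python
def H(f, x):
    """Semi-decision procedure for halting: returns True if f(x) halts,
    and simply never returns (or propagates an exception) otherwise."""
    f(x)          # runs forever exactly when f diverges on x
    return True   # reached only if f(x) halted
```

Calling `H` on a halting function answers True; on a diverging one, the call itself diverges, which is precisely why `H` is only a *semi*-decision procedure.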
The unsolvability of the halting problem can be used to establish the unsolvability of a number of related problems about functions. The idea is to show that a problem $P$ is unsolvable by showing that if $P$ were solvable, then the halting problem would also be solvable. This is achieved by showing that an ML function to decide the halting problem can be defined if we are allowed to use an ML function to decide $P$ as a “subroutine”. In this way we reduce the halting problem to the problem $P$ by showing how instances of the halting problem can be “coded up” as instances of problem $P$. Since an ML function to solve the halting problem does not exist, it follows that there cannot exist an ML function to solve $P$. Here are some examples.
Is there an ML function $Z$ of type $(\text{int->int})\text{->bool}$ such that $Z f$ evaluates to true iff $f 0$ halts and evaluates to false otherwise? That is, is the “halts on zero” problem decidable? No. Here is a tempting, but incorrect, attempt at a proof:
Clearly, $Z\,f$ evaluates to true iff $H(f, 0)$ does, so we can define $Z$ by $\text{fun } Z(f) = H(f, 0)$. Since $H$ does not exist, no such $Z$ can exist.
But this is backwards! A correct proof proceeds by showing that if there were an ML function $Z$ satisfying the conditions given above, then there would exist an ML function $H$ solving the halting problem. Stated contrapositively, if no function $H$ solving the halting problem exists, then no function $Z$ solving the “halts on zero” problem exists. Since we’ve already shown there is no such $H$, there is no such $Z$. To complete the proof, we must show how to define $H$ from $Z$. But this is easy: $\text{fun } H(f, x) = Z(\text{fn } 0 \Rightarrow (f\ x))$. Notice that the function $\text{fn } 0 \Rightarrow (f\ x)$ halts on input 0 iff $f$ halts on input $x$, so the proposed definition of $H$ in terms of $Z$ solves the halting problem, a contradiction.
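The reduction pattern can be made concrete. The sketch below (Python, for illustration only) builds a halting decider `H` out of any hypothetical “halts on zero” decider `Z`; since no total `Z` exists, we exercise the plumbing with a stub that only handles halting cases:

```python
def make_H(Z):
    """Reduce the halting problem to the "halts on zero" problem:
    given a (hypothetical) decider Z, produce a halting decider H."""
    def H(f, x):
        def wrapper(y):
            # wrapper halts on input 0 exactly when f halts on x
            return f(x)
        return Z(wrapper)
    return H

# Stub Z for demonstration: it actually runs its argument on 0, so it
# only "decides" the halting cases (a real total Z cannot exist).
stub_Z = lambda g: (g(0), True)[1]
H = make_H(stub_Z)
```

The point is only the shape of the reduction: an `H` built this way is correct whenever `Z` is, so the nonexistence of `H` refutes `Z`.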
By a similar pattern of reasoning we may show that the halting problem for suspensions is undecidable. More precisely, there is no ML function $S$ of type $(\text{unit->int})\text{->bool}$ such that for every $t$ of type $\text{unit->int}$, the application $S t$ evaluates to true iff $t()$ halts, and evaluates to false otherwise. Suppose there were such an $S$. Then we may define a procedure $H$ to solve
the halting problem as follows: \( \text{fun } H(f, x) = S(\text{fn } () \Rightarrow (f \ x)). \) (Convince yourself that this definition refutes the existence of \( S \) as described.)
Consider the following problem: given an ML function \( f \) of type \( \text{int} \rightarrow \text{int} \), is there any argument \( x \) of type \( \text{int} \) such that \( f \ x \) halts? This problem is also undecidable. For if \( F \) were an ML function of type \( (\text{int} \rightarrow \text{int}) \rightarrow \text{bool} \) solving this problem, then we could define an ML function to solve the halting problem as follows:
\[
\text{fun } H(f, x) = \\
\quad \text{let} \\
\quad \quad \text{fun } g \ y = (f \ x; \ y) \\
\quad \text{in} \\
\quad \quad F \ g \\
\text{end.}
\]
Note that the function \( g \) has the property that it halts on some input only if \( f \) halts on \( x \).
It is worth pointing out that there is nothing special about the type \( \text{int} \) in the above arguments. The proofs would go through for any type \( \tau \), provided that there is a value \( v \) of that type. (In the above cases we took \( \tau = \text{int} \) and \( v = 0 \).)
A type in ML for which there is an equality test function is said to admit equality. For example, the types \( \text{int} \) and \( \text{string} \) admit equality. But not every type admits equality. For example, there is no equality test for values of type \( \text{int} \rightarrow \text{int} \). Is this just an oversight? First let us be clear what we mean by equality of such functions. If \( f \) and \( g \) are functions of type \( \text{int} \rightarrow \text{int} \) then \( f \) is equal to \( g \) iff for every input \( x \) of type \( \text{int} \), either \( f \ x \) and \( g \ x \) both diverge, or both evaluate to the same value of type \( \text{int} \). Thus \( (\text{fn } x: \text{int}=>2\ast x) \) and \( (\text{fn } x: \text{int}=>x+x) \) are equal functions of type \( \text{int} \rightarrow \text{int} \), as are \( (\text{fn } x: \text{int}=>\text{loop}()) \) and the function \( f \) defined by \( \text{fun } f(\text{x: int})=\text{f}(\text{x}). \)
The equality problem for functions of type \( \text{int} \rightarrow \text{int} \) is to decide whether or not two functions \( f \) and \( g \) of this type are equal in the sense just described. The equality problem is undecidable: there is no ML function \( E \) of type \( (\text{int} \rightarrow \text{int}) \ast (\text{int} \rightarrow \text{int}) \rightarrow \text{bool} \) such that \( E(f, g) \) evaluates to \( \text{true} \) iff \( f \) is equal to \( g \), and evaluates to \( \text{false} \) otherwise. Suppose there were such an \( E \). Then we may define a function \( H \) to solve the halting problem as follows:
\[
\text{fun } H(f, x) = E (\text{fn } y: \text{int} \Rightarrow (f \ x; \ y), \ \text{fn } y: \text{int} \Rightarrow y)
\]
Notice that the two functions in the call to \( E \) are equal iff \( f \ x \) halts. Thus \( H \) solves the halting problem, which is a contradiction. Thus we see that the limitation on equality in ML is a feature, rather than a bug!
3 Properties of Programs
Is it possible to decide halting for functions if we are given the actual program, rather than just a “black box”? It may seem plausible, at first sight, since, after all, we as programmers make such judgments based on the program itself, so why might not a computer be able to do the same thing? And if computers have the same capabilities as people (as some would say), then perhaps computers can do this too. One problem with this argument is that it’s far from clear that people can decide halting for arbitrary functions, even given the code: the program might be so complicated as to overwhelm even the most clever among us. Be that as it may, it is possible to prove that halting remains undecidable, even if the program is given as input to the halting tester.
Recall that we defined an interpreter for a small fragment of ML, let’s call it Mini-ML, written in full ML.\(^1\) The implementation consisted of two main parts. First, we defined a datatype called \texttt{exp} for the abstract syntax of Mini-ML and a datatype called \texttt{value} for the values of Mini-ML. Then we defined a function \texttt{eval} of type \texttt{value env * exp -> value} that, given the representation of a value environment and an ML expression as a value of type \texttt{value env * exp}, evaluates that expression and yields a representation of its value as a value of type \texttt{value}. From this we can easily define another function \texttt{eval} of type \texttt{exp -> value} which fixes the top-level value environment to be empty.
First off, let’s be more precise about the representation of Mini-ML expressions as values of type \texttt{exp}. For example, the Mini-ML expression \texttt{2+3} is represented by the value \texttt{Plus(Integer(2), Integer(3))} of type \texttt{exp}, and the expression \texttt{fn x => x} is represented by the value \texttt{Fn("x", Var "x")}. In general, if \(e\) is a Mini-ML expression, then \(\llbracket e \rrbracket\) (“corners \(e\)” or “quote \(e\)”) is its representation as a value of type \texttt{exp}. Thus \(\llbracket e \rrbracket = \texttt{Plus(Integer(2), Integer(3))}\), and so on. A formal definition of quotation is given in Figure 1. There is a corresponding representation function for values, which we write the same way as \(\llbracket . . . \rrbracket\). Note that we have omitted some cases which are analogous to the given ones.
Now we can state precisely the behavior of \texttt{eval}: given a Mini-ML expression \(e\), \texttt{eval} \(\llbracket e \rrbracket\) evaluates to \(\llbracket v \rrbracket\) iff \(e\) evaluates to \(v\). For example, if \(e\) is the Mini-ML expression \texttt{2+3}, which evaluates to 5, then \texttt{eval} \(\llbracket e \rrbracket\) evaluates to \texttt{Rational(5/1)}, which is \(\llbracket 5/1 \rrbracket\) as a value. In other words, given the representation \(\llbracket e \rrbracket\) of a Mini-ML expression \(e\), the function \texttt{eval} yields the representation \(\llbracket v \rrbracket\) of its value \(v\) according to the rules of evaluation for Mini-ML.
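To make the representation concrete, here is a toy transliteration of a slice of the quotation/evaluation pair into Python (illustration only; the document's `eval` is an ML function over the full `exp` datatype):

```python
from dataclasses import dataclass

# A tiny slice of the "exp" datatype: integer literals and addition.
@dataclass
class Integer:
    n: int

@dataclass
class Plus:
    left: object
    right: object

def eval_exp(e):
    """Evaluate a represented expression to its value."""
    if isinstance(e, Integer):
        return e.n
    if isinstance(e, Plus):
        return eval_exp(e.left) + eval_exp(e.right)
    raise TypeError("unknown expression form")

# The representation of 2+3 is Plus(Integer(2), Integer(3)),
# and evaluating that representation yields 5.
```

This mirrors the stated specification: applying the evaluator to the *representation* of an expression yields the representation of its value.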
For the remainder of these notes we ask you to accept the following claim without proof: there is a type \texttt{exp} of the abstract syntax of all of Standard ML for which we can write a function \texttt{eval} of type \texttt{exp -> value} such that \texttt{eval} \(\llbracket e \rrbracket\) evaluates to \(\llbracket v \rrbracket\) iff \(e\) evaluates to \(v\). The extension to all of ML does not involve many new ideas beyond those appearing in the interpreter for Mini-ML, but, as you can readily imagine, there are a lot more cases to consider.
With this bit of machinery in hand we return to the question of decidability of halting for programs, rather than functions. We will prove that there is no ML function \(K\) of type \texttt{exp * exp -> bool} with the following properties:
1. \(K\ (v, w)\) evaluates to either \texttt{true} or \texttt{false} for any values \(v\) and \(w\) of type \texttt{exp};
2. if \(e\) is an ML expression of type \texttt{exp -> exp} and \(v\) is a value of type \texttt{exp}, then \(K\ (\llbracket e \rrbracket, v)\) evaluates to \texttt{true} iff \(e\ v\) halts.
Before giving the proof, let us note carefully what is being said. First of all, we require that \(K\) terminate for all pairs of inputs of type \texttt{exp * exp}. Second, we specify that \(K\) test halting of the representation of an ML function of type \texttt{exp->exp} on a given input of type \texttt{exp}. That is, if \(e\) is an ML function of type \texttt{exp->exp} for which we are interested in testing whether or not it halts on a given input \(v\), then we call \(K\) with the argument \(\llbracket e \rrbracket, v\) and see whether or not the result is \texttt{true}. This is tricky, and you should pause to make sure you understand precisely what is happening here before reading any further.
Suppose that we are given such a function \(K\). Let \texttt{diag} be the ML function of type \texttt{exp -> exp} defined by
\[
\texttt{fun diag(x) = if K(x,x) then loop () else Integer(0)}
\]
\(^1\) See the code for lecture 24.
Expressions
\[ \begin{align*}
\llbracket x \rrbracket & = \text{Var} \ "x" \\
\llbracket n \rrbracket & = \text{Integer} \ n \\
\llbracket e_1 + e_2 \rrbracket & = \text{Plus} (\llbracket e_1 \rrbracket, \llbracket e_2 \rrbracket) \\
\llbracket \text{true} \rrbracket & = \text{True} \\
\llbracket \text{false} \rrbracket & = \text{False} \\
\llbracket \text{if} \ e_1 \ \text{then} \ e_2 \ \text{else} \ e_3 \rrbracket & = \text{IfThenElse} (\llbracket e_1 \rrbracket, \llbracket e_2 \rrbracket, \llbracket e_3 \rrbracket) \\
\llbracket \text{fn} \ x \Rightarrow e \rrbracket & = \text{Fn} ("x", \llbracket e \rrbracket) \\
\llbracket e_1 \, e_2 \rrbracket & = \text{App} (\llbracket e_1 \rrbracket, \llbracket e_2 \rrbracket) \\
\llbracket \text{let} \ d \ \text{in} \ e \ \text{end} \rrbracket & = \text{Let} (\llbracket d \rrbracket, \llbracket e \rrbracket)
\end{align*} \]
Declarations
\[ \begin{align*}
\llbracket \cdot \rrbracket & = \text{Null} \\
\llbracket d \ \text{val} \ x = e \rrbracket & = \text{Dec} (\llbracket d \rrbracket, ("x", \llbracket e \rrbracket)) \\
\llbracket d \ \text{val rec} \ x = e \rrbracket & = \text{Rec} (\llbracket d \rrbracket, ("x", \llbracket e \rrbracket))
\end{align*} \]
Values
\[ \begin{align*}
\llbracket r \rrbracket & = \text{Rational} \ r \\
\llbracket b \rrbracket & = \text{Boolean} \ b \\
\llbracket \{\eta; e\} \rrbracket & = \text{Closure} (\llbracket \eta \rrbracket, \llbracket e \rrbracket)
\end{align*} \]
Value Environments
\[ \begin{align*}
\llbracket \cdot \rrbracket & = \text{Null} \\
\llbracket \eta, x = v \rrbracket & = \text{Dec} (\llbracket \eta \rrbracket, ("x", \llbracket v \rrbracket)) \\
\llbracket \eta, \text{rec} \ x = e \rrbracket & = \text{Rec} (\llbracket \eta \rrbracket, ("x", \text{Freeze} \ \llbracket e \rrbracket))
\end{align*} \]
Figure 1: Quotation for Mini-ML
where \texttt{loop} is as defined before and \texttt{Integer(0)} is a convenient value of type \texttt{exp}.
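The shape of this construction can be sketched in Python as well (a hypothetical stand-in for the ML above: \(K\) is any claimed total halting tester, and an input stands for its own quoted representation; the name \texttt{make\_diag} is ours):

```python
# A hypothetical Python sketch (not the ML definition above) of the
# diagonal function: given a claimed halting tester K, diag loops when
# K answers "halts" and returns immediately when K answers "loops".

def make_diag(K):
    def diag(x):
        if K(x, x):
            while True:      # loop (): diverge deliberately
                pass
        else:
            return 0         # a convenient value, like Integer(0)
    return diag
```

For instance, a toy \(K\) that always answers false makes the resulting \texttt{diag} return 0 on every input. The sketch only illustrates the construction; the actual contradiction arises when \(K\) satisfies both conditions above, which is exactly what cannot happen.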
Consider the evaluation of $K(⌜\text{diag}⌝,⌜\text{diag}⌝)$. By the first assumption on $K$, this expression evaluates to either true or false. We consider each case in turn, as before, in order to derive a contradiction:
1. Suppose that $K(⌜\text{diag}⌝,⌜\text{diag}⌝)$ evaluates to true. Then by the second assumption on $K$ it follows that diag $⌜\text{diag}⌝$ halts. But by the definition of diag we see that diag $⌜\text{diag}⌝$ halts only if $K(⌜\text{diag}⌝,⌜\text{diag}⌝)$ evaluates to false, contradicting the assumption.
2. Suppose on the other hand that $K(⌜\text{diag}⌝,⌜\text{diag}⌝)$ evaluates to false. Then by the second assumption on $K$ it follows that diag $⌜\text{diag}⌝$ loops forever. But by the definition of diag this happens only if $K(⌜\text{diag}⌝,⌜\text{diag}⌝)$ evaluates to true, contradicting the assumption.
Thus we arrive at a contradiction in either case, and conclude that there is no ML function $K$ satisfying the two conditions stated above. That is, the halting problem is undecidable even if we are given the (representation of the) program, and not just the function as a “black box”.
We’ve just proved that the halting problem for representations of functions of type exp $\rightarrow$ exp as values of type exp and arguments of type exp is undecidable: there is no ML function $K$ taking a representation of such a function and a purported argument that yields true or false according to whether or not that function halts on the given input. The restriction to functions of type exp $\rightarrow$ exp may seem rather odd. After all, few programs that we’ve written this semester (with the exception of eval!) have this type. Perhaps halting is decidable for (representations of) functions of type, say, int $\rightarrow$ int and purported arguments of type int?
The answer is “no” because we can reduce the halting problem to this problem. That is, we can encode instances of the halting problem as instances of the latter problem, demonstrating that no such decision procedure exists. The reduction is achieved by a technique called Gödelization, named after the great Austrian mathematician Kurt Gödel who invented it. The technique is based on a fundamental lemma: there is a one-to-one and onto correspondence between values of type exp and values of type int. That is, we can encode programs as integers and, conversely, decode integers into programs, without any loss of information.²
How is this done? A simple approach is to think of the ASCII representation of characters. Each character is assigned a code number between 0 and 127. We may think of each character as a “digit” in base 128. Then a string is nothing more than a base-128 number — in other words, a very large integer. Now this representation is somewhat awkward to work with, especially if we want to take apart programs and put them back together. But we can, if we’d like, write a parser and un-parser to translate to and from abstract syntax (the type exp!), where we can do the real work. The important point is that the parsing and un-parsing can be designed to be mutual inverses: if you parse a string, then un-parse it, you get back the same string, and vice-versa. In this way we can reduce the halting problem for functions of type exp $\rightarrow$ exp to the halting problem for functions of type int $\rightarrow$ int, demonstrating that the latter is also undecidable.
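The base-128 coding just described can be written out directly. Here is a sketch in Python rather than ML, glossing over a few corner cases (for example, strings beginning with the character of code 0) just as the text does; the function names are ours:

```python
# Sketch of the base-128 Godel numbering of program text described
# above: each 7-bit ASCII character is one base-128 digit of an
# (unbounded) integer.

def godel_encode(s):
    """Read a string as the digits of a base-128 number."""
    n = 0
    for ch in s:
        code = ord(ch)
        assert 0 <= code < 128, "only 7-bit ASCII program text is coded"
        n = n * 128 + code
    return n

def godel_decode(n):
    """Invert godel_encode: peel off base-128 digits."""
    chars = []
    while n > 0:
        chars.append(chr(n % 128))
        n //= 128
    return "".join(reversed(chars))
```

Decoding inverts encoding, so program text round-trips through an integer without loss: `godel_decode(godel_encode("fun diag(x) = x"))` recovers the original string.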
Exercise 2 Give an informal argument that halting is undecidable for representations of functions of type int list $\rightarrow$ int list.
²For this to work with programs of any size, we must assume that our integers are of unbounded size. Mere 32-bit machine words will not suffice. But, as we said in the introduction, we are abstracting away from the physical capacity of a given machine.
4 Church’s Thesis
To complete our discussion of computability, we consider the generality of our results. So far we’ve demonstrated that several problems about ML functions and about representations of ML functions as ML data structures are all undecidable. The possibility remains, however, that all of this is an artifact of ML. Might it perhaps be possible to decide halting of representations of C functions as C data structures? Or of Java applets represented as Java data structures?
The answer is “no” because each of these languages is capable of simulating the other. That is, we can write an ML interpreter for C code and a C interpreter for ML code (but it wouldn’t be much fun). Therefore their halting problems are equally unsolvable: a halting checker for C could be used to build a halting checker for ML by asking about the behavior of the ML interpreter written in C. The point is that all of these languages are Turing equivalent which means that they compute the exact same functions over the natural numbers, namely the partial recursive functions.
What about the languages of the future? So far no one has invented a programming language that can be executed by a computer that is more powerful (in the sense of representing number-theoretic functions) than the languages we all know. Might there be one someday? Who knows? Church’s Thesis is the claim that this will never happen — according to Alonzo Church, the great logician and mathematician, the very idea of computable function on the numbers coincides with the partial recursive functions on the natural numbers. That is, all programming languages are of the same expressive power when it comes to functions on the natural numbers.
What about functions on other types? It is easy to see that any types whose values are themselves finite (e.g., lists of integers, or lists of lists of pairs of integers, etc.) may be coded up as integers by some sort of Gödelization scheme. So we cannot hope to make progress here. But what about functions on the real numbers? Is there a sensible notion of computability for inherently infinite objects such as the reals? And are all such notions equivalent? These and related questions are a fascinating part of the extension of computation theory to higher types, all of which lie far beyond the scope of these notes.
5 Conclusion
Thus we see that there are limitations to what can be computed. What are the practical implications of these limitations? An immediate consequence is that we should not expect too much from compilers. For example, it is undecidable whether or not a given conditional branch (say, to the else clause) will ever be taken in a given program. (It is easy to devise a program for which a given if expression takes the else branch iff a given program halts on a given input.) Consequently the compiler must allow for the possibility that either branch may be taken, and must refrain from making optimizations that rely on knowing the outcome. Nor is it possible to determine whether a given variable will ever take on values greater than, say, 255, limiting the possibilities for the compiler to represent it as a byte rather than a full word. In fact any non-trivial property of the execution behavior of programs is undecidable (this can be proved!). On the one hand this is disappointing — only so much can be automated. On the other hand it is a “full employment” guarantee for compiler writers — since the ultimate solution is not programmable, there is always room to improve the partial solutions that are programmable.
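The parenthetical claim about the conditional branch can be made concrete. Working with program text as strings, one can assemble, from any program \(p\) and input \(v\), a program whose else branch is reached iff \(p\,v\) halts. The helper below is a hypothetical sketch in Python that manipulates ML-like source as strings:

```python
# Hypothetical sketch: from the source of a program p and an input v,
# build the source of a program whose `else` branch runs iff (p v)
# halts. Evaluating (p v) first means the `if` is reached only when
# (p v) halts; its guard is false, so the else branch is then taken.
# A decider for "is the else branch ever taken?" would thus decide
# halting, which we have shown to be undecidable.

def else_iff_halts(p_src, v_src):
    return ("let val _ = (" + p_src + ") (" + v_src + ")\n"
            " in if false then 0 else 1 end")
```

The transformation is purely textual, so a compiler facing the resulting program cannot in general know whether the else branch is live.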
\(^3\)We are glossing over a few details here, but this is the general idea.
12-15-2012
Risk Mitigation in Corporate Participation with Open Source Communities: Protection and Compliance in an Open Source Supply Chain
Matt Germonprez
Brett Young
Lars Mathiassen
Julie E. Kendall
Ken E. Kendall
Recommended Citation
Germonprez, Matt; Young, Brett; Mathiassen, Lars; Kendall, Julie E.; Kendall, Ken E.; and Warner, Brian, "Risk Mitigation in Corporate Participation with Open Source Communities: Protection and Compliance in an Open Source Supply Chain" (2012). International Research Workshop on IT Project Management 2012. 3.
http://aisel.aisnet.org/irwitpm2012/3
Risk Mitigation in Corporate Management of Open Source Community Participation: Protection and Compliance in an Open Source Supply Chain
Matt Germonprez
University of Nebraska at Omaha
Brett Young
Georgia Gwinnett College
Brian Warner
Linux Foundation
Julie Kendall
Rutgers University
Ken Kendall
Rutgers University
Lars Mathiassen
Georgia State University
Liang Cao
University of Nebraska at Omaha
ABSTRACT
Open source communities exist in large part through increasing participation from for-profit corporations. The balance between the seemingly conflicting ideals of open source communities and corporations creates a number of complex challenges for both. In this paper, we focus on corporate risk mitigation and the mandates on corporate participation in open source communities in light of open source license requirements. In response to these challenges, we aim to understand risk mitigation options within the dialectic of corporate participation with open source communities. Rather than emphasizing risk mitigation as an ad hoc and emergent process focused on bottom lines and shareholder interests, our interest is in formalized instruments and project management processes that can help corporations mitigate risks associated with participation in open source communities through shared IT projects. Accordingly, we identify two key risk domains that corporations must be attentive to: property protection and compliance. In addition, we discuss risk mitigation sourcing, arguing that tools and processes for mitigating open source project risk do not stem solely from a corporation or solely from an open source community. Instead they originate from the interface between the two and can be paired in a complementary fashion in an overall project management process of risk mitigation.
Keywords
Open source community, corporate participation, risk mitigation, project management, licenses, compliance.
INTRODUCTION
Corporate participation with open source communities has become a viable business model in the development of IT artifacts that contain both corporate and community characteristics. Within this, neither a corporation nor an open source community is given preference over the other, as the design and development of a shared artifact is an activity of all involved. Open source design and development has moved beyond the image of the basement hacker to include Fortune 500 corporations leveraging, differentiating, and contributing for reasons of corporate value and community maintenance. Within this ecosystem of open source design and development, the importance of shared artifacts is derived from the distributed, yet communal, efforts from otherwise competitive corporations (Dahlander, 2007; Germonprez et al., 2011).
The Linux kernel is an open source artifact, freely available for public consumption and not owned by any corporation. The Apache web server is an open source artifact, used and modified without charge yet critical to the
---
5 This project is funded under the NSF VOSS program as award #1255426 Organizational Participation in Open Communities
success of many business practices. In both cases, the artifact is well recognized, clearly defined, and resides within an open source community. The artifact is designed and continues to evolve through both community and corporate ideals. Without shared participation, open source artifacts would not exist in their current forms. Linux and Apache illustrate high profile examples of open source and are not indicative of all open source projects. They are also not necessarily representative of the majority of open source projects that corporations are engaged in.
Open source software has a broad reach, tied to software packages used both internally and externally to a corporation. Unless sourced from a proprietary vendor, software packages are often composed of source files licensed under a variety of different licensing conditions. A scan of Enyo 2.0, a JavaScript framework, reveals 1,400 files regulated by ten different open source licenses. In the case of Enyo, using the software package as part of a larger corporate offering has implications beyond simple attribution to the originating design of the Enyo package (Cheliotis, 2009). As is the case with Enyo and other open source packages, consuming and distributing multi-licensed open source software packages can affect corporate project risk management in open source community participation. For example, when an open source package is used within a corporate product for sale, a software package that contains files licensed under the General Public License V2 requires provision, for three years, of all source code that is connected to the licensed files. So, as Cisco modifies the Linux kernel in their Internet routers, they must post the Cisco-developed source code that interacts with the Linux kernel. In some situations, these requirements are well understood by a corporation, and the release of necessary source code to an open source community is considered a cost of doing business. However, the Open Source Initiative recognizes over 60 open source licenses, all with varying degrees of requirements, responsibilities, and risks incurred by a user of open source software. It is therefore not unlikely that a corporation is unaware of the particular licensing requirements within a software package.
In this paper we explore the implications of open source software packages on corporate risk. We demonstrate how risks are inherent, yet can be mitigated in corporate participation with an open source community and we argue that risk mitigation originates from both within participating corporations and within open source communities. In particular, we address the following research question:
1) How is risk mitigation manifest with open source communities?
Corporate participation with open source communities is a practiced business approach in the design, development, and deployment of software packages. It is not a ‘one size fits all’ consideration, used by all corporations in all circumstances. However, it is a consideration that has gained increasing traction in corporations for leveraging an open source community to extend the design and development capacity within a corporation. Open source software has become a suitable option for corporations looking to expand their design and development options in fast-paced and highly competitive markets. It is against this backdrop we respond to the research question with the goal to develop a risk mitigation approach that can help alleviate the potential pitfalls associated with corporate participation with open source communities. As a result, we provide perspectives on the perplexing situation of how risk can be mitigated as a necessary part of using, developing, and managing open source projects as part of commercial product releases.
RISK IN CORPORATE PARTICIPATION WITH OPEN SOURCE COMMUNITIES
Risks exist in every software development project. Managing and mitigating risk is one key to successful IT project management (Du et al, 2007; Keil et al, 2008). The software development literature largely focuses on internal risks – e.g. the risks that occur inside a corporation. These risks include project scheduling issues, project personnel issues, project culture, control challenges, technical issues, software adoption issues, vendor selection and contracting difficulties, and relationship management problems.
---
6 http://www.enyojs.com
7 http://www.gnu.org/licenses/gpl-2.0.html
8 http://opensource.org/licenses/alphabetical
Risks involving open source software licensing are somewhat different (Al Marzouq et al., 2005). While it is possible to have a third party assume some responsibility to compensate an injured party after an event occurs, a corporation that sells an artifact that includes improper handling of licensed code remains legally responsible for the event. For example, if a corporation sells a software package using open source code and fails to abide by the licensing agreement for that code, that corporation must defend itself. Here lies open source software risk that must be mitigated. While open source software adoption risks have been a topic for corporations when weighing the benefits and consequences of open source software implementation (e.g. Daniels et al., 2011), risk and risk mitigation typically have not been a focus in the literature on open source exchange and management. In open source software projects, risk is inherent in a number of external places. In the design and development of open source software, external risk is noticeably manifest in software supply chains, necessitating risk mitigation strategies (Gefen and Carmel, 2008). This is evident as software design and development often entails the integration of open source code, which was developed beyond a corporation’s boundaries into an internal and corporate-maintained code stream.
Perhaps the most recognized conceptualization of corporate participation with open source software is an adoption approach of common, free, and open source software in day-to-day corporate activities (Castelluccio, 2008). This would include the cases for preferring Linux over Windows, Apache in favor of IIS, and MySQL in favor of Oracle. While there may be economic advantages for these cases within a corporate setting, we consider this style of participation to be simply one of adoption, not necessarily an engaged participation. In these cases, an open source community does not need to be engaged by a corporation; instead the corporation contracts vendors to provide installation, maintenance, and support, much the same as with proprietary software vendors. Risk is also evident in the adoption approach of common, free, and open source software, as an open community responsible for the design, development, and maintenance of a particular open source project may disband, leaving little in the way of future support or product development. Risk may also manifest through a lack of tooling associated with an open source project in relation to proprietary systems (Yalta and Lucchetti, 2008), burdening an adopting corporation with the responsibility of designing and developing toolsets internally. Many of the risks incurred in the ‘adoption’ style of participation stem from the infancy or instability of an open source community in providing real value to a corporation, often in the light of proprietary options (Ringle, 2004).
As a more active form of engagement, corporations can take a shared approach towards open source communities, deciding deliberately and strategically to participate with them. A corporation may choose to participate for reasons of ‘upstreaming’ or ‘franchising’ corporate philosophy into an open source community. As an example, a corporation may participate with an open source community in an effort to embed a corporate (and competitive) form of virtualization into an open source operating system. If the corporation succeeds in contributing its view of virtualization to the community, distributions of the operating system will include and hence circulate this vision. In some cases, corporations may be forced into a shared approach with an open source community through their current business partnerships. A corporation can have its ‘hand forced’ into participation with an open source community as it seeks to develop or maintain a competitive position within a selective market. For example, participating with an open source community may be an effort to broaden silicon chip distribution to include the smartphone and tablet markets. As smartphones and tablets can use a Linux-based operating system, licensed under Apache 2.0 and the General Public License (V2), corporations are required to participate in the respective open source communities as defined in the community licenses. Finally, a corporation may choose to participate in an effort to leverage the collaborative efforts of an open source community (Fitzgerald, 2006).
In a shared approach, participation is generally reciprocal between a corporation and an open source community (Feller et al., 2008). There exists an intention to leverage an open source community for reasons of corporate gain while at the same time contributing back to an open source community in efforts to provide its long term advancement and sustainability. Risks in a shared approach are often internal to a corporation since such time and effort may be lost if a corporation fails to effectively upstream contributions to an open source community. In this scenario, a participating corporation must maintain their own potential open source contributions internally, effectively negating the value in participation (Kogut and Metiu, 2001). Corporations additionally incur risk by potentially (and somewhat inadvertently) releasing intellectual property to an open source community due to open source licensing requirements (McGhee, 2007).
Finally, corporate participation with open source communities may be a *supply chain approach*, stemming from an exchange of open source software packages between corporate partners. In the case of open source supply, participation is not between a corporation and an open source community but instead between two corporations. Here, corporate participation is best understood through an open source software supply chain: software should not be assumed to be the domain of a single corporation, originating from and existing solely within one firm. Instead, software should be assumed to originate from a variety of sources and to be exchanged between corporations as part of a supplier-buyer chain of relationships. A software supply chain is not exempt from including open source packages and files, regulated under a variety of licenses. In such a software supply chain, attention to open source software and the requisite licenses is paramount in maintaining compliance with the originating open source communities. While this style of participation is between corporations in a supply chain, participation with an open source community remains implicit, as a buying corporation is subject to the necessary terms and obligations associated with that software, and unfamiliarity with the community and its regulations is not an exemption.
In a case of a supply chain approach towards an open source community, it is recognized that open source software carries necessary licensing obligations, and corporations inherit the risk of the associated license responsibilities (Al Marzouq et al., 2005). Risk is inherent in a corporation-to-corporation exchange of software of which portions may be open source (Cheliotis, 2009). Risk is also incurred via supply chain relationships, as the establishment and refinement of supply chain partnerships is a critical and valued component in the health of any corporation. Disturbance to these relationships can have complicating effects on an overall software supply chain.
Across each of the three styles, there are inherent risks associated with open source participation. Risks include the potential release of intellectual property to an open community or partner corporation, resulting from an unsuitable interpretation of an open source license. Risk can also include the failure to provide requisite attribution to the originator of an open source package, resulting in a reduced standing within an open community or amongst partner corporations. Certain risks are more problematic than others, and some risks are more evident across certain styles of corporate participation, across different software packages, and within particular open source communities. A review of Aksulu and Wade (2010) reveals little empirical work regarding risk, risk mitigation, and corporate participation in a supply chain, project management approach to open source communities. Of the research that addresses, or mentions, risk, there is an unbalanced concentration toward risk and risk mitigation associated with an open source software adoption approach (Aksulu and Wade, 2010). Table 1 illustrates a summary of risk in corporate participation with open source communities, addressing our first research question.
In response to the differentiated risks across the various forms of participation, we empirically focus on what appears to be the most poorly established and least understood area of risk in open source participation: the software supply chain approach. In the following sections of the paper, we specifically consider risk in the context of a supply chain approach of corporate participation with open source communities. We examine the risk mitigation approaches that exist within both corporations and open source communities, and based on that understanding we suggest that risk mitigation is a shared process between corporate participants in a project management effort to both improve and be improved by open source communities.
RESEARCH APPROACH
We applied a field study approach in the investigation of risk mitigation in a supply chain approach of corporate management in open source participation. Members of the research team have been investigating corporate participation with open source communities for the past two years, asking questions of why and how participation is manifest. Our investigation of risk mitigation is a progression from our prior work and is in accord with our strong corporate ties and open community understanding.
Our research specifically applied the field study approach in an engagement with corporate participants, the FOSSology community, and the SPDX community as forums for understanding risk mitigation. Over the course of
---
9 The field study is part of a larger NSF-funded initiative on open source participation: [http://nsf.gov/awardsearch/showAward.do?AwardNumber=1255426&WT.z_pims_id=503256](http://nsf.gov/awardsearch/showAward.do?AwardNumber=1255426&WT.z_pims_id=503256)
the project, we have been involved in twelve working group meetings across both communities, conducted and transcribed over 70 semi-structured interviews with corporate participants in open source communities, designed and developed advancements in both the FOSSology and SPDX communities, and we continue to maintain activity in integrating the two risk mitigation tools. Our engagement has been further documented through researcher notes, meeting minutes, and community presentations. Our approach has placed our team within the respective open source communities in an effort to best understand risk mitigation in a supply chain approach through active participation with both corporations and open source communities.
As mentioned, the research team has integrated with open source communities dedicated to risk mitigation, becoming active participants in their design and development efforts. In particular, the research team identified two open source communities that have mutual trajectories, so that team participation and engagement in one can benefit the efforts of the other. The first of the two risk mitigation projects is FOSSology,10 an open source data analysis tool that scans software packages for open source software and licenses. Figure 1 shows the interface for FOSSology, where users can upload and scan software packages and browse the results.

The second risk mitigation project is a specification for software package data exchange (SPDX). The specification defines approximately 70 criteria in the exchange of software between corporations. The open source community associated with the development of the SPDX specification aims to "develop and promote adoption of a specification to enable any party in a software supply chain, from the original author to the final end user, to accurately communicate the licensing information for any piece of copyrightable material that such party may create, alter, combine, pass on, or receive, and to make such information available in a consistent, understandable, and re-usable fashion, with the aim of facilitating license and other policy compliance."11 Figure 2 shows a sample of the criteria evident in an SPDX document.
10 [http://www.fossology.org](http://www.fossology.org)
11 [http://spdx.org/content/vison-strategy-execution](http://spdx.org/content/vison-strategy-execution)
Figure 2. Sample SPDX Specification Criteria
With methodological consideration toward risk mitigation within open communities, the efforts of the research team have been both active and engaged. The first author has been an active member with two open source communities engaged in risk mitigation efforts for the last six months and has presented at LinuxCon 2012 on these efforts. The research team hosts a public instance of an open source project on risk mitigation. Finally, the research team includes a member from the largest non-profit consortium dedicated to fostering the growth of Linux, including one of our investigated risk mitigation projects.
FINDINGS
Our findings focus on the second of the two presented research questions; specifically, how risk mitigation is manifest within open source communities. We found that risk mitigation both resides internally within corporations and is distributed throughout an open community. Corporate and community approaches can work in combination to provide a broad approach toward risk mitigation. We will present three approaches toward risk mitigation stemming from our fieldwork with corporate participants in open source communities as well as our direct engagement with these communities.
12 http://fossology.ist.unomaha.edu
**A Case for Software Scanning.** Software scanning is becoming an increasingly relevant activity with respect to risk mitigation in corporate participation with open source communities. There are a number of open source and for-profit offerings associated with software scanning, with several corporations offering these services at an enterprise level. At a more local level within open source communities, tools have emerged to support software-scanning processes to "build a community to facilitate the study of free and open source software by providing free data analysis tools."
Software scanning provides a level of understanding, not decision making, which can be used to support either individual or corporate open source efforts. Software scanning is most commonly used to support processes of license vetting. As part of a larger process of license and property protection decision making within corporations, software scanning tools have emerged as a key means of knowing the inherent risks in an open source package. In the case of FOSSology, the open source community in which the research team participates, software scanning provides a window into the types and locations of licenses embedded in open source code. Figure 3 provides an image of the results for the earlier mentioned Enyo 2.0 Framework:
Figure 3. License Scanning Output for the Enyo 2.0 Framework
In this example, the Enyo 2.0 repository contains 38 licenses within the software package (10 unique). The licenses range from permissive (MIT) to restrictive (GPL), each carrying different obligations for a corporation using this software package. Understanding the obligations of each license is critical to managing and mitigating risk to a corporation wanting to utilize this package in its own commercially released products. This subsequently leads to the related consideration of what can be done with the scanned software license information.
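As an illustration of what such a scan does, a minimal keyword-based license detector can be sketched as follows. This is a deliberately naive sketch, not FOSSology's actual implementation; the signature phrases and the identifier names are illustrative assumptions (real scanners match full license texts with far richer heuristics):

```python
import os
import re

# Illustrative patterns only; real scanners like FOSSology match whole
# license texts and many variants, not single signature phrases.
LICENSE_PATTERNS = {
    "MIT": re.compile(r"Permission is hereby granted, free of charge"),
    "GPL-2.0": re.compile(r"GNU General Public License.*version 2", re.S),
    "Apache-2.0": re.compile(r"Apache License,?\s+Version 2\.0"),
}

def scan_file(path):
    """Return the set of license identifiers whose pattern matches the file."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return {name for name, pattern in LICENSE_PATTERNS.items()
            if pattern.search(text)}

def scan_package(root):
    """Walk a source tree and report per-file license hits, like a scan report."""
    report = {}
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            found = scan_file(path)
            if found:
                report[os.path.relpath(path, root)] = sorted(found)
    return report
```

A report in this shape (file paths mapped to license identifiers) is the raw input that corporate license-vetting processes consume.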
**A Case for Open Source Program Offices.** With respect to a supply chain approach, corporations have developed internal mechanisms, both to manage the protection of intellectual property and to maintain necessary compliance with open source package licenses. Figure 4 illustrates a management review that an open source program office might conduct when evaluating internal open source projects, in part understood through software scanning results. In this example, a project team first creates a project incorporating open source code. This project is given management approval and is reviewed by a business attorney who offers guidance regarding legal risks. Once legal approval is given, an open source review board reviews the project using internal code evaluation tools to identify areas needing further review and then follows up with the originating project team. If pre-approved licenses are involved, the licensing team reviews the project and offers legal guidance regarding present risks. If no pre-approved licenses are involved, a new license review is performed. Finally, after completing all reviews, the project is approved or denied.
http://www.fossology.org
Open source review boards are generally comprised of corporate employees familiar with the complexities and nuances associated with open source community participation. Through an open source review board, risk is mitigated through corporate processes as in Figure 4, wherein supply chain participation is undertaken only after license and property concerns are satisfied.
**The Case of Open Source Data Exchange (Community).** Knowing what the license requirements within a software package are (see FOSSology) and knowing how to apply those findings (see open source program offices) are key in any risk mitigation agenda. Exchanging data between corporations, per an open source supply chain, represents a final consideration in understanding risk mitigation in a supply chain approach to corporate participation with open source communities.
Emergent as a risk mitigation tool from within open communities, the software package data exchange (SPDX) specification is "a standard format for communicating the components, licenses and copyrights associated with a software package. The SPDX standard helps facilitate compliance with free and open source software licenses by standardizing the way license information is shared across the software supply chain." The goal of SPDX is to mitigate risk inherent between corporations in a supply chain of open source code. SPDX is analogous to a bill of materials describing which parts and components are included during a product's manufacture. Similarly, SPDX articulates the material within a software package, including the evident licenses, assurance of SPDX validity, and SPDX author credentials. Figure 5 and Figure 6 illustrate SPDX output exchanged between corporations in a software supply chain.
---
14 [http://www.spdx.org](http://www.spdx.org)
In each figure, information is provided to articulate the delivery and receipt of software in a software supply chain. These figures only illustrate 15 of 70 fields in an SPDX document but demonstrate the nature and style of information exchanged in an effort to mitigate risk between corporations participating with open source communities.
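The tag-value flavor of such an exchange document can be sketched in a few lines. The field names below are drawn from the SPDX 2.x specification, but the package details and the minimal field selection are illustrative assumptions, not a complete or conformant document:

```python
def make_spdx_document(package, version, license_id, supplier):
    """Emit a minimal, illustrative SPDX-style tag-value document."""
    fields = [
        ("SPDXVersion", "SPDX-2.2"),
        ("DataLicense", "CC0-1.0"),
        ("DocumentName", f"{package}-{version}"),
        ("Creator", f"Organization: {supplier}"),  # the supplying corporation
        ("PackageName", package),
        ("PackageVersion", version),
        ("PackageLicenseDeclared", license_id),
    ]
    return "\n".join(f"{tag}: {value}" for tag, value in fields)

def parse_spdx_document(text):
    """Parse tag-value lines back into a dict, as a receiving corporation might."""
    return dict(line.split(": ", 1) for line in text.splitlines() if ": " in line)
```

A supplier would generate such a document alongside the delivered package; the buyer parses it to learn, for instance, the declared license before the code enters its own internal stream.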
We found that risk mitigation in an open source supply chain entails both community developed tools (software scanning and data exchange) and corporate processes (open source review board). These three approaches can be considered in relation to each other as a connected process for risk mitigation as seen in Figure 7:
In Figure 7, a software package scan would produce results that can be used per an open source program office review. Upon approval, the original software scanning results can be used to generate the data exchange documents. Across all, risk is driven out of the process by exposing, understanding, and expressing the obligations inherent in an open source software package. Continued work remains as the research team continues to be involved in the process model expressed in Figure 7, specifically focusing on how the results from software scanning can be expressed such that they can easily merge with data exchange.
CONCLUSIONS
As risk mitigation in an open source supply chain has origins in both a corporate and communal setting, it can be best understood as a shared process for open source project management. The efforts benefit both parties as corporations seek to leverage open communities in efforts to increase design and development capacities and communities seek to leverage corporations for improved market share and distribution. As both sides are inherently dependent on the other, developing approaches toward risk mitigation represents practical, business-driven solutions to develop long-term, productive relationships for all.
Drawing on the literature on risks and risk mitigation within the software discipline, it is our intention to further develop this line of research. We have outlined the contours of some key risks and the nature of risk mitigation tactics in this paper. In future iterations, we will include a richer understanding of the risk domains that corporations face when engaging with open source communities in joint IT projects, of the different sources of mitigation that are available, and of the heuristics by which managers apply this understanding to manage corporate projects in open source communities. In developing such insights into tools and processes for management, we also need to consider the appropriate form under which risks may be related to mitigation tactics (cf. Iversen et al. 2004).
REFERENCES
Abstract
Change impact analysis is generally regarded as a very difficult program comprehension problem. One of the reasons for this is that there is no universal definition for dependency between software artifacts, only algorithms that approximate the dependencies.
In the past two decades, different kinds of algorithms have been developed by researchers. But which algorithm is the most suitable in a specific situation, which one finds the relevant dependencies in the best way? What kinds of dependencies are important for the programmers? What kinds of algorithms do they work with? Finding the most relevant dependencies is difficult, and it is essentially a creative mental task.
A possible way to answer the above questions is to involve programmers in a survey, and listen to their subjective opinions based on expertise and experience in program comprehension. In this paper, we present the results of our survey on this theme. We wanted to know what the difference was between the results of some well-known algorithms and programmers’ opinions and, in addition, among programmers’ opinions as well. Hence we conducted a case study.
Keywords
Change impact analysis · software dependencies · JRipples · BEFRIEND
1 Introduction
During its lifecycle, a software program can have various releases in which new features are added, bugs are fixed or new requirements are met. In order to implement such changes in the software, not only do the developers add new modules to the program, but they also alter the existing code itself to make it fit the new demands or requirements. Consequently, as the program evolves, more developers may be involved and the potential impact of change may not be fully understood by a developer making later modifications on the software. This is why impact analysis techniques are often used to determine the potential effect of a proposed software change on a subset or the whole of a given program [38].
Over time, different impact analysis algorithms have been developed. One of the most general groups is called computation-based algorithms, where the computational relationships between program elements are tracked, in particular via program slicing, call graphs and other similar methods. Another approach is to analyze different software artefacts in order to find possible semantic links e.g. a historically consistent co-change of program elements [17].
As observed by Boehm [34], modifying software generally involves three phases, namely understanding the existing software, modifying the software, and revalidating the modified software. In order to successfully complete a modification task, programmers must locate and understand the part of the software system that will be modified.
When the code to be understood is completely new to the programmer, Pennington [26] found that programmers first build a control flow abstraction of the program called the program model. Once the program model representation exists, Pennington showed, a situation model is developed. This representation uses the program model to create a data-flow/functional abstraction. The integrated model then assumes that programmers can start building a conceptual model at any level that appears suitable. Programmers can switch between any of the three model components during the comprehension process.
In this survey, we examined the thinking and approaches applied by different programmers during several impact analysis tasks to generalize the programmers' decisions, and to learn what kinds of dependencies they were able to identify. We used two different impact analysis tools, which present the potential dependencies in different ways. The impact sets found by using the two different tools were compared, and we examined the differences between programmers' impact sets based on their qualifications and experience as well.
The rest of the paper is organized as follows. In Section 2 we review related work on the comparison of programmers’ views on program understanding. In Section 3 we continue a motivation example and draw up the main research questions. Then we overview the methods used in the study in Section 4. In Section 5 we describe the design and methodology of our study. Section 6 describes the results of the study, and Section 7 lists all the possible problems that might affect its validity. In the last section we draw some pertinent conclusions about the result of our survey.
### 2 Related work
Many empirical studies of programmers in software engineering have been reported in the literature. Basili et al. [27] laid the foundational work on this topic. They devised a framework for analyzing most of the experimental work performed in software engineering in the 1980s. They recommended that their framework be used to facilitate the definition, planning, operation, and interpretation of future studies as well. Moreover, the authors identified several key problem areas of experimentation in software engineering and discussed their possible solutions.
Ko et al. [30] presented a new model for program understanding as a process of searching, relating, and collecting information. Their goal was to investigate the programmers' strategies for understanding and utilizing relevant information and to discover ways in which tools and environments (e.g. Eclipse) might be related to these strategies. They outlined two types of dependency navigation: those based on direct and those based on indirect static dependencies. In contrast to their approach, we looked for more kinds of static dependencies in the programmers' impact sets (direct and indirect dependencies alike), such as call, co-changing, slice and SEA relations.
Robillard et al. [31] not only looked at what developers do in general during a program modification task, but they also investigated what successful programmers do in contrast to unsuccessful ones. They concluded that in the context of a program investigation task, the systematic examination of a piece of code is generally more effective than the opportunistic approach. It tells us that algorithmic thinking is important in the understanding of a program, especially in the case of a modification task. We will try to identify the range of dependency methods available in programmers’ impact sets.
Mosemann and Wiedenbeck [29] presented three methods for collecting information about the program: sequential, control flow, and data flow. Novice programmers were asked to use only one of these methods to understand a program. They found that the sequential and control flow methods of navigation were significantly different from the mean of the data flow navigation. Novice programmers had great difficulty in reading the program using just data flow navigation. However, there was no significant difference between the sequential and control flow methods. In our survey, we compared the impact sets computed by programmers and algorithms. While the programmers could access the code before and after the examined code part – so the sequential flow was given –, in our experiment, slice information consisted of control and data dependencies, while SEA and call information only consisted of control dependencies. So in this survey we examined this kind of methods and co-changing as well.
The above-mentioned papers are about program comprehension by programmers. However, our aim was also to retrieve different kinds of impact sets and compare them with the impact sets identified by programmers. We found some papers that discuss a comparison of impact sets identified by different approaches.
Lindvall and Sandahl [12] sought to quantify how well experienced software developers predicted changes by conducting RDIA (Requirement-Driven Impact Analysis), where RDIA in their case was the general activity of predicting changes based on the change request. They compared and evaluated the predictions by examining the changes in the concrete implementation. Their results indicated that the impacted set predicted by the programmers without any help is generally correct but underestimated.
In one of our previous articles [32] we used the same data as in this paper. Then we presented an empirical comparison of four static impact analysis techniques (call information, static slice, SEA relation and co-changing files retrieved from SVN repositories) and dependencies which were determined by programmers. The paper reported an empirical study that focused on how much the different kinds of dependency sets support program comprehension. In that article we examined static impact analysis algorithms, not the programmers’ opinions. Now we turn our attention to the programmers.
### 3 Motivation
There are several kinds of dependencies that may be found during a change impact analysis. Many algorithms exist which can determine the possible dependencies. In a certain situation it is important to know what kinds of dependencies are present. While some algorithms may provide irrelevant dependencies, some of them can be essential to propagate the given change. To illustrate the differences between impact sets computed by different algorithms, consider the example in Fig. 1.
When we compute the dependencies of these classes with different kinds of impact analysis algorithms, we may obtain quite different results. If we determine the impact sets of the particular classes with a call graph [21], we find that class C depends on class A and class B due to the method invocations. Using the static forward slice approach [33], we find that class C depends
on class A and class B due to control dependency and class B depends on class A because of data dependency. After determining Static Execute After (SEA) relations \[35\], each class is related to each class.
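Fig. 1 itself is not reproduced here, but a minimal arrangement consistent with the dependencies described above can be sketched as follows (class and method names are our own illustration, not taken from the paper's figure):

```python
# Hypothetical classes mirroring the Fig. 1 discussion: C invokes
# methods of A and B (call dependencies), and B consumes a value
# produced by A (data dependency).

class A:
    def produce(self):
        return 21  # value later consumed by B

class B:
    def double(self, a: A):
        return 2 * a.produce()  # data dependency on A

class C:
    def run(self):
        a, b = A(), B()
        return b.double(a)  # call dependencies on A and B

print(C().run())  # -> 42
```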
This example shows how the behaviour of the impact analysis algorithms can vary. But what about the programmers? If the programmers try to collect all the relevant dependencies for a potential change requirement, what kinds of dependencies are they interested in?
Our goal is to contrast the impact sets identified automatically by algorithms with those identified manually by programmers. With the help of this study, we can get a deeper insight into a programmer’s way of thinking and see what kinds of dependencies they find relevant or irrelevant in a given situation. We used two tools: JRipples \[37\], which helps the programmer find the impact set of a given modification in a step-by-step fashion through direct dependencies, and BEFRIEND \[45\], which shows the potential direct and indirect dependencies without the dependency path. Here we formulate the following research questions:
- **Q1**: In the case of determining dependencies in an incremental way by using JRipples, what proportion of dependencies identified by programmers are identified by the different impact analysis algorithms?
- **Q2**: In the case of determining dependencies from a set of direct and indirect dependencies by using BEFRIEND, what proportion of dependencies accepted by programmers were identified by the different impact analysis algorithms?
- **Q3**: Is there any difference among programmers’ impact sets based on their qualifications and experience?
### 4 Impact analysis methods
During the evolution of a software program, programmers add new functionalities and release its new versions. If a programmer changes the source code, it may be difficult to determine the impact of the changes, especially in large applications, hence different tools (algorithms) are needed to handle this problem.
In the survey we applied different impact analysis algorithms. In this section, we give an overview of the algorithms that were used.
The algorithms were implemented within the JRipples Java tool and framework \[37\]-\[41\], which is an integrated tool in the Eclipse development environment supporting incremental change and relevant program analysis for the programmer, and it manages the organization of the steps that comprise the impact analysis and the subsequent change propagation. JRipples is based on the philosophy of ‘intelligent assistance’, which requires cooperation between the programmer and the tool itself.
First, the tool analyzes the program, keeps track of any inconsistencies, and then automatically marks the classes/methods which should be visited by the programmer. Its main advantage is that it covers the algorithmic parts of the change propagation, which are often difficult or error-prone for humans.
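The ‘mark the neighbours for inspection’ idea can be sketched as a worklist over a dependency graph. This is our own simplified illustration, not JRipples’ actual implementation: in the real tool the programmer decides at each step whether a marked unit really propagates the change, whereas this sketch accepts every neighbour automatically.

```python
from collections import deque

def propagate(deps, changed):
    """Worklist sketch of change propagation: starting from the
    initially changed unit, mark every direct neighbour; units
    confirmed as impacted propagate the change further."""
    impacted, worklist = {changed}, deque([changed])
    while worklist:
        unit = worklist.popleft()
        for neighbour in deps.get(unit, ()):  # direct dependencies only
            if neighbour not in impacted:
                impacted.add(neighbour)       # the programmer would inspect here
                worklist.append(neighbour)
    return impacted

deps = {"A": ["B"], "B": ["C"], "C": []}
print(sorted(propagate(deps, "A")))  # -> ['A', 'B', 'C']
```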
Since it is straightforward to incorporate other analyzers (algorithms) into JRipples, it can serve as a framework for identifying several kinds of static dependencies. JRipples itself supports analysis on the granularity of classes and methods. In our experimental study, we determine dependencies on the granularity of methods (except historical co-change), but we raise the granularity to the class level for our comparison.
We implemented the following algorithms within JRipples:
- **Callgraph.** We built a directed graph that represents the calling relationships between methods \[21\]. The graph was built based on the AST computed by Eclipse JDT.
- **Static slice.** We apply static forward program slicing \[33\] (considering data and control dependencies) to determine the impact of the method modifications. The static forward slices were computed by the Indus Java static slicer API \[23\]. A slice is performed for each statement.
- **Static Execute After (SEA).** According to the definition of SEA dependencies, method B depends on method A if and only if B may be executed after A in any possible execution of the program \[35\]-\[36\]. The computation of SEA relations is an efficient analysis algorithm, which is able to produce conservative impact sets at the method level. The determination of these relations is based on the ICCFG representation \[35\] of the program. We built the ICCFG graph based on the AST computed by Eclipse JDT, implemented the SEA algorithm on this graph, and determined all the method pairs which are in a SEA relationship.
- **Co-changing files retrieved from SVN repositories.** Some dependencies are not explicitly observed in the code; the software engineer only ‘knows’ which certain set of modules needs to be changed \[40\] to make a certain type of change. In such cases, to derive the set of source files impacted by a proposed change request, we can use historical data stored in a versioning system, namely Subversion (SVN). Since only the changed files can be retrieved this way, this analysis has class granularity only. A correlation value between 0 and 1.0 can be set if we would like to filter the co-changed classes found by the algorithm. For example, if the correlation value is 0.4, class A depends on class B if in at least 40% of the commits in which file A changed, file B changed as well. We got two result sets, one with a 0.4 correlation value and one with a 1.0 correlation value. We chose the correlation value of 40% because we did not want the union of the dependency sets to be overly large: the SEA relations and the dependencies determined by the programmers formed the largest sets, and finding co-changing classes with a 40% correlation value yielded about the same number of dependencies.
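The co-change criterion can be computed directly from commit history: class A depends on class B at correlation c if B changed in at least a fraction c of the commits in which A changed. A minimal sketch under assumed data shapes, not the actual mining tool:

```python
from collections import defaultdict

def co_change_deps(commits, threshold):
    """commits: list of sets of files changed together in one commit.
    Returns (a, b) pairs where b changed in at least `threshold` of
    the commits that changed a."""
    changed_in = defaultdict(int)  # number of commits touching a
    together = defaultdict(int)    # number of commits touching both a and b
    for files in commits:
        for a in files:
            changed_in[a] += 1
            for b in files:
                if a != b:
                    together[(a, b)] += 1
    return {(a, b) for (a, b), n in together.items()
            if n / changed_in[a] >= threshold}

commits = [{"A.java", "B.java"}, {"A.java", "B.java"}, {"A.java"},
           {"A.java", "C.java"}, {"B.java"}]
# B changed in 2 of the 4 commits touching A -> 0.5 >= 0.4
print(("A.java", "B.java") in co_change_deps(commits, 0.4))  # True
```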
Altogether, we have 5 kinds of dependency sets (call, static forward slice, SEA, co-change\(_{0.4}\), and co-change\(_{1.0}\) relations). Two of these result sets have method granularity; two have class granularity; and one originally has statement granularity, but, of course, we raise all results to the class level to be able to compare them. Since the callgraph determines the call dependencies only one step at a time, we compute the transitive closure of the call relations of each method.
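The transitive closure step mentioned above amounts to a reachability computation over the call graph; a minimal sketch with made-up method names:

```python
def transitive_closure(calls):
    """calls: dict mapping each method to the methods it calls
    directly. Returns the set of (caller, callee) pairs in the
    transitive closure, i.e. all methods reachable via calls."""
    closure = set()
    for start in calls:
        stack, seen = list(calls[start]), set()
        while stack:
            m = stack.pop()
            if m not in seen:
                seen.add(m)
                closure.add((start, m))
                stack.extend(calls.get(m, ()))
    return closure

calls = {"main": ["parse"], "parse": ["read"], "read": []}
print(("main", "read") in transitive_closure(calls))  # True
```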
### 5 Description of the experiment
The experiment involving the participant programmers was performed in two stages (see Fig. 2). First, the participants were asked to use JRipples in 7 different use cases to discover the impact set of the hypothetical changes in some particular methods of our chosen sample project. Second, for each scenario we stored their results together with the results produced by the specific algorithms mentioned in the previous section in a common repository (BEFRIEND) that served as a control benchmark. Then we asked the participants to evaluate all of the stored dependencies individually. By doing this, we were able to calculate valuable statistics about the relationships between the different impact sets. The following subsections provide a detailed description of the above-mentioned stages and the preparation.
#### 5.1 Subject project
First, we set up a test environment. We needed a sample project where the impact sets were defined according to the hypothetical modifications. When choosing the test project, we took the following into account:
- The code is written in Java, since JRipples analyzes only Java code.
- It has an accessible SVN repository with an extended history.
- It is a relatively complex, but not overly large piece of code – since the Indus slicer could not produce slices for large programs due to excessive memory consumption.
- It is compatible with JRE 1.4, since the Indus static slicer works only on this version of Java code.
- The code should be unknown to the participants, but be readily comprehensible.
Based on these requirements, we found an open source Java project called ownSync.\[^{[1]}\] This is a small Java application that can synchronize two folders (on different machines or on the same machine) in both directions. The main characteristics of the sample project can be seen in Table 1.
<table>
<thead>
<tr>
<th>No. of classes</th>
<th>No. of methods</th>
<th>LOC</th>
<th>Non-empty LOC</th>
<th>No. of commits</th>
</tr>
</thead>
<tbody>
<tr>
<td>30</td>
<td>234</td>
<td>3666</td>
<td>3108</td>
<td>92</td>
</tr>
</tbody>
</table>
In this project, we defined some hypothetical change scenarios. We gave 7 methods from the sample project to the programmers so that they could examine them and determine the impact sets of their hypothetical changes. The methods were the following:
- `writeFolderState` of FolderState class,
- `internalMoveFile` of SyncTrashbox class,
- `loadConfig` of OwnSyncConfiguration class,
- `DeleteFolderAction` of DeleteFolderAction class,
- `forceDelete` of FileUtils class,
- `getSyncFolder` of FolderConfiguration class,
- `isAnyActionFailed` of OwnSyncResult class.
This selection was based purely on investigating the method types and their call information. Among them there is a constructor, a recursive method, a getter, a simple and a complex method, one which is called several times and one which calls several methods.
\[^{[1]}\] http://sourceforge.net/projects/ownsync/
#### 5.2 Participants

After selecting the sample project, 11 programmers with different qualifications and experience were invited to participate in our experiment. The group of participants consisted of 4 computer science students, 5 PhD students, and 2 software developers. Most of them have been working as software developers for years: they have experience in Java and program analysis as well. It is also interesting to note that the PhD students all attended an Impact Analysis course. From here on, the participants will be referred to simply as programmers.
#### 5.3 Overview of the experiment
First, the list of the 7 methods from our sample project was given to each programmer and the task of the programmers was to apply JRipples to discover all of the methods impacted by any possible change made in these methods. As the starting point of the change (concept location) was found by the programmers with the help of JRipples, the remaining methods of the impact set were discovered in a step-by-step look-at-the-neighbours fashion in the dependency graph built by JRipples.
We logged the programmers’ actions to retrieve the dependencies, which were determined by the participants using JRipples. After everyone had finished their first stage task, the logs were collected. The log files contained the method level dependency for each scenario. Despite the fact that the programmers searched for dependencies at the method level, we raised the results to the class level.
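Raising method-level dependencies to the class level amounts to projecting each endpoint onto its enclosing class and discarding intra-class edges; a sketch assuming hypothetical ‘Class.method’ identifiers:

```python
def to_class_level(method_deps):
    """method_deps: set of (from_method, to_method) pairs where each
    method is written as 'Class.method'. Returns class-level pairs,
    dropping dependencies that stay inside one class."""
    class_of = lambda m: m.split(".")[0]
    return {(class_of(a), class_of(b))
            for a, b in method_deps
            if class_of(a) != class_of(b)}

deps = {("FolderState.write", "FileUtils.forceDelete"),
        ("FolderState.write", "FolderState.read")}
print(to_class_level(deps))  # {('FolderState', 'FileUtils')}
```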
The algorithms mentioned in Section 4 were implemented within JRipples and we collected all the class level impact sets for the same 7 criteria produced by the different algorithms. We got the union of all class level dependencies for each kind of impact set for all 7 methods.
Before the second stage, the union of the results, whether found by a programmer or an algorithm, was loaded into BEFRIEND, whose database used in our experiment is publicly available online. BEFRIEND (BEnchmark For Reverse Engineering tools workiNg on source coDe) is a general purpose benchmark tool. The benchmark had previously been successfully applied for evaluating and comparing design pattern miner tools, clone detector tools and rule violation checkers, and now impact analysis algorithms. Although BEFRIEND is designed to be very general, some major improvements were required in order to make it capable of evaluating and comparing impact analysis results. After these improvements, the union of the impact sets computed by the algorithms or the programmers was uploaded to the benchmark, grouped by the scenarios.
In the second stage, the programmers evaluated the class level dependencies grouped by the scenarios without knowing which tool or programmer had found them. The programmers were asked the following question: ‘Do you think there is a real dependency between these classes?’ for each uploaded dependency. There were 4 possible answers to this question, which were
- Yes, I am sure that there is a dependency. (100%)
- I think there is a dependency. (66%)
- I think there is NO dependency. (33%)
- No, I am sure that there is no dependency. (0%)
We gave the evaluators the opportunity not only to choose yes/no answers, but also to describe their level of confidence. Each answer was associated with a percentage value, forming a numerical scale from the firm negative answer through the weaker negative and positive answers to the firm positive answer.
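The four answers form a numeric scale, and (as used later in the analysis) a vote of at least 66% counts as acceptance; a minimal encoding of that rule (the answer labels are our own shorthand, not the benchmark’s exact wording):

```python
# Mapping of the four possible answers to the numeric scale used in
# the benchmark, plus the >= 66% acceptance rule applied in Section 6.
VOTE_SCALE = {
    "sure dependency": 100,
    "think dependency": 66,
    "think no dependency": 33,
    "sure no dependency": 0,
}

def accepted(answer: str) -> bool:
    """A potential dependency counts as real if the vote is >= 66%."""
    return VOTE_SCALE[answer] >= 66

print(accepted("think dependency"), accepted("think no dependency"))
# True False
```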
The second stage was complete when all the contributing programmers evaluated every single dependency. The outcome of the second stage yielded some valuable statistics that could be used as input for our key research questions.
### 6 Results and discussion
In this section, we supply concrete answers to our research questions. We calculated statistics from the different impact sets from Stage 1 and Stage 2. The two stages are very different: tools with different user interfaces and different confidence levels. In the first stage, the programmers determined the dependencies in a step-by-step fashion, where only one piece of source code could be seen at a time and the programmer could only state whether something was a dependency or not. Unlike JRipples, in the second stage BEFRIEND displays not only all the dependency candidates, but also the sources of both classes of a certain dependency. BEFRIEND offers 4 possible values (0%, 33%, 66%, and 100%) in order to characterize the programmers’ level of confidence.
#### 6.1 Q1: In the case of determining dependencies in an incremental way by using JRipples, what proportion of dependencies identified by programmers are identified by the different impact analysis algorithms?
In the first stage, the programmers used the JRipples tool, where they could identify dependencies incrementally via the dependency graph. They had to decide whether potential dependencies were really dependencies or not. The difficulty in using JRipples is that the potential dependencies appear step by step: the programmer can follow only one step graphically and must keep the former dependencies (the path of origin) in mind so as to think in a transitive way. The programmers can easily lose track of some information or overlook something important.
The algorithms and the programmers recognized 118 dependencies in total. Table 2 lists how many dependencies were determined by the given algorithms and the given programmers.
Table 3 shows the percentage of the dependencies identified by the given programmer that were covered by the impact sets of the different algorithms. There are also dependencies that were not identified by any algorithm; their values can be seen in the last row (others). The sum of the rates in a column is over 100% due to...
Fig. 2. An overview of the empirical study.
Tab. 2. The number of dependencies identified by the algorithms and the programmers.
<table>
<thead>
<tr>
<th>Agent</th>
<th>No. of identified dependencies</th>
</tr>
</thead>
<tbody>
<tr>
<td>call</td>
<td>15</td>
</tr>
<tr>
<td>slice</td>
<td>13</td>
</tr>
<tr>
<td>SEA</td>
<td>62</td>
</tr>
<tr>
<td>co-change1.0</td>
<td>4</td>
</tr>
<tr>
<td>co-change0.4</td>
<td>37</td>
</tr>
<tr>
<td>programmer1</td>
<td>19</td>
</tr>
<tr>
<td>programmer2</td>
<td>14</td>
</tr>
<tr>
<td>programmer3</td>
<td>39</td>
</tr>
<tr>
<td>programmer4</td>
<td>15</td>
</tr>
<tr>
<td>programmer5</td>
<td>26</td>
</tr>
<tr>
<td>programmer6</td>
<td>6</td>
</tr>
<tr>
<td>programmer7</td>
<td>17</td>
</tr>
<tr>
<td>programmer8</td>
<td>36</td>
</tr>
<tr>
<td>programmer9</td>
<td>27</td>
</tr>
<tr>
<td>programmer10</td>
<td>12</td>
</tr>
<tr>
<td>programmer11</td>
<td>13</td>
</tr>
</tbody>
</table>
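The kind of proportion asked about in Q1 is simple set coverage: for each algorithm, the fraction of a programmer’s dependencies that the algorithm also found. An illustrative sketch with made-up sets, not the published numbers:

```python
def coverage(programmer_deps, algorithm_deps):
    """Fraction of the programmer's dependencies that the given
    algorithm also identified."""
    if not programmer_deps:
        return 0.0
    return len(programmer_deps & algorithm_deps) / len(programmer_deps)

# Hypothetical class-level dependency pairs.
prog = {("C", "A"), ("C", "B"), ("B", "A"), ("D", "A")}
call = {("C", "A"), ("C", "B")}
print(coverage(prog, call))  # 0.5
```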
#### 6.2 Q2: In the case of determining dependencies from a set of direct and indirect dependencies by using BEFRIEND, what proportion of dependencies accepted by programmers were identified by the different impact analysis algorithms?
In the second stage the programmers had to determine dependencies again, but this time using the BEFRIEND tool. Here they had to decide whether a set of potential dependencies were really dependencies or not. The programmers could see all of the dependencies grouped by the scenarios, along with the direct and the indirect dependencies. The programmer could see the start and the end point of the dependency path, but the path was missing. Here we regard a potential dependency as a real dependency if the programmer’s vote was at least 66%.
As seen in Figure 3, in most cases the programmers found more dependencies in the second stage than in the first stage. One reason for this difference is that in the second stage we treat a dependency as a real dependency if the programmers’ votes are at least 66%, while with JRipples they can only say whether it is a dependency or not. So the accepted dependencies in the second stage include dependencies with less than a 100% confidence level as well. Another reason is that in the first stage...
A comparison of programmers’ opinions in change impact analysis
In the second stage, the programmers could see all of the dependencies for a scenario (dependencies that had been identified by the algorithms or by the programmers in the first stage). In the first stage, by contrast, the programmer could follow only one step graphically and had to keep the earlier dependencies in mind in order to think in a transitive way. With BEFRIEND, they saw the transitive dependencies directly and only had to examine the given dependency, without retaining information from an earlier step. In the first stage, the programmers could easily lose track of some information or overlook important information.
Not surprisingly, we see that the programmers identified more dependencies in the second stage than in the first stage. We determined the intersection and the differences between the dependency sets identified in the first stage and the second stage for each programmer. In Table 5, the values have been normalized by the size of the union of the given sets. According to this table, as a general rule, we can say that only a few dependencies from the first-stage impact sets went missing, but several new ones appeared in the second stage. Even where a programmer found the same number of dependencies in both stages, they are not the same dependencies: only 50% of these dependencies are shared, a quarter of the dependencies are absent and a quarter of the dependencies are new, so the contents of the set changed.
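The intersection/difference figures of Table 5, normalized by the size of the union, can be sketched as follows (made-up sets chosen to mirror the half/quarter/quarter split mentioned above):

```python
def stage_comparison(stage1, stage2):
    """Returns fractions (kept, absent, new) of the union of the two
    impact sets: kept in both stages, present only in stage 1,
    present only in stage 2."""
    union = stage1 | stage2
    n = len(union) or 1  # avoid division by zero for two empty sets
    return (len(stage1 & stage2) / n,
            len(stage1 - stage2) / n,
            len(stage2 - stage1) / n)

s1 = {1, 2, 3, 4, 5, 6}
s2 = {3, 4, 5, 6, 7, 8}
print(stage_comparison(s1, s2))  # (0.5, 0.25, 0.25)
```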
More precisely, the dependencies that were identified in the first stage but rejected in the second stage are mainly SEA relations. This is due to the visual presentation of the tools: with JRipples the programmer could follow the dependency path, but with BEFRIEND, identifying a SEA relation at a distance of two or more steps is much more difficult. In the first stage, the programmers found a relatively large number of SEA relations, some of which were rejected in the second stage. In contrast, the other kinds of dependencies (call, slice, co-change) were identified in smaller numbers in the first stage, so in the second stage the programmers could easily accept these kinds of new dependencies.
Table 5 shows that in the second stage the impact sets obtained by the programmers contain a higher percentage of dependencies identified by algorithms. The reason could be that if the programmers can see the possible dependencies together, they can examine them individually, and it is easier to decide whether something really is a dependency with at least a 66% confidence level.
Despite the increased number of identified dependencies, the SEA relations are present in a smaller proportion: there are more identified dependencies overall, yet the programmers did not discover new SEA relations. This may be because of the visual presentation of the BEFRIEND tool, where indirect dependencies with a greater distance cannot be so easily identified based only on the start and the end points.
On average, the amount of dependencies identified only by programmers and not by any algorithm is the same, although the values obtained by the given programmers are not the same as in the first stage: the lower values increased and the higher values decreased. In the case of the higher values, some dependencies not identified by any algorithm were simply absent; programmers who did not find many other dependencies in the first stage only accepted some dependencies identified by other programmers.
Although in the second stage the dependencies identified by an algorithm were among the potential dependencies the programmers had to examine, they accepted many more dependencies that had been identified by one or more algorithms or programmers. Table 7 tells us that the scores are higher than in the first stage. The rank of the average values is different as well. Call relations were covered the best (49% of the call relations were identified on average); this kind of dependency can be recognized relatively easily, especially with the help of the recommendations of those programmers who identified more call relations. The data and control dependencies identified came to 36% of cases, while in...
#### 6.3 Q3: Is there any difference among programmers’ impact sets based on their qualifications and experience?
As we mentioned earlier, the participants are PhD students, computer science students and developers. There were 2 female PhD students (programmer 7 and programmer 9), while the others were males. Programmer 3, programmer 5, programmer 7, programmer 8, and programmer 9 were PhD students, programmer 1 and programmer 4 were developers, and programmer 2, programmer 6, programmer 10 and programmer 11 were computer science students.
According to Table 5, programmer 3, programmer 5, programmer 7, programmer 8, and programmer 9 covered the results of the algorithms better than the rest. These participants were the PhD students. Fig. 4 contains the average values of Table 5 for participants grouped according to qualifications and experience. This diagram also tells us that the PhD students covered the different kinds of dependencies best.
In Table 5 and Table 6 the developers and the students found more SEA relations, which suggests that in practice they consider SEA relations more useful than the PhD students do. For a programmer with less theoretical background in impact analysis algorithms, the classes which are statically executed after the examined class appear to be the most helpful during program comprehension or impact analysis. The PhD students are familiar with impact analysis algorithms; about half of their identified dependencies are SEA relations, while the other kinds of relations were identified with a higher score. The PhD students must have relied on other information – like comments and methods with the same body – in their final assessments.
### 7 Threats to Validity
This paper is an empirical study, and it has limitations that must be taken into account when evaluating the results and generalizing the findings to other contexts.
First, our hypothetical change requests may not equally represent real maintenance situations. If a programmer has a maintenance task, there is a certain program point in a method which needs some modification. By contrast, in our experiment the programmer had to find all the methods/classes impacted by any change of the methods specified in advance. This is necessary because the algorithms have a method or class granularity, not a statement granularity. However, for testers this may represent a real maintenance situation: if the developers just supply the names of the changed methods to the testers, they must proceed from these methods to determine their impact sets and the necessary test cases.
There is another factor which is uncommon in software maintenance: the programmers were not familiar with the test project. Since not every participant knows all projects equally well, we chose a project which was not known to any of them. Moreover, the participants were all computer science students, PhD students or software engineers. Most of them were not familiar with impact analysis, but were ordinary programmers who identified dependencies to the best of their knowledge and experience.
We computed impact sets for only one program; other constraints were also mentioned in Section 5 (only Java code, memory consumption limitation, extended SVN history, etc.). On the other hand, a programmer can understand a single project much better than having only a partial understanding of several projects. Nevertheless, this project is an actual, non-trivial software system with a real SVN history. Since only one subject system was examined, the empirical study has low statistical predictive power, and we cannot claim that the results are generalizable.
When we compared the programmer evaluations, we noticed that the programmers insisted on their previously observed dependencies, so it is possible that they remembered their opinions from the first stage. A solution to this might be to split the programmers into two groups, where one group determines dependencies only in the first stage, while the other does so only in the second stage.
### 8 Conclusions
Here we presented a detailed empirical comparison of impact sets determined by programmers and impact analysis algorithms. To investigate the relations between dependencies identified by programmers and impact analysis algorithms, we carried out a case study to examine the programmers’ thinking during a program comprehension task and to find out what kind of impact analysis methods they are likely to apply. We learned that the qualifications and experience of the participants also affected the composition of the impact sets. Our main goal was to investigate programmers’ strategies for understanding and utilizing relevant information and to find different tools to help them during impact analysis sessions.
Based on our analysis of the data collected during the study, we found that different programmers using different tools determined different impact sets. Owing to the different visualization technologies, the programmers also identified different dependencies. In the second stage, the majority of them neglected SEA relations and accepted various kinds of new dependencies. From a concrete list of dependencies (in BEFRIEND), they identified more dependencies; however, the new dependencies had a lower confidence level. Here we treated a dependency as a real dependency if the programmer voted with at least a 66% confidence level.
We found that SEA impact sets cover the programmers’ impact sets the best. Most of their dependencies were SEA relations, and the developers and the students best recognized these kinds of dependencies. Without extra knowledge about impact analysis methods, they most easily recognized those methods which are executed after the method examined.
In contrast, PhD students thought in a systematic way. They covered best the dependencies identified by any impact analysis algorithms. They learned about these kinds of methods in courses and actively looked for them in the code.
Fig. 4. Identified dependencies grouped according to participants’ qualifications.

### References
15 Cohen J., A coefficient of agreement for nominal scales, Educational and Psychological Measurement, 20(1), (1960), 37–46.
23 Indus project: Java program slicer and static analysis tools, http://indus.projects.cis.ksu.edu/
28 Corritore C L, Wiedenbeck S., An exploratory study of program comprehension ...
45 The BEFRIEND homepage, available at http://www.inf.u-szeged.hu/befriend/
WEB LOG MINING USING MULTI ITEM SEQUENTIAL PATTERN BASED ON PLWAP
Jaymin Desai¹, Mrs. Risha Tiwari²
¹Post Graduate Student, ²Professor,
Dept. of Computer Engg., Hasmukh Goswami College of Engineering, Ahmedabad, Gujarat, India.
Abstract: Web Log Mining (WLM) is the process of extracting information from Web log data. Web logs record user activities and website resource usage as users browse a website. Sequential pattern mining (SPM) is an important data mining task of discovering time-related behaviors in sequence databases. SPM technology has been applied in many domains, such as web-log analysis, analysis of customer purchase behavior, process analysis of scientific experiments, and medical record analysis. Using SPM methods for web log mining, we can produce good recommendations for the web, and it is especially beneficial to find the sequence of users' behavior in web usage mining. Existing systems generate patterns by assuming that a user accesses only one page at a given point in time. In a real system, when a user searches for an item he may load multiple pages for it at the same time. By considering together all the pages loaded from the same parent page, we can generate more useful patterns.
Keywords: Sequential pattern mining, PrefixSpan, PLWAP algorithm.
I. INTRODUCTION
Web Log Mining (WLM) is the process of extracting useful information from server logs. Web Usage Mining is the application of data mining techniques to discover interesting usage patterns from Web data in order to understand and better serve the needs of Web-based applications. Web logs record user activities and website resource usage as users browse a website, and they are one of the primary sources that can be analyzed to mine valuable knowledge. Web log mining may reveal interesting and previously unknown knowledge about both the user and the website. Such knowledge can be used for special purposes such as analyzing system performance, understanding internet traffic, improving system design, modeling user behavior, and business intelligence.

Sequential Pattern Mining (SPM) is an important data mining task of discovering time-related behaviors in sequence databases; it is concerned with finding statistically relevant patterns in data whose values are delivered in a sequence. The concept of sequence data mining was first introduced by Rakesh Agrawal and Ramakrishnan Srikant in 1995. SPM technology has been applied in many domains, such as web-log analysis, analysis of customer purchase behavior, process analysis of scientific experiments, and medical record analysis. Sequential pattern mining discovers frequent subsequences as patterns in a sequence database. A sequence database stores a number of records, where all records are sequences of ordered events, with or without concrete notions of time. An example sequence database is the set of retail customer transactions or purchase sequences in a grocery store, showing, for each customer, the collection of store items they purchased every week for one month. Using SPM methods for web log mining, we can produce good recommendations for the web.
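As a concrete illustration of the definitions above (the example data and code are ours, not from the paper), a sequence database can be modelled as a list of event lists, and the support of a candidate pattern counted with an order-preserving subsequence test:

```python
# Illustrative sketch: a tiny sequence database and support counting,
# as used throughout sequential pattern mining.

def is_subsequence(candidate, sequence):
    """True if `candidate` occurs in `sequence` in order (not necessarily contiguously)."""
    it = iter(sequence)
    return all(event in it for event in candidate)  # membership consumes the iterator

def support(candidate, database):
    """Number of sequences in `database` that contain `candidate`."""
    return sum(is_subsequence(candidate, seq) for seq in database)

# Four hypothetical customers' purchase sequences.
db = [
    ["bread", "milk", "beer"],
    ["bread", "beer"],
    ["milk", "bread", "beer"],
    ["milk", "eggs"],
]

print(support(["bread", "beer"], db))   # 3: three sequences contain <bread, beer> in order
```

A pattern is *frequent* when this count reaches the chosen minimum support threshold.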
It is beneficial to find the sequence of users' behavior in web usage mining, and the WLM technique is very useful for sequential pattern mining on the web, since web logs record nothing but the activities of users. Combining web log mining with the SPM technique helps to find frequent patterns and produce better recommendations. WLM is an important application of sequential pattern mining concerned with finding user navigational patterns on the World Wide Web by extracting knowledge from web logs, where the ordered sequences of events in the sequence database are composed of single items rather than sets of items. In reality, when a user searches for a particular keyword, he may load several pages while others are still loading within a given time interval, and those pages may or may not be helpful to him. Existing systems consider only a single page at a given point in time, under the assumption that a web user can physically access only one web page at any given moment. In practice, when a user searches for content he may load other pages while one is loading, and these may be useful. We propose a system that takes multiple web pages into account for recommendation: we consider the pages that were surfed together by the same user for the same purpose, which allows us to provide better recommendations.
II. LITERATURE SURVEY
Web usage mining is an important application of sequential pattern mining concerned with finding user navigational patterns on the World Wide Web by extracting knowledge from web logs, where the ordered sequences of events in the sequence database are composed of single items rather than sets of items, under the assumption that a web user can physically access only one web page at any given point in time. If a time window for access is considered, allowing a web user to browse a collection of web pages over a specified period of time, the problem reverts to a general sequence database. Sequential pattern mining algorithms can be classified into three main categories, namely apriori-based, pattern-growth, and early-pruning, with a fourth category as a hybrid of the main three. An investigation of the sequential pattern-mining algorithms in the literature shows that the important heuristics employed include: using optimally sized data structure representations of the sequence database; early support counting; and maintaining a narrow search space. The quest for a reliable sequential pattern-mining algorithm should take these points into consideration, improving the efficiency of the representation and the management of the database. Based on these criteria, sequential pattern mining algorithms fall into two major groups, Apriori-based and pattern-growth-based algorithms. Comparative analysis of various mining algorithms makes it clear that pattern-growth-based algorithms are more efficient with respect to running time, space utilization, and scalability.
III. ANALYSIS
Web Log Mining (WLM) extracts information from Web log data, which records user activities and website resource usage as users browse a website. Sequential Pattern Mining (SPM) is an important data mining task of discovering time-related behaviors in sequence databases, and it has been applied in many domains such as web-log analysis, analysis of customer purchase behavior, process analysis of scientific experiments, and medical record analysis. There are many sequential pattern mining algorithms, and some are designed specifically for web log mining or single-item-set sequences, namely the WAP-Mine, PLWAP, and LAPIN algorithms.
Key features of the different techniques of sequential pattern mining:

Apriori-based methods:
- Breadth-first search: Apriori-based algorithms are described as breadth-first (level-wise) search algorithms because they construct all k-sequences together in each kth iteration as they traverse the search space.
- Generate and test: Algorithms relying on this feature alone have an inefficient pruning method and generate an explosive number of candidate sequences, consuming a lot of memory in the early stages of mining.
- Multiple database scans: A very undesirable characteristic of most apriori-based algorithms, requiring a lot of processing time and I/O cost. A solution to this limitation is to scan the database only once or twice to create a temporary data structure that holds the support information used during mining.
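The breadth-first generate-and-test behaviour, and its repeated database scans, can be made concrete with a GSP-flavoured sketch for single-item events (a simplification, not the full GSP algorithm):

```python
# Hedged sketch of level-wise "generate and test" mining: one full database
# scan per level -- exactly the multiple-scan cost criticised above.

def is_subsequence(cand, seq):
    it = iter(seq)
    return all(e in it for e in cand)

def gsp_like(database, min_sup):
    events = {e for seq in database for e in seq}
    level = [[e] for e in sorted(events)]        # candidate 1-sequences
    frequent = []
    while level:
        # TEST: count support with a full scan of the database
        survivors = [c for c in level
                     if sum(is_subsequence(c, s) for s in database) >= min_sup]
        frequent.extend(survivors)
        # GENERATE: blindly extend every survivor by every event
        level = [c + [e] for c in survivors for e in sorted(events)]
    return frequent

db = [["a", "b", "c"], ["a", "c"], ["b", "c"]]
found = gsp_like(db, min_sup=2)
print(found)   # [['a'], ['b'], ['c'], ['a', 'c'], ['b', 'c']]
```

Note how the candidate set explodes before any pruning happens; that memory pressure is the weakness the pattern-growth family addresses.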
Pattern-growth-based methods:
- Sampling / compression: Compression is used in the data structure that holds the candidate sequences, usually a tree. Shared prefixes of sequences are represented in the tree by one branch; each node represents an item in the sequence along with its support count.
- The problem with sampling is that the support threshold must be kept small, which causes a combinatorial explosion in the number of candidate patterns.
- Candidate sequence pruning: Pattern-growth algorithms that can prune candidate sequences early have a smaller search space and maintain a more directed and narrower search procedure. PrefixSpan uses a direct anti-monotonic application of the Apriori property to prune candidate sequences along with its projected databases. PLWAP has a position-coded feature that lets it identify the relative locations of nodes as a look-ahead capability and prune candidate sequences early in the mining process.
- Search space partitioning: Allows partitioning of the generated search space of large candidate sequences for efficient memory management. WAP-mine and PLWAP handle subtrees of a tree-projection structure recursively; once the search space is partitioned, smaller partitions can be mined in parallel. FreeSpan uses projected databases to generate database annotations that guide the mining process to find frequent patterns faster.
- Tree projection: These algorithms implement a physical tree data structure representation of the search space, which is then traversed breadth-first or depth-first in search of frequent sequences. WAP-mine uses the WAP-tree, which is generated in only two scans of the sequence database and is used instead of the database to store candidate sequences. FS-Miner [El-Sayed et al. 2004] uses the FS-tree, which allows the mining process to start with 2-sequences immediately from the second scan of the database.
- Depth-first traversal: Included as a feature on its own because it is very important for any algorithm that uses a tree model. Several works have stressed that depth-first search of the search space makes a big difference in performance and also helps with early pruning of candidate sequences as well as with mining closed sequences.
- Suffix/prefix growth: Algorithms that depend on projected databases and conditional search in trees first find the frequent 1-sequences, hold each frequent item as either a prefix or a suffix, then build candidate sequences around these items and mine them recursively. This greatly reduces the memory required to store the different candidate sequences that share the same prefix or suffix.
- Memory only: This feature targets algorithms that do not spawn an explosive number of candidate sequences, enabling them to have minimal I/O cost.
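The prefix-growth and projected-database ideas above can be sketched for single-item events (a simplification of PrefixSpan, not the full algorithm):

```python
# Hedged sketch of prefix growth: extend only frequent prefixes, and recurse
# on the projected (suffix) database instead of enumerating candidates blindly.

def project(database, event):
    """Suffixes that follow the first occurrence of `event` in each sequence."""
    out = []
    for seq in database:
        if event in seq:
            out.append(seq[seq.index(event) + 1:])
    return out

def prefixspan(database, min_sup, prefix=()):
    patterns = []
    counts = {}
    for seq in database:                 # count each event once per sequence
        for e in set(seq):
            counts[e] = counts.get(e, 0) + 1
    for e, sup in sorted(counts.items()):
        if sup >= min_sup:               # only frequent items grow the prefix
            patterns.append(prefix + (e,))
            patterns.extend(prefixspan(project(database, e), min_sup, prefix + (e,)))
    return patterns

db = [["a", "b", "c"], ["a", "c"], ["b", "c"]]
patterns = prefixspan(db, min_sup=2)
print(patterns)   # [('a',), ('a', 'c'), ('b',), ('b', 'c'), ('c',)]
```

Infrequent prefixes are never extended, so the search stays narrow, and each recursion sees an ever-shrinking projected database.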
Early-pruning-based methods:
- Support counting avoidance: Several recent algorithms compute the support of candidate sequences without carrying a count throughout the mining process and without scanning the sequence database iteratively. It is very important for an efficient algorithm not to scan the sequence database each time it computes support; the sequence database can be removed from memory and no longer used once the algorithm finds a way to store candidate sequences along with their support counts in a tree structure, or any other representation for that matter.
- Vertical projection of the database: The mining process uses only vertical layout tables to generate candidate sequences and counts support in different ways. SPADE uses less memory because the sequence database is no longer required during mining; the main cost is the computation incurred by the bitwise (usually AND) operations used to count the support of each candidate sequence.
- Position coding: The key idea of early-pruning methods. It enables an algorithm to look ahead and avoid generating infrequent candidate sequences. This feature also plays a major role in PLWAP, making it a hybrid pattern-growth/early-pruning algorithm that outperforms WAP-mine and PrefixSpan at low minimum support, when a large number of frequent patterns are mined.
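The position-coding idea can be illustrated with a deliberately simplified scheme. PLWAP's actual binary position codes are constructed differently; here each node's code is simply the sequence of child indices on its path from the root, which preserves the key property — ancestor/descendant relations are decidable from the codes alone, without walking the tree:

```python
# Illustrative only (not PLWAP's real coding): path-index position codes.
# A is an ancestor of B iff A's code is a proper prefix of B's code.

def is_ancestor(code_a, code_b):
    return len(code_a) < len(code_b) and code_b[:len(code_a)] == code_a

root = ()
a = root + (0,)   # first child of root
b = a + (1,)      # second child of a
c = root + (1,)   # second child of root

print(is_ancestor(a, b))  # True  -- decided from codes, no tree traversal
print(is_ancestor(a, c))  # False -- siblings' subtrees never prefix each other
```

This is the look-ahead capability the text refers to: a miner can discard candidates whose occurrences lie in the wrong subtree just by comparing codes.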
IV. PROPOSED WORK
We have considered two approaches namely:
- Closed sequence pattern mining
- An improved approach to PLWAP without the assumption that a web user can physically access only one web page at a given point in time.
We have preferred the second approach because the algorithm used in the first approach already employs closed-sequence pruning. Web usage mining is an important application of sequential pattern mining concerned with finding user navigational patterns on the World Wide Web by extracting knowledge from web logs, where the ordered sequences of events in the sequence database are composed of single items rather than sets of items, under the assumption that a web user can physically access only one web page at any given point in time. The proposed method assigns multiple pages that were accessed in a specific time interval from the same parent node to a single node; we have not seen this consideration so far, so the approach is to perform sequential pattern mining with this representation. When we surf the internet for information we usually get it, but while searching for a term we often open multiple pages while others are loading in order to find the result. The idea, therefore, is to consider multiple pages instead of a single page while mining the web log data: instead of storing a single web page as a single node, we store in one node the multiple pages that were surfed by a user within a specific time interval to find the same information. In this way we can find more frequent patterns and also recommend more pages.
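The grouping rule proposed above — same user, same parent (referrer) page, within a time window — can be sketched as a single pass over time-ordered log entries. The field layout and the 30-second window below are assumptions for illustration, not values from the paper:

```python
# Sketch of multi-page grouping: entries by the same user that share a
# referrer and fall within a time window are merged into one multi-item node.

from datetime import datetime, timedelta

def group_multipage(entries, window=timedelta(seconds=30)):
    """entries: time-ordered (user, timestamp, referrer, page) tuples."""
    groups = []
    for user, ts, ref, page in entries:
        last = groups[-1] if groups else None
        if (last and last["user"] == user and last["ref"] == ref
                and ts - last["start"] <= window):
            last["pages"].append(page)          # same burst -> same node
        else:
            groups.append({"user": user, "start": ts, "ref": ref, "pages": [page]})
    return [g["pages"] for g in groups]

t0 = datetime(2017, 3, 1, 10, 0, 0)
log = [
    ("u1", t0,                          "/search?q=x", "/item1"),
    ("u1", t0 + timedelta(seconds=5),   "/search?q=x", "/item2"),
    ("u1", t0 + timedelta(minutes=5),   "/home",       "/item3"),
]
print(group_multipage(log))   # [['/item1', '/item2'], ['/item3']]
```

Each inner list would then become one multi-item event in the sequence database instead of several single-page events.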
| Algorithm | Data set size | Minimum Support | Execution Time (sec) | Memory Usage (MB) |
|---|---|---|---|---|
| GSP (Apriori-based) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | >3000 | 800 |
| GSP (Apriori-based) | Large (D > 800K) | Low (0.1%) | 23 | 687 |
| SPAM (Apriori-based) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | - | - |
| SPAM (Apriori-based) | Large (D > 800K) | Low (0.1%) | 136 | 574 |
| PrefixSpan (Pattern-growth) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | 31 | 13 |
| PrefixSpan (Pattern-growth) | Large (D > 800K) | Low (0.1%) | 1958 | 525 |
| WAP-mine (Pattern-growth) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | 798 | 320 |
| WAP-mine (Pattern-growth) | Large (D > 800K) | Low (0.1%) | 27 | 0.556 |
| LAPIN (Suffix early-pruning) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | >3000 | - |
| LAPIN (Suffix early-pruning) | Large (D > 800K) | Low (0.1%) | 7 | 8 |
| PLWAP (Hybrid) | Medium (1 ≤ D ≤ 200K) | Low (0.1%) | 201 | 300 |
| PLWAP (Hybrid) | Large (D > 800K) | Low (0.1%) | 10 | 0.556 |
Algorithm for building the MPLWAP tree:

Input: web access sequence database
Output: MPLWAP tree

- Scan the access sequence database a first time to obtain all events, keeping the events whose support is greater than or equal to the minimum support.
- Identify the number of events to be held in a single node.
- Each node in the tree registers three pieces of information: the number of items, a pointer to the items, and a position code; each item registers a count.
- Scan the database a second time to obtain the frequent sequences S.
- Build the tree data structure. For the first event of a sequence, increment its count if it already exists; otherwise:
- check whether the parent page of the current node and that of the first event are the same;
- if they are the same, put the event into the current node, increase the node's number of items, and assign the event a count of 1;
- otherwise, create a new child node, set the event's count in it to one, make that node the current node, and assign it a position code.
- Add the current node to the sequence.
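The steps above can be pictured with a minimal data-structure sketch. This is a hypothetical simplification: the merge rule here ("the incoming item group shares an item with an existing child") stands in for the paper's parent-page check, and the position code is simply the path of child indices:

```python
# Minimal sketch of a multi-item tree node in the spirit of the MPLWAP tree.

class Node:
    def __init__(self, code=()):
        self.items = {}        # item -> count: a node may hold several items
        self.code = code       # position code; here just the path of child indices
        self.children = []

    def insert(self, events):
        """events: list of item groups, e.g. [{"p1", "p2"}, {"p3"}]."""
        if not events:
            return
        group, rest = set(events[0]), events[1:]
        for child in self.children:
            if group & child.items.keys():          # assumed merge rule: shared item
                for item in group:
                    child.items[item] = child.items.get(item, 0) + 1
                child.insert(rest)
                return
        child = Node(self.code + (len(self.children),))   # new child, new position code
        child.items = {item: 1 for item in group}
        self.children.append(child)
        child.insert(rest)

root = Node()
root.insert([{"a", "b"}, {"c"}])   # first sequence creates the branch
root.insert([{"a"}, {"c"}])        # second sequence bumps shared counts
```

After the two insertions the first child holds counts {"a": 2, "b": 1} and its own child holds {"c": 2}, which is the per-item counting the algorithm describes.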
This is the existing algorithm, which mines using a header table. In our proposed algorithm we have two options for generating the header table: the first links one of the items from a node to the matching items in the tree, and the second considers the whole node when generating the header table for items.
Implementation
- Data set Details
- Synthetic Data
Synthetic datasets are generated using the publicly available synthetic data generation program of the IBM Quest data mining project at http://www.almaden.ibm.com/cs/quest/, which has been used in most sequential pattern mining studies. The parameters shown below are used to generate the data sets.
[D]: Number of sequences in the database
[C]: Average length of the sequences
[S]: Average length of maximal potentially frequent sequence
[N]: Number of events
For example, C10.S5.N2000.D60k means [C] = 10, [S] = 5, [N] = 2000, and [D] = 60k. It represents a group of data in which the average length of the sequences is 10, the average length of a maximal potentially frequent sequence is 5, the number of individual events in the database is 2000, and the total number of sequences in the database is 60 thousand.
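The naming convention can be unpacked mechanically; this small helper is ours, not part of the IBM Quest tooling:

```python
# Parse the Quest-style dataset name described above into its four parameters.

import re

def parse_dataset_name(name):
    """'C10.S5.N2000.D60k' -> {'C': 10, 'S': 5, 'N': 2000, 'D': 60000}"""
    params = {}
    for key, num, suffix in re.findall(r"([CSND])(\d+)(k?)", name):
        params[key] = int(num) * (1000 if suffix == "k" else 1)
    return params

print(parse_dataset_name("C10.S5.N2000.D60k"))
```

The "k" suffix is expanded to thousands so the parameters can be used directly when sizing experiments.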
Real time data
Real time data obtained from the Website URL: http://www.audiorec.co.uk/
It is basically an e-commerce website. There are 22 fields in total, and some of the relevant fields are: Date, Time, Server Name, Server IP, CS-method, CS-uri-query, S-port, C-IP, CS-version, CS-user-agent, CS-cookie, CS-Referer, and Time-taken. We implemented the proposed algorithm to generate the MPLWAP tree. The MPLWAP algorithm scans the access sequence database a first time to obtain the support of all events in the event set E; all events with support greater than or equal to the minimum support are frequent. Each node in an MPLWAP tree registers three pieces of information: the number of items, a pointer to the items, and a position code, and each item registers a count. The root of the tree is a special virtual node with an empty label and count 0.
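The fields above follow the W3C extended log format used by IIS, where a `#Fields:` directive names the columns of every data line. A minimal reader (the sample line is invented for illustration, not taken from the cited site) can recover named fields by zipping the directive with each record:

```python
# Hedged sketch: parse W3C extended log lines into dictionaries keyed by
# the field names declared in the "#Fields:" directive.

def parse_w3c_log(lines):
    fields, records = [], []
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]            # remember the declared columns
        elif line.startswith("#") or not line.strip():
            continue                             # skip other directives / blanks
        else:
            records.append(dict(zip(fields, line.split())))
    return records

sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem",
    "2017-03-01 10:15:02 192.0.2.7 GET /products/item42",
]
recs = parse_w3c_log(sample)
print(recs[0]["cs-method"], recs[0]["cs-uri-stem"])   # GET /products/item42
```

From such records the (user, timestamp, referrer, page) tuples needed for sequence construction can be selected directly.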
V. CONCLUSION
The MPLWAP algorithm proposed in this paper improves mining by accommodating multiple pages in a single node, instead of a single page per node as in the PLWAP mining algorithm. By assuming that a user can surf more than one page within a specific time interval, we place multiple web pages in one node by checking the referring URL of the respective web pages, so MPLWAP provides multi-item support. Although the execution time of MPLWAP is higher than that of PLWAP, the patterns generated by MPLWAP are more numerous, and experiments show that mining the MPLWAP tree yields more patterns than mining the PLWAP tree. Thus, by considering multi-item sequences, we can extract useful patterns from web log data that can serve web recommendation and personalization.
Future Work
Future work should consider applying MPLWAP-tree mining techniques to distributed mining as well as to incremental mining of web logs and sequential patterns. By using a user's previous records (patterns), pattern discovery for a particular person can be performed, and customized recommendations can be made according to the application.
www.ijtre.com Copyright 2017. All rights reserved.
Generating of Business Database Application Elements
Artur Kornatka¹,²,*
¹Institute of Computer Science, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 5, 20-031 Lublin, Poland
²Department of Computer Science, Nowy Sącz School of Business - National-Louis University, Zielona 27, 33-300 Nowy Sącz, Poland
Abstract – The paper presents the outline of an innovative conception of the functioning of a generator for business database application elements, and also shows the working principles of the author's prototype system named BACG (Business Application Code Generator), which implements the aforementioned conception.
1 Introduction
The main factor determining the cost of software creation is the cost of programmers' labour [1]. High competition on the market for business application software compels producers to reduce these costs. A substantial reduction of the expenses of implementing information systems can be achieved by using specialized generators, which enable the replacement of programmers and speed up the process of software production. Optimally working generators allow the automation of selected stages of business application development while keeping the high quality standards of the final product; in this case it is very important that all generated elements differ as little as possible from those created by a programmer. Skilful application of such specialized systems by companies producing business applications may be a key factor determining their competitive advantage on the market.
The main aim of the paper is to present an innovative concept for a generator of selected code elements of database business applications, and to show the working principles of the author's prototype system, called BACG (Business Application Code Generator), which is meant to realize this concept.

*akornatka@gmail.com
The BACG system is an innovative generator of business application elements. The main feature that distinguishes it from other similar tools ([2], [3]) is its focus on generating optimal, professional code consistent with established design patterns. The code is created automatically according to the rules and principles obligatory in advanced business application development, and it is almost indistinguishable from code that would be created by a professional programmer. Thus, the BACG system replaces a programmer during the development of advanced code, in contrast to the existing systems, which concentrate only on generating suitable forms without paying attention to the quality of the produced code.
The BACG system was created with Microsoft Visual Studio 2010, and the application code was written in the C# language. The generator works with databases managed by Microsoft SQL Server 2008.

All code elements generated by the BACG system are compatible with the MVVM design pattern. The second section of the present work describes this pattern and the technologies used in the generated parts of the application.
The third section describes the process of generating selected business application elements.
The research results are presented in section 4, where the innovative features of the BACG system are gathered and further development of the system is outlined.
2 Theory – short description of MVVM, WPF, and XAML
WPF (Windows Presentation Foundation) is a presentation system for building Windows client applications with visually rich user experiences. It defines the current programming standard for advanced user interfaces. With this technology, programmers can use up-to-date controls that let them employ the full possibilities offered by the Microsoft operating system.
XAML (eXtensible Application Markup Language) is a declarative markup language created by Microsoft for programming user interfaces built with the WPF technology. The syntax of XAML is based on the classical XML language; subsequent tags of the XAML language describe the elements of the WPF user interface.
More details on WPF and XAML can be found in [4].
MVVM (Model-View-ViewModel) is a modern design pattern used for creating applications with an extended presentation layer. The main purpose of this pattern is the separation of three basic layers of the application; according to [5] these are the Model, View, and ViewModel layers.
The Model layer of MVVM describes data logic or business logic of the application. This layer is completely independent of the user interface. It consists of many business objects which implement specific goals of the application.
The View layer of MVVM consists of visual elements of the application. A view may be an application window or the user control (UserControl) which can be placed on any application window.
The ViewModel layer of MVVM forms the connection between a model and a view. The main task of the objects in this layer is to retrieve selected data from a source and transform it into the form required by the given view, or to pass properly modified information from the view back to the data source.
The MVVM design pattern is ideally suited for creating user interfaces with the help of the WPF technology.
More information on MVVM can be found in [6], [7], [8].
3 The process of generating business database applications
Let us assume that using the Microsoft Visual Studio 2010 development environment we want to create a professional business application. The programming language we select is C#. We also assume that the database of this application has been created in Microsoft SQL Server 2008 and contains all necessary tables and relationships between them. After all parameters needed to set up a connection with the database have been introduced, we can apply the BACG system to generate the elements of the business application being created. After launching, the BACG system asks for the name of a directory where all generated elements will be stored; next it retrieves information about the structure of the selected tables from the database server and displays it on the screen.
The process of generating database business application elements by the BACG system can be divided into five main stages. It starts with the generation of the elementary classes and fundamental stored procedures. Next, the system creates subsequent elements compatible with the MVVM design pattern described above. Thus BACG generates the Model layer, that is the classes responsible for contact with the database. The next stage is the creation of the View layer, that is advanced views realizing specific scenarios. In this stage the components of the ViewModel layer, which mediate between the Model and View layers, are also generated.
After this cursory description of the generator's operation, we can move on to a detailed description of the aforementioned parts.
3.1 Elementary classes generation
In the first stage of generating the database business application elements, the BACG system creates a collection of elementary classes which facilitate the basic operations.
The first element to be generated is the AccessToDataBase class. It contains a private field named connectionString which keeps all parameters necessary for setting up the connection with SQL Server and the database created on this server. A constructor of this class initializes this field. The key element here is the method named CreateConnection() which creates an open connection with the database (on the server) and returns an object of the SqlConnection type.
In this stage a class named ObjectQuery is also created. It contains a set of static methods responsible for calling SQL queries and various stored procedures which are located on the database server. An example of such a method is RunSingleValueProcedure(String procedureName), which runs the procedure passed to it by name and returns a single value.
The next class generated in this stage is named DelegateCommand. It enables an element defined in the View layer to call an indicated function specified in the ViewModel layer. All of this is accomplished through the Command mechanism supported by the WPF technology.
This stage is finalized by the creation of the ComboBoxKeyAndValue class, which is necessary for the correct functioning of the ComboBox controls on the View layer views.
3.2 Stored procedures generation
In the subsequent stage of the generator run we can choose the tables which – in the future application – will be subjected to such operations as: adding, deleting, retrieving all or selected records. For each of these tables BACG generates a suitable SQL query code which performs the mentioned operations. Then on the basis of each such SQL query, the SQL code creating the stored procedure on the server is formed. BACG sends and executes this code on the SQL Server. As a consequence of this process, a set of stored procedures on the database server comes into being which enables addition, deletion, retrieval of all or selected records. Of course, these procedures are generated for all tables indicated by the user of the system.
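The stored-procedure stage described above can be illustrated with a small sketch. This is not BACG's actual template code; the procedure and parameter names below are hypothetical, and the real system sends the generated T-SQL to the server for execution.

```python
# Illustrative sketch (not BACG's actual templates): given a table schema,
# emit T-SQL text that would create simple CRUD stored procedures.
# All procedure and parameter names here are hypothetical.

def make_crud_procedures(table, columns, key):
    """columns: list of (name, sql_type) pairs; key: primary key column."""
    col_names = ", ".join(name for name, _ in columns)
    params = ", ".join(f"@{name} {typ}" for name, typ in columns)
    values = ", ".join(f"@{name}" for name, _ in columns)
    return {
        "add": (f"CREATE PROCEDURE Add{table} {params} AS\n"
                f"INSERT INTO {table} ({col_names}) VALUES ({values})"),
        "delete": (f"CREATE PROCEDURE Delete{table} @{key} int AS\n"
                   f"DELETE FROM {table} WHERE {key} = @{key}"),
        "get_all": (f"CREATE PROCEDURE GetAll{table} AS\n"
                    f"SELECT {col_names} FROM {table}"),
    }

procs = make_crud_procedures(
    "Customer", [("Id", "int"), ("Name", "nvarchar(50)")], key="Id")
```

In the real system one such set of procedures is generated and installed on the SQL Server for every table the user selects.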
3.3 Generating of the classes responsible for the database operations
After the elementary classes and stored procedures have been created, the generator proceeds to the creation of the Model layer elements according to the MVVM design pattern.
3.3.1 “Type R” classes
The entity class – which will be called “type R” – is created for each table from the collection selected in section 3.2. During the construction of these classes a popular C# mechanism of properties is used. Hence, for each table field a property is created in the corresponding “type R” class, which has the same name and associated type.
The creation of every property is accompanied by generating two methods, get and set, which allow reading and setting values of specified private fields of the class. The set method checks whether a new value is the same as the existing one; in that case the assignment is not carried out. During the “type R” class generation, special attention should be paid to the table fields which are foreign keys (connected with some record from the related table). In this case, apart from the standard field and property, an additional field (and the corresponding property) is created in the “type R” class, and its type is determined by the “type R” class of the related table. This way, apart from access to the foreign key value, we obtain direct access to the related object. Calling the get method for such a field (of the related object) is bound up with running on the database server a suitable stored procedure which takes data from the related record. The data are assigned to the object and the field becomes a reference to this object.
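The shape of a generated “type R” class can be sketched by a small code generator. The emitted C# below is a simplified guess at BACG's output (the class name suffix and field naming are assumptions, and the real generator adds foreign-key handling):

```python
# Hypothetical sketch of emitting a "type R" entity class as C# source text
# from a table schema. BACG's real templates are richer (foreign-key fields,
# lazy loading of related objects).

def emit_type_r(table, fields):
    """fields: list of (field_name, csharp_type) pairs."""
    lines = [f"public class {table}R", "{"]
    for name, typ in fields:
        lines += [
            f"    private {typ} _{name.lower()};",
            f"    public {typ} {name}",
            "    {",
            f"        get {{ return _{name.lower()}; }}",
            # the generated set method skips assignment when the value is unchanged
            f"        set {{ if (_{name.lower()} != value) _{name.lower()} = value; }}",
            "    }",
        ]
    lines.append("}")
    return "\n".join(lines)

src = emit_type_r("Customer", [("Id", "int"), ("Name", "string")])
```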
3.3.2 “Type C” classes
When the “type R” class generation is completed, the BACG system starts generating classes which are responsible for several database operations. Thus for each table selected in section 3.2 we can create a class which will enable us to add, delete, update, or read records from this table. We will call such classes “type C”.
Here are some exemplary functions contained in the “type C” class:
- the public method Add(...) to which an object of the “type R” class is passed; by calling a proper stored procedure on the database server it adds a new record related to this object to the table
- the public method Delete(...) which calls a proper stored procedure to delete a specified database record
- the public method GetAll(...) which returns a collection of “type R” class objects; this method calls a stored procedure retrieving all records from the selected table, and next it creates a proper object of the “type R” class from each record and adds it to the objects collection.
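The GetAll() pattern above can be sketched in a language-neutral way: run a stored procedure, then turn each returned row into an entity object. The stub below stands in for the ADO.NET database round trip; names are illustrative, not BACG's.

```python
# A sketch (in Python) of the GetAll() pattern: execute a stored procedure,
# then map each raw row to a "type R" entity object.

class CustomerR:
    """Stand-in for a generated "type R" entity class."""
    def __init__(self, id, name):
        self.Id, self.Name = id, name

def get_all(run_procedure, proc_name="GetAllCustomer"):
    rows = run_procedure(proc_name)           # rows of raw column values
    return [CustomerR(*row) for row in rows]  # one entity object per record

# stub standing in for the database round trip
fake_db = lambda name: [(1, "Ada"), (2, "Grace")]
customers = get_all(fake_db)
```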
3.4 Views generation
The next stage of the database business application elements generation by the BACG system is the creation of views, that is the elements of the View layer according to the MVVM design pattern. Views are created with the help of the WPF technology and XAML language.
A user of the BACG system picks a table for which he would like to create the views, and next he invokes a special window of the generator. In this window BACG displays all fields of the selected table together with their properties. Apart from that, it presents the tables related to the selected one. After the selection of a foreign key we see all fields in the related table. Moreover, the window allows us to choose the scenario of the view which is being created.
Currently the BACG system generates views according to two scenarios.
3.4.1 The first scenario – ObjectView
The BACG generator is capable of creating a view according to the first scenario. This scenario allows one to save, edit, and update the selected record in a table stored in the database – that is, an object of the “type R” class. For example, if we want to create a view allowing us to save a new record in a table, first we point to the fields which are to be filled. According to the WPF rules the BACG system, with the help of the XAML language, creates a special UserControl and for each selected field adds a label describing this field (usually a component of the Label type), and next adds an editable field (e.g. TextBox or DatePicker). The UserControl is associated with a suitable object of the ViewModel class, and each editable field is assigned to a suitable property defined in the class of this object (see section 3.5). Field association is performed through the Binding mechanism supported by the WPF technology. All labels and fields are placed in a special Grid component which controls the placement of these elements. However, the fields corresponding to the table's foreign keys are treated in a special way: for such fields a special ComboBox can be created which displays the related records from the related table, so instead of filling in the value of the foreign key one can choose from an expandable list.
The UserControl element contains also the button which invokes a save function from a suitable ViewModel class. This action is realized by the Command of the Button control property.
Created UserControl can be placed in any window of the business application.
3.4.2 The second scenario – ShowAllView
The BACG generator is able to create a view according to the second scenario. This scenario allows one to present all records from the indicated table, or all objects returned by the function GetAll() called for an object of the chosen “type C” class. Similarly to the previous case, each view is a separate UserControl. The DataGrid component (available in WPF) is responsible for displaying objects in a table. The whole UserControl is associated with a suitable object of a specially created ViewModel class (see section 3.5). Further, the DataGrid has been associated with a suitable collection of objects defined in this class. The Binding mechanism has also been used in this case. The type of each DataGrid column is generated depending on the type of the field displayed in this column. Instead of showing the foreign keys, suitable data from the related record/object are presented. The user decides which data are displayed.
3.5 Class ViewModel generation
When subsequent views are generated, the BACG system creates the classes of the ViewModel layer. The main task of these classes is providing data in a suitable form to the views or passing properly modified information from the view to a data source. These classes act as intermediaries between the View layer elements (created in the WPF technology) and the Model layer classes. In the same way as the views, which are generated by two scenarios, the ViewModel layer classes can be divided into two categories: the first includes the classes created for the views produced by the first scenario, the second one includes those for the views produced by the second scenario.
3.5.1 The ViewModel class generated for the views according to the first scenario
All classes of the ViewModel, generated for the views according to the first scenario, contain two fields. One field is an object of the suitable “type R” class, and the second field is an object of the related “type C” class. The “type R” and “type C” classes are those that have been created for the tables on which the view is to act. Additionally, the ViewModel layer classes contain a constructor which initializes their fields based on its own parameters. The key element of these classes is the Properties area which contains a collection of properties corresponding to the editable controls of the view. Of course, these controls are bound to these properties through the Binding mechanism. The get method obtains a suitable field of the above described “type R” class object while the set method assigns a value to it. The generator pays attention to the consistency of the types between the “type R” class properties and the ViewModel class properties.
Special properties have been generated for the ComboBox type elements of the views, which return a list of ComboBoxKeyAndValue type objects. These properties, in their get() method, call a specially generated function GetAllOnlySelectedFields_FieldsName() for a “type C” class object, which returns the required collection. Thus the generator has to complete the code of the suitable “type C” classes with new functions. In consequence, it is necessary to create new stored procedures which provide the selected data to the returned collection of ComboBoxKeyAndValue type objects.
The last elements of these classes are special properties (ICommand type) which in the get() method create a command of the DelegateCommand type by calling suitable methods for the “type C” class object. As an example we can mention here the property public ICommand SaveCommand which creates a new object of the DelegateCommand class in the method get() by calling the method Add() for the “type C” class object.
3.5.2 The ViewModel class generated for the views according to the second scenario
Classes of the ViewModel, generated for the views according to the second scenario, contain one field. It is an object of the related “type C” class. Moreover, they contain the initializing constructor of the class field, based on its own parameter. The key part of these classes is the Properties region where the property named Show is located, which in its get() method returns a list of objects of the specially defined ObjectNameForAllView type by calling a new function getForAllView() for an object of the “type C” class. The ObjectNameForAllView type is a separately generated class which is composed only of properties created on the fields that a user would like to see in the view being created. In this situation it is necessary to complete a suitable “type C” class with the getForAllView() method whose purpose is creating a collection of objects of the ObjectNameForAllView type and filling in the data in these objects. Of course, in this case also the retrieving of suitable data from a database is bound with calling the proper stored procedures kept in the SQL Server database management system.
The Show property is associated in the view with a DataGrid component and its subsequent columns with subsequent properties of the new ObjectNameForAllView type.
4 Summary
The BACG generator is an innovative system which automatically creates selected code elements of the database business applications. Its innovative character is realized through the following features:
- the system generates the optimal, professional source code using up-to-date advanced programming technologies and binds it with the business presentation layer (in contrast to the other generators which focus mainly on the forms creation without paying attention to the code quality),
- generated elements of the business application are compliant with the current Model View ViewModel (MVVM) design pattern,
- owing to the application of a readable design pattern a programmer retains full control over the automatically generated code, and when the need arises it can be easily modified and completed,
- each layer of the generated application is created independently, hence it is easy to modify one layer without interfering with the others (e.g. in the case when the user interface has to be changed),
- all generated views are created with painstaking attention to their business functionality and to their future use in a more elaborate generator,
- the BACG system uses up-to-date technologies for building the advanced Windows forms,
- the BACG system guarantees effective code responsible for the access to the database (the Model layer) by strict binding with the stored procedures located on the database server.
In the future development of the BACG system the author plans:
- to extend the module for view generating so that it will enable creating more professional forms meeting sophisticated business demands,
- to add a mechanism for extracting the interfaces and abstract classes in the process of code generation for specialized business functions,
- to create a mechanism enabling a cooperation with the object query language, operating on the objects of the Model layer which will be tightly bound with the generator of the suitable stored procedures kept in the database servers.
References
A Transformer-based Approach for Source Code Summarization
Wasi Uddin Ahmad
University of California, Los Angeles
wasiahmad@cs.ucla.edu
Saikat Chakraborty
Columbia University
saikatc@cs.columbia.edu
Baishakhi Ray
Columbia University
rayb@cs.columbia.edu
Kai-Wei Chang
University of California, Los Angeles
kwchang@cs.ucla.edu
Abstract
Generating a readable summary that describes the functionality of a program is known as source code summarization. In this task, learning code representation by modeling the pairwise relationship between code tokens to capture their long-range dependencies is crucial. To learn code representation for summarization, we explore the Transformer model that uses a self-attention mechanism and has shown to be effective in capturing long-range dependencies. In this work, we show that despite its simplicity, the approach outperforms the state-of-the-art techniques by a significant margin. We perform extensive analysis and ablation studies that reveal several important findings, e.g., the absolute encoding of source code tokens’ position hinders, while relative encoding significantly improves the summarization performance. We have made our code publicly available\(^1\) to facilitate future research.
1 Introduction
Program comprehension is an indispensable ingredient of software development and maintenance (Xia et al., 2018). A natural language summary of source code facilitates program comprehension by reducing developers’ efforts significantly (Sridhara et al., 2010). Source code summarization refers to the task of creating readable summaries that describe the functionality of a program.
With the advancement of deep learning and the availability of large-scale data through a vast number of open-source repositories, automatic source code summarizing has drawn attention from researchers. Most of the neural approaches generate source code summaries in a sequence-to-sequence fashion. One of the initial works Iyer et al. (2016) trained an embedding matrix to represent the individual code tokens and combine them with a Recurrent Neural Network (RNN) via an attention mechanism to generate a natural language summary. Subsequent works (Liang and Zhu, 2018; Hu et al., 2018a,b) adopted the traditional RNN-based sequence-to-sequence network (Sutskever et al., 2014) with attention mechanism (Luong et al., 2015) on different abstractions of code.
The RNN-based sequence models have two limitations in learning source code representations. First, they do not model the non-sequential structure of source code as they process the code tokens sequentially. Second, source code can be very long, and thus RNN-based models may fail to capture the long-range dependencies between code tokens. In contrast to the RNN-based models, Transformer (Vaswani et al., 2017), which leverages self-attention mechanism, can capture long-range dependencies. Transformers have been shown to perform well on many natural language generation tasks such as machine translation (Wang et al., 2019), text summarization (You et al., 2019), story generation (Fan et al., 2018), etc.
To learn the order of tokens in a sequence or to model the relationship between tokens, the Transformer requires positional encodings to be injected (Vaswani et al., 2017; Shaw et al., 2018; Shiv and Quirk, 2019). In this work, we show that, by modeling the pairwise relationship between source code tokens using relative position representation (Shaw et al., 2018), we can achieve significant improvements over learning sequence information of code tokens using absolute position representation (Vaswani et al., 2017).
We want to emphasize that our proposed approach is simple but effective as it outperforms the fancy and sophisticated state-of-the-art source code summarization techniques by a significant margin. We perform experiments on two well-studied datasets collected from GitHub, and the results endorse the effectiveness of our approach
\(^1\)https://github.com/wasiahmad/NeuralCodeSum
over the state-of-the-art solutions. In addition, we provide a detailed ablation study to quantify the effect of several design choices in the Transformer to deliver a strong baseline for future research.
2 Proposed Approach
We propose to use Transformer (Vaswani et al., 2017) to generate a natural language summary given a piece of source code. Both the code and the summary are sequences of tokens represented by sequences of vectors, $x = (x_1, \ldots, x_n)$ where $x_i \in \mathbb{R}^{d_{model}}$. In this section, we briefly describe the Transformer architecture (§ 2.1) and how to model the order of source code tokens or their pairwise relationship (§ 2.2) in Transformer.
2.1 Architecture
The Transformer consists of stacked multi-head attention and parameterized linear transformation layers for both the encoder and decoder. At each layer, the multi-head attention employs $h$ attention heads and performs the self-attention mechanism.
Self-Attention. We describe the self-attention mechanism based on Shaw et al. (2018). In each attention head, the input sequence of vectors, $x = (x_1, \ldots, x_n)$ where $x_i \in \mathbb{R}^{d_{model}}$ are transformed into the sequence of output vectors, $o = (o_1, \ldots, o_n)$ where $o_i \in \mathbb{R}^{d_k}$ as:
$$o_i = \sum_{j=1}^{n} \alpha_{ij} (x_j W^V),$$
$$e_{ij} = \frac{x_i W^Q (x_j W^K)^T}{\sqrt{d_k}},$$
where $\alpha_{ij} = \frac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}}$ and $W^Q, W^K, W^V \in \mathbb{R}^{d_{model} \times d_k}$ are the parameters that are unique per layer and attention head.
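The two equations above can be sketched as a minimal NumPy single attention head: scores $e_{ij} = (x_i W^Q)(x_j W^K)^T / \sqrt{d_k}$, a row-wise softmax giving $\alpha$, and outputs $o_i = \sum_j \alpha_{ij} (x_j W^V)$. This is a bare illustration of the math, not the paper's training code.

```python
# Minimal NumPy sketch of one self-attention head, matching the equations
# above term by term.
import numpy as np

def attention_head(x, Wq, Wk, Wv):
    d_k = Wk.shape[1]
    q, k, v = x @ Wq, x @ Wk, x @ Wv             # (n, d_k) each
    e = q @ k.T / np.sqrt(d_k)                   # scaled dot-product scores e_ij
    alpha = np.exp(e) / np.exp(e).sum(axis=-1, keepdims=True)  # softmax rows
    return alpha @ v                             # outputs o_i, shape (n, d_k)

rng = np.random.default_rng(0)
n, d_model, d_k = 4, 8, 2
x = rng.normal(size=(n, d_model))
o = attention_head(x, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
```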
Copy Attention. We incorporate the copying mechanism (See et al., 2017) in the Transformer to allow both generating words from vocabulary and copying from the input source code. We use an additional attention layer to learn the copy distribution on top of the decoder stack (Nishida et al., 2019). The copy attention enables the Transformer to copy rare tokens (e.g., function names, variable names) from source code and thus improves the summarization performance significantly (§ 3.2).
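The copy mechanism's effect on the output distribution can be sketched in the pointer-generator style of See et al. (2017): the vocabulary distribution and the copy attention over source positions are mixed with a generation probability, letting mass land on out-of-vocabulary source tokens. The exact gating used in the paper may differ; this is a hedged illustration.

```python
# Hedged sketch of a pointer-generator-style final word distribution:
# mix the decoder's vocabulary distribution with copy attention over the
# source, scattering copy mass onto each source token's (possibly OOV) id.
import numpy as np

def final_distribution(p_vocab, copy_attn, src_ids, p_gen, ext_vocab_size):
    out = np.zeros(ext_vocab_size)
    out[: len(p_vocab)] = p_gen * p_vocab        # generate from vocabulary
    for pos, tok in enumerate(src_ids):          # copy from the source code
        out[tok] += (1 - p_gen) * copy_attn[pos]
    return out

# toy example: vocab of 5 words; source token id 6 is OOV but copyable
p = final_distribution(
    p_vocab=np.array([0.5, 0.2, 0.1, 0.1, 0.1]),
    copy_attn=np.array([0.7, 0.3]), src_ids=[6, 1],
    p_gen=0.8, ext_vocab_size=7)
```

This is how rare identifiers such as function or variable names can receive probability even when absent from the output vocabulary.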
2.2 Position Representations
Now, we discuss how to learn the order of source code tokens or model their pairwise relationship.
| Dataset | Java | Python |
| --- | --- | --- |
| Train | 69,708 | 55,538 |
| Validation | 8,714 | 18,505 |
| Test | 8,714 | 18,502 |
| Unique tokens in code | 66,650 | 307,596 |
| Unique tokens in summary | 46,895 | 56,189 |
| Avg. tokens in code | 120.16 | 47.98 |
| Avg. tokens in summary | 17.73 | 9.48 |
Table 1: Statistics of the experiment datasets. We thank the authors of Wei et al. (2019) for kindly sharing the Python dataset splits. The Java dataset splits are publicly available.
Encoding absolute position. To allow the Transformer to utilize the order information of source code tokens, we train an embedding matrix $W^P$ that learns to encode tokens’ absolute positions into vectors of dimension $d_{model}$. However, we show that capturing the order of code tokens is not helpful to learn source code representations and leads to poor summarization performance (§ 3.2).
It is important to note that we train another embedding matrix $W^P_d$ that learns to encode the absolute positions of summary tokens.
Encoding pairwise relationship. The semantic representation of a code does not rely on the absolute positions of its tokens. Instead, their mutual interactions influence the meaning of the source code. For instance, semantic meaning of the expressions $a+b$ and $b+a$ are the same.
To encode the pairwise relationships between input elements, Shaw et al. (2018) extended the self-attention mechanism as follows.
$$o_i = \sum_{j=1}^{n} \alpha_{ij} (x_j W^V + a_{ij}^V),$$
$$e_{ij} = \frac{x_i W^Q (x_j W^K + a_{ij}^K)^T}{\sqrt{d_k}},$$
where $a_{ij}^V$ and $a_{ij}^K$ are relative positional representations for the two positions $i$ and $j$. Shaw et al. (2018) suggested clipping the relative position to a maximum absolute value of $k$, as they hypothesize that precise relative position information is not useful beyond a certain distance.
$$a_{ij}^K = w^K_{\text{clip}(j-i,k)}, \quad a_{ij}^V = w^V_{\text{clip}(j-i,k)},$$
$$clip(x, k) = \max(-k, \min(k, x)).$$
Hence, we learn $2k + 1$ relative position representations: $(w_{-k}^K, \ldots, w_k^K)$ and $(w_{-k}^V, \ldots, w_k^V)$.
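The clipping scheme above can be made concrete with a small sketch: every pair $(i, j)$ selects one of the $2k + 1$ learned vectors through $\text{clip}(j - i, k)$, shifted by $+k$ to give a non-negative embedding index.

```python
# Sketch of directed relative-position indexing with clipping: entry [i][j]
# indexes into the learned vectors (w_{-k}, ..., w_k), shifted by +k.

def clip(x, k):
    return max(-k, min(k, x))

def relative_index_table(n, k):
    return [[clip(j - i, k) + k for j in range(n)] for i in range(n)]

table = relative_index_table(n=5, k=2)
# row 0: raw distances 0..4, clipped to [-2, 2], then shifted by +2
```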
In this work, we do not study alternative ways of learning position representation for the summary tokens.
Table 2: Comparison of our proposed approach with the baseline methods. The results of the baseline methods are directly reported from (Wei et al., 2019). The “Base Model” refers to the vanilla Transformer (uses absolute position representations) and the “Full Model” uses relative position representations and includes copy attention.
In this work, we study an alternative of the relative position representations that ignores the directional information (Ahmad et al., 2019). In other words, the information whether the \( j \)'th token is on the left or right of the \( i \)'th token is ignored.
\[
a^K_{ij} = w^K_{\text{clip}(|j-i|, k)}, \quad a^V_{ij} = w^V_{\text{clip}(|j-i|, k)},
\]
\[
\text{clip}(x, k) = \min(|x|, k).
\]
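The undirected variant reduces to a one-line index function: the absolute distance $|j - i|$ is clipped, so tokens the same distance to the left and right share a representation, and only $k + 1$ vectors are learned instead of $2k + 1$.

```python
# Sketch of the undirected relative-position index (Ahmad et al., 2019):
# left and right neighbors at the same distance map to the same vector.

def undirected_index(i, j, k):
    return min(abs(j - i), k)

# tokens two steps left and two steps right of position 4 share an index
left, right = undirected_index(4, 2, k=8), undirected_index(4, 6, k=8)
```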
3 Experiment
3.1 Setup
Datasets and Pre-processing. We conduct our experiments on a Java dataset (Hu et al., 2018b) and a Python dataset (Wan et al., 2018). The statistics of the two datasets are shown in Table 1. In addition to the pre-processing steps followed by Wei et al. (2019), we split source code tokens of the form CamelCase and snake_case to respective sub-tokens\(^3\). We show that such a split of code tokens improves the summarization performance.
Metrics. We evaluate the source code summarization performance using three metrics, BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004).
Baselines. We compare our Transformer-based source code summarization approach with five baseline methods reported in Wei et al. (2019) and their proposed Dual model. We refer the readers to (Wei et al., 2019) for the details about the hyperparameter of all the baseline methods.
Hyper-parameters. We follow Wei et al. (2019) to set the maximum lengths and vocabulary sizes for code and summaries in both the datasets. We train the Transformer models using Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of \( 10^{-4} \). We set the mini-batch size and dropout rate to 32 and 0.2, respectively. We train the Transformer models for a maximum of 200 epochs and perform early stop if the validation performance does not improve for 20 consecutive iterations. We use a beam search during inference and set the beam size to 4. Detailed hyper-parameter settings can be found in Appendix A.
3.2 Results and Analysis
Overall results. The overall results of our proposed model and baselines are presented in Table 2. The result shows that the Base model outperforms the baselines (except for ROUGE-L in Java), while the Full model improves the performance further.\(^4\) We ran the Base model on the original datasets (without splitting the CamelCase and snake_case code tokens) and observed that the performance drops by 0.60, 0.72 BLEU and 1.66, 2.09 ROUGE-L points for the Java and Python datasets respectively. We provide a few qualitative examples in Appendix C showing the usefulness of the Full model over the Base model.
Unlike the baseline approaches, our proposed model employs the copy attention mechanism. As shown in Table 2, the copy attention improves the performance 0.44 and 0.88 BLEU points for the Java and Python datasets respectively.
Impact of position representation. We perform an ablation study to investigate the benefits of code and summary tokenization.
\(^3\)The CamelCase and snake_case tokenization reduces the vocabulary significantly. For example, the number of unique tokens in Java source code reduced from 292,626 to 66,650.
\(^4\)We observe a more significant gain on the Python dataset and a detailed discussion on it is provided in Appendix B.
of encoding the absolute position of code tokens or modeling their pairwise relationship for the source code summarization task, and the results are presented in Table 3 and 4. Table 3 demonstrates that learning the absolute position of code tokens are not effective as we can see it slightly hurts the performance compared to when it is excluded. This empirical finding corroborates the design choice of Iyer et al. (2016), where they did not use the sequence information of the source code tokens.
On the other hand, we observe that learning the pairwise relationship between source code tokens via relative position representations helps, as Table 4 demonstrates higher performance. We vary the clipping distance, $k$, and consider ignoring the directional information while modeling the pairwise relationship. The empirical results suggest that the directional information is indeed important, and a clipping distance of $k = 32$ gives the best overall scores.
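The relative position representations discussed above clip the pairwise offset $j - i$ to a maximum distance $k$; the index computation (the embedding lookup is omitted) can be sketched as:

```python
def relative_position_ids(n, k, directional=True):
    """n x n matrix of clipped pairwise offsets, shifted to be
    non-negative so they can index an embedding table
    (of size 2k + 1 if directional, else k + 1)."""
    ids = []
    for i in range(n):
        row = []
        for j in range(n):
            d = max(-k, min(k, j - i))   # clip offset to [-k, k]
            if directional:
                row.append(d + k)        # shift to [0, 2k]
            else:
                row.append(abs(d))       # fold to [0, k], dropping direction
        ids.append(row)
    return ids
```

Dropping the sign (`directional=False`) conflates tokens $k$ positions to the left and right, which is consistent with the drop in scores observed in Table 4 when directional information is ignored.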
Varying model size and number of layers. We perform an ablation study by varying $d_{model}$ and $l$, and the results are presented in Table 5. In our experiments, we observe that a deeper model (more layers) performs better than a wider model (larger $d_{model}$). Intuitively, the source code summarization task depends more on semantic information than on syntactic information, and thus a deeper model helps.
Use of Abstract Syntax Tree (AST). We perform additional experiments to employ the abstract syntax tree (AST) structure of source code in the Transformer. We follow Hu et al. (2018a) and use the Structure-based Traversal (SBT) technique to transform the AST structure into a linear sequence. We keep our proposed Transformer architecture intact, except in the copy attention mechanism, where we use a mask to block copying the non-terminal tokens from the input sequence. It is important to note that, with and without AST, the average length of the input code sequences is 172 and 120, respectively. Since the complexity of the Transformer is $O(n^2 \times d)$, where $n$ is the input sequence length, the use of AST comes at an additional cost. Our experimental findings suggest that the incorporation of AST information in the Transformer does not result in an improvement in source code summarization. We hypothesize that the exploitation of code structure information in summarization has limited advantage, and it diminishes as the Transformer learns it implicitly with relative position representations.
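The SBT linearization of Hu et al. (2018a) wraps every subtree in parentheses labeled with its root node; a minimal sketch on a nested-tuple AST (a hypothetical tree representation, not their implementation):

```python
def sbt(node):
    """Structure-based traversal of an AST given as (label, [children]):
    emit '(' label ... ')' label, recursing over the children."""
    label, children = node
    seq = ['(', label]
    for child in children:
        seq.extend(sbt(child))
    seq.extend([')', label])
    return seq
```

Because every node contributes four bracket/label tokens or more, the linearized sequence is substantially longer than the raw token stream, which is the cost noted above.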
Qualitative analysis. We provide a couple of examples in Table 6 to demonstrate the usefulness of our proposed approach qualitatively (more examples are provided in Tables 9 and 10 in the Appendix). The qualitative analysis reveals that, in comparison to the vanilla Transformer model, the copy-enabled model generates shorter summaries.
---
**Table 3:** Ablation study on absolute positional representations using the “Base Model” on the Java dataset.
<table>
<thead>
<tr>
<th>Source</th>
<th>Target</th>
<th>BLEU</th>
<th>METEOR</th>
<th>ROUGE-L</th>
</tr>
</thead>
<tbody>
<tr>
<td>✓</td>
<td>✓</td>
<td>43.41</td>
<td>25.91</td>
<td>52.71</td>
</tr>
<tr>
<td>✓</td>
<td>✗</td>
<td>42.34</td>
<td>24.74</td>
<td>50.96</td>
</tr>
<tr>
<td>✗</td>
<td>✓</td>
<td><strong>43.59</strong></td>
<td><strong>26.00</strong></td>
<td><strong>52.88</strong></td>
</tr>
<tr>
<td>✗</td>
<td>✗</td>
<td>41.85</td>
<td>24.32</td>
<td>50.87</td>
</tr>
</tbody>
</table>
**Table 4:** Ablation study on relative positional representations (in encoding) for the Transformer. While 8, 16, and 32 represent a fixed relative distance for all the layers, $2^i$ (where $i = 1, \ldots, L$; $L = 6$) represents a layer-wise relative distance for the Transformer.
<table>
<thead>
<tr>
<th>$k$</th>
<th>Directional</th>
<th>BLEU</th>
<th>METEOR</th>
<th>ROUGE-L</th>
</tr>
</thead>
<tbody>
<tr>
<td>8</td>
<td>✓</td>
<td>44.22</td>
<td>26.35</td>
<td>53.86</td>
</tr>
<tr>
<td></td>
<td>✗</td>
<td>42.61</td>
<td>24.67</td>
<td>51.10</td>
</tr>
<tr>
<td>16</td>
<td>✓</td>
<td>44.14</td>
<td>26.34</td>
<td>53.95</td>
</tr>
<tr>
<td></td>
<td>✗</td>
<td>44.06</td>
<td>26.31</td>
<td>53.51</td>
</tr>
<tr>
<td>32</td>
<td>✓</td>
<td><strong>44.55</strong></td>
<td><strong>26.66</strong></td>
<td><strong>54.30</strong></td>
</tr>
<tr>
<td></td>
<td>✗</td>
<td>43.95</td>
<td>26.28</td>
<td>53.24</td>
</tr>
<tr>
<td>$2^5$</td>
<td>✓</td>
<td>43.37</td>
<td>26.58</td>
<td>53.96</td>
</tr>
<tr>
<td></td>
<td>✗</td>
<td>43.58</td>
<td>25.95</td>
<td>52.73</td>
</tr>
</tbody>
</table>
**Table 5:** Ablation study on the hidden size and number of layers for the “Base Model” on the Java dataset. We use $d_{model} = H, d_{ff} = 4H, h = 8$, and $d_k = d_v = 64$ in all settings. We set $l = 6$ and $d_{model} = 512$ while varying $d_{model}$ and $l$ respectively.
<table>
<thead>
<tr>
<th></th>
<th>#Param.</th>
<th>BLEU</th>
<th>METEOR</th>
<th>ROUGE-L</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5">Varying the number of layers ($l$)</td>
</tr>
<tr>
<td>3</td>
<td>22.1</td>
<td>41.26</td>
<td>23.54</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>44.1</td>
<td>43.41</td>
<td>25.91</td>
<td>52.71</td>
</tr>
<tr>
<td>9</td>
<td>66.2</td>
<td>45.03</td>
<td>27.21</td>
<td></td>
</tr>
<tr>
<td>12</td>
<td>88.3</td>
<td><strong>45.56</strong></td>
<td><strong>27.64</strong></td>
<td></td>
</tr>
<tr>
<td colspan="5">Varying the model size ($d_{model}$)</td>
</tr>
<tr>
<td>256</td>
<td>15.8</td>
<td>38.21</td>
<td>21.54</td>
<td>48.63</td>
</tr>
<tr>
<td>384</td>
<td>28.4</td>
<td>41.71</td>
<td>24.51</td>
<td>51.42</td>
</tr>
<tr>
<td>512</td>
<td>44.4</td>
<td>43.41</td>
<td>25.91</td>
<td>52.71</td>
</tr>
<tr>
<td>768</td>
<td>85.1</td>
<td><strong>45.29</strong></td>
<td><strong>27.56</strong></td>
<td><strong>54.39</strong></td>
</tr>
</tbody>
</table>
---
\(^5\)Considering the model complexity, we do not increase the model size or the number of layers further.
```java
public static String selectText(XPathExpression expr, Node context) {
    try {
        return (String) expr.evaluate(context, XPathConstants.STRING);
    } catch (XPathExpressionException e) {
        throw new XmlException(e);
    }
}
```
**Base Model:** evaluates the xpath expression to a xpath expression.
**Full Model w/o Relative Position:** evaluates the xpath expression.
**Full Model w/o Copy Attention:** evaluates the xpath expression as a single element.
**Full Model:** evaluates the xpath expression as a text string.
**Human Written:** evaluates the xpath expression as text.
```python
def get_hosting_service(name):
    try:
        return hosting_service_registry.get(u'hosting service id', name)
    except ItemLookupError:
        return None
```
**Base Model:** returns the color limits from the current service name.
**Full Model w/o Relative Position:** return the hosting service.
**Full Model w/o Copy Attention:** return the name of the service.
**Full Model:** return the hosting service name.
**Human Written:** return the hosting service with the given name.
Table 6: Qualitative examples of different models’ performance on the Java and Python datasets.
In addition, the copy-enabled model produces summaries with more accurate keywords. Besides, we observe that in a copy-enabled model, frequent tokens in the code snippet get a higher copy probability when relative position representations are used rather than absolute position representations. We suspect this is due to the flexibility of learning the relations between code tokens without relying on their absolute positions.
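The copy probabilities discussed above arise from mixing the decoder's vocabulary distribution with its attention weights over source tokens; a schematic version (with a hypothetical generation gate `g`, not the exact parameterization) looks like:

```python
def copy_augmented_probs(p_vocab, attn, src_tokens, g):
    """Final word distribution: g * P_vocab(w) plus (1 - g) times the
    attention mass on all source positions holding w."""
    out = {w: g * p for w, p in p_vocab.items()}
    for a, tok in zip(attn, src_tokens):
        out[tok] = out.get(tok, 0.0) + (1 - g) * a
    return out
```

Note that a token appearing at several source positions accumulates attention mass across all of them, which is why frequent code tokens end up with a higher copy probability.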
4 Related Work
Most of the neural source code summarization approaches frame the problem as a sequence generation task and use recurrent encoder-decoder networks with attention mechanisms as the fundamental building blocks (Iyer et al., 2016; Liang and Zhu, 2018; Hu et al., 2018a,b). Different from these works, Allamanis et al. (2016) proposed a convolutional attention model to summarize the source codes into short, name-like summaries.
Recent works in code summarization utilize structural information of a program in the form of Abstract Syntax Tree (AST) that can be encoded using tree structure encoders such as Tree-LSTM (Shido et al., 2019), Tree-Transformer (Harer et al., 2019), and Graph Neural Network (LeClair et al., 2020). In contrast, Hu et al. (2018a) proposed a structure based traversal (SBT) method to flatten the AST into a sequence and showed improvement over the AST based methods. Later, LeClair et al. (2019) used the SBT method and decoupled the code structure from the code tokens to learn better structure representation.
Among other noteworthy works, API usage information (Hu et al., 2018b), reinforcement learning (Wan et al., 2018), dual learning (Wei et al., 2019), and retrieval-based techniques (Zhang et al., 2020) have been leveraged to further enhance code summarization models. We could enhance a Transformer with these previously proposed techniques; however, in this work, we limit ourselves to studying different design choices for a Transformer without breaking its core architectural design philosophy.
5 Conclusion
This paper empirically investigates the advantage of using the Transformer model for the source code summarization task. We demonstrate that the Transformer with relative position representations and copy attention outperforms state-of-the-art approaches by a large margin. In our future work, we want to study the effective incorporation of code structure into the Transformer and apply the techniques in other software engineering sequence generation tasks (e.g., commit message generation for source code changes).
Acknowledgments
This work was supported in part by National Science Foundation Grant OAC 1920462, CCF 1845893, CCF 1822965, CNS 1842456.
References
A Hyper-Parameters
Table 7 summarizes the hyper-parameters that we used in our experiments.
<table>
<thead>
<tr>
<th>Hyper-parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Embedding</td>
<td></td>
</tr>
<tr>
<td>$k$</td>
<td>16</td>
</tr>
<tr>
<td>Model</td>
<td></td>
</tr>
<tr>
<td>$l$</td>
<td>6</td>
</tr>
<tr>
<td>$h$</td>
<td>8</td>
</tr>
<tr>
<td>$d_{\text{model}}$</td>
<td>512</td>
</tr>
<tr>
<td>$d_k, d_v$</td>
<td>64</td>
</tr>
<tr>
<td>$d_{ff}$</td>
<td>2048</td>
</tr>
<tr>
<td>Training</td>
<td></td>
</tr>
<tr>
<td>dropout</td>
<td>0.2</td>
</tr>
<tr>
<td>optimizer</td>
<td>Adam</td>
</tr>
<tr>
<td>learning rate</td>
<td>0.0001</td>
</tr>
<tr>
<td>batch size</td>
<td>32</td>
</tr>
<tr>
<td>Testing</td>
<td></td>
</tr>
<tr>
<td>beam size</td>
<td>4</td>
</tr>
</tbody>
</table>
Table 7: Hyper-parameters in our experiments. $l$ and $h$ indicate the number of layers and heads in the Transformer, respectively. $k$ refers to the clipping distance in relative position representations in the Transformer.
B Recurrent Encoder-Decoder vs. Transformer on Python Dataset
While conducting our study using the Transformer on the Python dataset, we observed a significant gain over the state-of-the-art methods as reported in Wei et al. (2019). However, our initial experiments on this dataset using recurrent sequence-to-sequence models also demonstrated higher performance compared to the results reported in Wei et al. (2019). We suspect that the lower performance reported there is due to improperly tuned hyper-parameters. So, for the sake of fairness and to investigate the true advantage of the Transformer, we present a comparison between the recurrent Seq2seq model and the Transformer in Table 8 using our implementation.\(^6\)
<table>
<thead>
<tr>
<th>Models</th>
<th>BLEU</th>
<th>METEOR</th>
<th>ROUGE-L</th>
</tr>
</thead>
<tbody>
<tr>
<td>Seq2seq</td>
<td>30.57</td>
<td>17.86</td>
<td>43.64</td>
</tr>
<tr>
<td>Seq2seq w/ copy attention</td>
<td>29.08</td>
<td>17.12</td>
<td>42.97</td>
</tr>
<tr>
<td>Transformer</td>
<td>31.08</td>
<td>18.57</td>
<td>44.31</td>
</tr>
<tr>
<td>Transformer w/ copy attention</td>
<td><strong>31.38</strong></td>
<td><strong>18.69</strong></td>
<td><strong>44.68</strong></td>
</tr>
</tbody>
</table>
Table 8: Comparison between recurrent sequence-to-sequence (Seq2seq) model and Transformer with copy attention mechanism on the Python dataset.
We can see from Table 8 that the performance of the recurrent Seq2seq model is much better than the results reported in prior works. However, to our surprise, the copy attention mechanism does not result in an improvement for the recurrent Seq2seq model. When we looked into the training perplexity and the validation performance, we also observed lower performance in comparison to the base recurrent Seq2seq model. In contrast, our proposed Transformer-based approach outperforms the recurrent Seq2seq models by a large margin, showing its effectiveness for source code summarization.
\(^6\)Our implementation is based on Open-NMT (Klein et al., 2017) and PyTorch 1.3.
C Qualitative Examples
```java
public static terminal find(String with_name) {
    if (with_name == null)
        return null;
    else
        return (terminal) all.get(with_name);
}
```
**Base Model:** lookup a non terminal by name string
**Full Model w/o Relative Position:** lookup a terminal terminal by name string
**Full Model w/o Copy Attention:** lookup a non terminal by name string
**Full Model:** lookup a terminal by name
**Human Written:** lookup a terminal by name string.
```java
public static String selectText(XPathExpression expr, Node context) {
try {
return (String) expr.evaluate(context, XPathConstants.STRING);
} catch (XPathExpressionException e) {
throw new XmlException(e);
}
}
```
**Base Model:** evaluates the xpath expression to a xpath expression
**Full Model w/o Relative Position:** evaluates the xpath expression
**Full Model w/o Copy Attention:** evaluates the xpath expression as a single element
**Full Model:** evaluates the xpath expression as a text string
**Human Written:** evaluates the xpath expression as text.
```java
public CTaggingPanel(final JFrame parent, final ZyGraph graph, final ITagManager manager) {
    super(new BorderLayout());
    mtagsTree = new CTagsTree(parent, graph, manager);
    final JScrollPane pane = new JScrollPane(mtagsTree);
    pane.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED);
    pane.setHorizontalScrollBarPolicy(ScrollPaneConstants.HORIZONTAL_SCROLLBAR_AS_NEEDED);
    add(pane);
    setBorder(new TitledBorder(new LineBorder(Color.LIGHT_GRAY, NUM, BOOL), STRING));
    setDoubleBuffered(BOOL);
}
```
**Base Model:** creates a new dnetscapessslservername dialog.
**Full Model w/o Relative Position:** creates a new settings dialog.
**Full Model w/o Copy Attention:** creates a new toolbar panel.
**Full Model:** creates a new api panel object.
**Human Written:** creates a new panel object.
```java
public DSignCsr(JFrame parent, PKCS10CertificationRequest pkcs10Csr, File csrFile, PrivateKey signPrivateKey, KeyPairType signKeyPairType, X509Certificate verificationCertificate, Provider provider) throws CryptoException {
    super(parent, Dialog.ModalityType.DOCUMENT_MODAL);
    this.pkcs10Csr = pkcs10Csr;
    this.csrFile = csrFile;
    this.signPrivateKey = signPrivateKey;
    this.signKeyPairType = signKeyPairType;
    this.verificationCertificate = verificationCertificate;
    this.provider = provider;
    setTitle(res.getString(STRING));
    initComponents();
}
```
**Base Model:** creates a new dsigncsr dialog for a spkac formatted csr.
**Full Model w/o Relative Position:** creates a new signer dialog for a pkcs #10 formatted.
**Full Model w/o Copy Attention:** creates a new dsigncsr dialog for a spkac formatted csr.
**Full Model:** creates a new dsigncsr dialog for a pkcs #10 formatted csr.
**Human Written:** creates a new dsigncsr dialog for a pkcs #10 formatted csr.
Table 9: Qualitative examples of different models’ performance on the Java dataset.
```python
def get_hosting_service(name):
    try:
        return hosting_service_registry.get(u'hosting service id', name)
    except ItemLookupError:
        return None
```
**Base Model:** returns the color limits from the current service name.
**Full Model w/o Relative Position:** return the hosting service.
**Full Model w/o Copy Attention:** return the name of the service.
**Full Model:** return the hosting service name.
**Human Written:** return the hosting service with the given name.
```python
def save_pickle(obj, fname):
    with get_file_obj(fname, 'wb') as fout:
        cPickle.dump(obj, fout, protocol=-1)
```
**Base Model:** pickle object obj to file fname.
**Full Model w/o Relative Position:** save object to file.
**Full Model w/o Copy Attention:** raw data: object obj to file fname.
**Full Model:** save object to file fname.
**Human Written:** save the object to file via pickling.
```python
def get_temp_dir():
    temp = get_environ_variable('TMP')
    if temp is None:
        temp = get_environ_variable('TEMP')
    if temp is None or '' in temp and os.name == 'nt':
        temp = 'C:\temp'
    if temp is None or '' in temp and os.name == 'posix':
        temp = '/tmp'
    return temp
```
**Base Model:** returns the name of the sample environment variable.
**Full Model w/o Relative Position:** returns the next temporary directory of a file.
**Full Model w/o Copy Attention:** get the directory related to store the stubbed.
**Full Model:** return a temporary filename.
**Human Written:** returns a temporary directory.
```python
def get_exploration_memcache_key(exploration_id, version=None):
    if version:
        return 'exploration-version %s %s' % (exploration_id, version)
    else:
        return 'exploration %s' % exploration_id
```
**Base Model:** returns the key for an instance for the project.
**Full Model w/o Relative Position:** returns a memcache key for the given version.
**Full Model w/o Copy Attention:** returns a memcache for the exploration id.
**Full Model:** returns a memcache key for the specified exploration.
**Human Written:** returns a memcache key for an exploration.
```python
def get_svc_avail_path():
    return AVAIL_SVR_DIRS
```
**Base Model:** get the actual path.
**Full Model w/o Relative Position:** returns a list of services.
**Full Model w/o Copy Attention:** return a list of services that are available.
**Full Model:** returns a list of available services.
**Human Written:** return list of paths that may contain available services.
```python
def volume_attach(provider, names, **kwargs):
    client.get_client_info()
    client.extra_action(provider=provider, names=names, action='volume attach', **kwargs)
    return info
```
**Base Model:** attempt to attach volume.
**Full Model w/o Relative Position:** attach volume cli example:.
**Full Model w/o Copy Attention:** attach volume cli example:.
**Full Model:** attach volume information cli example:.
**Human Written:** attach volume to a server cli example:.
Table 10: Qualitative examples of different models’ performance on the Python dataset.
Decentralized Data Flows in Algebraic Service Compositions for the Scalability of IoT Systems
Damian Arellanes and Kung-Kiu Lau
School of Computer Science
The University of Manchester
Manchester M13 9PL, United Kingdom
{damian.arellanesmolina, kung-kiu.lau}@manchester.ac.uk
Abstract—With the advent of the Internet of Things, scalability becomes a significant concern due to the huge amounts of data involved in IoT systems. A centralized data exchange is not desirable as it leads to a single performance bottleneck. Although a distributed exchange removes the central bottleneck, it has network performance issues as data passes among multiple coordinators. A decentralized data flow exchange is the only solution that fully enables the realization of efficient IoT systems, as there is no single performance bottleneck and the network overhead is minimized. In this paper, we present an approach that leverages the algebraic semantics of DX-MAN for realizing decentralized data flows in IoT systems. As data flows are not mixed with control flows in algebraic service compositions, we developed an algorithm that smoothly analyzes data dependencies for the generation of a direct relationship between data consumers and data producers. The result prevents passing data alongside control among multiple coordinators because data is only read and written on a data space. We validate our approach using the Blockchain as the data space and conduct experiments to evaluate the scalability of our approach. Our results show that our approach scales well with the size of IoT systems.
Index Terms—Internet of Things, decentralized data flows, Blockchain, DX-MAN, exogenous connectors, scalability, separation between control and data, algebraic service composition
I. INTRODUCTION
The Internet of Things (IoT) envisions a world where everything will be interconnected through distributed services. As new challenges are forthcoming, this paradigm requires a shift in our way of building software systems. With the rapid advancement in hardware, the number of connected things is considerably increasing to the extent that scalability becomes a significant concern due to the huge amount of data involved in IoT systems. Thus, IoT services shall exchange data over the Internet using efficient approaches.
Although a centralized data exchange approach has been successful in enterprise systems, it will easily cause a bottleneck in IoT systems, which potentially generate huge amounts of data continuously. To avoid the bottleneck, a distributed approach can be used to distribute the load of data over multiple coordinators. However, this would introduce unnecessary network overhead as data is passed among many loci of control.
A decentralized data exchange approach is the most efficient solution to tackle the imminent scale of IoT systems, as it achieves better response time and throughput by minimizing network hops [1], [2], [3], [4], [5]. However, exchanging data among loosely-coupled IoT services is challenging, especially in resource-constrained environments where things have poor network connections and low disk space.
Moreover, constructing data dependency graphs is not trivial when control flow and data flow are tightly coupled. The separation of such concerns would allow a separate reasoning, monitoring, maintenance and evolution of both control and data [6]. Consequently, an efficient data exchange approach can be realized without considering control flow. Thus, the number of messages transmitted over the Internet can be reduced considerably.
This paper proposes an approach that leverages the algebraic semantics of DX-MAN [7], [8] for the realization of decentralized data flows in IoT systems. The algebraic semantics of DX-MAN allows a well-defined structure of data flows which are smoothly analyzed by an algorithm, in order to form a direct relationship between data consumers and data producers. For this analysis, the algorithm particularly takes advantage of the fact that DX-MAN separates control flow and data flow.
The rest of the paper is organized as follows. Sect. II introduces the composition semantics of the DX-MAN model. Sect. III describes its data flow dimension. Sect. IV presents the algorithm that analyzes data flows. Sect. V presents the implementation of our approach. Sect. VI outlines a quantitative evaluation of our approach. Finally, we present the related work in Sect. VII and the conclusions in Sect. VIII.
II. DX-MAN MODEL
DX-MAN is an algebraic model for IoT systems where services and exogenous connectors are first-class entities. An exogenous connector is a variability operator that defines multiple workflows with explicit control flow, while a DX-MAN service is a distributed software unit that exposes a set of operations through a well-defined interface.
An atomic service provides a set of operations and it is formed by connecting an invocation connector with a computation unit. A computation unit represents an actual service implementation (e.g., a RESTful Microservice or a WS-* service) and it is not allowed to call other computation units. The red arrows in Fig. 1(a) show that, as a consequence of the algebraic semantics, the interface of an atomic service has all the operations in the computation unit. An invocation
A DX-MAN operation is a set of input parameters and output parameters. An input parameter defines the required data to perform a computation, while an output parameter is the resulting data from a specific computation. Although exogenous connectors do not provide any operation (because they do not perform any computation), some of them require data. In particular, selector connectors, looping adapters and guard adapters require input values to evaluate boolean conditions. Connectors do not have any parameters by default since designers define the parameters they require when choosing a workflow. Workflow selection is out of the scope of this paper, but we refer the reader to our previous paper on workflow variability [8].
In addition to the operations created on algebraic composition, custom operations can be defined in composite services. This is particularly useful when designers want to hide the operations created during algebraic composition or when designers want to create a unified interface for a composite service.
A data connector defines explicit data flow by connecting a source parameter with a destination parameter. Fig. 2 shows that an algebraic data connector is automatically created during composition and it is available for all the workflows defined by a composite. In particular, an algebraic data connector connects two parameters vertically, i.e., in a bottom-up way for outputs or in a top-down fashion for inputs. The top-down approach connects a parameter of a composite service operation to a parameter of a sub-service operation, whilst the bottom-up approach means the other way round. Fig. 3 shows the data connection rules, where we can see that the algebraic data connectors are defined in four different ways.
A custom data connector is manually created by a designer for only one workflow. Custom data connectors connect two parameters either vertically or horizontally. A horizontal approach connects the parameters of two sub-service operations, or an operation parameter with an exogenous input. A quick glance at Fig. 3 reveals that a designer is allowed to connect parameters in 16 different ways.
A designer uses custom data connectors to define data flows for a particular workflow. Currently, DX-MAN supports the most common patterns: sequencing and map-reduce. For the sequencing pattern, the parameters of two different operations are horizontally connected. Fig. 4 shows an example of this pattern, where operation OpB requires data from operation OpA. In particular, a custom data connector links the output A0 with the input B0, while another custom data connector connects the output A1 with the input B1. To improve readability, we ignore algebraic data connectors.
A data processor is particularly useful when data pre-processing needs to be done before executing an operation. It waits until all input values have been received, then performs some computation and returns transformed data in the form of outputs. A mapper executes a user-defined function on each input value received. A reducer takes the result from a mapper and executes a user-defined reduce function on inputs. A reducer can also be used in isolation to perform straightforward computation such as combining data into a list. Fig. 5 shows...
an example of the map-reduce pattern, where operation \( \text{opB} \) requires the pre-processing of data generated by operation \( \text{opA} \). In particular, two custom data connectors link the input \( A0 \) and the output \( A1 \) with the inputs of the mapper. The output of the mapper is connected to the input of the reducer and, similarly, the output of the reducer is connected to the input \( B0 \). Please note that \( A0 \) can only be connected from the composite service operation, according to the rules shown in Fig. 3.
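As a rough illustration of the mapper/reducer semantics described above (a sketch with hypothetical names, not the DX-MAN API), the pre-processing step can be expressed as an ordinary map followed by a fold:

```python
# Hypothetical sketch of DX-MAN data processors: a mapper applies a
# user-defined function to each input value it receives, and a reducer
# folds the mapped values into a single output (e.g., a list).

def mapper(map_fn, inputs):
    """Apply a user-defined function to every input value."""
    return [map_fn(value) for value in inputs]

def reducer(reduce_fn, initial, inputs):
    """Fold the (possibly mapped) input values into one output."""
    result = initial
    for value in inputs:
        result = reduce_fn(result, value)
    return result

# Pre-process the outputs of opA (here the illustrative values 3 and 4)
# before opB consumes them, as in the map-reduce pattern of Fig. 5:
mapped = mapper(lambda v: v * 2, [3, 4])            # user-defined map function
b0 = reducer(lambda acc, v: acc + [v], [], mapped)  # combine into a list
print(b0)  # [6, 8]
```

The reducer used in isolation (skipping the mapper) corresponds to the "straightforward computation" case mentioned above, such as combining values into a list.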
**IV. ANALYSIS OF DATA CONNECTORS**
Algebraic service composition and the separation of concerns are key enablers for the realization of decentralized data flows. The separation between control and data allows separate reasoning about these dimensions. In particular, exogenous connectors provide a hierarchical control flow structure that is completely separated from the data flow structure enabled by data connectors. The data connections in a composite service form a well-structured data dependency graph that is analyzed at deployment-time by means of Algorithm 1. To understand this algorithm, it is necessary to introduce some formal definitions.

**Algorithm 1** Algorithm for the analysis of data connectors
1. **procedure** \( \text{ANALYZE}(dc) \) \( \triangleright \) \( dc \in \mathbb{DC} \)
2. \( X_w \leftarrow \emptyset \) \( \triangleright \) \( X_w = \{ x \mid x \in \mathbb{D} \} \)
3. \( Y_r \leftarrow \emptyset \) \( \triangleright \) \( Y_r = \{ y \mid y \in \mathbb{D} \} \)
4. if \( \Pi_1(dc) \notin \mathbb{PD} \wedge \Pi_1(dc) \in \text{dom}(R) \) then
5. \( X_w \leftarrow R(\Pi_1(dc)) \)
6. else
7. \( X_w \leftarrow \{ \Pi_1(dc) \} \)
8. if \( \Pi_2(dc) \notin \mathbb{PD} \wedge \Pi_2(dc) \in \text{dom}(W) \) then
9. \( Y_r \leftarrow W(\Pi_2(dc)) \)
10. for each \( y \in Y_r \) do
11. \( R \leftarrow R \oplus \{ y \mapsto R(y) \cup (X_w \setminus \{ \Pi_2(dc) \}) \} \)
12. else
13. \( Y_r \leftarrow \{ \Pi_2(dc) \} \)
14. for each \( y \in Y_r \) do
15. \( R \leftarrow R \oplus \{ y \mapsto R(y) \cup X_w \} \)
16. for each \( x \in X_w \) do
17. \( W \leftarrow W \oplus \{ x \mapsto W(x) \cup Y_r \} \)
Let \( \mathbb{D} \) be the data type, \( \mathbb{PD} \) the type of processor parameters, \( \mathbb{OD} \) the type of operation parameters and \( \mathbb{CD} \) the type of exogenous connector inputs, such that \( \mathbb{PD}, \mathbb{OD}, \mathbb{CD} \subseteq \mathbb{D} \). A data connector is then a tuple of type \( \mathbb{DC} : \mathbb{D} \times \mathbb{D} \) that connects a source \( \in \mathbb{D} \) parameter with a destination \( \in \mathbb{D} \) parameter.
**Reader parameters** are the entities that directly consume data produced by writer parameters. \( I_r \) is the set of inputs that read data during a workflow execution, namely the inputs of atomic service operations, the inputs of exogenous connectors and the inputs of data processors. \( O_r \) is the set of operation outputs in the top-level composite, useful for reading data resulting from a workflow execution. **Writer parameters** are the entities that produce data. The set \( I_w \) represents the required data for a workflow execution, which are the inputs of operations in the top-level composite. \( O_w \) is the set of outputs that write data during a workflow execution, namely the outputs of atomic service operations and the outputs of data processors.
Basically, Algorithm 1 analyzes data connectors for all composite services, in order to create a relationship between reader parameters and writer parameters, while ignoring those parameters that do not need to manipulate data. It receives a data connector \( dc \in \mathbb{DC} \) as an input, and uses \( R : I_r \cup O_r \rightarrow \{ w \mid w \subseteq I_w \cup O_w \} \) for mapping a reader parameter to a set of writer parameters and \( W : I_w \cup O_w \rightarrow \{ r \mid r \subseteq I_r \cup O_r \} \) for mapping a writer parameter to a set of reader parameters.
Algorithm 1 creates two empty sets \( X_w \) and \( Y_r \), in order to analyze the endpoints of a data connector \( dc \in \mathbb{DC} \). \( X_w \) is the set of parameters connected to the source parameter \( \Pi_1(dc) \) if \( \Pi_1(dc) \) is not a data processor parameter and has incoming data connectors; otherwise, \( X_w \) only contains \( \Pi_1(dc) \). Similarly, if the destination parameter \( \Pi_2(dc) \) is not a data processor parameter and has outgoing data connectors, then \( Y_r \) is the set of parameters connected from \( \Pi_2(dc) \), and \( X_w \) (without \( \Pi_2(dc) \)) is added into the writers of each element \( y \in Y_r \); otherwise, \( Y_r \) only contains \( \Pi_2(dc) \). Finally, \( X_w \) is added into the writers of each element \( y \in Y_r \), while the set \( Y_r \) is added into the readers of each element \( x \in X_w \). The result of the algorithm is a mapping of reader parameters to writer parameters.
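A minimal Python transcription of this analysis may make the resolution of connector endpoints concrete. This is a sketch under the semantics described above (the actual implementation is Java, per Sect. V), with hypothetical parameter names:

```python
# Sketch of Algorithm 1: map reader parameters to writer parameters.
# R maps a reader to its set of writers; W maps a writer to its readers.
# `processor_params` holds data-processor parameters, which stay as
# endpoints rather than being resolved away.

R, W = {}, {}
processor_params = set()

def analyze(dc):
    src, dst = dc  # a data connector is a (source, destination) pair
    # Resolve the source: if it is not a processor parameter and already
    # has incoming connectors, take its writers instead of the source itself.
    if src not in processor_params and src in R:
        xw = set(R[src])
    else:
        xw = {src}
    # Resolve the destination symmetrically over outgoing connectors.
    if dst not in processor_params and dst in W:
        yr = set(W[dst])
        for y in yr:  # xw without dst feeds the readers of dst
            R.setdefault(y, set()).update(xw - {dst})
    else:
        yr = {dst}
    # Finally, record xw as writers of each reader, and yr as readers
    # of each writer.
    for y in yr:
        R.setdefault(y, set()).update(xw)
    for x in xw:
        W.setdefault(x, set()).update(yr)

# Two chained connectors A1 -> D0 and D0 -> B0 collapse to A1 -> B0:
analyze(("A1", "D0"))
analyze(("D0", "B0"))
print(R["B0"])  # {'A1'}
```

The chaining in the usage example shows why the resolution step matters: intermediate composite parameters such as the hypothetical `D0` never need to hold values at run-time.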
V. IMPLEMENTATION
We implemented our approach on top of the DX-MAN Platform [9], and we used the Blockchain as the underlying data space for persisting parameter values while leveraging the capabilities provided by these decentralized platforms, such as performance, security and auditability. Furthermore, the Blockchain ensures that every service is the owner of its own data, while data provenance is provided to discover data flows (i.e., how data is moved between services) or to find out how parameters change over time. In particular, we defined three smart contracts using Hyperledger Composer 0.20.0 for executing transactions on Hyperledger Fabric 1.2. We do not show the source code due to space constraints, but it is available at .
The DX-MAN platform provides an API to support the three phases of a DX-MAN system lifecycle: design-time, deployment-time and run-time. Composite service templates only contain algebraic data connectors, as they represent a general design with multiple workflows. Using API constructs, a designer chooses a workflow and defines custom data connectors (and perhaps data processors) for every composite service involved. Data processor functions are defined by designers using API constructs.
Algorithm 1 analyzes the data connectors defined at design-time, in order to construct the readers map at deployment-time. In particular, the map is a Java HashMap where the keys are reader parameter UUIDs and the values are lists of writer parameter UUIDs. After getting the map for a given workflow, reader parameters (with their respective list of writers) are stored as assets in the Blockchain by means of the transaction CreateParameters.
At run-time, exogenous connectors pass control using CoAP messages. In particular, an invocation connector performs five steps to invoke an operation, as shown in Fig. 6. Although the rest of exogenous connectors behave similarly, they only perform the first two steps. First, the invocation connector uses the transaction readParameters to read all input values from the Blockchain. For a given input, the Blockchain reads values directly from the writers list. As there might be multiple writer parameters, this transaction returns a list of the most recent input values that were updated during the workflow execution. Hence, a timestamp is set whenever a parameter value is updated. readParameters returns an exception if there are no input values. Output values are written onto the data space as soon as they are available, even before control reaches data consumers. Thus, having concurrent connectors (e.g., a parallel connector) may lead to synchronization issues during workflow execution. To solve this, control flow blocks in the invocation connector until all input values are read.

Once all inputs are ready, the invocation connector invokes the implementation of an operation by passing the respective input values. Then, the operation performs some computation and returns the result in the form of outputs. Finally, the invocation connector writes the output values onto the Blockchain using the transaction updateParameters.
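The five run-time steps can be sketched as follows. The transactions readParameters and updateParameters are the ones named above; the `Blockchain` class and all parameter names are hypothetical stand-ins for the Hyperledger data space:

```python
# Hypothetical sketch of an invocation connector's run-time steps:
# read inputs from the data space, invoke the operation, write outputs.

class Blockchain:
    def __init__(self):
        self.values = {}   # parameter UUID -> latest value
        self.writers = {}  # reader UUID -> list of writer UUIDs

    def read_parameters(self, readers):
        # Steps 1-2: read the most recent values from each input's writers.
        result = {}
        for r in readers:
            vals = [self.values[w] for w in self.writers[r] if w in self.values]
            if not vals:
                raise RuntimeError("no input values for %s" % r)
            result[r] = vals
        return result

    def update_parameters(self, outputs):
        # Step 5: write output values onto the data space.
        self.values.update(outputs)

def invoke(blockchain, op, inputs, outputs):
    # Steps 3-4: pass the input values to the operation implementation
    # and collect its results.
    in_values = blockchain.read_parameters(inputs)
    out_values = op(in_values)
    blockchain.update_parameters({o: out_values[o] for o in outputs})

bc = Blockchain()
bc.writers = {"B0": ["A0"]}  # B0 reads whatever A0 wrote
bc.values = {"A0": 21}
invoke(bc, lambda ins: {"B1": ins["B0"][0] * 2}, ["B0"], ["B1"])
print(bc.values["B1"])  # 42
```

Blocking until all inputs are available, as described above, would wrap the read step in a wait loop; the sketch simply raises when a value is missing.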
An UpdateParameterEvent is published whenever a new parameter value has been updated. During deployment, the platform automatically subscribes data processor instances to the events produced by the respective writer parameters. Thus, a data processor instance waits until it receives all events, before performing its respective designer-defined computation. Although our current implementation supports only mappers and reducers, more data processors can be introduced using the semantics of a data processor presented in Sect. III, e.g., we can add a shuffler to sort data by key.
Our approach enables transparent data exchange as data routing is embodied in the Blockchain. Thus, reader parameters are not aware where the data comes from, and writer parameters do not know who reads the data they produce. Furthermore, the map generated by Algorithm 1 avoids the inefficient approach of passing values through data connectors during workflow execution. Thus, exogenous connectors and data processors read data directly from parameters that only write values onto the Blockchain. Undoubtedly, this enables a transparent decentralized data exchange.
VI. EVALUATION
In this section, we present a comparative evaluation between distributed data flows and decentralized data flows for a DX-MAN composition. In the former approach, data is passed over the network through data connectors, whereas the second approach is our solution. Our evaluation intends to answer two major research questions: (A) Does the approach scale with the number of data connectors? and (B) Under which conditions is decentralized data exchange beneficial?
As a DX-MAN composition has a multi-level hierarchical structure, an algebraic data connector passes a data value vertically in a bottom-up way (for inputs) or in a top-down fashion (for outputs) while a custom data connector passes values horizontally or vertically. For our evaluation, we only consider vertical routing through algebraic data connectors.
\[
M_p = \{ \lambda_j | \lambda_j \in \mathbb{R} \}
\]
is the set of network message costs for vertically routing the value of a parameter \( p \), where \( \lambda_j \) is the
cost of passing that value through an algebraic data connector \( j \). Likewise, \( \Gamma_p \) and \( \omega_p \) are the costs of reading and writing the value on the data space, respectively.
Equations 1 and 2 calculate the total message cost of routing a value with a distributed approach. In particular, equation 1 is used for input values, whilst equation 2 is used for output values. As the decentralized approach does not pass values through data connectors, the total message cost of routing the value of \( p \) is \( \Gamma_p \) for inputs, and \( \omega_p \) for outputs.
\[
\Gamma_p + \sum_{j=0}^{|M_p|-1} \lambda_j \quad (1)
\]
\[
\omega_p + \sum_{j=0}^{|M_p|-1} \lambda_j \quad (2)
\]
Fig. 7 depicts the DX-MAN composition that we consider for our evaluation, which has three levels, three atomic services and two composite services. The composites ServiceD and ServiceE have three and five data connectors, respectively. Fig. 7 shows that each data connector \( j \in \{0, \dots, 7\} \) has a cost \( \lambda_j \) of passing a value over the network. Then, the vertical routing sets for the parameters are \( M_{A0} = \{\lambda_3\} \), \( M_{A1} = \{\lambda_4\} \), \( M_{B0} = \{\lambda_0, \lambda_5\} \), \( M_{B1} = \{\lambda_1, \lambda_6\} \), and \( M_{C0} = \{\lambda_2, \lambda_7\} \).
Suppose that a specific workflow requires the invocation of the operations \( opA \) and \( opC \). Using a distributed approach would require passing and reading values for two inputs, and returning and writing one output value. Therefore, according to equations 1 and 2, the total message cost would be \( \lambda_3 + \lambda_4 + \lambda_2 + \lambda_7 + \Gamma_{A0} + \omega_{A1} + \Gamma_{C0} \). Remarkably, the total message cost using the decentralized approach would be \( \Gamma_{A0} + \omega_{A1} + \Gamma_{C0} \).
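The worked example can be checked numerically with equations 1 and 2. Here unit costs for the network hops and data-space operations are an assumption made for illustration (the paper keeps them symbolic):

```python
# Vertical routing sets from Fig. 7 (lambda_j indexed 0..7).
lam = {j: 1.0 for j in range(8)}  # assumed unit network costs
M = {"A0": [3], "A1": [4], "B0": [0, 5], "B1": [1, 6], "C0": [2, 7]}
gamma = omega = 1.0               # assumed unit read/write costs

def distributed_cost(param):
    # Equations 1 and 2: one data-space operation plus every vertical hop.
    return gamma + sum(lam[j] for j in M[param])

def decentralized_cost(param):
    # Values are never passed through data connectors: only the data-space op.
    return gamma

# Workflow invoking opA and opC: inputs A0, C0 are read, output A1 is written.
dist = sum(distributed_cost(p) for p in ("A0", "A1", "C0"))
dec = sum(decentralized_cost(p) for p in ("A0", "A1", "C0"))
print(dist, dec)  # 7.0 3.0
```

With unit costs, the distributed total of 7 matches the four lambda terms plus three data-space operations in the expression above, while the decentralized total is just the three data-space operations.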
A. RQ1: Does the approach scale with the number of data connectors?
We conducted an experiment that dynamically increases the number of data connectors of the DX-MAN composition depicted in Fig. 7. The experiment is carried out in 100000 steps with \( \Gamma_{A0} = \omega_{A1} = \Gamma_{B0} = \omega_{B1} = \Gamma_{C0} = 1 \).
For each step of the experiment, we add a new parameter in a random atomic operation. As a consequence of algebraic composition, another parameter is added in the respective composite operation and a data connector links these parameters.
In this experiment, we particularly compare the cost of the distributed approach vs. the cost of the decentralized approach. Rather than computing the costs for the invocation of specific operations, we compute the total costs for the DX-MAN composition using \( \Gamma_{A0} + \omega_{A1} + \Gamma_{B0} + \omega_{B1} + \Gamma_{C0} + \sum_{j=0}^{7} \lambda_j \). Fig. 8 shows that the costs grow linearly with the number of data connectors, and that the decentralized approach outperforms its counterpart by reducing costs by a factor of 2.67 on average.
B. RQ2: Under which conditions is decentralized data exchange beneficial?
We conducted an experiment of 100000 steps to see the benefit of the decentralized approach as the number of levels of the composition increases. We particularly consider the total costs for the input 0 and we assume that \( \Gamma_{A0} = 1 \). At each step, the number of levels is increased by 1 and \( \sum_{j=0}^{|M_{A0}|-1} \lambda_j \) by 0.0004. Increasing the number of levels by 1 means that \( |M_{A0}| \) is also increased by 1, and increasing the sum of vertical costs means that the ratio \( \frac{\sum_{j=0}^{|M_{A0}|-1} \lambda_j}{\Gamma_{A0} + \sum_{j=0}^{|M_{A0}|-1} \lambda_j} \) approaches 1. The improvement rate of the decentralized data exchange is \( 1 - \frac{\Gamma_{A0}}{\Gamma_{A0} + \sum_{j=0}^{|M_{A0}|-1} \lambda_j} \).
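The behavior of this improvement rate can be tabulated with a short script (a sketch assuming the unit read cost \( \Gamma_{A0} = 1 \) stated above):

```python
# Improvement rate of decentralized exchange: 1 - gamma / (gamma + sum_lambda),
# where sum_lambda is the total cost of vertically routing input 0.
gamma_a0 = 1.0

def improvement(sum_lambda):
    return 1.0 - gamma_a0 / (gamma_a0 + sum_lambda)

# As the composition gets deeper, sum_lambda grows and the rate tends to 1.
for sum_lambda in (0.0, 1.0, 4.0, 99.0):
    print(round(improvement(sum_lambda), 2))  # 0.0, 0.5, 0.8, 0.99
```

This matches the trend reported in Fig. 9: the deeper the composition, the larger the vertical routing cost avoided by the decentralized approach.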
Fig. 9 shows the results of this experiment, where it is clear that the benefit of the decentralized approach becomes more evident as the number of levels of the composition increases. This is because the number of data connectors increases with the number of levels, and so does the cost of the distributed approach. The only way a distributed approach would outperform the decentralized one is when the cost of performing operations on the data space is more expensive than the total cost of passing values vertically. In particular, for our experiment the DX-MAN composition gets a benefit only if \( \Gamma_{A0} < \sum_{j=0}^{|M_{A0}|-1} \lambda_j \).
VII. RELATED WORK
To the best of our knowledge, there are no solutions to enable decentralized data flows in IoT systems. In this section we present SOA-based solutions as they are applicable to IoT. We classified our findings into three categories, depending on the composition semantics the approaches are built on: orchestration (with central control flows and decentralized data flows), decentralized orchestration and data flows, and choreographies.
Approaches belonging to the first category [10], [1] partially separate data from control so as to enable P2P data exchanges. To do so, an orchestrator coordinates the exchanges by passing data references alongside control. Thus, extra network traffic is introduced as data references (and acknowledgement messages) are transferred over the network. These approaches are typically based on proxies that keep data, thus representing an issue for things with low disk space. By contrast, DX-MAN does not require any coordinator for the data exchange, and exogenous connectors do not store data. Besides, exogenous connectors do not exchange references, thanks to the separation of concerns.
Only a few approaches discuss data decentralization using the semantics of decentralized orchestration. [11] stores data and control in distributed tuple spaces, which may become a bottleneck in IoT environments that continuously generate huge amounts of data. [3] solves that issue by storing references instead of values. However, references are needed because data is mixed with control. Moreover, [3] requires the maintenance of tuple spaces for passing references and databases for storing data. DX-MAN only reads and writes onto the data space.
Although distributed data flows [12] allocate flows over different things, there is a master engine that coordinates data exchange for slave engines. Hence, this approach introduces extra network hops as data is passed among multiple engines. Although Service Invocation Triggers [2] exchange data directly, they rely on workflows that do not contain loops and conditionals. This limitation arises from the fact that it is not trivial to analyze data dependencies when control is mixed with data.
A choreography describes interactions among participants using decentralized message exchanges (a.k.a. conversations). Workflow participants [13] pass data among multiple engines, leading to network degradation. Although services may exchange data through direct message passing, they are not reusable because data and control are mixed [6]. [4] uses peers to exchange data and invoke services, thus separating control and computation. However, peers pass data alongside control according to predefined conversations, leading to the issues discussed in [5]. Although [14] proposes the separation between data and control for choreographies, it uses a middleware which may potentially become a central bottleneck.
VIII. CONCLUSIONS
In this paper, we presented an approach on top of DX-MAN to enable decentralized data flows in IoT systems. At design-time, the algebraic semantics of DX-MAN enables a well-defined structure of data connections. As data connections are not mixed with control flow structures, an algorithm smoothly analyzes data connections at deployment-time. The result is a mapping between reader parameters and writer parameters, which prevents passing values through data connectors. In our current implementation, the Blockchain embodies this mapping to manage data values at run-time.
DX-MAN is currently the only service model that provides the separation between data flow, control flow and computation, thus allowing a separate reasoning, monitoring, maintenance and evolution of these concerns. In particular, separating data flow from control flow prevents passing data alongside control among exogenous connectors, and enables the use of different technologies to handle data flows and control flows separately.
Our experiments confirm that our approach scales well with the number of data connectors and the number of levels of a DX-MAN composition. They also suggest that our approach provides the best performance when the cost of performing operations on the data space is less than the cost of passing data over the network. Thus, our approach is extremely beneficial for IoT systems consisting of plenty of services.
REFERENCES
[1] A. Barker et al., “Reducing Data Transfer in Service-Oriented Archi-
Comp. and Net., vol. 6, no. 1, pp. 81–90, 2009.
Interactions for the Scalability of the Internet of Things,” in IEEE ICIIOT,
[8] ——, “Algebraic Service Composition for User-Centric IoT Applications,”
pp. 56–69.
283–286.
[10] D. Liu, “Data-flow Distribution in FICAS Service Composition Infra-
structure,” 2002.
Dataflow approach,” in IOT, 2015, pp. 155–162.
Service Choreographies,” in On the Move to Meaningful Internet Syst.,
This specification defines an XMPP protocol extension for data forms that can be used in workflows such as service configuration as well as for application-specific data description and reporting. The protocol includes lightweight semantics for forms processing (such as request, response, submit, and cancel), defines several common field types (boolean, list options with single or multiple choice, text with single line or multiple lines, single or multiple JabberIDs, hidden fields, etc.), provides extensibility for future data types, and can be embedded in a wide range of applications. The protocol is not intended to provide complete forms-processing functionality as is provided in the W3C XForms technology, but instead provides a basic subset of such functionality for use by XMPP entities.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
1 Introduction
Several existing Jabber/XMPP protocols involve the exchange of structured data between users and applications for common tasks such as registration (In-Band Registration (XEP-0077)\(^1\)) and searching (Jabber Search (XEP-0055)\(^2\)). Unfortunately, these early protocols were "hard coded" and thus place significant restrictions on the range of information that can be exchanged. Furthermore, other protocols (e.g., Multi-User Chat (XEP-0045)\(^3\)) may need to exchange data for purposes such as configuration, but the configuration options may differ depending on the specific implementation or deployment. Finally, developers may want to extend other protocols (e.g., Service Discovery (XEP-0030)\(^4\)) in a flexible manner in order to provide information that is not defined in the base protocol. In all of these cases, it would be helpful to use a generic data description format that can be used for dynamic forms generation and data "modelling" in a variety of circumstances.
An example may be helpful. Let us imagine that when a user creates a multi-user chatroom on a text conferencing service, the service allows the user to configure the room in various ways. While most implementations will probably provide a somewhat common set of configurable features (discussion logging, maximum number of room occupants, etc.), there will be some divergence: perhaps one implementation will enable archiving of the room log in a variety of file types (XML, HTML, PDF, etc.) and for a variety of time periods (hourly, daily, weekly, etc.), whereas another implementation may present a boolean on/off choice of logging in only one format (e.g., daily logs saved in HTML). Obviously, the first implementation will have more configuration options than the second implementation. Rather than "hard-coding" every option via distinct XML elements (e.g., `<room_logging_period/>`), a better design would involve a more flexible format.
The 'jabber:x:data' protocol described herein defines such a flexible format for use by Jabber/XMPP entities, steering a middle course between the simplicity of "name-value" pairs and the complexity of XForms 1.0\(^5\) (on which development had just begun when this protocol was designed). In many ways, 'jabber:x:data' is similar to the Forms Module of XHTML 1.0\(^6\); however, it provides several Jabber-specific data types, enables applications to require data fields, integrates more naturally into the "workflow" semantics of IQ stanzas, and can be included as an extension of existing Jabber/XMPP protocols in ways that the XHTML Forms Module could not when this protocol was developed (especially because Modularization of XHTML\(^7\) did not exist at that time).
---
\(^5\)XForms 1.0 <http://www.w3.org/TR/xforms>.
\(^6\)XHTML 1.0 <http://www.w3.org/TR/xhtml1>.
\(^7\)Modularization of XHTML <http://www.w3.org/TR/2004/WD-xhtml-modularization-20040218/>.
2 Requirements
This document addresses the following requirements:
1. **Data Gathering** -- the protocol should enable a form-processing entity (commonly a server, service, or bot) to gather data from a form-submitting entity (commonly a client controlled by a human user); this should be done via distinct data fields (e.g., items in a questionnaire or configuration form), each of which can be a different data "type" and enable free-form input or a choice between multiple options (as is familiar from HTML forms).
2. **Data Reporting** -- the protocol should enable a form-processing entity to report data (e.g., search results) to a form-submitting entity, again via distinct data fields.
3. **Portability** -- the protocol should as much as possible define generic data formats and basic datatypes only; hints may be provided regarding the user interface, but they should be hints only and not hard-and-fast requirements.
4. **Simplicity** -- the protocol should be simple for clients to implement, and most of the complexity (e.g., data validation and processing) should be the responsibility of servers and components rather than clients.
5. **Flexibility** -- the protocol should be flexible and extensible rather than "hard-coded".
6. **Compatibility** -- the protocol should define an extension to existing Jabber/XMPP protocols and not break existing implementations unless absolutely necessary.
3 Protocol
The base syntax for the 'jabber:x:data' namespace is as follows (a formal description can be found in the XML Schema section below):
```xml
<x xmlns='jabber:x:data'
type='{form-type}'>
<title/>
<instructions/>
<field var='field-name'
type='{field-type}'
label='description'>
<desc/>
<required/>
<value>field-value</value>
<option label='option-label'><value>option-value</value></option>
<option label='option-label'><value>option-value</value></option>
</field>
</x>
```
The <x/> element qualified by the 'jabber:x:data' namespace SHOULD be included either directly as a first-level child of a <message/> stanza or as a second-level child of an <iq/> stanza (where the first-level child is an element qualified by a "wrapper" namespace); see also the restrictions enumerated below.
The OPTIONAL <title/> and <instructions/> elements enable the form-processing entity to label the form as a whole and specify natural-language instructions to be followed by the form-submitting entity. The XML character data for these elements SHOULD NOT contain newlines (the \n and \r characters), and any handling of newlines (e.g., presentation in a user interface) is unspecified herein; however, multiple instances of the <instructions/> element MAY be included.
3.1 Form Types
The data gathered or provided in a 'jabber:x:data' form can be situated in a number of different contexts. Examples include an empty form that needs to be filled out, a completed form, the results of a submission, a search result, or simply a set of data that is encapsulated using the 'jabber:x:data' namespace. The full context for the data is provided by three things:
1. the "wrapper" protocol (i.e., the namespace whose root element is the direct child of the <iq/> stanza and the parent of the <x/> element qualified by the 'jabber:x:data' namespace)
2. the place of the form within a transaction (e.g., an IQ "set" or "result") or structured conversation (e.g., a message <thread/>)
3. the 'type' attribute on the form's root <x/> element
The first two pieces of contextual information are provided by other protocols, whereas the form types are described in the following table.
<table>
<thead>
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>form</td>
<td>The form-processing entity is asking the form-submitting entity to complete a form.</td>
</tr>
<tr>
<td>submit</td>
<td>The form-submitting entity is submitting data to the form-processing entity. The submission MAY include fields that were not provided in the empty form, but the form-processing entity MUST ignore any fields that it does not understand. Furthermore, the submission MAY omit fields not marked with <required/> by the form-processing entity.</td>
</tr>
<tr>
<td>cancel</td>
<td>The form-submitting entity has cancelled submission of data to the form-processing entity.</td>
</tr>
<tr>
<td>result</td>
<td>The form-processing entity is returning data (e.g., search results) to the form-submitting entity, or the data is a generic data set.</td>
</tr>
</tbody>
</table>
In order to maintain the context of the data as captured in the form type, the following rules MUST be observed:
- For <iq/> stanzas, the root element qualified by the "wrapper" namespace in a form of type "form" or "submit" MUST be returned in a form of type "result". The <x/> element qualified by the 'jabber:x:data' namespace MUST be a child of the "wrapper" namespace's root element. As defined in XMPP Core, the 'id' attribute MUST be copied in the IQ result. For data forms of type "form" or "result", the <iq/> stanza SHOULD be of type "result". For data forms of type "submit" or "cancel", the <iq/> stanza SHOULD be of type "set".
- For <message/> stanzas, the <thread/> SHOULD be copied in the reply if provided. The <x/> element qualified by the 'jabber:x:data' namespace MUST be a child of the <message/> stanza.
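As a non-normative illustration of the syntax above, a data form can be assembled with any XML library; the sketch below uses Python's standard-library ElementTree to build a form of type "submit" (the field names are hypothetical examples, not defined by this specification):

```python
# Non-normative sketch: building a 'jabber:x:data' form of type "submit"
# with Python's standard library. Field names are examples only.
import xml.etree.ElementTree as ET

NS = 'jabber:x:data'

def build_submit_form(fields):
    """Build an <x/> element of type 'submit' from a {var: [values]} mapping."""
    x = ET.Element('{%s}x' % NS, {'type': 'submit'})
    for var, values in fields.items():
        field = ET.SubElement(x, '{%s}field' % NS, {'var': var})
        for value in values:
            ET.SubElement(field, '{%s}value' % NS).text = value
    return x

form = build_submit_form({'botname': ['The Jabber Google Bot'],
                          'features': ['news', 'search']})
xml_text = ET.tostring(form, encoding='unicode')
```

The resulting `<x/>` element would then be placed inside the appropriate "wrapper" element of an `<iq/>` stanza, per the rules above.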
### 3.2 The Field Element
A data form of type "form", "submit", or "result" SHOULD contain at least one <field/> element; a data form of type "cancel" SHOULD NOT contain any <field/> elements. The <field/> element MAY contain any of the following child elements:
- <desc/> The XML character data of this element provides a natural-language description of the field, intended for presentation in a user-agent (e.g., as a "tool-tip", help button, or explanatory text provided near the field). The <desc/> element SHOULD NOT contain newlines (the \n and \r characters), since layout is the responsibility of a user agent, and any handling of newlines (e.g., presentation in a user interface) is unspecified herein. (Note: To provide a description of a field, it is RECOMMENDED to use a <desc/> element rather than a separate <field/> element of type "fixed".)
- <required/> This element, which MUST be empty, flags the field as required in order for the form to be considered valid.
- <value/> The XML character data of this element defines the default value for the field (according to the form-processing entity) in a data form of type "form", the data provided by a form-submitting entity in a data form of type "submit", or a data result in a data form of type "result". In data forms of type "form", if the form-processing entity provides a default value via the <value/> element, then the form-submitting entity SHOULD NOT attempt to enforce a different default value (although it MAY do so to respect user preferences or anticipate expected user input). Fields of type list-multi, jid-multi, text-multi, and hidden MAY contain more than one <value/> element; all other field types MUST NOT contain more than one <value/> element.
- <option/> One of the options in a field of type "list-single" or "list-multi". The XML character data of the <value/> child defines the option value, and the 'label' attribute defines a human-readable name for the option. The <option/> element MUST contain one and only one <value/> child. If the field is not of type "list-single" or "list-multi", it MUST NOT contain an <option/> element.
If the <field/> element type is anything other than "fixed" (see below), it MUST possess a 'var' attribute that uniquely identifies the field in the context of the form (if it is "fixed", it MAY possess a 'var' attribute). The <field/> element MAY possess a 'label' attribute that defines a human-readable name for the field.
The 'type' attribute defines the data "type" of the field data. The following rules apply for that attribute:
- For data forms of type "form", each <field/> element SHOULD possess a 'type' attribute. If the 'type' attribute is absent, the default of "text-single" is to be applied.
- For data forms of type "submit", "result" or "error", the receiving entity can infer the 'type' attribute value from context. Nevertheless, the 'type' attribute MAY be present for clarity. Note that forms of type "error" SHOULD NOT have any <field/> elements.
If fields are presented in a user interface (e.g., as items in a questionnaire or form result), the order of the field elements in the XML SHOULD determine the order of items presented to the user.
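The two rules above (the "text-single" default and the 'var' requirement for non-"fixed" fields) can be sketched in a few lines of non-normative Python; the sample form here is illustrative:

```python
# Non-normative sketch of the field rules: 'type' defaults to
# "text-single", and every non-"fixed" field must carry a 'var'.
import xml.etree.ElementTree as ET

NS = '{jabber:x:data}'

def read_fields(x_element):
    fields = []
    for f in x_element.findall(NS + 'field'):
        ftype = f.get('type', 'text-single')   # default per the spec
        var = f.get('var')
        if ftype != 'fixed' and var is None:
            raise ValueError('non-fixed field is missing a var attribute')
        fields.append((var, ftype))
    return fields

x = ET.fromstring(
    "<x xmlns='jabber:x:data' type='form'>"
    "<field var='botname'/>"
    "<field type='fixed'><value>Section 1</value></field>"
    "</x>")
```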
3.3 Field Types
The following field types represent data "types" that are commonly exchanged between Jabber/XMPP entities. These field types are not intended to be as comprehensive as the datatypes defined in, for example, XML Schema Part 2, nor do they define user interface elements.
<table>
<thead>
<tr>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>boolean</td>
<td>The field enables an entity to gather or provide an either-or choice between two options. The default value is "false". In accordance with Section 3.2.2.1 of XML Schema Part 2: Datatypes, the allowable lexical representations for the xs:boolean datatype are the strings "0" and "false" for the concept 'false' and the strings "1" and "true" for the concept 'true'; implementations MUST support both styles of lexical representation.</td>
</tr>
<tr>
<td>fixed</td>
<td>The field is intended for data description (e.g., human-readable text such as "section" headers) rather than data gathering or provision. The <value/> child SHOULD NOT contain newlines (the \n and \r characters); instead an application SHOULD generate multiple fixed fields, each with one <value/> child.</td>
</tr>
<tr>
<td>hidden</td>
<td>The field is not shown to the form-submitting entity, but instead is returned with the form. The form-submitting entity SHOULD NOT modify the value of a hidden field, but MAY do so if such behavior is defined for the "using protocol".</td>
</tr>
<tr>
<td>jid-multi</td>
<td>The field enables an entity to gather or provide multiple Jabber IDs. Each provided JID SHOULD be unique (as determined by comparison that includes application of the Nodeprep, Nameprep, and Resourceprep profiles of Stringprep as specified in XMPP Core), and duplicate JIDs MUST be ignored. *</td>
</tr>
<tr>
<td>jid-single</td>
<td>The field enables an entity to gather or provide a single Jabber ID. *</td>
</tr>
<tr>
<td>list-multi</td>
<td>The field enables an entity to gather or provide one or more options from among many. A form-submitting entity chooses one or more items from among the options presented by the form-processing entity and MUST NOT insert new options. The form-submitting entity MUST NOT modify the order of items as received from the form-processing entity, since the order of items MAY be significant. **</td>
</tr>
<tr>
<td>list-single</td>
<td>The field enables an entity to gather or provide one option from among many. A form-submitting entity chooses one item from among the options presented by the form-processing entity and MUST NOT insert new options. **</td>
</tr>
<tr>
<td>text-multi</td>
<td>The field enables an entity to gather or provide multiple lines of text. ***</td>
</tr>
<tr>
<td>text-private</td>
<td>The field enables an entity to gather or provide a single line or word of text, which shall be obscured in an interface (e.g., with multiple instances of the asterisk character).</td>
</tr>
<tr>
<td>text-single</td>
<td>The field enables an entity to gather or provide a single line or word of text, which may be shown in an interface. This field type is the default and MUST be assumed if a form-submitting entity receives a field type it does not understand.</td>
</tr>
</tbody>
</table>
* Note: Data provided for fields of type "jid-single" or "jid-multi" MUST contain one or more valid Jabber IDs, where validity is determined by the addressing rules defined in XMPP Core (see the Data Validation section below).

** Note: The <option/> elements in list-multi and list-single fields MUST be unique, where uniqueness is determined by the value of the 'label' attribute and the XML character data of the <value/> element (i.e., both must be unique).

*** Note: Data provided for fields of type "text-multi" SHOULD NOT contain any newlines (the \n and \r characters). Instead, the application SHOULD split the data into multiple strings (based on the newlines inserted by the platform), then specify each string as the XML character data of a distinct <value/> element. Similarly, an application that receives multiple <value/> elements for a field of type "text-multi" SHOULD merge the XML character data of the value elements into one text block for presentation to a user, with each string separated by a newline character as appropriate for that platform.
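The split-and-merge convention for "text-multi" fields can be illustrated with a short non-normative Python round-trip:

```python
# Non-normative sketch of the text-multi convention: the sender splits a
# text block into one <value/> per line; the receiver joins them back.
import xml.etree.ElementTree as ET

NS = '{jabber:x:data}'

def text_to_values(field, text):
    """Split a text block into one <value/> child per line."""
    for line in text.splitlines():
        ET.SubElement(field, NS + 'value').text = line

def values_to_text(field):
    """Merge the <value/> children back into one newline-separated block."""
    return '\n'.join(v.text or '' for v in field.findall(NS + 'value'))

field = ET.Element(NS + 'field', {'var': 'description', 'type': 'text-multi'})
text_to_values(field, 'line one\nline two')
round_tripped = values_to_text(field)
```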
3.4 Multiple Items in Form Results
In some contexts (e.g., the results of a search request), it may be necessary to communicate
multiple items. Therefore, a data form of type "result" MAY contain two child elements not
described in the basic syntax above:
1. One and only one <reported/> element, which can be understood as a "table header"
describing the data to follow.
2. Zero or more <item/> elements, which can be understood as "table cells" containing data
(if any) that matches the request.
The <reported/> element MUST appear before any <item/> element inside the <x/> element.
Forms of this type MUST NOT contain any top-level fields other than <reported/> and <item/>.
Older revisions of this XEP (before 2.12.0) did not contain an explicit requirement for the
ordering between <reported> and <item>. Implementations are therefore encouraged to be
flexible when processing incoming data, as there might still be implementations which do not
implement a strict ordering when generating reports. Similarly, revisions of this XEP before
2.13.1 were ambiguous about whether <reported/> and <item/> elements could co-exist with
other top-level elements such as <field/> and <title/> and various implementations are known
to have handled this in different ways.
The syntax is as follows:
```
<x xmlns='jabber:x:data'
type='result'>
<reported>
<field var='field-name' label='description' type='{field-type}'/>
</reported>
<item>
<field var='field-name'>
<value>field-value</value>
</field>
</item>
</x>
```
Each of these <item/> elements and the <reported/> element MUST contain one or more <field/> children. The <reported/> element defines the data format for the result items by specifying the fields to be expected for each item; for this reason, its <field/> elements SHOULD possess a 'type' attribute and 'label' attribute in addition to the 'var' attribute, and SHOULD NOT contain a <value/> element. Each <item/> element defines one item in the result set, and MUST contain each field specified in the <reported/> element (although the XML character data of the <value/> element MAY be null).
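Non-normatively, a receiver can treat `<reported/>` as the column list and each `<item/>` as a row; the Python sketch below (with illustrative data) reads a result form into a list of dicts:

```python
# Non-normative sketch: reading a "result" form's <reported/> header and
# <item/> rows into (columns, rows). Sample data is illustrative.
import xml.etree.ElementTree as ET

NS = '{jabber:x:data}'

def result_rows(x):
    columns = [f.get('var')
               for f in x.find(NS + 'reported').findall(NS + 'field')]
    rows = []
    for item in x.findall(NS + 'item'):
        row = {}
        for f in item.findall(NS + 'field'):
            value = f.find(NS + 'value')
            row[f.get('var')] = value.text if value is not None else None
        rows.append(row)
    return columns, rows

x = ET.fromstring(
    "<x xmlns='jabber:x:data' type='result'>"
    "<reported><field var='name'/><field var='url'/></reported>"
    "<item><field var='name'><value>benvenuto!</value></field>"
    "<field var='url'><value>http://www.hellasverona.it</value></field></item>"
    "</x>")
cols, rows = result_rows(x)
```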
3.5 Incomplete Submission Form Handling
An incomplete submission form is a data form of the type "submit" that contains all required fields but some optional fields are omitted. The receiving entity of an incomplete submission form SHOULD only process (e.g., apply) the submitted fields. If applicable, the values of the omitted fields SHOULD keep their current value. The current value is often the value found in the corresponding form of the type "form".
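The merge behavior described above amounts to overlaying the submitted fields on the current values, after checking that all required fields are present; a non-normative sketch:

```python
# Non-normative sketch of incomplete-submission handling: apply only the
# submitted fields; omitted optional fields keep their current values.
def apply_submission(current, submitted, required=()):
    missing = [var for var in required if var not in submitted]
    if missing:
        raise ValueError('missing required fields: %s' % missing)
    merged = dict(current)
    merged.update(submitted)   # submitted fields win; others keep current value
    return merged

current = {'public': '0', 'maxsubs': '20'}
merged = apply_submission(current, {'maxsubs': '50'}, required=('maxsubs',))
```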
4 Data Validation
Data validation is the responsibility of the form-processing entity (commonly a server, service, or bot) rather than the form-submitting entity (commonly a client controlled by a human user). This helps to meet the requirement for keeping client implementations simple. If the form-processing entity determines that the data provided is not valid, it SHOULD return a "Not Acceptable" error, optionally providing a textual explanation in the XMPP <text/> element or an application-specific child element that identifies the problem (see Error Condition Mappings (XEP-0086) for information about mappings and formats).
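For example, a form-processing entity validating a "boolean" field must accept exactly the lexical forms listed in the field-type table above; a non-normative sketch:

```python
# Non-normative sketch: validating the xs:boolean lexical forms that a
# form-processing entity MUST accept ("0"/"false" and "1"/"true").
def parse_boolean(value):
    if value in ('1', 'true'):
        return True
    if value in ('0', 'false'):
        return False
    raise ValueError('not an allowed xs:boolean lexical form: %r' % value)
```

Any other lexical form would trigger the "Not Acceptable" error described above.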
5 Examples
For the sake of the following examples, let us suppose that there exists a bot hosting service on the Jabber network, located at <botster.shakespeare.lit>. This service enables registered users to create and configure new bots, find and interact with existing bots, and so on.
---
We will assume that these interactions occur using the Ad-Hoc Commands (XEP-0050) protocol, which is used as a "wrapper" protocol for data forms qualified by the 'jabber:x:data' namespace. The examples in the sections that follow show most of the features of the data forms protocol described above.
Note: Additional examples can be found in the specifications for various "using protocols", such as XEP-0045: Multi-User Chat and XEP-0055: Jabber Search.
5.1 Configuration
The first step is for a user to create a new bot on the hosting service. We will assume that this is done by sending a "create" command to the desired bot:
Listing 1: User Requests Bot Creation
```xml
<iq from='romeo@montague.net/home' to='joogle@botster.shakespeare.lit' type='get' xml:lang='en' id='create1'>
<command xmlns='http://jabber.org/protocol/commands' node='create' action='execute'/>
</iq>
```
The hosting service then returns a data form to the user:
Listing 2: Service Returns Bot Creation Form
```xml
<iq from='joogle@botster.shakespeare.lit' to='romeo@montague.net/home' type='result' xml:lang='en' id='create1'>
  <command xmlns='http://jabber.org/protocol/commands' node='create' sessionid='create:20040408T0128Z' status='executing'>
    <x xmlns='jabber:x:data' type='form'>
      <title>Bot Configuration</title>
      <instructions>Fill out this form to configure your new bot!</instructions>
      <field type='hidden' var='FORM_TYPE'>
        <value>jabber:bot</value>
      </field>
      <field type='fixed'><value>Section 1: Bot Info</value></field>
      <field type='text-single'
             label='The name of your bot'
             var='botname'/>
      <field type='text-multi'
             label='Helpful description of your bot'
             var='description'/>
      <field type='boolean'
             label='Public bot?'
             var='public'>
        <required/>
      </field>
      <field type='text-private'
             label='Password for special access'
             var='password'/>
      <field type='fixed'><value>Section 2: Features</value></field>
      <field type='list-multi'
             label='What features will the bot support?'
             var='features'>
        <option label='Contests'><value>contests</value></option>
        <option label='News'><value>news</value></option>
        <option label='Polls'><value>polls</value></option>
        <option label='Reminders'><value>reminders</value></option>
        <option label='Search'><value>search</value></option>
      </field>
      <field type='fixed'><value>Section 3: Subscriber List</value></field>
      <field type='list-single'
             label='Maximum number of subscribers'
             var='maxsubs'>
        <value>20</value>
        <option label='10'><value>10</value></option>
        <option label='20'><value>20</value></option>
        <option label='30'><value>30</value></option>
        <option label='50'><value>50</value></option>
        <option label='100'><value>100</value></option>
        <option label='None'><value>none</value></option>
      </field>
      <field type='fixed'><value>Section 4: Invitations</value></field>
      <field type='jid-multi'
             label='People to invite'
             var='invitelist'>
        <desc>Tell all your friends about your new bot!</desc>
      </field>
    </x>
  </command>
</iq>
```
The user then submits the configuration form:
Listing 3: User Submits Bot Creation Form
```xml
<iq from='romeo@montague.net/home' to='joogle@botster.shakespeare.lit' type='set' xml:lang='en' id='create2'>
<command xmlns='http://jabber.org/protocol/commands' node='create' sessionid='create:20040408T0128Z'>
<x xmlns='jabber:x:data' type='submit'>
<field type='hidden' var='FORM_TYPE'>
<value>jabber:bot</value>
</field>
<field type='text-single' var='botname'>
<value>The Jabber Google Bot</value>
</field>
<field type='text-multi' var='description'>
<value>This bot enables you to send requests to Google and receive the search results right in your Jabber client. It's really cool!</value>
<value>It even supports Google News!</value>
</field>
<field type='boolean' var='public'>
<value>0</value>
</field>
<field type='text-private' var='password'>
<value>v3r0na</value>
</field>
<field type='list-multi' var='features'>
<value>news</value>
<value>search</value>
</field>
<field type='list-single' var='maxsubs'>
<value>50</value>
</field>
<field type='jid-multi' var='invitelist'>
<value>juliet@capulet.com</value>
<value>benvolio@montague.net</value>
</field>
</x>
</command>
</iq>
```
The service then returns the results to the user.
5.2 Search
Now that the user has created this search bot, let us suppose that one of the friends he has invited decides to try it out by sending a search request:
Listing 5: User Requests Search Form
```xml
<iq from='juliet@capulet.com/chamber'
    to='joogle@botster.shakespeare.lit'
    type='get'
    xml:lang='en'
    id='search1'>
  <command xmlns='http://jabber.org/protocol/commands'
           node='search'
           action='execute'/>
</iq>
```
Listing 6: Service Returns Search Form
```xml
<iq from='joogle@botster.shakespeare.lit'
    to='juliet@capulet.com/chamber'
    type='result'
    xml:lang='en'
    id='search1'>
  <command xmlns='http://jabber.org/protocol/commands'
           node='search'
           status='executing'>
    <x xmlns='jabber:x:data' type='form'>
      <title>Joogle Search</title>
      <instructions>Fill out this form to search for information!</instructions>
      <field type='text-single'
             var='search_request'>
        <required/>
      </field>
    </x>
  </command>
</iq>
```
Listing 7: User Submits Search Form
```xml
<iq from='juliet@capulet.com/chamber'
    to='joogle@botster.shakespeare.lit'
    type='set'
    xml:lang='en'
    id='search2'>
  <command xmlns='http://jabber.org/protocol/commands'
           node='search'>
    <x xmlns='jabber:x:data' type='submit'>
      <field type='text-single' var='search_request'>
        <value>verona</value>
      </field>
    </x>
  </command>
</iq>
```
Listing 8: Service Returns Search Results
```xml
<iq from='joogle@botster.shakespeare.lit'
    to='juliet@capulet.com/chamber'
    type='result'
    xml:lang='en'
    id='search2'>
  <command xmlns='http://jabber.org/protocol/commands'
           node='search'
           status='completed'>
    <x xmlns='jabber:x:data' type='result'>
      <title>Joogle Search: verona</title>
      <reported>
        <field var='name'/>
        <field var='url'/>
      </reported>
      <item>
        <field var='name'><value>Comune di Verona - Benvenuti nel sito ufficiale</value></field>
        <field var='url'><value>http://www.comune.verona.it</value></field>
      </item>
      <item>
        <field var='name'><value>benvenuto!</value></field>
        <field var='url'><value>http://www.hellasverona.it</value></field>
      </item>
      <item>
        <field var='name'><value>Università degli Studi di Verona - Home Page</value></field>
        <field var='url'><value>http://www.univr.it</value></field>
      </item>
      <item>
        <field var='name'><value>Aeroporti del Garda</value></field>
        <field var='url'><value>http://www.aeroportoverona.it</value></field>
      </item>
      <item>
        <field var='name'><value>Veronafiere - fiera di Verona</value></field>
        <field var='url'><value>http://www.veronafiere.it</value></field>
      </item>
    </x>
  </command>
</iq>
```
6 Service Discovery
If an entity supports inclusion of the `<x/>` element qualified by the 'jabber:x:data' namespace as a direct child of the `<message/>` stanza type, it MUST report support by including a service discovery feature of "jabber:x:data" (see Protocol Namespaces regarding issuance of one or more permanent namespaces) in response to a Service Discovery information request:
Listing 9: Service Discovery information request
```xml
<iq type='get' from='romeo@montague.net/orchard' to='juliet@capulet.com/balcony' id='disco1'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Listing 10: Service Discovery information response
```xml
<iq type='result' from='juliet@capulet.com/balcony' to='romeo@montague.net/orchard' id='disco1'>
<query xmlns='http://jabber.org/protocol/disco#info'>
...
<feature var='jabber:x:data'/>
...
</query>
</iq>
```
If an entity supports data forms indirectly through inclusion of data forms in a wrapper namespace (rather than directly within a `<message/>` stanza), it MUST NOT advertise support for the 'jabber:x:data' namespace, since support is implicit in support for the wrapper protocol.
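A requesting entity can check the disco#info result for the 'jabber:x:data' feature; a non-normative Python sketch:

```python
# Non-normative sketch: checking a disco#info <query/> result for the
# 'jabber:x:data' service discovery feature.
import xml.etree.ElementTree as ET

DISCO_NS = '{http://jabber.org/protocol/disco#info}'

def supports_data_forms(query):
    """Return True if any <feature/> advertises 'jabber:x:data'."""
    return any(f.get('var') == 'jabber:x:data'
               for f in query.findall(DISCO_NS + 'feature'))

query = ET.fromstring(
    "<query xmlns='http://jabber.org/protocol/disco#info'>"
    "<feature var='jabber:x:data'/></query>")
```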
7 Security Considerations
There are no security concerns related to this specification above and beyond those described in the relevant section of XMPP Core.
8 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA)\(^{12}\).
---
\(^{12}\)The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
9 XMPP Registrar Considerations
9.1 Protocol Namespaces
The XMPP Registrar\(^{13}\) includes the 'jabber:x:data' namespace in its registry of protocol namespaces.
9.2 Parameter Values
The XMPP Registrar maintains a registry of parameter values related to the 'jabber:x:data' namespace, specifically as defined in Field Standardization for Data Forms (XEP-0068); the registry is located at <https://xmpp.org/registrar/formtypes.html>.
10 XML Schema
This schema is descriptive, not normative.
```xml
<?xml version='1.0' encoding='UTF-8'?>

<xs:schema
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    targetNamespace='jabber:x:data'
    xmlns='jabber:x:data'
    elementFormDefault='qualified'>

  <xs:annotation>
    <xs:documentation>
      The protocol documented by this schema is defined in
      XEP-0004: http://www.xmpp.org/extensions/xep-0004.html
    </xs:documentation>
  </xs:annotation>

  <xs:element name='x'>
    <xs:complexType>
      <xs:sequence>
        <xs:element name='instructions'
                    minOccurs='0'
                    maxOccurs='unbounded'
                    type='xs:string'/>
        <xs:element name='title' minOccurs='0' type='xs:string'/>
        <xs:element ref='field' minOccurs='0' maxOccurs='unbounded'/>
        <xs:element ref='reported' minOccurs='0'/>
        <xs:element ref='item' minOccurs='0' maxOccurs='unbounded'/>
      </xs:sequence>
      <xs:attribute name='type' use='required'>
        <xs:simpleType>
          <xs:restriction base='xs:NCName'>
            <xs:enumeration value='cancel'/>
            <xs:enumeration value='form'/>
            <xs:enumeration value='result'/>
            <xs:enumeration value='submit'/>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
    </xs:complexType>
  </xs:element>

  <xs:element name='field'>
    <xs:complexType>
      <xs:sequence>
        <xs:element name='desc' minOccurs='0' type='xs:string'/>
        <xs:element name='required' minOccurs='0' type='empty'/>
        <xs:element ref='value' minOccurs='0' maxOccurs='unbounded'/>
        <xs:element ref='option' minOccurs='0' maxOccurs='unbounded'/>
      </xs:sequence>
      <xs:attribute name='label' type='xs:string' use='optional'/>
      <xs:attribute name='type' use='optional'>
        <xs:simpleType>
          <xs:restriction base='xs:NCName'>
            <xs:enumeration value='boolean'/>
            <xs:enumeration value='fixed'/>
            <xs:enumeration value='hidden'/>
            <xs:enumeration value='jid-multi'/>
            <xs:enumeration value='jid-single'/>
            <xs:enumeration value='list-multi'/>
            <xs:enumeration value='list-single'/>
            <xs:enumeration value='text-multi'/>
            <xs:enumeration value='text-private'/>
            <xs:enumeration value='text-single'/>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
      <xs:attribute name='var' type='xs:string' use='optional'/>
    </xs:complexType>
  </xs:element>

  <xs:element name='option'>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='value'/>
      </xs:sequence>
      <xs:attribute name='label' type='xs:string' use='optional'/>
    </xs:complexType>
  </xs:element>

  <xs:element name='value' type='xs:string'/>

  <xs:element name='reported'>
    <xs:annotation>
      <xs:documentation>
        When contained in a "reported" element, the "field" element
        SHOULD NOT contain a "value" child.
      </xs:documentation>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='field' maxOccurs='unbounded'/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name='item'>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='field' maxOccurs='unbounded'/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:simpleType name='empty'>
    <xs:restriction base='xs:string'>
      <xs:enumeration value=''/>
    </xs:restriction>
  </xs:simpleType>

</xs:schema>
```
---
\(^{13}\)The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
11 Changes in Final State
The following substantive protocol changes have been made while this specification has been in the Final state:
• Specified that the 'var' attribute is required for all field types except "fixed", for which the 'var' attribute is optional.
• Specified when to advertise support via service discovery.
• Removed references to <presence/> stanzas.
12 Changes in Draft State
The following substantive protocol changes were made while this specification was in the Draft state:
• The <x/> element MAY be included directly in <message/> and <presence/> stanzas.
• The <x/> element MAY contain a <title/> child for forms and results.
• The <x/> element MUST possess a 'type' attribute.
• A <field/> element MAY be of type='jid-single'.
• Results MAY be reported back in <item/> tags.
• Results MAY contain a <reported/> element with result set.
• The <reported/> fields MAY possess a 'type' attribute to provide hints about how to interact with the data (type='jid-single' being the most useful).
Writing a CSPICE (C) Based Program
January 2020
First, let's go over the important steps in the process of writing a CSPICE-based program and putting it to work:
• Understand the geometry problem.
• Identify the set of SPICE kernels that contain the data needed to perform the computation.
• Select the SPICE APIs needed to compute the quantities of interest.
• Write and compile the program.
• Get actual kernel files and verify that they contain the data needed to support the computation for the time(s) of interest.
• Run the program.
To illustrate these steps, let's write a program that computes the apparent intersection of the boresight ray of a given CASSINI science instrument with the surface of a given Saturnian satellite. The program will compute
• Planetocentric and planetodetic (geodetic) latitudes and longitudes of the intercept point.
• Range from spacecraft to intercept.
• Illumination angles (phase, solar incidence, and emission) at the intercept point.
We want the boresight intercept on the surface, range from s/c to intercept, and illumination angles at the intercept point.
The inputs that completely specify the problem, and the program variables that will hold them, are:

- When? **TIME** (UTC, TDB or TT)
- On what object? **satnm**
- In what frame? **fixref**
- For which instrument? **instnm**
- For what spacecraft? **scnm**
- Using what model? **setupf**

The kernels needed include time transformation kernels, orientation models, instrument descriptions, shapes of satellites and planets, and ephemerides for the spacecraft, the Saturn barycenter, and the satellites.
Data required to compute vectors, rotations and other parameters shown in the picture are stored in the SPICE kernels listed below.
Note: these kernels have been selected to support this presentation; they should not be assumed to be appropriate for user applications.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Kernel Type</th>
<th>File name</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2">time conversions</td>
<td>generic LSK</td>
<td>naif0009.tls</td>
</tr>
<tr>
<td>CASSINI SCLK</td>
<td>cas00084.tsc</td>
</tr>
<tr>
<td>satellite orientation, satellite shape</td>
<td>CASSINI PCK</td>
<td>cpck05Mar2004.tpc</td>
</tr>
<tr>
<td>satellite position</td>
<td>planet/sat ephemeris SPK</td>
<td>020514_SE_SAT105.bsp</td>
</tr>
<tr>
<td>planet barycenter position</td>
<td>planet ephemeris SPK</td>
<td>981005_PLTEPH-DE405S.bsp</td>
</tr>
<tr>
<td>spacecraft position</td>
<td>spacecraft SPK</td>
<td>030201AP_SK_SM546_T45.bsp</td>
</tr>
<tr>
<td>spacecraft orientation</td>
<td>spacecraft CK</td>
<td>04135_04171pc_psiv2.bc</td>
</tr>
<tr>
<td>instrument alignment</td>
<td>CASSINI FK</td>
<td>cas_v37.tf</td>
</tr>
<tr>
<td>instrument boresight</td>
<td>instrument IK</td>
<td>cas_iss_v09.ti</td>
</tr>
</tbody>
</table>
The easiest and most flexible way to make required kernels available to the program is via `furnsh_c`. For this example we make a setup file (also called a “metakernel” or “furnsh kernel”) containing a list of kernels to be loaded:
```
\begindata

   KERNELS_TO_LOAD = ( 'naif0009.tls', 'cas00084.tsc', 'cpck05Mar2004.tpc',
                       '020514_SE_SAT105.bsp', '981005_PLTEPH-DE405S.bsp',
                       '030201AP_SK_SM546_T45.bsp', '04135_04171pc_psiv2.bc',
                       'cas_v37.tf', 'cas_iss_v09.ti' )

\begintext
```
and we make the program prompt for the name of this setup file:
```c
prompt_c ( "Enter setup file name > ", FILESZ, setupf );
furnsh_c ( setupf );
```
• Prompt for setup file (“metakernel”) name; load kernels specified via setup file. (Done on previous chart.)
• Prompt for user inputs required to completely specify problem. Obtain further inputs required by geometry routines via CSPICE calls.
• Compute the intersection of the boresight direction ray with the surface of the satellite, presented as a triaxial ellipsoid.
If there is an intersection,
• Convert Cartesian coordinates of the intercept point to planetocentric latitudinal and planetodetic coordinates
• Compute spacecraft-to-intercept point range
• Find the illumination angles (phase, solar incidence, and emission) at the intercept point
• Display the results.
We discuss the geometric portion of the problem next.
Compute the intercept point (point) of the boresight vector (insite) specified in the instrument frame (iframe) of the instrument mounted on the spacecraft (scnm) with the surface of the satellite (satnm) at the TDB time of interest (et) in the satellite’s body-fixed frame (fixref). This call also returns the light-time corrected epoch at the intercept point (trgepc), the spacecraft-to-intercept point vector (srfvec), and a flag indicating whether the intercept was found (found). We use "converged Newtonian" light time plus stellar aberration corrections to produce the most accurate surface intercept solution possible. We model the surface of the satellite as an ellipsoid.
```
sincpt_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm, iframe, insite, point, &trgepc, srfvec, &found );
```
The range we want is obtained from the outputs of sincpt_c. These outputs are defined only if a surface intercept is found. If found is true, the spacecraft-to-surface intercept range is the norm of the output argument srfvec. Units are km. We use the CSPICE function vnorm_c to obtain the norm:
```
vnorm_c ( srfvec )
```
We'll write out the range data along with the other program results.
Compute the planetocentric latitude (`pclat`) and longitude (`pclon`), as well as the planetodetic latitude (`pdlat`) and longitude (`pdlon`) of the intersection point.
```c
if ( found )
{
   reclat_c ( point, &r, &pclon, &pclat );

   /* Let re, rp, and f be the satellite's longer equatorial
      radius, polar radius, and flattening factor. */
   re = radii[0];
   rp = radii[2];
   f  = ( re - rp ) / re;

   recgeo_c ( point, re, f, &pdlon, &pdlat, &alt );
}
```
The illumination angles we want are the outputs of `ilumin_c`. Units are radians.
```c
ilumin_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm, point,
&trgepc, srfvec, &phase, &solar, &emissn );
```
```c
/* Compute the boresight ray intersection with the surface of the
   target body. */
sincpt_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm,
           iframe, insite, point, &trgepc, srfvec, &found );

/* If an intercept is found, compute planetocentric and planetodetic
   latitude and longitude of the point. */
if ( found )
{
   reclat_c ( point, &r, &pclon, &pclat );

   /* Let re, rp, and f be the satellite's longer equatorial
      radius, polar radius, and flattening factor. */
   re = radii[0];
   rp = radii[2];
   f  = ( re - rp ) / re;

   recgeo_c ( point, re, f, &pdlon, &pdlat, &alt );

   /* Compute illumination angles at the surface point. */
   ilumin_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm, point,
              &trgepc, srfvec, &phase, &solar, &emissn );
   ...
}
else
{
   ...
}
```
The code above used quite a few inputs that we don't have yet:
- TDB epoch of interest (`et`);
- satellite and s/c names (`satnm`, `scnm`);
- satellite body-fixed frame name (`fixref`);
- satellite ellipsoid radii (`radii`);
- instrument fixed frame name (`iframe`);
- instrument boresight vector in the instrument frame (`insite`).
Some of these values are user inputs; others can be obtained via CSPICE calls once the required kernels have been loaded.
Let's prompt for the satellite name (`satnm`), satellite frame name (`fixref`), spacecraft name (`scnm`), instrument name (`instnm`) and time of interest (`time`):
```c
prompt_c ( "Enter satellite name > ", WORDSZ, satnm );
prompt_c ( "Enter satellite frame > ", WORDSZ, fixref );
prompt_c ( "Enter spacecraft name > ", WORDSZ, scnm );
prompt_c ( "Enter instrument name > ", WORDSZ, instnm );
prompt_c ( "Enter time > ", WORDSZ, time );
```
Then we can get the rest of the inputs from CSPICE calls:
To get the TDB epoch (`et`) from the user-supplied time string (which may refer to the UTC, TDB or TT time systems):
```c
str2et_c ( time, &et );
```
To get the satellite’s ellipsoid radii (`radii`):
```c
bodvrd_c ( satnm, "RADII", 3, &i, radii );
```
To get the instrument boresight direction (`insite`) and the name of the instrument frame (`iframe`) in which it is defined:
```c
bodn2c_c ( instnm, &instid, &found );
if ( !found ) {
setmsg_c ( "Instrument name # could not be "
"translated to an ID code." );
errch_c ( "#", instnm );
sigerr_c ( "NAMENOTFOUND" );
}
getfov_c ( instid, ROOM, WORDSZ, WORDSZ,
shape, iframe, insite, &n, bundry );
```
Getting Inputs: Summary
Navigation and Ancillary Information Facility
```c
/* Prompt for the user-supplied inputs for our program */
prompt_c ( "Enter setup file name > ", FILESZ, setupf );
furnsh_c ( setupf );

prompt_c ( "Enter satellite name > ", WORDSZ, satnm );
prompt_c ( "Enter satellite frame > ", WORDSZ, fixref );
prompt_c ( "Enter spacecraft name > ", WORDSZ, scnm );
prompt_c ( "Enter instrument name > ", WORDSZ, instnm );
prompt_c ( "Enter time > ", WORDSZ, time );

/* Get the epoch corresponding to the input time: */
str2et_c ( time, &et );

/* Get the radii of the satellite. */
bodvrd_c ( satnm, "RADII", 3, &i, radii );

/* Get the instrument boresight and frame name. */
bodn2c_c ( instnm, &instid, &found );

if ( !found )
{
   setmsg_c ( "Instrument name # could not be "
              "translated to an ID code." );
   errch_c  ( "#", instnm );
   sigerr_c ( "NAMENOTFOUND" );
}

getfov_c ( instid, ROOM, WORDSZ, WORDSZ,
           shape, iframe, insite, &n, bundry );
```
Display Results
```c
/* Display results. Convert angles from radians to degrees for output. */
printf ( "\n"
         "Intercept planetocentric longitude (deg): %11.6f\n"
         "Intercept planetocentric latitude (deg): %11.6f\n"
         "Intercept planetodetic longitude (deg): %11.6f\n"
         "Intercept planetodetic latitude (deg): %11.6f\n"
         "Range from spacecraft to intercept point (km): %11.6f\n"
         "Intercept phase angle (deg): %11.6f\n"
         "Intercept solar incidence angle (deg): %11.6f\n"
         "Intercept emission angle (deg): %11.6f\n",
         dpr_c() * pclon,
         dpr_c() * pclat,
         dpr_c() * pdlon,
         dpr_c() * pdlat,
         vnorm_c( srfvec ),
         dpr_c() * phase,
         dpr_c() * solar,
         dpr_c() * emissn );
```
To finish up the program we need to declare the variables we've used.
- We'll highlight techniques used by NAIF programmers
- Add remaining C code required to make a syntactically valid program
```c
#include <stdio.h>
#include "SpiceUsr.h"
int main ()
{
#define FILESZ 256
#define WORDSZ 41
#define ROOM 10
SpiceDouble alt;
SpiceDouble bundry[ROOM][3];
SpiceDouble emissn;
SpiceDouble et;
SpiceDouble f;
SpiceDouble insite[3];
SpiceDouble srfvec[3];
SpiceDouble pclat;
SpiceDouble pclon;
SpiceDouble pdlat;
SpiceDouble pdlon;
SpiceDouble phase;
SpiceDouble point [3];
SpiceDouble r;
SpiceDouble radii [3];
SpiceDouble re;
SpiceDouble rp;
SpiceDouble solar;
SpiceDouble trgepc;
SpiceBoolean found;
SpiceChar iframe[WORDSZ];
SpiceChar instnm[WORDSZ];
SpiceChar satnm [WORDSZ];
SpiceChar fixref[WORDSZ];
SpiceChar scnm [WORDSZ];
SpiceChar setupf[FILESZ];
SpiceChar shape [WORDSZ];
SpiceChar time [WORDSZ];
SpiceInt i;
SpiceInt instid;
SpiceInt n;
```
Complete Source Code - 2
```c
/* Prompt for the user-supplied inputs for our program */
prompt_c ( "Enter setup file name > ", FILESZ, setupf );
furnsh_c ( setupf );

prompt_c ( "Enter satellite name > ", WORDSZ, satnm );
prompt_c ( "Enter satellite frame > ", WORDSZ, fixref );
prompt_c ( "Enter spacecraft name > ", WORDSZ, scnm );
prompt_c ( "Enter instrument name > ", WORDSZ, instnm );
prompt_c ( "Enter time > ", WORDSZ, time );

/* Get the epoch corresponding to the input time: */
str2et_c ( time, &et );

/* Get the radii of the satellite. */
bodvrd_c ( satnm, "RADII", 3, &i, radii );

/* Get the instrument boresight and frame name. */
bodn2c_c ( instnm, &instid, &found );

if ( !found )
{
   setmsg_c ( "Instrument name # could not be "
              "translated to an ID code." );
   errch_c  ( "#", instnm );
   sigerr_c ( "NAMENOTFOUND" );
}

getfov_c ( instid, ROOM, WORDSZ, WORDSZ,
           shape, iframe, insite, &n, bundry );

/* Compute the boresight ray intersection with the surface of the
   target body. */
sincpt_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm,
           iframe, insite, point, &trgepc, srfvec, &found );

/* If an intercept is found, compute planetocentric and planetodetic
   latitude and longitude of the point. */
if ( found )
{
   reclat_c ( point, &r, &pclon, &pclat );

   /* Let re, rp, and f be the satellite's longer equatorial
      radius, polar radius, and flattening factor. */
   re = radii[0];
   rp = radii[2];
   f  = ( re - rp ) / re;

   recgeo_c ( point, re, f, &pdlon, &pdlat, &alt );

   /* Compute illumination angles at the surface point. */
   ilumin_c ( "Ellipsoid", satnm, et, fixref, "CN+S", scnm, point,
              &trgepc, srfvec, &phase, &solar, &emissn );

   /* Display results. Convert angles to degrees for output. */
   printf ( "\n"
            "Intercept planetocentric longitude (deg): %11.6f\n"
            "Intercept planetocentric latitude (deg): %11.6f\n"
            "Intercept planetodetic longitude (deg): %11.6f\n"
            "Intercept planetodetic latitude (deg): %11.6f\n"
            "Range from spacecraft to intercept point (km): %11.6f\n"
            "Intercept phase angle (deg): %11.6f\n"
            "Intercept solar incidence angle (deg): %11.6f\n"
            "Intercept emission angle (deg): %11.6f\n",
            dpr_c() * pclon,
            dpr_c() * pclat,
            dpr_c() * pdlon,
            dpr_c() * pdlat,
            vnorm_c( srfvec ),
            dpr_c() * phase,
            dpr_c() * solar,
            dpr_c() * emissn );
}
else
{
   printf ( "No intercept point found at %s\n", time );
}

return ( 0 );
}
```
• First be sure that both the CSPICE Toolkit and a C compiler are properly installed.
– A "hello world" C program must be able to compile, link, and run successfully in your environment.
– Any of the mkprodct.* scripts in the cspice/src/* paths of the CSPICE installation should execute properly.
• Ways to compile and link the program:
– If you're familiar with the "make" utility, create a makefile. Use compiler and linker options from the mkprodct.* script found in the cspice/src/cook_c path of your CSPICE installation.
– Or, modify the cookbook mkprodct.* build script.
» Your program name must be *.pgm, for example demo.pgm, to be recognized by the script.
» Change the library references in the script to use absolute pathnames.
» Change the path for the executable to the current working directory.
» If your compiler supports it, add a –I option to reference the cspice/include path to make CSPICE *.h files available. Otherwise, copy those files from the include path to your current working directory.
» On some platforms, you must modify the script to refer to your program by name.
Or, compile the program on the command line. The program must be linked against the CSPICE object library cspice.a (cspice.lib under MS Visual C++/C) and the C math library. On a PC running Linux and gcc, if
» The gcc compiler is in your path
• As indicated by the response to the command "which gcc"
» the Toolkit is installed in the path (for the purpose of this example) /myhome/cspice
» You've named the program demo.c
then you can compile and link your program using the command
```bash
gcc -I/myhome/cspice/include \
-o demo \
demo.c /myhome/cspice/lib/cspice.a -lm
```
• Note: the preprocessor flag
`-DNON_UNIX_STDIO`
used in the mkprodct.csh script is needed for code generated by f2c, but is usually unnecessary for compiling user code.
```
Prompt> mkprodct.csh
Setting default compiler:
gcc
Setting default compile options:
-c -ansi -O2 -DNON_UNIX_STDIO
Setting default link options:
-lm
Compiling and linking: demo.pgm
Prompt>
```
It looks like we have everything taken care of:
- We have all necessary kernels
- We made a setup file (metakernel) pointing to them
- We wrote the program
- We compiled and linked it
Let's run it.
Running the Program - 2
```
Prompt> demo
Enter setup file name > setup.ker
Enter satellite name > PHOEBE
Enter satellite frame > IAU_PHOEBE
Enter spacecraft name > CASSINI
Enter instrument name > CASSINI_ISS_NAC
Enter time > 2004 jun 11 19:32:00

Intercept planetocentric longitude (deg): 39.843719
Intercept planetocentric latitude (deg): 4.195878
Intercept planetodetic longitude (deg): 39.843719
Intercept planetodetic latitude (deg): 5.048011
Range from spacecraft to intercept point (km): 2089.169724
Intercept phase angle (deg): 28.139479
Intercept solar incidence angle (deg): 18.247220
Intercept emission angle (deg): 17.858309
Prompt>
```
• Latitude definitions:
– Planetocentric latitude of a point P: the angle between the segment from the origin to the point and the x-y plane.
– Planetodetic latitude of a point P: the angle between the x-y plane and the extension of the ellipsoid normal vector N that connects the x-y plane and P.
Online Map Application Development Using Google Maps API, SQL Database, and ASP.NET
Shunfu Hu, Ting Dai
1 Department of Geography, Southern Illinois University Edwardsville, Edwardsville, IL 62026, USA
2 Farm Service Agency, United States Department of Agriculture, Washington, DC 20250, USA
ABSTRACT
Recently there has been increasing interest in developing online map services using a Maps Application Programming Interface (API) such as the Google Maps API, Yahoo! Maps API, Microsoft Bing Maps API, Nokia Ovi Maps API, or ESRI ArcGIS API. Application developers utilize a Maps API as a platform and combine spatial data from multiple sources to create new customized services, commonly called map “mashups”. The use of Maps APIs has revolutionized online mapping applications on the Internet. However, there are two major drawbacks in map “mashups”. First, the application developer utilizes open source methods such as XML, Fusion Tables, CSV, or KML for the preparation of a limited amount of usually non-secured spatial data; these methods are not suitable for data sources in the format of a commercial database stored on a secure data server. Second, map “mashups” are focused on the use of the Maps API platform for the fast delivery of customized services or data, so they usually lack sophisticated functionalities and intuitive user interfaces that can offer the user the capability to manipulate the data. The objective of this paper is to demonstrate an online mapping application that requires access to data sources in the format of a commercial database stored on a secure data server and that offers sophisticated functionalities for the user to manipulate the data. A case study of developing an online map service to display tens of thousands of gardens on the Internet for the United States Department of Agriculture (USDA) People’s Garden Initiative is presented. The Google Maps API, Google Geocoder, Microsoft SQL database, Microsoft ASP.NET, and the Spry Framework for Ajax are employed to develop this online map application. It is also anticipated that this online map application can be used in major web browsers such as Microsoft Internet Explorer (IE) 7.0+, Google Chrome, Mozilla Firefox, and Apple Safari.
Keywords: Online Mapping, Google Maps API, SQL Database, ASP.NET, USDA
1. INTRODUCTION
Google Maps, launched in 2005, has revolutionized online mapping service applications on the World Wide Web. Based on Asynchronous JavaScript and XML (AJAX), a new type of client/server interaction was introduced in Google Maps to maintain a continuous connection between the client and the server for immediate downloading of additional map information [1]. In addition, Google provides programmers with extensive sources of code through its Application Programming Interface (API). The API consists of a set of data structures, object classes or functions that can be used by a programmer using JavaScript, PHP or another scripting language [2]. With the current version 3, registering an API key is no longer required to use Google Maps. The new version supports traditional desktop web browsers such as Internet Explorer 7.0+, Firefox 3.0+, Safari 4+, and Chrome, as well as mobile browsers on Android, BlackBerry, and Dolphin and on the Apple iPad and iPhone, all of which have a full JavaScript implementation. These features make the Google Maps JavaScript API the most commonly used Maps API for online mapping [3]. Other Maps APIs are also available for online mapping, including the Yahoo! Maps API, Microsoft Bing Maps API, Nokia Ovi Maps API, and ESRI ArcGIS API.
Recently there has been increasing interest in utilizing the Google Maps API to implement web-based mapping services, ranging from simple applications that display just a few points of interest with an information window to sophisticated map mashups [4] [5] [6] [7] [8]. Scholefield [9] developed a web-based map service for tourism of eighteenth and nineteenth century Edinburgh using the Google Maps API, Oracle RDBMS (Relational Database Management System), Microsoft SQL (Structured Query Language), Perl, eXtensible Markup Language (XML), JavaScript, Hypertext Markup Language (HTML), eXtensible HTML (XHTML), and Cascading Style Sheets (CSS). Similarly, Pejic et al. [10] developed an eTourism application using the Google Maps API to present prominent points of tourist destinations. Bildirici and Ulugtekin [11] demonstrate a web mapping service with Google Maps (API V2) mashups in which points, polylines and polygons from data stored in Keyhole Markup Language (KML), XML and Geodatabase formats are overlaid on Google Maps through JavaScript code. Liu and Palen [12] examine the use of Google Maps mashups in crisis management for nine natural disasters such as earthquakes, fires, and sea level rise, using near real-time and publicly available data feeds. Hu [13] discusses a new approach to mashups in multimedia mapping that utilizes the Google Maps API, Yahoo! Flickr API, and YouTube API to combine spatial data, multimedia information and functionality from multiple sources to create an online visitor guide for the Southern Illinois University Edwardsville campus. Hu [14] also uses the Google Maps JavaScript API and other JavaScript libraries such as jQuery, XML, and MarkerClusterer to develop an online map service to display and search over 600 locations of gardens from the United States Department of Agriculture (USDA) People’s Garden Initiative. However, there are two major drawbacks in map “mashups”.
First, the application developer utilizes open source methods in the preparation of spatial data, including XML files, Google Fusion Tables, comma-separated values (CSV) files, or Keyhole Markup Language (KML) files. The drawback of such methods is that it takes effort to reformat the original data, and the data no longer resides in its original database, where it can be updated in real time. Second, map “mashups” are focused on the use of the Maps API platform for the fast delivery of customized services or data, so they usually lack sophisticated functionalities and intuitive user interfaces that can offer the user the capability to manipulate the data. The objective of this paper is to demonstrate an online mapping application that requires access to live data sources in the format of a commercial database stored on a secure data server and that offers sophisticated functionalities that allow the user to manipulate the data. A case study of developing an online map service to display tens of thousands of gardens on the Internet for the United States Department of Agriculture (USDA) People's Garden Initiative is presented. The Google Maps API, Google Geocoder, Microsoft SQL database, Microsoft ASP.NET, and the Spry Framework for Ajax are employed to develop this online map application. It is also anticipated that the online map application can be used in major web browsers such as Microsoft Internet Explorer (IE) 7.0+, Google Chrome, Mozilla Firefox, and Apple Safari.
2. METHODOLOGY
2.1. Data Set
The USDA People's Garden Initiative is an effort to challenge its employees to establish People's Gardens at USDA facilities worldwide or help communities create gardens [15]. The garden information is collected initially through the USDA People’s Garden online registration process and is in a Microsoft SQL Server 2005 database stored on a USDA’s secure server. The data set for this project contains thousands of gardens, including the name of each garden, the street address, city, state, and zip code of the garden, the type of the garden (1 - At USDA Facilities; 2 - At Schools; 3 - At Other Places Within the Community; 4 - At Faith-based Centers; and 5 - At Other Federal Agencies), the geographic location (i.e., latitude and longitude in decimal degrees) of each garden, and more importantly what are planted in each garden.
2.2. “Mashup” of Google Maps API and SQL Database through JavaScript and XHTML
Since the online map application is a web application, every Maps API implementation is based on a web page. JavaScript is the native language of Google Maps. In addition, Google Maps is built of XHTML (Extensible HTML), formatted with CSS (Cascading Style Sheet) [16]. Therefore, both JavaScript and XHTML are used for developing the USDA online garden map application. Figure 1 illustrates a conceptual framework for the integration of the major components. The implementation is done using the Microsoft Visual Studio Express, which is a light-weight version of the Microsoft’s Visual Studio (VS) and provides a set of free software programs in the form of Microsoft’s integrated development environment (IDE). The VS Express includes Visual Web Developer Express (VWDE), Visual Basic Express, Visual C++ Express, Visual C# Express and SQL Server Express. The VWDE allows application developers to create ASP websites with the programming language either Visual C# or Visual Basic. It has a very friendly interface for web design. The SQL Server Management Studio Express provides the capability to edit, update, and manage SQL Server Express database. For instance, the developer can create a new SQL database for a web site but import the data from Oracle, Microsoft ACCESS, Excel or ODBC sources. For this project, Visual Web Developer 2010 Express was chosen to develop the USDA online map application, and SQL Server Management Studio 2008 Express was selected to import the USDA garden database and to create a new SQL database called PeoplesGarden for the application development environment. In PeoplesGarden database, two views (virtual tables) were created to dynamically summarize the total garden counts based on the state names. 
Using the standard Structured Query Language (SQL), the first view called Gardens was created to tally the number of gardens for each US state, and the second view called State_Centerpoints that contains the center latitude and longitude of each state was appended to the first view so that the summary data can be displayed in the map window.
Microsoft ASP.NET technology was employed to connect to and retrieve data from the database. An aspx page contains scripts written in Microsoft .NET languages such as Visual Basic.NET and C#.NET. In this application, Visual Basic.NET was used to perform all the database tasks.
As shown in Figure 1, when garden.htm is accessed by a web user, it first determines the user’s web browser type (see section 2.4), and a browser-specific html file calls an initialize function from the Main.js JavaScript file. When Main.js initializes the online map page, it sends a request to an aspx script named NewGetData.aspx (see below). The aspx script retrieves garden data either at the national level or at the state level from the PeoplesGarden database, depending on the user’s request.
```javascript
function initialize()
{
    geocoder = new google.maps.Geocoder();

    // For centering the map on the contiguous USA
    var myLatlng  = new google.maps.LatLng(38, -95);
    var myOptions = {
        zoom: 4,
        center: myLatlng,
        scaleControl: true,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    };

    map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

    // Calls an aspx script to retrieve data from the SQL Server database
    $.get("newGetData.aspx?level=national", displaynational);
}
```
Figure 1. A conceptual framework of developing the People’s Garden online map application.
The aspx script then connects to the database using a connection string defined in the website configuration file web.config.
```xml
<connectionStrings>
<add name="MyDatabaseConnectionString" connectionString="Data Source=SERVERNAME;
Initial Catalog=PeoplesGarden; Integrated Security=True" />
</connectionStrings>
```
Once the database is connected, the aspx uses two member functions to retrieve the garden data. The first function getNationalData() retrieves summary data for each US state including state name, total garden count by state, and state centric latitude and longitude (Figure 2). The second function getStateData() retrieves individual garden information in each selected state from the PeoplesGarden database, including Garden Name, Garden City, Garden State, Garden Zip Code, Garden Street Address, Garden Type, Latitude, Longitude, Photo Name, and Photo Location (Figure 3). Below is the Visual Basic.NET code that works with the web.config to get the individual garden information from the database.
```vb
Imports Microsoft.VisualBasic
Imports System.Data.SqlClient
Imports System.Configuration
Public Class NewDataLogic
Dim separator As String = ";;"
Dim ConnectionString As String = ConfigurationManager.ConnectionStrings("MyDatabaseConnectionString").ConnectionString
Public Function getStateData(state As String) As String
    Dim myConnection As New SqlConnection
    Dim myCommand As New SqlCommand
    Dim myDataReader As SqlDataReader
    Dim result As String = ""
    If (state = "") Then state = "DC"
    Try
        myConnection = New SqlConnection(ConnectionString)
        myConnection.Open()
        Dim mysql As String = "select Garden_Name as name, " & _
            "Garden_city as city, Garden_State as state, " & _
            "Garden_Zip_code as zipcode, Garden_Street_Address_1 as address, " & _
            "Garden_Type as gtype, Type_of_Garden as CIndex, Lat as y, Lon as x, " & _
            "Photo_Name as Photo_Name, Location as Location, " & _
            "Garden_Street_Address_2 as address2 from Gardens " & _
            "where Garden_State = '" & state & "'"
    Catch ex As Exception
        result = ex.Message
    End Try
    Return result
End Function
```
Public Class NewDataLogic
Dim separator As String = ";;"
Dim ConnectionString As String = ConfigurationManager.ConnectionStrings("MyDatabaseConnectionString").ConnectionString
Public Function getNationalData() As String
Dim myConnection As New SqlConnection
Dim myCommand As New SqlCommand
Dim myDataReader As SqlDataReader
Dim result As String = ""
Dim mysql As String = "select Garden_Name as name,
Garden_city as city, Garden_State as state,
Garden_Zip_code as zipcode, Garden_Street_Address_1 as address,
Garden_Type as gtype, Type_of_Garden as CIndex, Lat as y, Lon as x,
Photo_Name as Photo_Name, Location as Location,
Garden_Street_Address_2 as address2 from Gardens where Garden_State = "
```
Dim mysql As String = "select Garden_Name as name,
Garden_city as city, Garden_State as state,
Garden_Zip_code as zipcode, Garden_Street_Address_1 as address,
Garden_Type as gtype, Type_of_Garden as CIndex, Lat as y, Lon as x,
Photo_Name as Photo_Name, Location as Location,
Garden_Street_Address_2 as address2 from Gardens where Garden_State = "
```
&
```csharp
myCommand = New SqlCommand(mysql,
myConnection)
myDataReader = myCommand.ExecuteReader()
Dim x As String
Dim y As String
Dim name As String
Dim st As String
Dim city As String
Dim zip As String
Dim addr As String
Dim cindex As String
Dim Photo_Name As String
Dim Location As String
Dim address2 As String
While myDataReader.Read()
If (IsDBNull(myDataReader("x"))) Then
x = ""
Else
x = CStr(myDataReader("x"))
End If
If (IsDBNull(myDataReader("y"))) Then
y = ""
Else
y = CStr(myDataReader("y"))
End If
If (IsDBNull(myDataReader("name"))) Then
name = ""
Else
name = CStr(myDataReader("name"))
End If
If (IsDBNull(myDataReader("State"))) Then
st = ""
Else
st = CStr(myDataReader("State"))
End If
If (IsDBNull(myDataReader("city"))) Then
city = ""
Else
city = CStr(myDataReader("city"))
End If
If (IsDBNull(myDataReader("zipcode"))) Then
zip = ""
Else
zip = CStr(myDataReader("zipcode"))
End If
If (IsDBNull(myDataReader("address"))) Then
addr = ""
Else
addr = CStr(myDataReader("address"))
End If
If (IsDBNull(myDataReader("CIndex"))) Then
cindex = ""
Else
cindex = CStr(myDataReader("CIndex"))
End If
If (IsDBNull(myDataReader("gtype"))) Then
gtype = ""
Else
gtype = CStr(myDataReader("gtype"))
End If
If (IsDBNull(myDataReader("Photo_Name"))) Then
Photo_Name = ""
Else
Photo_Name = CStr(myDataReader("Photo_Name"))
End If
If (IsDBNull(myDataReader("Location"))) Then
Location = ""
Else
Location = CStr(myDataReader("Location"))
End If
If (IsDBNull(myDataReader("address2"))) Then
address2 = ""
Else
address2 = CStr(myDataReader("address2"))
End If
If (x <> "") And (y <> "") Then
result = result + y + separator + x
result = result + separator + name + separator + st
result = result + separator + city + separator + zip
result = result + separator + addr + separator + cindex
result = result + separator + gtype + separator + Photo_Name
result = result + separator + Location + separator + address2
result = result + "|
End If
End While
myDataReader.Close()
myDataReader = Nothing
Catch ex As Exception
Finally
myConnection.Close()
myConnection = Nothing
End Try
Return result
End Function
```
The result string generated by the above code, containing the individual garden information, is passed back to Main.js and then processed into points on the online map.
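Given the `;;` field separator and the `|` record terminator used on the server side, the client-side parsing in Main.js might look like the following sketch (the function name `parseGardenData` and the object field names are illustrative, not from the original source; the field order follows the server's result-building code: lat, lon, name, state, city, zip, address, CIndex, gtype, photo name, photo location, address2):

```javascript
// Split the delimited result string returned by newGetData.aspx
// into an array of garden record objects.
function parseGardenData(result) {
  var gardens = [];
  var records = result.split("|");
  for (var i = 0; i < records.length; i++) {
    if (records[i] === "") continue;          // skip trailing empty record
    var f = records[i].split(";;");
    gardens.push({
      lat: parseFloat(f[0]),
      lng: parseFloat(f[1]),
      name: f[2],
      state: f[3],
      city: f[4],
      zip: f[5],
      address: f[6],
      cindex: f[7],
      gtype: f[8],
      photoName: f[9],
      photoLocation: f[10],
      address2: f[11]
    });
  }
  return gardens;
}
```

Each parsed record can then be turned into a `google.maps.Marker` at `(lat, lng)` with the type-specific icon.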
### 2.3. Development of Searching and Filtering Functions
This project required a web interface with search functions such as Find Gardens by Location (e.g., by address or zip code) and Find Gardens by State (Figure 2), so the user can choose to see only the gardens around a given address or zip code, or within a given state. These search functions are accompanied by another function called Filter by Type. The first two, Find Gardens by Location and Find Gardens by State, were accomplished using the Google Maps API's Geocoding service, which converts an address (e.g., "8008 Davis Dr., St. Louis, MO") into geographic coordinates (e.g., latitude 38.64005705 and longitude -90.3373296). With this pair of latitude and longitude, the programmer can place a marker onto the map. This can be done using the geocoder class from the Google Maps API. The programmer first creates a new geocoder object within the Main.js function initialize() as follows:
```javascript
var geocoder = new google.maps.Geocoder();
```
and then creates a codeAddress() function (refer to https://developers.google.com/maps/documentation/javascript/geocoding).
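A minimal codeAddress() sketch following the Google Maps geocoding pattern is shown below. The geocoder and map are passed in as parameters here so the logic can be exercised outside the browser; in the real application they would be the `google.maps.Geocoder` and `google.maps.Map` objects created in initialize(), and the callback would also create a `google.maps.Marker` at the returned location:

```javascript
// Convert a typed address or zip code to coordinates and center the map there.
// geocoder: an object exposing geocode(request, callback), e.g. google.maps.Geocoder.
// done: called with (error, location) once geocoding finishes.
function codeAddress(geocoder, map, address, done) {
  geocoder.geocode({ address: address }, function (results, status) {
    if (status === "OK") {
      var location = results[0].geometry.location;
      map.setCenter(location);   // zoom/center on the match
      done(null, location);      // real code would also add a Marker here
    } else {
      done(new Error("Geocode was not successful: " + status));
    }
  });
}
```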
### 2.4. Web Browsers Compatibility
It was anticipated that the USDA online map service would be used in most major web browsers, such as Microsoft Internet Explorer (IE) 7.0+, Google Chrome, Mozilla Firefox, and Apple Safari. Initial testing of the USDA online map application revealed several compatibility issues. For instance, Microsoft IE (7, 8 and 9) and Mozilla Firefox did not support rounded corners for tabbed panels, and Apple Safari for iPad and iPhone did not support Flash movies; only Google Chrome had no problems. These issues were resolved by offering a main page, Garden.htm, that detects whether the browser is IE or one of Google Chrome, Mozilla Firefox, and Apple Safari. If it is IE, the web page is automatically redirected to PGMSIE.htm; otherwise, it is redirected to PGChrome.html. Below is the code for Garden.htm.
```html
<html>
<head>
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<title>USDA People's Garden Initiative</title>
<script type="text/javascript">
function detectbrowser() {
  var browserName = navigator.appName;
  switch (browserName) {
    case "Microsoft Internet Explorer":
      document.write('<meta http-equiv="refresh" content="1;URL=PGMSIE.htm">');
      break;
    case "Chrome":
    case "Firefox":
    case "Safari":
    case "Mozilla":
      document.write('<meta http-equiv="refresh" content="1;URL=PGChrome.html">');
      break;
    default:
      alert("Your browser might not be supported");
  }
}
</script>
</head>
<body>
<p> People's Garden</p>
<script type="text/javascript" language="JavaScript">
detectbrowser();
</script>
</body>
</html>
```
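The redirect logic in Garden.htm reduces to a mapping from `navigator.appName` to a target page; factored out as a plain function (the name `targetPage` is illustrative), it can be tested independently of the browser:

```javascript
// Map the reported browser name to the page variant it should load.
// Only IE gets the IE-specific page; the other recognized browsers share one page.
function targetPage(browserName) {
  switch (browserName) {
    case "Microsoft Internet Explorer":
      return "PGMSIE.htm";
    case "Chrome":
    case "Firefox":
    case "Safari":
    case "Mozilla":
      return "PGChrome.html";
    default:
      return null;   // caller alerts that the browser might not be supported
  }
}
```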
### 2.5. Use of JavaScript and CSS to Design the Layout of the Online Map Application
In the design of the online map application, an HTML table with a three-row layout was adopted: the first row for a tabbed interface, the second row for the map container, and the third row for the garden type legend. The tabbed interface provides two tabs, one for Find Gardens by Location and the other for Find Gardens by State (Figure 2). It was developed using Adobe's Spry 1.6 framework for Ajax, a JavaScript library for the development of interactive web pages [17]. To use it, the programmer downloads SpryTabbedPanels.css and SpryTabbedPanels.js (both open source) from Adobe Labs. The former provides the CSS for the tabbed panel; the latter links the Spry TabbedPanels JavaScript library with the search functions. Both need to be placed in the head section of the web page (i.e., PGMSIE.htm or PGChrome.html) as follows:
```html
<link href="SpryTabbedPanels.css" rel="stylesheet" type="text/css" />
<script src="SpryTabbedPanels.js" type="text/javascript">
</script>
```
Working in conjunction with the Find Gardens by Location tab is an HTML <input> field that allows the user to type in an address or zip code. The HTML code is shown below:
```html
<input id="address" size="40" type="text" value="Please type your address or zip code here">
```
Then, the <select> tag is used to create a drop-down list from which the user selects an item, and the <option> tags inside the select element specify the available items (i.e., options) in the list. For instance, Filter by Type allows the user to choose one of the garden types, with 0 as the default value for no filtering. It was implemented with the following HTML code:
```html
<select id="select_type" name="Garden_Type">
<option value="0" selected>Filter by Type</option>
<option value="1" >At USDA Facilities</option>
<option value="2" >Education Center</option>
<option value="3" >Garden Visitor</option>
<option value="4" >House and Home</option>
</select>
```
Similarly, the Find Gardens by State tab allows the user to choose one state from the list, with "All States" as the default value to show all the gardens in the entire United States and its territories. It is worth noting that each tab is linked to the search and filter functions described in Section 2.3.
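Applying the two controls to the garden list amounts to simple filtering. A sketch follows; the function name and the record field names (`cindex`, `state`) are illustrative, and matching the type filter against the garden's type index is an assumption based on the option values above:

```javascript
// Return the gardens matching the selected type and state.
// typeValue "0" (the default option) and state "All States" disable that filter.
function filterGardens(gardens, typeValue, state) {
  return gardens.filter(function (g) {
    var typeOk = (typeValue === "0") || (g.cindex === typeValue);
    var stateOk = (state === "All States") || (g.state === state);
    return typeOk && stateOk;
  });
}
```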
In the map container, the Google map or satellite imagery is displayed, and points of interest (e.g., gardens) are marked with customized marker icons: the green shovel for gardens at USDA facilities, the yellow shovel for gardens at schools, and so on. The garden type legend, matching the marker icons displayed in the map container, is placed at the bottom of the map container.
In order to provide the user with interaction with the map, a few standard Google Maps controls are added, such as Pan and Zoom controls; Map Scale control; and Map Type control—Roadmap and Satellite. In addition, tooltips (e.g., garden name) to the markers are provided, along with clickable marker icons with Google Maps API’s standard Infowindow, which displays the information about each garden (i.e., garden name, address, city, state, zip code, a link to garden pictures, etc.).
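The Infowindow content for a clicked marker can be assembled from the garden record. A sketch of such a helper is given below (the function name and field names are illustrative; the real page layout may differ):

```javascript
// Build the HTML shown in the Google Maps InfoWindow for one garden record.
function infoWindowContent(g) {
  var html = "<b>" + g.name + "</b><br/>" +
             g.address + "<br/>" +
             g.city + ", " + g.state + " " + g.zip;
  if (g.photoLocation && g.photoName) {
    // link to the garden pictures, as described in the text
    html += '<br/><a href="' + g.photoLocation + "/" + g.photoName + '">Pictures</a>';
  }
  return html;
}
```

The string would be passed to `new google.maps.InfoWindow({ content: html })` and opened from the marker's click handler.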
### 3. RESULTS
The use of Google Maps API V3 provides a very efficient mechanism to deliver digital cartographic information to Internet users, with fast response times and user-friendly interaction. Using the standard Map Type control, the user is able to choose one of two map types: roadmap or satellite imagery. Figure 2 shows the startup view of the People's Garden online mapping application in Google Chrome. At the initial launch of the web page, Google Maps displays all the gardens in the entire United States and its territories, with the garden type legend placed at the bottom of the map container to match the marker icons.
### 4. CONCLUSION AND DISCUSSION
This paper has demonstrated an online mapping application that was successfully developed using Google Maps API v3, Google Geocoding, a Microsoft SQL Server Express database, and the Spry Framework for Ajax. The case study presented in this article provides advanced functionality to display the locations and state-based summary counts of the USDA's thousands of People's Gardens on the Internet, with customized icons and a map legend. It also provides sophisticated searching, filtering, and tabbed-interface functionality that offers the user the capability to manipulate the data.
Online mapping from a database being updated in real time can be very useful for many purposes. First, the database can be populated from an online registration process that, along with other information, collects locational information such as latitude and longitude or street addresses. This is how the USDA People's Garden information has been gathered. Such a database can be stored on a secure server inside a firewall. Second, once the data has been collected and stored, it can be easily and directly retrieved, in full or in part, for an online mapping application without going through a data format transformation as was done in the past (e.g., XML). Third, the backend database can be updated through the database interface, and the resulting data changes are reflected immediately on the web interface. Fourth, complex data manipulation can be carried out using powerful SQL scripts in the backend database.
Publishing and sharing geo-spatial data are becoming important and popular tasks in various applications. One particular sector is public health. Doctors at different offices across a region or a country can report a certain type of disease (e.g., West Nile Virus, SARS) in real time to a centralized database, and such information can be delivered to an online map immediately so that health officials and the general public can quickly take preventive actions. The project described in this paper can be easily modified to meet the requirements of such important tasks.
### ACKNOWLEDGEMENTS
We thank USDA People’s Garden Initiative director Livia Marques for her leadership in this project, USDA Annie Ceccarini for her initial web interface design, USDA NRCS Tianpu Liang for his ASP.NET coding and database connection support, USDA OCIO John Roccaforte for providing database access, and Acacia Dai of the Thomas Jefferson High School for Science and Technology for graphics design for the web page.
Figure 2. The initial view of the USDA People's Garden online map application with the number of gardens for each state.
Figure 3. Find Gardens by State displays only the gardens in a selected state. The map legend at the bottom indicates the type of the gardens.
Figure 4. Filter by Type (in this case, At USDA Facilities) displays only the gardens in that category.
Figure 5. Find Gardens by Location (address or zip code) will zoom to that location. If the user clicks a garden icon on the map, the information about that garden is displayed in the Infowindow.
REFERENCES
Dynamic Load Balancing Based on Applications Global States Monitoring
Eryk Laskowski, Marek Tudruj, Richard Olejnik, Damian Kopanski
HAL Id: hal-00833477
https://hal.archives-ouvertes.fr/hal-00833477
Submitted on 12 Jun 2013
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Dynamic Load Balancing Based on Applications Global States Monitoring
Eryk Laskowski*, Marek Tudruj*,†, Richard Olejnik‡ and Damian Kopański†
*Institute of Computer Science PAS, 01-248 Warsaw, Jana Kazimierza 5, Poland
Email: {laskowski,tudruj}@ipipan.waw.pl
†Polish-Japanese Institute of Information Technology, ul. Koszykowa 86, 02-008 Warsaw, Poland
Email: {damian.kopanski,tudruj}@pjwstk.edu.pl
‡Computer Science Laboratory of Lille (UMR CNRS 8022), University of Sciences and Technologies of Lille, France
Email: Richard.Olejnik@lifl.fr
Abstract—The paper presents how to use a special novel distributed program design framework with evolved global control mechanisms to assure processor load balancing during execution of application programs. The new framework supports a programmer with an API and GUI for automated graphical design of program execution control based on global application states monitoring. The framework provides high-level distributed control primitives at process level and a special control infrastructure for global asynchronous execution control at thread level. Both kinds of control assume observations of current multicore processor performance and communication throughput enabled in the executive distributed system. Methods for designing processor load balancing control based on a system of program and system properties metrics and computational data migration between application executive processes is presented and assessed by experiments with execution of graph representations of distributed programs.
Keywords—distributed programming paradigms; global application states monitoring; graphical program design tools.
1. INTRODUCTION
Load balancing is the fundamental approach used to optimize the execution time of distributed programs. In a multi-user environment, the availability of computing resources can vary notably over time. Thus, an optimization subsystem, embedded in the run-time environment or in distributed applications, is essential. Since the problem of load balancing of computational tasks is NP-hard, heuristics from various fields have been applied, ranging from prefix sums, recursive bisection and space-filling curves to work stealing and graph partitioning. Good reviews of load balancing methods are presented in [15]–[17].
Static load balancing problems, where the computational tasks co-exist during the entire execution of parallel programs, can be modeled as graph partitioning problems. METIS [8] is the most popular example of a graph partitioning framework; it has been used for mapping and static load balancing of parallel applications.
In the case of dynamic load balancing, with online balancing in the presence of workload variations and varying resource availability, several load balancing strategies exist [9]. Dynamic load balancing can be implemented by migration of application components (processes and threads) or by data redistribution among computing nodes that guarantees possibly high efficiency of the overall application execution. The simplest dynamic load balancing methods are based on greedy heuristics, where the largest workloads are moved to the least loaded processors until the load of all processors is close to the average. More sophisticated algorithms use refinement strategies, where the number of migrated objects is reduced or the communication between different objects is also considered.
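The greedy heuristic just described — repeatedly moving the largest workload from the most loaded node to the least loaded node while that narrows the load spread — can be sketched as follows (an illustrative sketch, not the implementation of any of the cited libraries):

```javascript
// Greedy rebalancing: move the largest task from the most loaded node to the
// least loaded node while the move actually reduces the load spread.
// nodes: array of { tasks: [workload, ...] }. Returns the number of migrations.
function greedyBalance(nodes, maxMoves) {
  function load(n) { return n.tasks.reduce(function (s, w) { return s + w; }, 0); }
  var moves = 0;
  while (moves < maxMoves) {
    var src = nodes[0], dst = nodes[0];
    for (var i = 1; i < nodes.length; i++) {
      if (load(nodes[i]) > load(src)) src = nodes[i];
      if (load(nodes[i]) < load(dst)) dst = nodes[i];
    }
    if (src === dst || src.tasks.length === 0) break;
    // pick the largest task on the most loaded node
    var k = 0;
    for (var j = 1; j < src.tasks.length; j++) {
      if (src.tasks[j] > src.tasks[k]) k = j;
    }
    // migrate only if the destination stays below the source's current load
    if (load(dst) + src.tasks[k] >= load(src)) break;
    dst.tasks.push(src.tasks.splice(k, 1)[0]);
    moves++;
  }
  return moves;
}
```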
Monitoring of global application states [1] creates an efficient and flexible basis for distributed program execution control. Unfortunately, no existing parallel run-time system provides a built-in infrastructure for these purposes. This has been the motivation for our research on a new distributed program design framework called PEGASUS (from Program Execution Governed by Asynchronous Supervision of States) [4] which is assumed in this paper as a basic program control infrastructure.
In the PEGASUS framework the semantics of program execution control constructs at process and thread levels takes into account automatic monitoring of global application states. The proposed methods include new program execution paradigms and the corresponding software architectural solutions. The global flow control constructs assume modular structures of parallel programs based on the notions of processes and threads. The global control constructs logically bind program modules and define the involved control flow selectively dependent on global application states. The internal behaviour of processes and threads is also asynchronously controlled by inspecting global application states. The PEGASUS control infrastructure provides synchronizers which collect local state information from processes and threads, automatically construct global strongly consistent application states, evaluate relevant control predicates on global states and provide a distributed support for sending control Unix-type signals to distributed processes and threads to stimulate the desired control reactions. The repertoire of considered local and global states, the control predicates and the reactions to them are user programmed using a special API provided in the system.
The design of the program global execution control is graphically supported and decoupled from data processing control inside process and thread modules. The proposed global control constructs enable better verification and are less error prone.
The contribution of this paper is a general load balancing method based on the special infrastructure for monitoring of application program global states and runtime executive system behavior observation. We are interested in dynamic load balancing, where the load distribution can be changed during execution, following variations in system resource availability and/or changes in computational load. In the strategy presented in this paper we focus on data migration as the basic load balancing mechanism. The presented approach leverages earlier work reported in [2], [4]. The use of load balancing methods based on graph partitioning, as in existing load balancing libraries (METIS [8], Zoltan [5], etc.), is also possible under PEGASUS: complete graph partitioning algorithms can be embedded in the PEGASUS control infrastructure, which is fully programmable. However, this would require extensive load redistribution through numerous time-consuming distant processor thread-to-thread load transfers to follow the globally optimal work partition. In our strategy, we propose to avoid such an approach and replace it with workload distribution control, while still allowing real load migration when it is unavoidable by other means.
When we analyze features of current parallel computing environments such as CHARM++ [6] for C++ or ProActive [7] for Java, we notice the absence of any automatic infrastructure offered to a programmer to support monitoring and using global parallel application states in the design of program execution control. Both CHARM++ and ProActive introduce their own high-level parallel programming paradigms: CHARM++ programs are composed of message-driven objects called chares, while ProActive is based on the active object model. PEGASUS, at the parallel program implementation level, supports parallel programming modularity based on processes and thread blocks, and offers a unique program execution control design infrastructure based on global application states monitoring. MPI is used for message-passing-based global data communication at the thread and process levels, and OpenMP/Pthreads are used for internal process parallelism and data communication at the thread level. The PEGASUS control infrastructure and the underlying system architectural concepts are used to organize dynamic load balancing control in distributed applications.
The rest of the paper consists of three parts. In part II the proposed program execution model is described. Part III describes the proposed load balancing strategy, implemented using global application states monitoring. Part IV describes the experimental assessment of the presented algorithm.
II. APPLICATION EXECUTION MODEL
Distributed application programs based on the new control approach are composed of processes and threads inside each process. An executive system consists of multicore processor nodes interconnected by a message passing network (e.g. a cluster of workstations). The network is used for data communication at the process level. A programmer is able to control assignments of processes to processor nodes. We assume that processor nodes work under control of Linux operating system.
The PEGASUS runtime environment provides an application program designer with a control infrastructure which enables organizing program execution control based on the global application states. It uses a number of graphical and communication library mechanisms provided as a parallel program design infrastructure.
Application program execution control is organized at two layers:
(1) a global control layer, responsible for monitoring global execution states of processes or threads in application programs, computing control predicates on global states and issuing signals to application processes or threads, to stimulate desired reactions,
(2) a local control layer, which is responsible for reporting by processes and threads their local states to the global control layer as well as for organizing reactions to control signals coming from the global control layer.
Monitoring of application program global states influences program execution by acting on control inside distributed application processes or threads. Their behavior can be modified asynchronously, in a manner similar to distributed interrupts issued on the basis of global application states monitoring, or synchronously, by influencing the application's global control flow based on monitoring of global application states.
A. Asynchronous program execution control
The general scheme of the proposed control mechanism is shown in Fig. 1. Application program processes (threads) send messages on their states to special globally accessible processes (threads) called synchronizers. A synchronizer collects local state messages, determines if the application’s strongly consistent global state has been reached, evaluates control predicates and stimulates desired reactions to them in application components.
A strongly consistent global state (SCGS) means a set of fully concurrent local states detected unambiguously by a synchronizer [1]. Processor node clocks are synchronized with a known accuracy to enable the construction of strongly consistent global states by projecting the local states of all processes or threads on a common time axis and finding time intervals which are covered by local states in all participating processes or threads [3].
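The SCGS construction can be illustrated by intersecting the reported local-state time intervals. The following is a minimal sketch, assuming each process or thread reports its current local state as a `{ start, end }` interval on the synchronized common clock (clock-synchronization accuracy is ignored here):

```javascript
// A strongly consistent global state exists when every process's local-state
// interval overlaps a common time window: [max(starts), min(ends)].
// intervals: array of { start, end }. Returns the common window or null.
function scgsWindow(intervals) {
  var start = -Infinity, end = Infinity;
  for (var i = 0; i < intervals.length; i++) {
    start = Math.max(start, intervals[i].start);
    end = Math.min(end, intervals[i].end);
  }
  return (start <= end) ? { start: start, end: end } : null;
}
```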

Asynchronous control signals act on designated regions of application code. Two kinds of signal-driven reactions are provided:
- signal-driven activation, which breaks current computation and activates a reaction code associated with the region. After completion of the reaction code the broken computing resumes,
- signal-driven cancellation, which stops computation and activates a cancellation handling procedure associated with the region. Program execution resumes just after the abandoned region.
B. Control flow governed by global states monitoring
The second control mechanism involving the global state monitoring concerns defining the flow of control in distributed programs based on the global application state monitoring. The global parallel control structures provided in the system are based on PARALLEL DO (PAR) and JOIN constructs, embedded (if needed) into standard control statements of high level languages (IF, WHILE...DO, DO...UNTIL, CASE) but governed by predicates on application global states. Fig. 2 shows a global PARALLEL DO–UNTIL control construct. The predicate GP1 asynchronously controls execution of program blocks P1, ..., Pn. It receives local state messages from P1, ..., Pn. The predicate GP2 receives local state messages from program blocks P1, ..., Pn, B1, ..., Bp and sends a binary control signal to switch SW, which governs the flow of control in the PARALLEL DO–UNTIL construct.
The distributed Execution Control (EC) process in the system coordinates program execution and manages code blocks (process) creation and activation resulting from global parallel control constructs. More details on the PEGASUS framework can be found in [4].
III. LOAD BALANCING BASED ON GLOBAL STATES
A. Load balancing algorithm
The global state monitoring infrastructure of the PEGASUS environment is used as a tool to implement dynamic load balancing at the application level. The computing nodes (workstations) can be heterogeneous; moreover, they can have different and variable computing capabilities over time. A load imbalance occurs when the differences in workload between the computing nodes become too large. We distinguish two main steps in load balancing: detection of imbalance and its correction. The first step uses measurement tools to detect the functional state of the system. The second consists in migrating some load from overloaded computing nodes to underloaded ones to balance the workloads.
An intrinsic element of load balancing is the application observation mechanism. It provides knowledge of the application behavior during its execution. This knowledge is necessary to undertake adequate and optimal load balancing decisions. There are two types of measurements in the proposed load balancing method for PEGASUS environment:
- system level observations, which provide general functional indicators, e.g. CPU load, that are universal for all kinds of applications; the system measurements are implemented using the software agent approach, i.e. the load balancing mechanism being part of PEGASUS environments deploys observation agents on computing nodes.
- application specific observations, which incorporate measurements that have to be implemented in each application, as they provide information about application–dependent behavior; an example of this kind of indicator is the workload of a process (or thread) since it can depend on the volume of data to be processed in the future which is known only to application logic.
The aforementioned observation mechanisms are organized using the PEGASUS global execution state monitoring infrastructure. Application program processes and system observation agents send messages on local state changes to the load balancing synchronizer, where they are processed and appropriate reactions are computed using the method described in the next sections. Similarly, reactions are organized as asynchronous program execution control. Load balancing logic is implemented as control predicates inside a load balancing synchronizer. Algorithm 1 presents a general scheme of the proposed algorithm. The rest of this section describes the functions and symbols used in the pseudo-code of Algorithm 1.
B. Detection of load imbalance
To detect load imbalance, knowledge of the functional state of the computing nodes composing the cluster is essential. As the environment is heterogeneous, it is necessary to know not only the load of the computing nodes but also their computing power capabilities. This heterogeneity prevents us from directly comparing measurements based on program execution times taken on computing nodes whose computing powers differ. After experiments to determine the computing node power, we have found that the parameter which allows us to compare the computing nodes' load is the availability index of a CPU
Algorithm 1 General scheme of load balancing algorithm
initialize load balancing synchronizer
loop
{Global part of the algorithm}
wait for state change
store values \( \text{Ind}_{\text{avail}} \)
\[ L_1 \leftarrow \max_{n \in N}(\text{Ind}_{\text{avail}}(n)) \geq \alpha \cdot \min_{n \in N}(\text{Ind}_{\text{avail}}(n)) \]
if \( L_1 \) then
{Step 1: Classification of the computing nodes’ load}
\( N_U, N_N, N_O \leftarrow \text{classify nodes load} \) {K–Means algorithm, \( K = 3 \): underloaded, normal, overloaded}
{Local part of the algorithm}
for all \( n \in N_O \) do {in parallel}
{Step 2: Choice of candidates for migration}
\( \text{rank}_{\text{min}} \leftarrow \infty \)
for \( j \in T(n) \) do
\( \text{Rank}(j) \leftarrow \beta \cdot \text{attr} \% (j) + (1 - \beta) \cdot \text{ldev} \% (j) \)
if \( \text{Rank}(j) < \text{rank}_{\text{min}} \) then
\( \text{rank}_{\text{min}} \leftarrow \text{Rank}(j) \)
\( j_n \leftarrow j \) {candidate for migration}
end if
end for
end for
{Step 3: Selection of migration target}
for \( u \in N_U \) do
\( \text{qual}_{\text{max}} \leftarrow 0 \)
for \( n \in N_O \) do
\( \text{Quality}(j_n, u) \leftarrow \gamma \cdot \text{attr} \% (j_n, u) + (1 - \gamma) \cdot \text{Ind}_{\text{avail}}(u) \)
if \( \text{Quality}(j_n, u) > \text{qual}_{\text{max}} \) then
\( \text{qual}_{\text{max}} \leftarrow \text{Quality}(j_n, u) \)
\( \text{target}(j_n) \leftarrow u \) {target of migration}
end if
end for
send signal migrate \( (j_n \Rightarrow \text{target}(j_n)) \)
\( N_O \leftarrow N_O - \{n\} \)
end for
end if
end loop
computing power on the node \( n \):
\[ \text{Ind}_{\text{avail}}(n) = \text{Ind}_{\text{power}}(n) \cdot \text{Time}_{\text{CPU}}(n) \]
where:
\( \text{Ind}_{\text{power}}(n) \) — computing power of a node \( n \), which is the sum of computing powers of all cores on the node,
\( \text{Time}_{\text{CPU}}(n) \) — the percentage of the CPU power available for programs under load balancing on the node \( n \), periodically estimated by observation agents on computing nodes.
Some explanation is needed to clarify the way the availability index of a CPU computing power is computed. The computing power of the node is the outcome of the calibration process [10]. For each node, the calibration should be performed in a consistent way to enable comparisons of calibration results (they can be expressed in MIPS, MFLOPS, Dhrystones or similar). The calibration needs to be done only once, when a node joins the system. The percentage of the CPU power available for a single computing thread is computed as the quotient of the time during which the CPU was allocated to the probe thread against the time span of the measurement (see [10] for details and the description of the implementation technique). The \( \text{Time}_{\text{CPU}}(n) \) value is the sum of the percentages of CPU power available for a number of probe threads equal to the number of CPU cores in the node.
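As an illustration of the availability index formula above, the following minimal Python sketch combines a calibrated node power with per-core CPU time measurements. The function name and data shapes are our own, not part of the PEGASUS API.

```python
# Illustrative sketch (not the PEGASUS implementation):
# Ind_avail(n) = Ind_power(n) * Time_CPU(n)

def availability_index(ind_power, core_time_fractions):
    """ind_power: calibrated node power (e.g. in MFLOPS), summed over cores.
    core_time_fractions: fraction of CPU time available to one probe thread
    per core; Time_CPU(n) is their sum."""
    time_cpu = sum(core_time_fractions)
    return ind_power * time_cpu

# A 4-core node calibrated at 4000 MFLOPS, with cores roughly 60% available:
idx = availability_index(4000.0, [0.60, 0.60, 0.55, 0.65])
```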
A load imbalance \( LI \) is detected based on the difference between the availability index of the least heavily loaded computing node and the weighted availability index of the most heavily loaded one in the cluster, which can be determined as:
\[ LI = \begin{cases}
true & \text{if } \max_{n \in N}(\text{Ind}_{\text{avail}}(n)) - \alpha \cdot \min_{n \in N}(\text{Ind}_{\text{avail}}(n)) \geq 0 \\
false & \text{otherwise}
\end{cases} \]
where:
\( N \) — the set of all computing nodes,
\( \alpha \) — a positive constant number.
Power indications \( \text{Ind}_{\text{power}} \) and CPU time use rate \( \text{Time}_{\text{CPU}} \) are collected and sent to load balance synchronizer by local system agents as state messages.
The proper value of the \( \alpha \) coefficient can be determined using both statistical and experimental approaches. Following our previous research [11] on load balancing algorithms for Java-based distributed computing environment, we can restrict the value to the interval [1.5 ... 2.5]. It enables controlling the sensitivity (and frequency) of detection of load imbalance for small differences in computing power availability in homogeneous systems and for heterogeneous processor clusters in the case of very fast and slow CPUs appearing in the system.
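The detection predicate above can be sketched in a few lines; the function name and its default \( \alpha \) (taken from the suggested interval) are our own illustration.

```python
# Sketch of the load-imbalance predicate LI: true when
# max(Ind_avail) - alpha * min(Ind_avail) >= 0.
# The alpha default follows the [1.5 .. 2.5] interval suggested in the text.

def load_imbalance(avail_indices, alpha=2.0):
    return max(avail_indices) - alpha * min(avail_indices) >= 0

load_imbalance([900.0, 950.0, 1000.0])  # nearly balanced cluster -> False
load_imbalance([400.0, 950.0, 1000.0])  # 1000 - 2 * 400 >= 0     -> True
```

Smaller values of \( \alpha \) make the predicate more sensitive, triggering rebalancing for smaller availability differences.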
C. Correction of load imbalance
In this step we detect overloaded computing nodes and then rebalance them into the normally loaded state.
1) Classification of computing nodes: We use the K-Means algorithm [12] to build categories of computing nodes based on the computed availability indices. We classify \( n \) computing nodes into the \( K = 3 \) categories: underloaded \( (N_U) \), normally loaded \( (N_N) \) and overloaded \( (N_O) \). The three centers of these categories are values of availability indices close to the minimum, average and maximum over the whole cluster of computing nodes.
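A minimal one-dimensional K-means sketch of this classification step is given below; it initializes the three centers at the minimum, average and maximum availability index, as described, but is otherwise our own illustration rather than the cited implementation [12].

```python
# 1-D K-means (K = 3) over availability indices; illustrative only.

def classify_nodes(avail, iters=20):
    # centers start at min, mean and max of the availability indices
    centers = [min(avail), sum(avail) / len(avail), max(avail)]
    for _ in range(iters):
        groups = [[], [], []]
        for a in avail:
            k = min(range(3), key=lambda i: abs(a - centers[i]))
            groups[k].append(a)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    # a low availability index means a heavily loaded node
    n_o, n_n, n_u = groups
    return n_u, n_n, n_o  # underloaded, normal, overloaded
```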
2) Choice of candidates for migration: The loads are represented by the data processing activities of the threads which are running on computing nodes. To correct load imbalance, we have to migrate the load from overloaded computing nodes to underloaded ones. Two parameters are used to find the load that we want to migrate:
a) the attraction of a load to a computing node,
b) the weight of the load.
The attraction of a load to a computing node is expressed in terms of communication, i.e. it indicates how much a particular thread communicates with others allocated to the same node. A strong attraction means frequent communication, so, the less the load is attracted by the current computing node, the more interesting it is to be selected as a migration candidate. The computational weight of the load gives the quantity of load which could be removed from the current node and placed on another.
Both the attraction and weight are application-specific metrics, which should be provided by an application programmer in the form of state messages sent to load balance synchronizer:
1) COM($t_s, t_d$) is the communication metrics between threads $t_s$ and $t_d$.
2) WP$(t)$ is the load weight metrics of a thread $t$.
WP$(t)$ can be any measure of a thread's work, for example the number of instructions to be executed in the thread. Our strategy is to select for migration threads whose loads are close to the average thread load, so that a single thread migration does not cause dramatic load changes.
The attraction of the load $j$ to the actual computing node is defined as:
$$\text{attr}(j) = \sum_{o \in L^*(j)} (\text{COM}(j, o) + \text{COM}(o, j))$$
where:
$L^*(j)$ — the set of threads, placed on the same node as a thread $j$ (excluding $j$).
The load deviation compared to the average quantity of work of the node $j$ is defined as:
$$\text{ldev}(j) = |\text{WP}(j) - m_{WP}|$$
where:
$m_{WP} = \frac{\sum_{o \in L(j)} \text{WP}(o)}{|L(j)|}$.
$L(j)$ — the set of threads, placed on the same node as the thread $j$ (including $j$).
The element to migrate is the one for which a weighted sum of the normalized attraction and load deviation has the minimal value:
$$\text{Rank}(j) = \beta * \text{attr}^\%(j) + (1 - \beta) * \text{ldev}^\%(j)$$
where:
$$\text{attr}^\%(j) = \frac{\text{attr}(j)}{\max_{o \in L(j)}(\text{attr}(o))}$$
$$\text{ldev}^\%(j) = \frac{\text{ldev}(j)}{\max_{o \in L(j)}(\text{ldev}(o))}$$
$\beta$ — a real number between 0 and 1, whose choice remains experimental. Note, however, that the larger $\beta$ is, the greater the weight given to the load's attraction.
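Step 2 can be sketched as follows for a single overloaded node. The COM and WP values, which would arrive as application state messages, are represented here by plain dictionaries; all names are ours.

```python
# Rank(j) = beta * attr%(j) + (1 - beta) * ldev%(j); the thread with the
# minimal rank is the migration candidate. Illustrative sketch only.

def rank_candidates(threads, com, wp, beta=0.5):
    # attraction: communication with co-located threads, both directions
    attr = {j: sum(com.get((j, o), 0) + com.get((o, j), 0)
                   for o in threads if o != j) for j in threads}
    m_wp = sum(wp[t] for t in threads) / len(threads)
    ldev = {j: abs(wp[j] - m_wp) for j in threads}
    a_max = max(attr.values()) or 1  # avoid division by zero
    l_max = max(ldev.values()) or 1
    rank = {j: beta * attr[j] / a_max + (1 - beta) * ldev[j] / l_max
            for j in threads}
    return min(rank, key=rank.get)  # candidate for migration

# t3 communicates least and its load is closest to the average:
com = {("t1", "t2"): 10, ("t2", "t3"): 1}
wp = {"t1": 50, "t2": 52, "t3": 51}
candidate = rank_candidates(["t1", "t2", "t3"], com, wp)
```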
3) Selection of the target computing node for migration:
The first criterion to qualify a computing node as a migration target is the attraction of a selected load entity to this node. The attraction of the load $j$ to node $n$ is defined as follows:
$$\text{attrext}(j, n) = \sum_{e \in T(n)} (\text{COM}(e, j) + \text{COM}(j, e))$$
where:
$T(n)$ — the set of threads, placed on a node $n$.
The second criterion is based on the computing node power availability indices. We prefer the node whose availability index is the highest, because it is actually the least loaded. We also take into account the number of waiting threads on the potential targets ($T_{wait}(n)$ – the set of waiting threads on a node $n$). We treat them as potential load, which must be considered together with the load currently executed on the node. The formula to select the target for migration is as follows (all the related values are normalized into the interval $[0 \ldots 1]$):
$$\text{Quality}(j, n) = \gamma * \text{attrext}^\%(j, n) + (1 - \gamma) * \text{Ind}_{\text{avail}}^\%(n)$$
with $\gamma \in [0 \ldots 1]$ and
$$\text{attrext}^\%(j, n) = \frac{\text{attrext}(j, n)}{\max_{e \in N}(\text{attrext}(j, e))}$$
$$\text{Ind}_{\text{avail}}^\%(n) = \frac{\text{Ind}_{\text{avail}}^*(n)}{\max_{e \in N}(\text{Ind}_{\text{avail}}^*(e))}$$
$$\text{Ind}_{\text{avail}}^*(n) = \text{Ind}_{\text{avail}}(n) - \text{Ind}_{\text{avail}}(n) * \frac{|T_{wait}(n)|}{|T(n)|}$$
For a load which is a candidate for migration, we evaluate the above equations for all potential migration targets (underloaded computing nodes). The computing node which maximizes equation 3 will be chosen as the new location for the migrated load. The value of the coefficient $\gamma$ has to be determined using experimental verification.
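Step 3 can be sketched as below for one migration candidate; for simplicity the attraction is normalized over the candidate target nodes only, and all names and data shapes are our own illustration.

```python
# Quality(j, n) = gamma * attrext%(j, n) + (1 - gamma) * Ind_avail%(n),
# with the availability index discounted by the fraction of waiting threads.

def pick_target(j, underloaded, attrext, ind_avail, n_threads, n_wait,
                gamma=0.5):
    adj = {n: ind_avail[n] * (1 - n_wait[n] / n_threads[n])
           for n in underloaded}
    a_max = max(attrext[(j, n)] for n in underloaded) or 1
    i_max = max(adj.values()) or 1
    quality = {n: gamma * attrext[(j, n)] / a_max
                  + (1 - gamma) * adj[n] / i_max
               for n in underloaded}
    return max(quality, key=quality.get)  # target node for migration

# Equally available targets: the one thread t3 communicates with wins.
attrext = {("t3", "u1"): 5, ("t3", "u2"): 0}
ind_avail = {"u1": 800.0, "u2": 800.0}
target = pick_target("t3", ["u1", "u2"], attrext, ind_avail,
                     {"u1": 4, "u2": 4}, {"u1": 0, "u2": 0})
```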
D. Implementation under PEGASUS
We will illustrate now implementation of the described load balancing algorithm under PEGASUS framework, taking as an example a simple iterative application, consisting of processes $P_1 \ldots P_n$, run in parallel inside a single DO-UNTIL loop. The control flow graph of the application, including the load balancing infrastructure, is shown in Fig. 3.
The execution of load balancing is globally controlled by the synchronizer LB, assigned to a processor of the system. Each application process $P_i$ is composed of a number of application threads $T_j$ and a thread synchronizer $Th$. The thread synchronizer cooperates with the application threads and the global synchronizer LB. Th evaluates control predicates on local states transferred from application process threads. Based on the evaluated predicates, some control signals can be sent back to threads and/or some process states can be sent from process synchronizers $Th$ to the global synchronizer LB. LB evaluates some global predicates related to the global load balancing control. Based on these predicates, LB sends control signals to synchronizers $Th$ in application processes. The synchronizers $Th$ can process the received control signals and send the respective signals down to the threads which they supervise.
The progress of iterations in the application is controlled by the MoreIteration predicate in the global synchronizer LB. The MoreIteration predicate receives local states of threads via local synchronizers Th. This global predicate controls the switch SW, which directs the flow of control accordingly to the evaluated MoreIteration predicate value.
We will now explain the way in which the load balancing algorithm is implemented with the use of the infrastructure of synchronizers and local state/signal communication. Each thread synchronizer Th contains an Observation Agent, which
periodically evaluates the availability of computing power in processor nodes and sends to the global synchronizer $LB$ a local node state message which corresponds to the given node's CPU computing power availability index (see $\text{Ind}_{\text{avail}}(n)$ in section III B). $LB$ periodically receives such state messages and based on them evaluates the load imbalance predicate ($LI$) in the system (following the $LI$ formula shown in section III B). If $LI = true$, the K-Means algorithm is activated, which classifies the nodes in the system as underloaded, normally loaded and overloaded. The synchronizers $Th$ in the overloaded nodes are notified by control signals. As a reaction, the overloaded node $Th$ synchronizers activate computing the Rank Predicates to evaluate the ranks of all their threads in respect to their eligibility for load migration (see the formula for $Rank(j)$ in section III C and Algorithm 1). Next, each overloaded node $Th$ selects the thread with the minimal rank and sends the identifier of this thread to the global synchronizer $LB$ as the best candidate for load migration from the node.
Based on best candidate messages from the overloaded $Th$s, the global synchronizer $LB$ broadcasts the list of all migration candidate threads as control signals to thread synchronizers $Th$ in all underloaded nodes. The migration candidate lists activate the Quality Predicates in $Th$ synchronizers which start evaluating the quality of potentially migrated load placement on underloaded target nodes. It is done based on the reply messages to current load and communication states requests sent by $Th$s to the threads they supervise. In response to the state messages, the Quality Predicates are evaluated in $Th$s (see formula (3) in section III C part 3) for all migration candidates (threads from overloaded nodes). The node for which a candidate thread maximizes the quality of target thread placement is selected as the effective target for the thread placement and its identifier is sent by the $Th$ to the Global Quality Predicate in $LB$ synchronizer. This predicate selects pairs of overloaded/underloaded nodes for which the quality of migration is the best. As a result $LB$ sends control signals to the $Th$ synchronizers of the selected overloaded and underloaded nodes to stimulate them to activate execution of the reduction and increase of loads in processes they supervise.
The load changes are done by cancellation of some workload in the overloaded nodes and reception of this workload for execution in the underloaded nodes. Such load balancing in pairs of nodes is done until all underloaded nodes have been used. Then, the load balancing algorithm returns to waiting for a new global load imbalance, which is checked by the evaluation of the $LI$ predicate in the $LB$ synchronizer based on state messages sent by the Observation Agents in application processes in executive system nodes.
IV. EXPERIMENTAL ASSESSMENT OF THE PRESENTED LOAD BALANCING ALGORITHM
We will present now an experimental assessment of the presented load balancing algorithm. The experimental results were collected by simulated execution of application programs in a distributed system. The simulated model of execution corresponded to typical message-passing parallel applications using the MPI library for communication. The simulator was based on the DEVS discrete event system approach [13].
The application model used was similar to the model presented in [14] (Temporal Flow Graph, TFG). The application consisted of indivisible tasks (these are threads or processes of the operating system). Each task consisted of several computational blocks, separated by communication (messaging) with other tasks.
Applications run in a cluster of computing nodes. The system consisted of multi-core processors, each of which had its own main memory and a data network interface. Communication contention was modeled at the level of the network interface of each computing node.
During simulation, in parallel with the execution of the application, a dynamic load-balancing algorithm was performed. The algorithm used was the same as presented in the paper, see Algorithm 1. Computing nodes were periodically reporting their loads to the global load balancing synchronizer and then, depending on the states of the system and the application, appropriate actions were undertaken.
During experiments we used a set of 10 exemplary application programs, containing from 16 to 80 tasks. These programs were randomly generated, but their general structure corresponds to layered MPI-based parallel applications typical of numerical analysis or physical phenomena simulation. Each application program consisted of a set of phases, and each phase consisted of a fixed number of computational tasks.
We simulated execution of applications in systems with 2, 3, 4 or 8 identical computing nodes, each containing the same number of cores. Since the execution times of the same application for different runs can vary, the results are the averages of 5 runs of each application.
The summary of the average speed-up improvement resulting from load balancing performed by the algorithm presented in the paper is shown in Fig. 6. The average speed-up improvement over the execution without load balancing is substantial for both irregular and regular applications. This could be considered an unexpected result, since a good initial placement of tasks of regular applications usually can be statically calculated before application execution. The reason for the improvement is that the initial placement of tasks was far from optimal for both categories of applications, even when we used the METIS graph partitioning algorithm for its calculation. There are usually intense data dependencies between phases of applications, so the program execution scenario was effectively much improved by dynamic load balancing.
On Fig. 5(a) and 5(b) the speed-up of irregular and regular applications for different number of computing nodes is shown. Our exemplary regular applications give smaller speed-up than irregular ones (with or without load balancing).
In the case of a totally unoptimized initial placement of application tasks, the dynamic load balancing algorithm yields a large speed-up improvement, Fig. 7.
During experiments, we measured the cost of dynamic load balancing as the number of tasks migrations during the execution of applications, Fig. 8. The cost depends mainly on the quality of the initial placement of tasks and the category of an application. For poor (i.e. unoptimized) initial placement of tasks, the cost of dynamic load balancing is much higher. Moreover, the irregular applications require more frequent load balancing at the run-time, resulting from the unpredictable communication and computation scheme of their execution. However, even for the optimized initial placement of application tasks there is a need for dynamic load balancing at run-time when external execution conditions are changing (e.g. varying load of computing nodes dependent on other applications or operating system activity).
The proposed load balancing algorithm is meant for the PEGASUS environment based on global application states monitoring. The algorithm was implemented in the frame of the cooperation of application tasks with synchronizers. The tasks send load reports to synchronizers and modify their behavior in response to signals received asynchronously from the synchronizers.
V. CONCLUSIONS
Dynamic load balancing in distributed systems based on application global states monitoring has been discussed in this paper. Global control constructs for the control flow design and asynchronous state monitoring with control signal dispatching provide flexible infrastructure for application implementation and system-level optimizations. The infrastructure enables an easy and convenient design of the load balancing logic in applications. Our experimental results collected so far confirm that the presented load balancing method performs well for different run-time conditions.
The proposed PEGASUS framework is currently in the final implementation stage for a multicore processors cluster interconnected by dedicated separate networks for control and computational data communication as well as for processor clock synchronization for strongly consistent global states discovery. Inter-process control communication including Unix signal propagation between processors is organized by the use of message passing over Infiniband network. Computational data communication between processors is performed by an Ethernet network. C/C++ language with the MPI2, OpenMP and Pthreads libraries are used for writing application programs and the framework control code.
Important features of the load balancing algorithm implemented under PEGASUS are the expected low overheads, the ease of programming and tuning the load balancing algorithms, as well as the ability to organize load balancing in a distributed manner due to the ready-to-use infrastructure of asynchronous control based on global application states monitoring. The communication overhead of the load balancing algorithms can be strongly reduced due to the use, in the system infrastructure, of totally separate program layers and separate physical networks for control communication and data communication in applications. Activities of load balancing actions can be almost completely overlapped with application computations due to the assumed asynchronous type of control and the possible use of dedicated resources for load balancing control computations (the use of separate synchronizer threads assigned to separate processor cores).
An interesting topic of further research, which we were unable so far to cover, is the periodic use of the METIS algorithm to support load balancing by program graph partitioning, which can be easily embedded inside synchronizers activities.
This research was partially supported by the national MNiSW research grant No. NN 516 367 536.
REFERENCES
Oracle® Call Interface
Getting Started
Release 9.0.1 for Windows
June 2001
Part No. A90166-01
### Contents
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Send Us Your Comments</td>
<td>v</td>
</tr>
<tr>
<td>Preface</td>
<td>vii</td>
</tr>
<tr>
<td>Audience</td>
<td>viii</td>
</tr>
<tr>
<td>Organization</td>
<td>viii</td>
</tr>
<tr>
<td>Related Documentation</td>
<td>viii</td>
</tr>
<tr>
<td>Conventions</td>
<td>ix</td>
</tr>
<tr>
<td>Documentation Accessibility</td>
<td>xiv</td>
</tr>
<tr>
<td>What’s New in Oracle Call Interface?</td>
<td>xv</td>
</tr>
<tr>
<td>Oracle9i Release 1 (9.0.1) New Features in Oracle Call Interface</td>
<td>xvi</td>
</tr>
<tr>
<td>Oracle8i Release 1 (8.1.5) New Features in Oracle Call Interface</td>
<td>xvi</td>
</tr>
<tr>
<td>OCI Release 7.x Functions</td>
<td>xvi</td>
</tr>
</tbody>
</table>
1. **Introduction to Oracle Call Interface**
- What is the Oracle Call Interface? 1-2
- What is Included in the OCI Package? 1-2
- Oracle Directory Structure 1-2
- Sample Programs 1-3
2. **Building OCI Applications**
- Writing OCI Applications 2-2
- Compiling OCI Applications 2-2
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linking OCI Applications</td>
<td>2-3</td>
</tr>
<tr>
<td>oci.lib</td>
<td>2-3</td>
</tr>
<tr>
<td>Client DLL Loading When Using LoadLibrary()</td>
<td>2-4</td>
</tr>
<tr>
<td>Running OCI Applications</td>
<td>2-4</td>
</tr>
<tr>
<td>The Oracle XA Library</td>
<td>2-4</td>
</tr>
<tr>
<td>Compiling and Linking an OCI Program with the Oracle XA Library</td>
<td>2-5</td>
</tr>
<tr>
<td>Using XA Dynamic Registration</td>
<td>2-5</td>
</tr>
<tr>
<td>Adding an Environmental Variable for the Current Session</td>
<td>2-6</td>
</tr>
<tr>
<td>Adding a Registry Variable for All Sessions</td>
<td>2-6</td>
</tr>
<tr>
<td>XA and TP Monitor Information</td>
<td>2-7</td>
</tr>
<tr>
<td>Using the Object Type Translator and the INTYPE File Assistant</td>
<td>2-7</td>
</tr>
</tbody>
</table>
Oracle Corporation welcomes your comments and suggestions on the quality and usefulness of this document. Your input is an important part of the information used for revision.
- Did you find any errors?
- Is the information clearly presented?
- Do you need more information? If so, where?
- Are the examples correct? Do you need more examples?
- What features did you like most?
If you find any errors or have any other suggestions for improvement, please indicate the document title and part number, and the chapter, section, and page number (if available). You can send comments to us in the following ways:
- E-mail: ntdoc_us@oracle.com
- FAX - (650) 506-7365 Attn: Oracle Database for Windows Documentation
- Postal service:
Oracle Corporation
Oracle Database for Windows Documentation Manager
500 Oracle Parkway, Mailstop 1op6
Redwood Shores, CA 94065
USA
If you would like a reply, please give your name, address, telephone number, and (optionally) electronic mail address.
If you have problems with the software, please contact your local Oracle Support Services. Contact information for Oracle Support Services is available at this Web site:
http://www.oracle.com/support/
This guide provides introductory information for the Oracle Call Interface (OCI) running on Microsoft Windows NT, Windows 95/98, and Windows 2000.
This preface contains these topics:
- Audience
- Organization
- Related Documentation
- Conventions
- Documentation Accessibility
Audience
Oracle Call Interface Getting Started for Windows is intended for developers who create applications written in C that interact with one or more Oracle Servers.
To use this document, you need to know:
■ How to compile and link a C program.
■ Your Microsoft Windows operating system.
Organization
This document contains:
Chapter 1, "Introduction to Oracle Call Interface"
Provides introductory information to help you get started with the OCI.
Chapter 2, "Building OCI Applications"
Provides an overview of how to build Oracle database applications using OCI.
Related Documentation
For more information, see these Oracle resources:
■ Oracle9i Database installation guide for Windows
■ Oracle9i Database release notes for Windows
■ Oracle9i Database Administrator’s Guide for Windows
■ Oracle Enterprise Manager Administrator’s Guide
■ Oracle9i Net Services Administrator’s Guide
■ Oracle9i Real Application Clusters Concepts
■ Oracle9i Database New Features
■ Oracle9i Database Concepts
■ Oracle9i Database Reference
■ Oracle9i Database Error Messages
■ Oracle Call Interface Programmer’s Guide
In North America, printed documentation is available for sale in the Oracle Store at
Customers in Europe, the Middle East, and Africa (EMEA) can purchase documentation from
http://www.oraclebookshop.com/
Other customers can contact their Oracle representative to purchase printed documentation.
To download free release notes, installation documentation, white papers, or other collateral, please visit the Oracle Technology Network (OTN). You must register online before using OTN; registration is free and can be done at
http://technet.oracle.com/membership/index.htm
If you already have a username and password for OTN, then you can go directly to the documentation section of the OTN Web site at
http://technet.oracle.com/docs/index.htm
## Conventions
This section describes the conventions used in the text and code examples of this documentation set. It describes:
- Conventions in Text
- Conventions in Code Examples
- Conventions for Windows Operating Systems
### Conventions in Text
We use various conventions in text to help you more quickly identify special terms. The following table describes those conventions and provides examples of their use.
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bold</td>
<td>Bold typeface indicates terms that are defined in the text or terms that appear in a glossary, or both.</td>
<td>When you specify this clause, you create an index-organized table.</td>
</tr>
</tbody>
</table>
Conventions in Code Examples
Code examples illustrate SQL, PL/SQL, SQL*Plus, or other command-line statements. They are displayed in a monospace (fixed-width) font and separated from normal text as shown in this example:
```
SELECT username FROM dba_users WHERE username = 'MIGRATE';
```
The following table describes typographic conventions used in code examples and provides examples of their use.
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><em>Italics</em></td>
<td>Italic typeface indicates book titles or emphasis.</td>
<td><em>Oracle9i Database Concepts</em></td>
</tr>
<tr>
<td><em>UPPERCASE monospace</em></td>
<td>Uppercase monospace typeface indicates elements supplied by the system. Such elements include parameters, privileges, datatypes, RMAN keywords, SQL keywords, SQL*Plus or utility commands, packages and methods, as well as system-supplied column names, database objects and structures, usernames, and roles.</td>
<td>You can specify this clause only for a <em>NUMBER</em> column.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>You can back up the database by using the <em>BACKUP</em> command.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Query the <em>TABLE_NAME</em> column in the <em>USER_TABLES</em> data dictionary view.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Use the <em>DBMS_STATS.GENERATE_STATS</em> procedure.</td>
</tr>
<tr>
<td><em>lowercase monospace</em></td>
<td>Lowercase monospace typeface indicates executables, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names, and connect identifiers, as well as user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values.</td>
<td>Enter <em>sqlplus</em> to open SQL*Plus.</td>
</tr>
<tr>
<td><em>lowercase monospace italic</em></td>
<td>Lowercase monospace italic font represents placeholders or variables.</td>
<td>The password is specified in the <em>orapwd</em> file.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Back up the datafiles and control files in the <em>/disk1/oracle/dbs</em> directory.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>The <em>department_id</em>, <em>department_name</em>, and <em>location_id</em> columns are in the <em>hr.departments</em> table.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Set the <em>QUERY_REWRITE_ENABLED</em> initialization parameter to <em>true</em>.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Connect as <em>oe</em> user.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>The <em>JRepUtil</em> class implements these methods.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>You can specify the <em>parallel_clause</em>.</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Run <em>Old_release.SQL</em> where <em>old_release</em> refers to the release you installed prior to upgrading.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>[ ]</td>
<td>Brackets enclose one or more optional items. Do not enter the brackets.</td>
<td>DECIMAL (digits [ , precision ])</td>
</tr>
<tr>
<td>{ }</td>
<td>Braces enclose two or more items, one of which is required. Do not enter the braces.</td>
<td>{ENABLE | DISABLE}</td>
</tr>
<tr>
<td>|</td>
<td>A vertical bar represents a choice of two or more options within brackets or braces. Enter one of the options. Do not enter the vertical bar.</td>
<td>{ENABLE | DISABLE}</td>
</tr>
<tr>
<td>...</td>
<td>Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code.</td>
<td>CREATE TABLE ... AS subquery;</td>
</tr>
<tr>
<td></td>
<td></td>
<td>SELECT col1, col2, ..., coln FROM employees;</td>
</tr>
<tr>
<td>.<br>.<br>.</td>
<td>Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.</td>
<td></td>
</tr>
<tr>
<td>Other notation</td>
<td>You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.</td>
<td>acctbal NUMBER(11,2);</td>
</tr>
<tr>
<td></td>
<td></td>
<td>acct CONSTANT NUMBER(4) := 3;</td>
</tr>
<tr>
<td>Italics</td>
<td>Italicized text indicates placeholders or variables for which you must supply particular values.</td>
<td>CONNECT SYSTEM/system_password</td>
</tr>
<tr>
<td></td>
<td></td>
<td>DB_NAME = database_name</td>
</tr>
<tr>
<td>UPPERCASE</td>
<td>Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.</td>
<td>SELECT last_name, employee_id FROM employees;</td>
</tr>
<tr>
<td></td>
<td></td>
<td>SELECT * FROM USER_TABLES;</td>
</tr>
<tr>
<td></td>
<td></td>
<td>DROP TABLE hr.employees;</td>
</tr>
<tr>
<td>lowercase</td>
<td>Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. <strong>Note:</strong> Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.</td>
<td>SELECT last_name, employee_id FROM employees;</td>
</tr>
<tr>
<td></td>
<td></td>
<td>sqlplus hr/hr</td>
</tr>
<tr>
<td></td>
<td></td>
<td>CREATE USER mjones IDENTIFIED BY ty3MU9;</td>
</tr>
</tbody>
</table>
**Conventions for Windows Operating Systems**
The following table describes conventions for Windows operating systems and provides examples of their use.
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Choose Start ></td>
<td>How to start a program. For example, to start Oracle Database Configuration Assistant, you must click the Start button on the taskbar and then choose Programs > Oracle - HOME_NAME > Database Administration > Database Configuration Assistant.</td>
<td>Choose Start > Programs > Oracle - HOME_NAME > Database Administration > Database Configuration Assistant</td>
</tr>
<tr>
<td>C:\></td>
<td>Represents the Windows command prompt of the current hard disk drive. Your prompt reflects the subdirectory in which you are working. Referred to as the command prompt in this guide.</td>
<td>C:\oracle\oradata></td>
</tr>
<tr>
<td>HOME_NAME</td>
<td>Represents the Oracle home name. The home name can be up to 16 alphanumeric characters. The only special character allowed in the home name is the underscore.</td>
<td>C:\> net start OracleHOME_NAMETNSListener</td>
</tr>
</tbody>
</table>
In releases prior to 8.1, when you installed Oracle components, all subdirectories were located under a top level `ORACLE_HOME` directory that by default was:
- `C:\orant` for Windows NT
- `C:\orawin95` for Windows 95
- `C:\orawin98` for Windows 98
or whatever you called your Oracle home.
In this Optimal Flexible Architecture (OFA)-compliant release, subdirectories are no longer located under a single top-level `ORACLE_HOME` directory. Instead, there is a top-level directory called `ORACLE_BASE` that by default is `C:\oracle`. If you install release 9.0 on a computer with no other Oracle software installed, the default setting for the first Oracle home directory is `C:\oracle\ora90`. The Oracle home directory is located directly under `ORACLE_BASE`.
All directory path examples in this guide follow OFA conventions.
See Oracle9i Database Getting Started for Windows for additional information on OFA compliances and for information on installing Oracle products in non-OFA compliant directories.
<table>
<thead>
<tr>
<th>Convention</th>
<th>Meaning</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>ORACLE_HOME</code> and <code>ORACLE_BASE</code></td>
<td>Represent the Oracle home and Oracle base directories described above. Directory path examples in this guide are given relative to these directories.</td>
<td>Go to the <code>ORACLE_BASE\ORACLE_HOME\rdbms\admin</code> directory.</td>
</tr>
</tbody>
</table>
Documentation Accessibility
Oracle’s goal is to make our products, services, and supporting documentation accessible to the disabled community with good usability. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For additional information, visit the Oracle Accessibility Program Web site at http://www.oracle.com/accessibility/
JAWS, a Windows screen reader, may not always correctly read the code examples in this document. The conventions for writing code require that closing braces should appear on an otherwise empty line; however, JAWS may not always read a line of text that consists solely of a bracket or brace.
What’s New in Oracle Call Interface?
The following sections describe the new features in Oracle Call Interface:
- Oracle9i Release 1 (9.0.1) New Features in Oracle Call Interface
- Oracle8i Release 1 (8.1.5) New Features in Oracle Call Interface
- OCI Release 7.x Functions
Oracle9i Release 1 (9.0.1) New Features in Oracle Call Interface
This section contains these topics:
■ Borland Support
Oracle Corporation only ships an import library, oci.lib, for use with the Microsoft Compiler. Other compilers, for example, Borland, though likely compatible with the Oracle DLLs, are not tested and supported by Oracle for use with Oracle Call Interface.
■ Using Oracle9i on Windows 2000
There are some differences between using Oracle9i on Windows 2000 and Windows NT 4.0.
See Also: Oracle9i Database Getting Started for Windows
Oracle8i Release 1 (8.1.5) New Features in Oracle Call Interface
OCI includes many new functions and performance enhancements that extend the capabilities of the OCI to handle objects in an Oracle8i database. To use object functionality, you must have installed Oracle8i Enterprise Edition.
For Windows platforms, OCI includes support for applications written with earlier releases (7.x/8.x) of OCI. Oracle has now removed any version number from the library name oci.lib.
OCI Release 7.x Functions
OCI functions available in Release 7.x are still available, but they are not able to take full advantage of new Oracle8i features. Oracle recommends that existing applications start using the new calls to improve performance and provide increased functionality.
For Win32 applications running on Windows NT or Windows 95/98, this means that these applications will need to migrate to the new Release 8.x OCI calls in order to continue to be supported. In Release 8.x, the library and DLL containing the OCI calls is named oci.lib and oci.dll. In Release 7.x, they were named ociw32.lib and ociw32.dll. At some point in the future, ociw32.lib and ociw32.dll will no longer be supported or released, making migration to the new calls mandatory.
Introduction to Oracle Call Interface
This chapter provides introductory information to help you get started with Oracle Call Interface (OCI) for Windows.
This chapter contains these topics:
■ What is the Oracle Call Interface?
■ What is Included in the OCI Package?
■ Oracle Directory Structure
■ Sample Programs
See Also: For detailed information about OCI, including new features and function descriptions, see the Oracle Call Interface Programmer’s Guide.
What is the Oracle Call Interface?
The Oracle Call Interface (OCI) is an application programming interface (API) that allows applications written in C to interact with one or more Oracle Servers. OCI gives your programs the capability to perform the full range of database operations that are possible with Oracle9i database, including SQL statement processing and object manipulation.
What is Included in the OCI Package?
The Oracle Call Interface for Windows package includes:
- Oracle Call Interface
- Required Support Files (RSFs)
- Oracle Universal Installer
- Header files for compiling OCI applications
- Library files for linking OCI applications
- Sample programs for demonstrating how to build OCI applications
The OCI for Windows package includes the additional libraries required for linking your OCI programs on Windows NT, Windows 2000, and Windows 95/98.
Oracle Directory Structure
When you install the Oracle Call Interface for Windows, Oracle Universal Installer creates an `ORACLE_BASE\ORACLE_HOME` directory on the hard drive of your computer. The default Oracle home directory is `C:\oracle\ora90`.
The OCI files are located in the `ORACLE_BASE\ORACLE_HOME` directory, as are the library files needed to link and run OCI applications, and link with other Oracle for Windows NT products, such as Oracle Forms.
The `ORACLE_BASE\ORACLE_HOME` directory contains the following directories that are relevant to OCI:
<table>
<thead>
<tr>
<th>Directory Name</th>
<th>Contents</th>
</tr>
</thead>
<tbody>
<tr>
<td>\bin</td>
<td>Executable and help files</td>
</tr>
<tr>
<td>\oci</td>
<td>Oracle Call Interface directory for Windows files</td>
</tr>
</tbody>
</table>
Sample Programs
When OCI is installed, a set of sample programs and their corresponding project files are copied to the \ORACLE_BASE\ORACLE_HOME\oci\samples subdirectory. Oracle Corporation recommends that you build and run these sample programs to verify that OCI has been successfully installed and to familiarize yourself with the steps involved in developing OCI applications.
To build a sample, run a batch file (make.bat) at the command prompt. For example, to build the cdemo1.c sample, enter the following command:
C:\> make cdemo1
After you finish using these sample programs, you can delete them if you choose.
A sample OCI application specific to Windows platforms is also included. cdemomt.c demonstrates OCI multithreading, the thread safety feature of Oracle9i, on the Windows platforms. This sample program requires the emp table from the default database. The program spawns two simultaneous threads that attempt to insert different employee names with the same ID numbers. Thread synchronization is demonstrated.
ociucb.c should be compiled using ociucb.bat. This batch file creates a DLL and places it in the \ORACLE_BASE\ORACLE_HOME\bin directory. To load user callback functions, set the environment/registry variable ORA_OCI_UCBPKG = OCIUCB.
See Also: For more information on multithreading, see the Oracle Call Interface Programmer’s Guide.
Building OCI Applications
This chapter provides an overview of how to build Oracle database applications using OCI.
This chapter contains these topics:
- Writing OCI Applications
- Compiling OCI Applications
- Linking OCI Applications
- The Oracle XA Library
- Using the Object Type Translator and the INTYPE File Assistant
See Also: See the Oracle Call Interface Programmer’s Guide for detailed information about writing OCI applications.
Writing OCI Applications
The general goal of an OCI application is to connect to an Oracle Server, engage in some sort of data exchange, and perform necessary data processing. While some flexibility exists in the order in which specific tasks can be performed, every OCI application must accomplish particular steps.
The basic programming structure used by the OCI is as follows:
1. Initialize the OCI programming environment and processes.
2. Allocate necessary handles, and establish a server connection and a user session.
3. Issue SQL statements to the server, and perform necessary application data processing.
4. Free statements and handles that are not to be reused, reexecute prepared statements, or prepare new statements.
5. Terminate user session and server connection.
Note: The initialization of an OCI environment in Shared Data Mode that is discussed in the Oracle Call Interface Programmer’s Guide is not supported on Windows.
Compiling OCI Applications
When you compile an OCI application, you must include the appropriate OCI header files. The header files are located in the \ORACLE_BASE\ORACLE_HOME\oci\include directory.
For example, if you are using Microsoft Visual C++ 6.0, you would need to put in the appropriate path in the Directories page of the Options dialog in the Tools menu. See Figure 2-1, "Directories Tab of the Options Dialog".
Linking OCI Applications
The OCI calls are implemented in dynamic link libraries (DLLs) that Oracle provides. The DLLs are located in the `ORACLE_BASE\ORACLE_HOME\bin` directory and are part of the Required Support Files (RSFs).
To use the Oracle DLLs to make OCI calls, you can either dynamically load the DLL and function entry points, or you can link your application with the import library `oci.lib`. Oracle Corporation only provides the `oci.lib` import library for use with the Microsoft Compiler. Other compilers, though likely compatible with the Oracle DLLs, are not tested and supported by Oracle for use with OCI.
When using `oci.lib` with the Microsoft Compiler, you do not have to indicate any special link options.
`oci.lib`
`oci.lib` is a single, programmatic interface to Oracle. Oracle has removed any version number from the library name.
Client DLL Loading When Using LoadLibrary()
The following directories are searched in this order by LoadLibrary:
- Directory from which the application is loaded
- Current directory
- Windows NT or Windows 2000:
- 32-bit Windows system directory (system32). Use the GetWindowsDirectory function to obtain the path of this directory.
- 16-bit Windows directory (system). There is no Win32 function that obtains the path of this directory, but it is searched.
- Windows 95 or Windows 98:
- Windows directory. Use the GetWindowsDirectory function to obtain the path of this directory.
- Directories that are listed in the PATH environment variable
Running OCI Applications
To run an OCI application, ensure that the entire corresponding set of RSFs is installed on the computer that is running your OCI application.
The Oracle XA Library
The XA Application Program Interface (API) is typically used to enable an Oracle9i database to interact with a transaction processing (TP) monitor, such as:
- BEA Tuxedo
- IBM Transarc Encina
- IBM CICS
You can also use TP monitor statements in your client programs. The use of the XA API is supported from OCI.
The Oracle XA Library is automatically installed as part of Oracle9i Enterprise Edition. The following components are created in your Oracle home directory:
Compiling and Linking an OCI Program with the Oracle XA Library
To compile and link an OCI program:
1. Compile program.c by using Microsoft Visual C++, making sure to include ORACLE_BASE\ORACLE_HOME\rdbms\xa in your path.
2. Link program.obj with the following libraries:
<table>
<thead>
<tr>
<th>Library</th>
<th>Located in...</th>
</tr>
</thead>
<tbody>
<tr>
<td>oraxa9.lib</td>
<td>ORACLE_BASE\ORACLE_HOME\rdbms\xa</td>
</tr>
<tr>
<td>oci.lib</td>
<td>ORACLE_BASE\ORACLE_HOME\oci\lib\msvc</td>
</tr>
</tbody>
</table>
3. Run program.exe.
Using XA Dynamic Registration
The Oracle9i database supports the use of XA dynamic registration. XA dynamic registration improves the performance of applications interfacing with XA-compliant TP monitors. For TP Monitors to use XA dynamic registration with an Oracle database on Windows NT, you must add either an environmental variable or a registry variable to the Windows NT computer on which your TP monitor is running. See either of the following sections for instructions:
- Adding an Environmental Variable for the Current Session
- Adding a Registry Variable for All Sessions
Adding an Environmental Variable for the Current Session
Adding an environmental variable at the command prompt affects only the current session.
To add an environmental variable:
From the computer where your TP monitor is installed, enter the following at the command prompt:
C:\> set ORA_XA_REG_DLL=vendor.dll
where vendor.dll is the TP monitor DLL provided by your vendor.
Adding a Registry Variable for All Sessions
Adding a registry variable affects all sessions on your Windows NT computer. This is useful for computers where only one TP monitor is running.
To add a registry variable:
1. Go to the computer where your TP monitor is installed.
2. On Windows NT or Windows 2000, enter the following at the command prompt:
C:\> regedt32
On Windows 95/98, enter:
C:\> regedit
The Registry Editor window appears.
3. Go to HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME ID.
4. Choose Add Value from the Edit menu. The Add Value dialog box appears.
5. Enter ORA_XA_REG_DLL in the Value Name text box.
6. Select REG_EXPAND_SZ from the Data Type list box.
7. Choose OK. The String Editor dialog box appears.
8. Type vendor.dll in the String field, where vendor.dll is the TP monitor DLL provided by your vendor.
9. Choose OK. The Registry Editor adds the parameter.
10. Choose Exit from the Registry menu.
The Registry Editor exits.
**XA and TP Monitor Information**
Refer to the following general information about XA and TP monitors:
- *Distributed TP: The XA Specification* (C193) published by the Open Group. See the Web site at
- The Open Group, 1010 El Camino Real, Suite 380, Menlo Park, CA 94025, U.S.A.
- Your specific TP monitor documentation
**See Also:** For more information about the Oracle XA Library and using XA dynamic registration, see *Oracle9i Application Developer’s Guide - Fundamentals*.
**Using the Object Type Translator and the INTYPE File Assistant**
The Object Type Translator (OTT) is used to create C-struct representations of Abstract Data Types that have been created and stored in an Oracle9i database.
To take advantage of objects, you run OTT against the database, and a header file that includes the C structs is generated. For example, if a PERSON type has been created in the database, OTT can generate a C struct with elements corresponding to the attributes of PERSON. In addition, a null indicator struct is created that represents null information for an instance of the C struct.
The INTYPE file tells the OTT which object types should be translated. This file also controls the naming of the generated structs. The INTYPE File Assistant is a wizard that helps developers to create the INTYPE file.
Note that the `CASE` specification inside the INTYPE file, such as `CASE=LOWER`, applies only to C identifiers that are not explicitly listed through a `TYPE` or `TRANSLATE` statement in the INTYPE file. It is therefore important to provide type names with the appropriate case, such as `TYPE Person` or `TYPE PeRsOn`, in the INTYPE file.
The INTYPE File Assistant generates type names in the INTYPE file with the same case as in the database. By default, all of the types in the database are created in upper case.
In order to preserve the case, use double quotes when creating types in the database. For example:
```
CREATE TYPE "PeRsOn" AS OBJECT;
```
Object type dependencies are not checked by the Oracle INTYPE File Assistant. When adding an object type for inclusion in the INTYPE file, the INTYPE File Assistant does not add other object types with dependency relationships.
The INTYPE File Assistant requires explicit translations for object types or attributes whose names contain non-ASCII characters. These object types or attributes are indicated by the predefined tag Identifier in the fields where the translations would be entered. Users are required to override this tag with the C identifier translation for the corresponding object type or attribute. The INTYPE File Assistant does not create the INTYPE file until all required translations have been entered.
OTT on Windows NT can be invoked from the command line. Additionally, a configuration file may be named on the command line. For Windows NT, the configuration file is `ottcfg.cfg`, located in `ORACLE_BASE\ORACLE_HOME\precomp\admin`.
**Additional Information:** See the *Oracle Call Interface Programmer’s Guide* for more information about OTT and INTYPE files. In addition, see the online help for OTT.
B
bin directory, 1-2
Borland support, xvi
building OCI applications, 2-1
C
cdemomt.c, 1-3
compiling
OCI applications, 2-2
OCI with Oracle XA, 2-5
Oracle XA Library, 2-4
configuration files, 1-3
location, 1-3
D
demonstration programs, 1-3
directory structures, 1-2
dynamic registration
Oracle XA Library, 2-5
E
EMP table, 1-3
F
features
new, xv
G
generic documentation references
compiling and linking OCI applications, 2-2, 2-3
demonstration programs, 1-3
invoking OTT from the command line, 2-8
OTT configuration file, 2-8
thread safety, 1-3
XA linking file names
H
header files
location of, 1-3, 2-2
I
include directory, 1-3
INTYPE File Assistant, 2-7
L
libraries
oci.lib, 2-3
linking
OCI applications, 2-3
OCI with Oracle XA, 2-5
Oracle XA Library, 2-4
LoadLibrary, 2-4
M
make.bat, 1-3
multithreading, 1-3
O
Object Type Translator (OTT), 2-7
OCI
building applications, 2-1
new features, xv
Oracle XA Library, 2-5
overview, 1-2
release 7.x functions, xvi
sample programs, 1-3
OCI applications
compiling, 2-2
linking, 2-3
running, 2-4
writing, 2-2
oci directory, 1-2
oci.dll, xvi
oci.lib, xvi, 2-3
ociw32.dll, xvi
ociw32.lib, xvi
Oracle Call Interface. See OCI
Oracle XA Library
additional documentation, 2-7
compiling and linking an OCI program, 2-5
dynamic registration, 2-5
functions, 2-4
overview, 2-4
Oracle9i database
transaction processing monitor, 2-4
OTT (Object Type Translator), 2-7
ottcfg.cfg, 1-3
R
registry
REGEDT32, 2-6
required support files, 1-2
RSFs, 1-2
running OCI applications, 2-4
S
sample programs, 1-3
samples directory, 1-3
shared data mode, 2-2
T
transaction processing monitor
additional documentation, 2-7
interacting with Oracle9i database, 2-4
types, 2-4
W
writing OCI applications, 2-2
X
XA. See Oracle XA Library