Graph Computation Models
Selected Revised Papers from GCM 2014
More on Graph Rewriting With Contextual Refinement
Berthold Hoffmann
Fachbereich Mathematik und Informatik, Universität Bremen, Germany
Abstract: In GRGEN, a graph rewrite generator tool, rules have the outstanding feature that variables in their pattern and replacement graphs may be refined with meta-rules based on contextual hyperedge replacement grammars. A refined rule may delete, copy, and transform subgraphs of unbounded size and of variable shape. In this paper, we show that rules with contextual refinement can be transformed to standard graph rewrite rules that perform the refinement incrementally, and are applied according to a strategy called residual rewriting. With this transformation, it is possible to state precisely whether refinements can be determined in finitely many steps or not, and whether refinements are unique for every form of refined pattern or not.
Keywords: graph rewriting – rule rewriting – contextual hyperedge replacement
1 Introduction
Everywhere in computer science and beyond, one finds systems with a structure represented by graph-like diagrams, whose behavior is described by incremental transformation. Model-driven software engineering is a prominent example of an area where this way of system description is very popular. Graph rewriting, a branch of theoretical computer science that emerged in the seventies of the last century [EPS73], is a formalism of choice for specifying such systems in an abstract way [MEDJ05]. Graph rewriting has a well-developed theory [EEPT06] that gives a precise meaning to such specifications. It also makes it possible to study fundamental properties, such as termination and confluence. Over the last decades, various tools have been developed that generate (prototype) implementations for graph rewriting specifications. Some of them also support the analysis of specifications: AGG [ERT99] can determine confluence of a set of rules by analyzing finitely many critical pairs [Plu93], and GROOVE [Ren04] can explore the state space of specifications.
This work relates to GRGEN, an efficient graph rewrite generator [BGJ06] developed at Karlsruhe Institute of Technology. Edgar Jakumeit later extended the rules of this tool substantially by introducing recursive refinement for sub-rules and application conditions [HJG08]. A single refined rule can match, delete, replicate, and transform subgraphs of unbounded size and variable shape. These rules have motivated the research presented in this paper: the standard theory [EEPT06] does not cover recursive refinement, so such rules cannot be analyzed for properties like termination and confluence, and tool support for these questions cannot be provided.
Our ultimate goal is to lift results concerning confluence to rules with recursive refinement. So we formalize refinement by combining concepts of the existing theory, on two levels: We define a GRGEN rule to be a schema – a plain rule containing variables. On the meta-level, a schema is refined by replacing variables by sub-rules, using meta-rules based on contextual
hyperedge replacement [DHM12]. Refined rules then perform the rewriting on the object level. This mechanism is simple enough for formal investigation. For instance, properties of refined rules can be studied by induction over the meta-rules. Earlier work [Hof13] has already laid the foundations for modeling refinement. Here we study conditions under which the refinement behaves well, translate these rules into standard rules that perform the refinement in an incremental fashion, using a specific strategy called residual rewriting, and show that the translation is correct.
The examples in this paper arise in the area of model-driven software engineering. Refactoring is intended to improve the structure of object-oriented software without changing its behavior. Graphs are a straightforward representation for the syntax and semantic relationships of object-oriented programs (and also of models). Many of the basic refactoring operations proposed by Fowler [Fow99] require matching, deleting, copying, or restructuring program fragments of unbounded size and variable shape. Several plain rules are needed to specify such an operation, and they have to be controlled in a rather delicate way in order to perform it correctly. In contrast, we shall see that a single rule schema with appropriate meta-rules suffices to specify it, in a completely declarative way.
The paper is organized as follows. The next section defines graphs, plain rules for graph rewriting, and contextual rules for deriving languages of graphs. In Sect. 3 we define schemata, meta-rules, and the refinement of schemata by applying meta-rules to them, and state under which conditions refinements can be determined in finitely many steps, and the replacements of refined rules are uniquely determined by their patterns. In Sect. 4, we translate schemata and meta-rules to standard graph rewrite rules, and show that the translation is correct. We conclude by indicating future work, in Sect. 5. The appendix recalls some facts about graph rewriting.
2 Graphs, Rewriting, and Contextual Grammars
We define labeled graphs wherein edges may not just connect two nodes – a source to a target – but any number of nodes. Such graphs are known as hypergraphs in the literature [DHK97].
**Definition 1 (Graph)** Let $\Sigma = (\bar{\Sigma}, \breve{\Sigma})$ be a pair of finite sets containing symbols.
A graph $G = (\breve{G}, \bar{G}, att, \ell)$ consists of two disjoint finite sets $\breve{G}$ of nodes and $\bar{G}$ of edges, a function $att: \bar{G} \to \breve{G}^\ast$ that attaches sequences of nodes to edges,\(^1\) and a pair $\ell = (\breve{\ell}, \bar{\ell})$ of labeling functions $\breve{\ell}: \breve{G} \to \breve{\Sigma}$ for nodes and $\bar{\ell}: \bar{G} \to \bar{\Sigma}$ for edges. We will often refer to the attachment and labeling functions of a graph $G$ by $att_G$ and $\ell_G$, respectively.
A (graph) morphism $m: G \to H$ is a pair $m = (\breve{m}, \bar{m})$ of functions $\breve{m}: \breve{G} \to \breve{H}$ and $\bar{m}: \bar{G} \to \bar{H}$ that preserve attachments and labels: $att_H \circ \bar{m} = \breve{m}^\ast \circ att_G$, $\breve{\ell}_H \circ \breve{m} = \breve{\ell}_G$, and $\bar{\ell}_H \circ \bar{m} = \bar{\ell}_G$.\(^2\) The morphism $m$ is injective, surjective, and bijective if its component functions have the respective property. If $\breve{G} \subseteq \breve{H}$, $\bar{G} \subseteq \bar{H}$, $m$ is injective, and maps nodes and edges of $G$ onto themselves, this defines the inclusion of $G$ as a subgraph in $H$, written $G \hookrightarrow H$. If $m$ is bijective, we call $G$ and $H$ isomorphic, and write $G \cong H$.
---
1. $A^\ast$ denotes the set of finite sequences over a set $A$; the empty sequence is denoted by $\varepsilon$.
2. For a function $f: A \to B$, its extension $f^*: A^\ast \to B^\ast$ to sequences $A^\ast$ is defined by $f^*(a_1 \ldots a_n) = f(a_1) \ldots f(a_n)$, for all $a_i \in A$, $1 \leq i \leq n$, $n \geq 0$; $f \circ g$ denotes the composition of functions or morphisms $f$ and $g$.
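To make Definition 1 concrete, the following minimal sketch represents labeled hypergraphs as dictionaries and checks the morphism conditions stated above; all names (Graph, is_morphism, the node and edge identifiers) are illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass

@dataclass
class Graph:
    nodes: dict   # node id -> node label
    edges: dict   # edge id -> (edge label, tuple of attached node ids)

def is_morphism(g: Graph, h: Graph, node_map: dict, edge_map: dict) -> bool:
    """Check that the pair (node_map, edge_map) preserves labels and attachments."""
    for v, label in g.nodes.items():
        if h.nodes.get(node_map[v]) != label:                  # node labels preserved
            return False
    for e, (label, att) in g.edges.items():
        label_h, att_h = h.edges[edge_map[e]]
        if label_h != label:                                   # edge labels preserved
            return False
        if att_h != tuple(node_map[v] for v in att):           # attachments preserved
            return False
    return True

# A two-node graph with one binary edge, mapped identically into itself.
G = Graph(nodes={"c1": "C", "b1": "B"},
          edges={"e1": ("contains", ("c1", "b1"))})
print(is_morphism(G, G, {"c1": "c1", "b1": "b1"}, {"e1": "e1"}))   # True
```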
Example 1 (Program Graphs) Figure 1 shows two graphs $G$ and $H$ representing object-oriented programs. Circles represent nodes, and have their labels inscribed. In these particular graphs, edges are always attached to exactly two nodes, and are drawn as straight or wave-like arrows from their source node to their target node. (The filling of nodes, and the colors of edges will be explained in Example 2.)
Program graphs have been proposed in [VJ03] for representing key concepts of object-oriented programs in a language-independent way. In the simplified version that is used here, nodes labeled with $C, V, E, S$, and $B$ represent program entities: classes, variables, expressions, signatures and bodies of methods, respectively. Straight arrows represent the syntactical composition of programs, whereas wave-like arrows relate the use of entities to their declaration in the context.
For rewriting graphs, we use the standard definition [EEPT06], but insist on injective matching of rules; it is shown in [HMP01] that this is no restriction. We choose an alternative representation of rules, discussed in [EHP09], so that the rewriting of rules in Sect. 3 can be defined more easily; see also Appendix A.
**Definition 2 (Graph Rewriting)** A graph rewrite rule (rule for short) $r = (P \hookrightarrow B \hookleftarrow R)$ consists of graph inclusions of a pattern $P$ and a replacement $R$ in a common body $B$. The intersection $P \cap R$ of pattern and replacement graph is called the interface of $r$. A rule is concise if the inclusions are jointly surjective. By default, we refer to the components of a rule $r$ by $P_r$, $B_r$, and $R_r$.
The rule $r$ rewrites a source graph $G$ into a target graph $H$ if there is an injective morphism $B \rightarrow U$ to a united graph $U$ so that the squares in the following diagram are pushouts:
$$
\begin{array}{ccccc}
P & \hookrightarrow & B & \hookleftarrow & R \\
{\scriptstyle m}\downarrow & & \downarrow & & \downarrow{\scriptstyle \tilde{m}} \\
G & \hookrightarrow & U & \hookleftarrow & H
\end{array}
$$
The diagram exists if the morphism $m: P \rightarrow G$ is injective, and satisfies the following gluing condition: Every edge of $G$ that is attached to a node in $m(P \setminus R)$ is in $m(P)$. Then $m$ is a match of $r$ in $G$, and $H$ can be constructed by (i) uniting $G$ disjointly with a fresh copy of the body $B$, and gluing its pattern subgraph $P$ to its match $m(P)$ in $G$, giving $U$, and (ii) removing the
nodes and edges of \( m(P \setminus R) \) from \( U \), yielding \( H \) with an embedding morphism \( \tilde{m} : R \to H \). The construction is unique up to isomorphism, and yields a rewrite step, which is denoted as \( G \Rightarrow^m_r H \). Note that the construction can be done so that there are inclusions \( G \hookrightarrow U \hookleftarrow H \); we will assume this w.l.o.g. in the rest of this paper.
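The following sketch illustrates the construction of a rewrite step for a co-span rule as just described: it checks the gluing condition, unites the host graph with a fresh copy of the body glued at the match, and then removes the matched items of $P \setminus R$. It reuses the dictionary-based graph representation sketched earlier; Rule, apply_rule and the id scheme are hypothetical names, and fresh identifiers are assumed not to clash with those of the host graph.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Graph:
    nodes: dict   # node id -> label
    edges: dict   # edge id -> (label, tuple of attached node ids)

@dataclass
class Rule:
    body: Graph          # the body B
    pattern_nodes: set   # P, as subsets of the ids of B
    pattern_edges: set
    repl_nodes: set      # R, as subsets of the ids of B
    repl_edges: set

def apply_rule(g: Graph, r: Rule, m_nodes: dict, m_edges: dict) -> Graph:
    """One rewrite step; m_nodes/m_edges map the pattern items of B into G
    (assumed injective and label-preserving)."""
    deleted_nodes = {m_nodes[v] for v in r.pattern_nodes - r.repl_nodes}
    deleted_edges = {m_edges[e] for e in r.pattern_edges - r.repl_edges}
    # Gluing condition: every edge of G attached to a deleted node is matched itself.
    matched_edges = set(m_edges.values())
    for e, (_, att) in g.edges.items():
        if any(v in deleted_nodes for v in att) and e not in matched_edges:
            raise ValueError("gluing condition violated")
    # (i) unite G with a fresh copy of B \ P, glued at the match of P (gives U)
    fresh = count()
    emb_nodes, emb_edges = dict(m_nodes), dict(m_edges)
    nodes, edges = dict(g.nodes), dict(g.edges)
    for v, label in r.body.nodes.items():
        if v not in r.pattern_nodes:
            emb_nodes[v] = f"n_{next(fresh)}"   # assumed not to clash with ids of G
            nodes[emb_nodes[v]] = label
    for e, (label, att) in r.body.edges.items():
        if e not in r.pattern_edges:
            emb_edges[e] = f"e_{next(fresh)}"
            edges[emb_edges[e]] = (label, tuple(emb_nodes[v] for v in att))
    # (ii) remove the match of P \ R, yielding H
    for e in deleted_edges:
        edges.pop(e)
    for v in deleted_nodes:
        nodes.pop(v)
    return Graph(nodes, edges)
```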
**Example 2 (A Refactoring Rule)** Figure 2 shows a rule \( \text{pum}' \). Rounded shaded boxes enclose its pattern and replacement, where the pattern is the box extending farther to the left. Together they designate the body. (Rule \( \text{pum}' \) is concise.) We use the convention that an edge belongs only to those boxes that contain it entirely; so the “waves” connecting the top-most \( S \)-node to nodes in the pattern belong only to the pattern, but not to the replacement of \( \text{pum}' \).
The pattern of \( \text{pum}' \) specifies a class with two subclasses that contain method implementations for the same signature. The replacement specifies that one of these methods shall be moved to the superclass, and the other one shall be deleted. In other words, \( \text{pum}' \) pulls up methods, provided that both bodies are semantically equivalent. (This property cannot be checked automatically, but has to be verified by the user before applying this refactoring operation.)
The graphs in Figure 1 constitute a rewrite step \( G \Rightarrow^m_{\text{pum}'} H \). The shaded nodes in the source graph \( G \) distinguish the match \( m \) of \( \text{pum}' \), and the shaded nodes in the target graph \( H \) distinguish the embedding \( \tilde{m} \) of its replacement. (The red nodes in \( G \) are removed, and the green nodes in \( H \) are inserted, with their incident edges, respectively.)
Rule \( \text{pum}' \) only applies if the class has exactly two subclasses, and if the method bodies have the particular shape specified in the pattern. The general Pull-up Method refactoring of Fowler [Fow99] works for classes with any positive number of subclasses, and for method bodies of varying shape and size. This cannot be specified with a single plain rule, which only has a pattern graph of fixed shape and size. The general refactoring will be specified by a single rule schema (with a set of meta-rules) in Example 5 further below.
We introduce further notions for rewriting graphs with sets of rules. Let \( \mathcal{R} \) be a set of graph rewrite rules. We write \( G \Rightarrow_{\mathcal{R}} H \) if \( G \Rightarrow^m_r H \) for some match \( m \) of a rule \( r \in \mathcal{R} \) in \( G \), and denote the transitive-reflexive closure of this relation by \( \Rightarrow^*_{\mathcal{R}} \). A graph \( G \) is in normal form wrt. \( \mathcal{R} \) if there is no graph \( H \) so that \( G \Rightarrow_{\mathcal{R}} H \). A set \( \mathcal{R} \) of graph rewrite rules reduces a graph \( G \) to some graph \( H \), written \( G \Rightarrow^!_{\mathcal{R}} H \), if \( G \Rightarrow^*_{\mathcal{R}} H \) and \( H \) is in normal form. \( \mathcal{R} \) (and \( \Rightarrow_{\mathcal{R}} \)) is terminating if it does not admit an infinite rewrite sequence \( G_0 \Rightarrow_{\mathcal{R}} G_1 \Rightarrow_{\mathcal{R}} \ldots \), and confluent if for every pair of diverging rewrite sequences \( H_1 \Leftarrow^*_{\mathcal{R}} G \Rightarrow^*_{\mathcal{R}} H_2 \), there exists a graph \( K \) with joining rewrite sequences \( H_1 \Rightarrow^*_{\mathcal{R}} K \Leftarrow^*_{\mathcal{R}} H_2 \). Graph rewrite rules \( \mathcal{R} \) can be used to compute a partial nondeterministic function \( f_\mathcal{R} \) from graphs to sets of their normal forms, i.e., \( f_\mathcal{R}(G) = \{ H \mid G \Rightarrow^!_{\mathcal{R}} H \} \). The function \( f_\mathcal{R} \) is total if \( \mathcal{R} \) is terminating, and deterministic if \( \mathcal{R} \) is confluent.
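The induced function $f_{\mathcal{R}}$ can be read operationally as an exhaustive search for normal forms. The sketch below assumes each rule is abstracted as a function returning all one-step successors of a graph, and uses a depth bound in place of a termination argument; all names are illustrative.

```python
def normal_forms(g, rules, key, depth=20):
    """rules: iterable of functions graph -> list of graphs (one per match).
    key: hashable canonical form used to identify already-visited graphs."""
    seen, result = set(), {}

    def explore(h, d):
        k = key(h)
        if k in seen or d < 0:
            return
        seen.add(k)
        successors = [h2 for r in rules for h2 in r(h)]
        if not successors:
            result[k] = h          # h is in normal form w.r.t. the rules
        for h2 in successors:
            explore(h2, d - 1)

    explore(g, depth)
    return list(result.values())

# Toy example on "graphs" encoded as frozensets of edges (pairs of node ids):
# a rule that removes one edge; the only normal form is the empty graph.
remove_edge = lambda g: [g - {e} for e in g]
print(normal_forms(frozenset({(1, 2), (2, 3)}), [remove_edge], key=lambda g: g))
```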
A set of graph rewrite rules, together with a distinguished start graph, forms a grammar, which can be used to derive a set of graphs from that start graph. Such sets are called languages, as for string grammars. Graph grammars with unrestricted rules have been shown to generate the recursively enumerable languages [Ues78]. So there can be no general algorithm recognizing whether a graph belongs to the language of such a grammar. Until recently, the study of restricted grammars with recognizable languages has focused on the context-free case, where the
3 Even if \( r \) is not concise, the nodes and edges of \( B \) that are not in the subgraph \( (P \cup R) \) are not relevant for the construction as they are removed immediately after adding them to the union.
pattern of a rule is a syntactic variable (or nonterminal). Two different ways have been studied to specify how the neighbor nodes of a variable are connected to the replacement graph. In node replacement [ER97], this is done by embedding rules for neighbor nodes that depend on the labels of the neighbors and of the connecting edges. In hyperedge replacement [DHK97], the variable is a hyperedge with a fixed number of attached nodes that may be glued to the nodes in the replacement. Unfortunately, the languages derivable with these grammars are restricted in the way their graphs may be connected: neither a language as simple as that of all graphs, nor the language of program graphs introduced in Example 1 can be derived with a context-free graph grammar. To overcome these limitations, Mark Minas and the author have proposed a modest extension of hyperedge replacement where the replacement graph of a variable may not only be glued to the former attachments of the variable, but also to further nodes in the source graph [HM10]. This way of contextual hyperedge replacement not only overcomes the restrictions of context-free graph grammars (both the language of all graphs and that of program graphs can be derived), but later studies in [DHM12, DH14] have shown that many properties of hyperedge replacement are preserved, in particular the existence of a recognition algorithm. Furthermore, these grammars are suited to specify the refinement of rules by rules (in the next section).
For the definition of these grammars, we assume that the symbols $\Sigma$ contain a set $X \subseteq \bar{\Sigma}$ of variable names that are used to label placeholders for subgraphs. $X(G) = \{ e \in \bar{G} \mid \bar{\ell}_G(e) \in X \}$ is the set of variables of a graph $G$, and its kernel is $G$ without the variables $X(G)$. For a variable $e \in X(G)$, the variable subgraph $G/e$ consists of $e$ and its attached nodes.
Graphs with variables are required to be typed in the following way: Variable names $x \in X$ are assumed to come with a signature graph $\text{Sig}(x)$, which consists of a single edge labeled with $x$ together with its attached nodes, each of which is attached exactly once; in every graph $G$, the variable subgraph $G/e$ must be isomorphic to the signature graph $\text{Sig}(\bar{\ell}_G(e))$, for every variable $e \in X(G)$.
**Definition 3** (Contextual Grammar) A rule $r: (P \hookrightarrow B \hookleftarrow R)$ is contextual if the only edge $e$ in its pattern $P$ is a variable, and if its replacement $R$ equals the body $B$ without $e$.
With some start graph $Z$, a finite set $R$ of contextual rules forms a contextual grammar $\Gamma = (\Sigma, R, Z)$ over the labels $\Sigma$, which derives the language $L(\Gamma) = \{ G \mid Z \Rightarrow^*_R G,\ X(G) = \emptyset \}$.
The pattern $P$ of a contextual rule $r$ is the disjoint union of a signature graph $\text{Sig}(x)$ with a discrete context graph, which is denoted as $C_r$. We call $r$ context-free if $C_r$ is empty. (Grammars with only such rules have been studied in the theory of hyperedge replacement [DHK97].)
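A single derivation step of a contextual grammar can be sketched as follows: the variable hyperedge is removed, and the rule's body is added to the host graph, glued at the variable's attached nodes and at the matched context nodes. The representation and all names (ContextualRule, replace_variable, ...) are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Graph:
    nodes: dict   # node id -> label
    edges: dict   # edge id -> (label, tuple of attached node ids)

@dataclass
class ContextualRule:
    var_name: str        # the variable name x labelling the hyperedge to be replaced
    body: Graph          # the replacement body
    att_nodes: tuple     # body nodes glued to the variable's attached nodes, in order
    context_nodes: tuple # body nodes identified with matched context nodes of the host

def replace_variable(g: Graph, rule: ContextualRule, var_edge: str, ctx_match: tuple) -> Graph:
    """One derivation step: remove the variable hyperedge and glue in the body."""
    label, att = g.edges[var_edge]
    assert label == rule.var_name and len(ctx_match) == len(rule.context_nodes)
    # glue attached and context body nodes onto host nodes; everything else is fresh
    glue = dict(zip(rule.att_nodes, att)) | dict(zip(rule.context_nodes, ctx_match))
    fresh = count()
    nodes, edges = dict(g.nodes), dict(g.edges)
    edges.pop(var_edge)                          # the variable hyperedge disappears
    for v, l in rule.body.nodes.items():
        if v not in glue:
            glue[v] = f"new_n{next(fresh)}"      # assumed not to clash with ids of g
            nodes[glue[v]] = l
    for _, (l, eatt) in rule.body.edges.items():
        edges[f"new_e{next(fresh)}"] = (l, tuple(glue[v] for v in eatt))
    return Graph(nodes, edges)
```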
**Example 3** (A contextual grammar for program graphs) Figure 3 shows a set $P$ of contextual rules. Variables are represented as boxes with their variable names inscribed; they are connected with their attached nodes by lines, ordered from left to right. (Later, in Sect. 3, we will also use arrows in either direction.) When drawing contextual rules like those in Fig. 3, we omit the box enclosing their pattern. The variable outside the replacement box is the unique edge in the pattern, and green filling (appearing grey in B/W print) designates the contextual nodes within the box enclosing the replacement graph.
The rules $P$ in Figure 3 define a contextual grammar $\text{PG} = (\Sigma, P, \text{Sig}(\text{Cls}))$ for program graphs. The grammar uses four variable names; they are attached to a single node, which is labeled with $C$ for variables named $\text{Cls}$ and $\text{Fea}$, with $B$ for variables named $\text{Bdy}$, and with $E$ for variables named $\text{Exp}$.
Figure 3: Contextual rules P for generating program graphs
Figure 4: Snapshots in a derivation of a program graph
The $C$-node of the start graph $\text{Sig}(\text{Cls})$ represents the root class of the program, and the structure of a program is derived by the rules, considered from left to right, as follows. Every class has features, and may be extended by subclasses. A feature is either an attribute variable, or a method signature with parameter variables, or a method body that implements some existing signature. A method body (or rather, its data flow) consists of a set of expressions, which either use the value of some existing variable, or assign the value of an expression to some existing variable, or call some existing method signature with expressions as actual parameters. Actually, $\text{class}_{k,n}$, $\text{sig}_{n}$, $\text{body}_{n}$, and $\text{call}_{n}$ are templates for infinite sets of rule instances that generate classes with $k \geq 0$ features and $n \geq 0$ subclasses, signatures with $n \geq 0$ parameters, bodies with $n \geq 0$ expressions, and calls with $n \geq 0$ actual parameters, respectively. The instances of a template can be composed with a few replicative rules, so this is just a shorthand notation, like the repetitive forms of extended Backus-Naur form for context-free string grammars.
Figure 4 shows snapshots in a derivation of a program graph. The first graph is derived from the start graph by applying four instances of the template $\text{class}_{k,n}$, which generate a root class with two features and one subclass, which in turn has two features and two subclasses, whereof only one has a feature, and both do not have subclasses. The second graph is obtained by applying rule instance $\text{sig}_{1}$ at two matches, deriving two signatures with one parameter each. Applying rule $\text{impl}$ yields the third graph, with the root of a body for one of the signatures. The fourth is obtained by applications of the instances $\text{body}_{1}$ and $\text{call}_{1}$, refining the body to a single call. Then application of rule use derives the actual parameter for the call. Four further derivation steps, applying rules $\text{impl}$ and $\text{att}$ to the remaining $\text{Fea}$-variables, and rules $\text{body}_{1}$ and then use to the resulting $\text{Bdy}$-variable, yield the target graph of the rewriting step shown in Figure 1.
As for context-free string grammars, it is important to know whether a contextual grammar is ambiguous or not. Unambiguous grammars define unique (de-) compositions of graphs in their language. Parsing of unambiguous grammars is efficient as no backtracking is needed, and the transformation of graphs can be defined over their unique structure. This property will be exploited in Lemma 1 further below.
**Definition 4 (Ambiguity)** Let $\Gamma = (\Sigma, R, Z)$ be a contextual grammar.
Consider two rewrite steps $G \Rightarrow^m_r H \Rightarrow^{m'}_{r'} K$ where $\tilde{m}: R_r \rightarrow H$ is the embedding of $r$ in $H$. The steps may be **swapped** if $m'(P_{r'}) \hookrightarrow \tilde{m}(P_r \cap R_r)$, yielding steps $G \Rightarrow^{m'}_{r'} H' \Rightarrow^m_r K$. Two rewrite sequences are **equivalent** if they can be made equal up to isomorphism, by swapping their steps repeatedly.
Then $\Gamma$ is **unambiguous** if all rewrite sequences of a graph $G \in L(\Gamma)$ are equivalent to each other; if some graph $G$ has at least two rewrite sequences that are not equivalent, $\Gamma$ is **ambiguous**.
**Example 4 (Unambiguous Grammars)** The program graph grammar $PG$ in Example 3 is unambiguous.
### 3 Schema Refinement with Contextual Meta-Rules
Refining graph rewrite rules means to rewrite rules instead of graphs. A general framework for “meta-rewriting” can be easily defined. We start by lifting morphisms from graphs to rules.
**Definition 5 (Rule Morphism)** For (graph rewrite) rules $r$ and $s$, a graph morphism $m: B_r \rightarrow B_s$ on their bodies is a **rule morphism**, and denoted as $m: r \rightarrow s$, if $m(P_r) \hookrightarrow P_s$ and $m(R_r) \hookrightarrow R_s$.
Graph rewrite rules and rule morphisms form a category. This category has pushouts, pullbacks, and unique pushout complements along injective rule morphisms, just as the category of graphs. As with graphs, we write rule inclusions as “$\hookrightarrow$”, and let $\ker r$ denote the **kernel** of a rule $r$, from which all variables are removed.
**Definition 6 (Rule Rewriting)** A pair $\delta: (p \hookrightarrow b \hookleftarrow r)$ of rule inclusions is a **rule rewrite rule**, or meta-rule for short. With $\delta_B$ we denote its **body rule**, which is a graph rewrite rule $(B_p \hookrightarrow B_b \hookleftarrow B_r)$ consisting of the bodies of $p$, $b$, and $r$.
Consider a rule $s$, a meta-rule $\delta$ as above, and a rule morphism $m: p \rightarrow s$. The meta-rule $\delta$ **rewrites** the source rule $s$ at $m$ to the target rule $t$, written $s \Rightarrow^m_\delta t$, if there is a pair of pushouts
$$
\begin{array}{ccccc}
p & \hookrightarrow & b & \hookleftarrow & r \\
{\scriptstyle m}\downarrow & & \downarrow & & \downarrow \\
s & \hookrightarrow & u & \hookleftarrow & t
\end{array}
$$
The pushouts above exist if the underlying body morphism $m_B: B_p \rightarrow B_s$ of $m$ satisfies the graph gluing condition wrt. the body rule $\delta_B$ and the body graph $B_s$; the target rule $t$ is constructed by rewriting the body $B_s$ to the body $B_t$ with the body rule $\delta_B$, and extending it to a rule $(P_t \hookrightarrow B_t \hookleftarrow R_t)$.
As for graph rewriting, we assume that the pushouts are constructed so that all horizontal rule morphisms are rule inclusions, i.e., $s \hookrightarrow u \leftarrow t$.
It is straight-forward to define general rule rewriting on the more abstract level of adhesive categories. However, this is not useful for this paper, as we will use concrete meta-rules that are based on the restricted notion of contextual hyperedge replacement.
Let us recall the outstanding feature of rules in the graph rewriting tool GRGEN [BGJ06], the “recursive pattern refinement” devised by Edgar Jakumeit [Jak08], which we want to model.
- A rule may contain “subpatterns”, which are names that are parameterized with nodes of the pattern and of the replacement graph of the rule. (If some parameter really is of the replacement graph, the term “subrule” would be more adequate.)
- The refinement of a subpattern is defined by a “pattern rule” that adds nodes and edges to the pattern and replacement graphs of a rule. Pattern rules may define alternative refinements, and may contain subpatterns so that they can be recursive.
- The refinements of different subpatterns must be disjoint, i.e., their matches in the source graph must not overlap. If a node shall be allowed to overlap with another node in the match, it is specified to be “independent”.
We shall model subpatterns by allowing variables to occur in the body of a rule (but neither in its pattern, nor in its replacement); we call such a rule a schema. Pattern rules are modeled by alternative meta-rules where the body rule is contextual. This supports recursion, since the body of the meta-rule may contain variables. Rewriting with context-free meta-rules derives disjoint refinements for different variables; independent nodes can be modeled as the contextual nodes of contextual meta-rules.
Definition 7 (Schema Refinement) A schema \( s : (P \hookrightarrow B \hookleftarrow R) \) is a graph rewrite rule in which variables may occur in the body \( B \), but neither in the pattern \( P \) nor in the replacement \( R \), i.e., \( P \cup R \) equals the kernel of \( B \).
Every schema \( s : (P \hookrightarrow B \hookleftarrow R) \) is required to be typed in the following sense: every variable name \( x \in X \) comes with a signature schema \( \text{Sigschema}(x) \) with body \( \text{Sig}(x) \), so that for every variable \( e \in X(B) \), the variable subgraph \( B/e \) is the body of a subschema that is isomorphic to \( \text{Sigschema}(\bar{\ell}_B(e)) \).
A meta-rule \( \delta : (p \hookrightarrow b \hookleftarrow r) \) is contextual if \( p, b, \) and \( r \) are schemata, and if its body rule \( \delta_B : (B_p \hookrightarrow B_b \hookleftarrow B_r) \) is a contextual rule so that the contextual nodes \( C_{\delta_B} \) are in \( P_p \cap R_p \).
In a less contextual variation \( \delta' \) of a meta-rule \( \delta \), some contextual nodes are removed from \( B_p \), but kept in \( B_r \).\(^4\) Let \( \Delta \) be a finite set of meta-rules that is closed under less contextual variations. Then \( s \Rightarrow_\Delta t \) denotes a refinement step with one of its meta-rules, and \( \Rightarrow^*_\Delta \) denotes repeated refinement, its reflexive-transitive closure. \( \Delta(s) \) denotes the refinements of a schema \( s : (P \hookrightarrow B \hookleftarrow R) \), containing its refinements without variables:
\[
\Delta(s) = \{ r \mid s \Rightarrow^*_\Delta r,\ X(B_r) = \emptyset \}
\]
We write \( G \Rightarrow_{\Delta(s)} H \) if \( G \Rightarrow_r H \) for some \( r \in \Delta(s) \), and say that the refinements \( \Delta(s) \) rewrite \( G \) to \( H \).
Note that the application of a refinement \( r \in \Delta(s) \), although it is the result of a compound meta-derivation, is an indivisible rewriting step \( G \Rightarrow_r H \) on the source graph \( G \), similar to a transaction.
---
4. We explain in Example 5 why these less contextual variations are needed.
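Read operationally, Definition 7 suggests a simple (but in general non-terminating) enumeration of $\Delta(s)$: apply meta-rules to the schema until no variables remain in its body. The sketch below abstracts meta-rule application as a function returning all one-step refinements, and bounds the search depth, since $\Delta(s)$ is infinite in general; the string-based toy example and all names are purely illustrative.

```python
def refinements(schema, meta_rules, has_variables, depth=6):
    """meta_rules: functions schema -> list of schemata (one per applicable match).
    has_variables: predicate telling whether the schema body still has variables."""
    done, frontier = [], [(schema, depth)]
    while frontier:
        s, d = frontier.pop()
        if not has_variables(s):
            done.append(s)                    # a variable-free refinement in Delta(s)
            continue
        if d == 0:
            continue                          # give up on this branch (bounded search)
        for rule in meta_rules:
            for t in rule(s):
                frontier.append((t, d - 1))
    return done

# Toy example: a "schema" is a string; the variable "Bdy" may be refined into
# zero or more "expr;" tokens, mimicking the recursive body meta-rules.
step = lambda s: [s.replace("Bdy", "", 1), s.replace("Bdy", "expr;Bdy", 1)] if "Bdy" in s else []
print(refinements("move(Bdy)", [step], has_variables=lambda s: "Bdy" in s, depth=3))
```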
Example 5 (Pull-Up Method) The Pull-up Method refactoring applies to a class $c$ where all direct subclasses contain implementations for the same method signature that are semantically equivalent. Then the refactoring pulls one of these implementations up to the superclass $c$, and removes all others.
Figures 5-7 show the schema $\text{pum}_k$ and the meta-rules that shall perform this operation. In schemata and meta-rules, the lines between a variable $e$ and a node $v$ attached to $e$ get arrow tips (i) at $e$ if $v$ occurs in the pattern, and (ii) at $v$ if $v$ occurs in the replacement. (Thus the line will have tips at both ends if $v$ is both in the pattern and in the replacement. However, this occurs only in Fig. 8 of Example 6.) The pattern of the schema $\text{pum}_k$ (in Fig. 5) contains a class with $k+1$ subclasses, where every subclass implements a common signature, as they contain $\text{B}$-nodes connected to the same $\text{S}$-node. (Actually, $\text{pum}_k$ is a template for $k \geq 0$ schemata, like some of the contextual rules in Fig. 3. Analogously to contextual rules, the instances of the schema template can be derived with two contextual meta-rules.)
The variables specify what shall happen to the method bodies: $k$ of them, those which are attached to a $\text{Bdy}_0$-variable, shall just be deleted, and the body attached to a $\text{Bdy}_1$-variable shall be moved to the superclass. The meta-rules can be mechanically constructed from the contextual rules for method bodies in Fig. 3.
---
5 This application condition cannot be decided mechanically; it has to be confirmed by the user when s/he applies the operation, by *a priori* verification or *a-posteriori* testing.
The deleting meta-rules \( \{ r^0 \mid r \in M \} \) in Fig. 6 delete all edges and all nodes but the contextual nodes of a method body from the pattern. The replicating meta-rules \( \{ r^1 \mid r \in M \} \) in Fig. 7 delete a method body from the pattern, and insert a copy of this body in the replacement while preserving the contextual nodes. (Meta-rules for making \( i > 1 \) copies of a method body can be constructed analogously.) A context-free meta-rule, like those for \( \text{Bdy}^0 \) and \( \text{Bdy}^1 \), applies to every schema containing a variable of that name. A contextual meta-rule (like the other six), however, applies only if its contextual nodes can be matched in the schema. So the meta-rules \( \text{call}^0_n \) and \( \text{call}^1_n \) apply to the schema \( \text{pum}_k \) as it contains an \( S \)-node, but the others do not: neither does \( \text{pum}_k \) contain any \( V \)-node, nor does any of the meta-rules derive one. This is the reason for including less contextual variations of a contextual meta-rule \( r \). In our case, where the rules have one contextual node only, the only less contextual variation is context-free, and denoted by \( \bar{r} \). We do not show them here, because the difference is small: just the green (contextual) nodes in Figures 6 and 7 turn white. Applying less contextual meta-rules to a schema adds the former contextual nodes to the interface of a schema, i.e., to the intersection of its pattern and replacement graphs.
If \( \Delta_M \) is the closure of the meta-rules in Figures 6 and 7 under less contextual variation, refinement of the schema \( \text{pum}_k \) may yield method bodies with recursive calls to the signature in the schema, calls to further signatures, and (read or write) accesses to variables in the interface. For instance, the rule \( \text{pum}' \) in Fig. 2 is a refinement of \( \text{pum}_k \) with \( \Delta_M \), i.e., \( \text{pum}' \in \Delta_M(\text{pum}_k) \). For deriving \( \text{pum}' \), only the context-free variations of meta-rules have been used, adding the former contextual nodes (drawn in green in Fig. 2) to the interface. The upper row in Fig. 13 below shows a step in the refinement sequence \( \text{pum}_k \Rightarrow^*_{\Delta_M} \text{pum}' \); it applies the context-free variation \( \overline{\text{assign}^1} \) of the replicating meta-rule \( \text{assign}^1 \) in Fig. 7.
**Example 6** (Encapsulate Field) The Encapsulate Field refactoring shall transform all non-local read and write accesses to an attribute variable by calls of getter and setter methods. Figure 8 shows a schema \( \text{ef} \), two meta-rule templates, and a refinement of the schema. The schema \( \text{ef} \) (on the left-hand side) adds a getter and a setter method definition for a variable to a
class, and introduces variables readers and writers that take care of the read and write accesses. The (context-free) embedding meta-rule templates (in the middle) then replace any number of read and write accesses to the variable by calls of its getter and setter method, respectively. If $\Delta_E$ denotes the embedding meta-rules, the rule on the right-hand side of Fig. 8 is a derivative $ef' \in \Delta_E(ef)$, encapsulating one read access and two write accesses.
A single rewriting step with a refinement of Pull-up Method copies one method body of arbitrary shape and size, and deletes an arbitrary number of other bodies, also of variable shape and size. Refinements of Encapsulate Field transform the neighbor edges of an unbounded number of nodes. This goes beyond the expressiveness of plain rewrite rules, which may only match, delete, and replicate subgraphs of constant size and fixed shape. Many of the other basic refactorings from Fowler’s catalogue [Fow99] cannot be specified by a single plain rule, but they can be specified by a schema with appropriate meta-rules.
Operationally, we cannot construct all refinements of a schema $s$ first, and apply one of them later, because the set $\Delta(s)$ is infinite in general. Rather, we interleave matching and refinement in the next section. Before that, we study some properties of schema refinement.
The following assumption excludes useless definitions of meta-rules.
Assumption 1 The set $\Delta(s)$ of refinements of a schema $s$ shall be non-empty.
Non-emptiness of refinements can be reduced to the question whether the language of a contextual grammar is empty or not. It is shown in [DHM12, Corollary 2] that this property is decidable.
We need a mild condition to show that schema refinement terminates.
Definition 8 (Pattern-Refining Meta-Rules) A meta-rule $\delta : (p \hookrightarrow b \hookleftarrow r)$ refines its pattern if $X(B_r) = \emptyset$ or if $P_r \not\cong P_p$. A set $\Delta$ of meta-rules that refine their patterns is called pattern-refining.
Theorem 1 For a schema $s$ and a set $\Delta$ of pattern-refining meta-rules, it is decidable whether some refinement $r \in \Delta(s)$ applies to a graph $G$, or not.
Proof. By Algorithm 1 in [Hof13], the claim holds under the condition that meta-rules “do not loop on patterns”. It is easy to see that pattern-refining meta-rules are of this kind.
We now turn to the question whether the (infinite) set of graph rewrite rules obtained as refinements of a schema are uniquely determined by their patterns.
Definition 9 (Right-Unique Rule Sets) A set $\mathcal{R}$ of graph rewrite rules is right-unique if different rules have different patterns, i.e., for rules $r_1 : (P_1 \hookrightarrow B_1 \hookleftarrow R_1), r_2 : (P_2 \hookrightarrow B_2 \hookleftarrow R_2) \in \mathcal{R}$, $P_1 \cong P_2$ implies that $r_1 \cong r_2$.
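For a finite rule set, right-uniqueness can be checked directly by grouping rules by their patterns. The sketch below abstracts both patterns and rules to canonical keys (standing in for isomorphism classes); all names are illustrative.

```python
def is_right_unique(rules, pattern_key, rule_key):
    """rules: iterable of rule objects.
    pattern_key / rule_key: canonical forms standing in for isomorphism classes."""
    seen = {}                                   # pattern class -> rule class
    for r in rules:
        p = pattern_key(r)
        if p in seen and seen[p] != rule_key(r):
            return False                        # two non-isomorphic rules share a pattern
        seen[p] = rule_key(r)
    return True

# Toy usage: rules as (pattern, replacement) pairs of strings.
print(is_right_unique([("P1", "R1"), ("P2", "R2")],
                      pattern_key=lambda r: r[0], rule_key=lambda r: r))   # True
print(is_right_unique([("P1", "R1"), ("P1", "R2")],
                      pattern_key=lambda r: r[0], rule_key=lambda r: r))   # False
```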
We define an auxiliary notion first. The pattern rule $\delta_P$ of a meta-rule $\delta : (p \hookrightarrow b \hookleftarrow r)$ is a contextual rule obtained from the body rule $\delta_B : (B_p \hookrightarrow B_b \hookleftarrow B_r)$ by removing all nodes and edges in $B_b \setminus R_b$, and by detaching all variables in $\delta_B$ from the removed nodes. Let $\Delta_P$ denote the set of (contextual) pattern rules of a set $\Delta$ of meta-rules. (The graphs in $\Delta_P$ are typed as well, but in the signature graph $\text{Sig}(x)$ of a variable name $x$, all nodes that do not belong to the pattern of the signature schema $\text{Sigschema}(x)$ are removed.)
**Lemma 1 (Right-Uniqueness of Refinements)** A set $\Delta(s)$ of refinements is right-unique if the pattern grammar $(\Sigma, \Delta_P, P_s)$ of their meta-rules $\Delta$ is unambiguous.
**Proof Sketch.** Consider rules $r_1, r_2 \in \Delta(s)$ with $P_1 \cong P_2$. Then $P_s \Rightarrow^*_{\Delta_P} P_1$ and $P_s \Rightarrow^*_{\Delta_P} P_2$. The rewrite sequences can be made equal since $\Delta_P$ is unambiguous. This rewriting sequence has a unique extension to a meta-rewrite sequence, so that $r_1 \cong r_2$.
**Example 7 (Properties of Meta-Rules)** It is easy to see that the deleting and replicating meta-rules $\Delta_M$ in Figures 6-7 of Example 5 satisfy Assumption 1: it has been shown in [DHM12, Example 3.23] that all rules of the program graph grammar in Example 3 are useful, so its language is non-empty. This property can easily be lifted to the meta-rules $\Delta_M$, in particular as they also contain context-free variations of the rules in $P$. It is easy to check that the rules in $\Delta_M$ are also pattern-refining. The contextual rules $P$ for method bodies in Fig. 3 are unambiguous, and so are the rules $M$, which correspond to the pattern rules of the deleting and replicating meta-rules $\Delta_M$ in Figures 6-7 of Example 5, so that $\Delta_M$ is right-unique.
The embedding meta-rules $\Delta_E$ in Fig. 8 of Example 6 derive a non-empty set of rules, and are pattern-refining and right-unique as well.
### 4 Modeling Refinement by Residual Rewriting
The refinement of a schema $s$ with some meta-rules $\Delta$ yields instances $\Delta(s)$, which are ordinary rules for rewriting graphs. However, the set $\Delta(s)$ is infinite in general. Unfortunately, many analysis techniques, e.g., for termination, confluence, and state space exploration of graph rewriting, only work for finite sets of graph rewrite rules. To make these techniques applicable, we translate each schema and every contextual meta-rule into a standard graph rewrite rule:
- We turn every schema into an ordinary rule that postpones refinement, by adding its meta-variables to its replacement.
- We turn every contextual meta-rule $\delta: (p \hookrightarrow b \hookleftarrow r)$ into a graph rewrite rule that refines the translated schema incrementally, by uniting its pattern rule $r$ component-wise with the variable graphs of its body rule $\delta_B$.
The resulting rule set is always finite.
**Definition 10 (Incremental Refinement Rules)** Let $s: (P \hookrightarrow B \hookleftarrow R)$ be a schema for meta-rules $\Delta$.
The *incremental rule* $\tilde{s}: (P \hookrightarrow B \hookleftarrow R_{\tilde{s}})$ of the schema $s$ has the same pattern $P$ and body $B$ as $s$, and its replacement $R_{\tilde{s}} = R \cup \{B/e \mid e \in X(B)\}$ is obtained by extending $R$ with the variable subgraphs of all variables in $B$.
For a meta-rule $\delta: (p \hookrightarrow b \hookleftarrow r)$ in $\Delta$, the *incremental rule* $\tilde{\delta}: (\tilde{P} \hookrightarrow \tilde{B} \hookleftarrow \tilde{R})$ is the component-wise union of its replacement rule $(R_p \hookrightarrow R_b \hookleftarrow R_r)$ with the variable subgraphs of its body rule $\delta_B: (B_p \hookrightarrow B_b \hookleftarrow B_r)$:
Figure 9: Incremental rules for Encapsulate Field in Fig. 8
Figure 10: Incremental rules for Pull-up Method in Fig. 5 and Fig. 7
(i) $\tilde{P} = P_r \cup \{B_p/e \mid e \in X(B_p)\}$ (which equals $B_p \cup P_r$),
(ii) $\tilde{B} = B_b \cup \{B_b/e \mid e \in X(B_b)\}$ (which equals $B_b$), and
(iii) $\tilde{R} = R_r \cup \{B_r/e \mid e \in X(B_r)\}$.
$\tilde{\Delta}$ shall denote the incremental rules of the meta-rules $\Delta$.
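For schemata, the first part of Definition 10 is easy to make concrete: the incremental rule keeps pattern and body and adds every variable subgraph $B/e$ to the replacement, so that the variables survive the rewriting step and can be refined later. The sketch reuses the dictionary-based graphs and id-set rules from the earlier sketches; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Graph:
    nodes: dict   # node id -> label
    edges: dict   # edge id -> (label, tuple of attached node ids)

@dataclass
class Rule:
    body: Graph
    pattern_nodes: set
    pattern_edges: set
    repl_nodes: set
    repl_edges: set

def incremental_rule(schema: Rule, variable_names: set) -> Rule:
    """Extend the replacement by the subgraph B/e of every variable e in the body."""
    repl_nodes = set(schema.repl_nodes)
    repl_edges = set(schema.repl_edges)
    for e, (label, att) in schema.body.edges.items():
        if label in variable_names:       # B/e consists of e and its attached nodes
            repl_edges.add(e)
            repl_nodes.update(att)
    return Rule(schema.body,
                set(schema.pattern_nodes), set(schema.pattern_edges),
                repl_nodes, repl_edges)
```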
**Example 8 (Incremental Refinement)** Figure 9 shows the incremental rule $\mathit{Ef}$ of the schema $\mathit{ef}$ and the incremental rules of the meta-rules $\Delta_E$ in Fig. 8 of Example 6.
Figure 10 shows how the schema \( \text{pum}_k \) for the Pull-up Method refactoring in Fig. 5 is translated into an incremental rule \( \text{Pum}_k \), and how the context-free variation \( \overline{\text{assign}^1} \) of the meta-rule \( \text{assign}^1 \) in Fig. 7 is translated into an incremental rule \( \text{Assign}^1 \). (In the incremental rule \( \text{Pum}_k \), red arrows and waves indicate edges that do not belong to the replacement.)
If a schema $s$ is refined with a meta-rule $\delta$ to a schema $t$, the composition $\tilde{s} \circ_d \tilde{\delta}$ of their incremental rules (as defined in Def. 12 of the appendix) equals the incremental rule $\tilde{t}$ (for a particular dependency $d$).
**Lemma 2** Consider a schema \( s: (P \hookrightarrow B \hookleftarrow R) \) and a meta-rule \( \delta: (p \hookrightarrow b \hookleftarrow r) \).
Then \( s \Rightarrow^m_\delta t \) for some schema \( t \) if and only if there is a composition \( t^d = \tilde{s} \circ_d \tilde{\delta} \) for the dependency \( d: (R_{\tilde{s}} \xleftarrow{m} B_p \hookrightarrow (B_p \cup R_p)) \), and then \( t^d = \tilde{t} \).

Figure 11: The underlying body refinement \( B \Rightarrow^m_{\delta_B} B' \)

Figure 12: The composition \( t^d = \tilde{s} \circ_d \tilde{\delta} \)
**Proof Sketch.** Let \( s \) and \( \delta \) be as above, \( t: (P' \hookrightarrow B' \hookleftarrow R') \), \( \tilde{s}: (P \hookrightarrow B \hookleftarrow R_{\tilde{s}}) \) with \( R_{\tilde{s}} = R \cup \{ B/e \mid e \in X(B) \} \), and \( \tilde{\delta}: (\tilde{P} \hookrightarrow \tilde{B} \hookleftarrow \tilde{R}) \) with \( \tilde{P} = B_p \cup P_r \), \( \tilde{R} = R_r \cup \{ B_r/e \mid e \in X(B_r) \} \), and \( \tilde{B} = B_b \), see Def. 10. Their composition according to the dependency \( d: (R_{\tilde{s}} \xleftarrow{m} B_p \hookrightarrow (B_p \cup R_p)) \) is constructed as in Def. 12, and shown in Fig. 12.
Consider the underlying body refinement \( B \Rightarrow^m_{\delta_B} B' \). (See Fig. 11, where we assume that the lower horizontal morphisms are inclusions.) By uniqueness of pushouts, \( U \cong B_d \). Then \( B_b \setminus B_r = X(B_p) \) since \( \delta_B \) is contextual, and \( B' = U \setminus \tilde{m}(X(B_p)) \).
It is then easy to show that the body \( B' \) equals the body of the composed incremental rule \( t^d \), and an easy argument concerning the whereabouts of variables shows that \( \tilde{t} = t^d \). \( \square \)
**Example 9** (Schema Refinement and Incremental Rules) Figure 13 illustrates the relation between schema refinement and the composition of incremental rules established in Lemma 2. As already mentioned in Example 5, the upper row shows a step in the refinement sequence \( \text{pum}_k \Rightarrow^*_{\Delta_M} \text{pum}' \) that applies the context-free variation \( \overline{\text{assign}^1} \) of the meta-rule \( \text{assign}^1 \) in Fig. 7.
The original meta-rule does not apply to the source schema, as it does not contain a node labeled \( V \). The less contextual rule does apply; the refined rule is constructed so that the \( V \)-node will be matched in the context when it is applied to a source graph.
The lower row shows the composition of the corresponding incremental rule with the corresponding incremental refinement rule \( \text{Assign}^1 \), where the dashed box specifies the dependency \( d \) for the composition. The composed rule equals the incremental rule for the refined schema.
Using a refined schema has the same effect as applying its incremental rule, and the incremental rules of the corresponding meta-rules. This must follow a strategy that applies incremental rules as long as possible, matching the residuals of the source graphs, before another incremental rule is applied.
We define the subgraph that is left unchanged in refinement steps and sequences. For a rewrite step \( G \Rightarrow^m_r H \), the *track* of \( G \) in \( H \) (via the match \( m \) of the rule \( r \)) is defined as \( tr^m_r(G) = G \cap H \).\(^6\) For a rewrite sequence \( d = G_0 \Rightarrow^{m_1}_{r_1} G_1 \Rightarrow^{m_2}_{r_2} \ldots \Rightarrow^{m_n}_{r_n} G_n \), the track of \( G_0 \) in \( G_n \) is given by intersecting the tracks
---
\(^6\) Recall that \( G \hookrightarrow U \hookleftarrow H \) for the graphs and morphisms of a rewrite step.
of its steps:
\[ tr_d(G_0) = tr^{m_1}_{r_1}(G_0) \cap \cdots \cap tr^{m_n}_{r_n}(G_{n-1}) \]
The incremental rules have to be applied so that the patterns of the refinements of the original meta-rules do not overlap.
**Definition 11 (Residual Incremental Refinement)** Consider an incremental refinement sequence
\[ G_0 \Rightarrow^{m_1}_{\tilde{\delta}_1} G_1 \Rightarrow^{m_2}_{\tilde{\delta}_2} \cdots \Rightarrow^{m_n}_{\tilde{\delta}_n} G_n \]
with incremental rules \( \tilde{\delta}_i \) of meta-rules \( \delta_i : (p_i \hookrightarrow b_i \hookleftarrow r_i) \) (for \( 1 \leq i \leq n \)).
The step \( G_{i-1} \Rightarrow^{m_i}_{\tilde{\delta}_i} G_i \) is residual if \( m_i(P_{r_i}) \subseteq tr_{d_{i-1}}(G_0) \), where \( d_{i-1} \) denotes the prefix \( G_0 \Rightarrow^{m_1}_{\tilde{\delta}_1} \cdots \Rightarrow^{m_{i-1}}_{\tilde{\delta}_{i-1}} G_{i-1} \) of the sequence. The sequence is residual if each of its steps is residual. Residual steps and sequences are denoted as \( \Rrightarrow \) and \( \Rrightarrow^* \), respectively.
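Tracks and the residual condition can be sketched at the level of item sets: the track of a sequence is what no step has deleted from $G_0$, and a step is residual if the matched items of $P_{r_i}$ lie inside that track. Graphs are abstracted to sets of item identifiers here; all names are illustrative.

```python
def track(g0_items: set, deleted_per_step: list) -> set:
    """tr_d(G0): the items of G0 untouched (in particular not deleted) by any step."""
    tr = set(g0_items)
    for deleted in deleted_per_step:
        tr -= deleted
    return tr

def is_residual(g0_items: set, deleted_per_step: list, next_match_items: set) -> bool:
    """The next step is residual iff its match of P_{r_i} lies in the track so far."""
    return next_match_items <= track(g0_items, deleted_per_step)

# Toy usage: items {1..5}; the first step deleted item 2, so a further step
# matching item 2 would not be residual, while one matching {4, 5} is.
print(is_residual({1, 2, 3, 4, 5}, [{2}], {4, 5}))   # True
print(is_residual({1, 2, 3, 4, 5}, [{2}], {2}))      # False
```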
**Lemma 3** Consider a schema \( s \) for meta-rules \( \Delta \), with incremental rule \( \tilde{s} \) and incremental rules \( \tilde{\Delta} \).
Then a rule \( r : (P \hookrightarrow B \hookleftarrow R) \) is a refinement in \( \Delta(s) \) if and only if \( P \Rightarrow_{\tilde{s}} P' \Rrightarrow^{*}_{\tilde{\Delta}} R \) for some graph \( P' \).
Proof. By induction over the length of meta-derivations, using Lemma 2 and the fact that compositions correspond to residual rewrite steps.
Theorem 2 Consider a schema \( s \) with meta-rules \( \Delta \) as above. Then, for graphs \( G \) and \( H \), \( G \Rightarrow_{\Delta(s)} H \) if and only if \( G \Rightarrow_{\tilde{s}} K \Rrightarrow^{*}_{\tilde{\Delta}} H \) for some graph \( K \).
Proof. Combine Lemma 3 with the embedding theorem [EEPT06, Sect. 6.2].
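Theorem 2, read operationally, yields the following driver sketch: apply the schema's incremental rule once, then apply incremental meta-rules residually as long as possible. Rule application is abstracted as functions returning the rewritten graph together with the deleted items, and a step bound stands in for a termination argument; all names are illustrative assumptions.

```python
def rewrite_with_refinement(g, apply_schema, incremental_rules, items_of, max_steps=100):
    """apply_schema: graph -> (graph, deleted items) or None if the schema does not match.
    incremental_rules: functions (graph, track) -> (graph, deleted items) or None,
    expected to match only inside the given track (the residual of the source graph)."""
    start = apply_schema(g)
    if start is None:
        return None                      # the schema's pattern has no match in g
    h, deleted = start
    tr = items_of(g) - deleted           # track of g in h
    for _ in range(max_steps):
        for rule in incremental_rules:
            step = rule(h, tr)
            if step is not None:
                h, deleted = step
                tr -= deleted
                break
        else:
            return h                     # no incremental rule applies any more
    raise RuntimeError("refinement did not finish within the step bound")
```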
5 Conclusions
In this paper we have continued earlier attempts in [HJG08, Hof13] to model graph rewriting with recursive refinement, which is the outstanding feature of rules in the graph rewriting tool GRGEN [BGJ06]. The definition given here combines contextual hyperedge replacement on the meta-level with standard graph rewriting on the object level, and allows us to specify conditions under which refinement "behaves well", i.e., terminates and yields unique refinements. It is simple enough that it can be translated to standard graph rewriting rules that perform the refinement incrementally, using a strategy, residual rewriting, where matches overlap only in contextual nodes (and in attached nodes of variables).
Related work exists in two respects. On the one hand, expressive rules that can transform subgraphs of variable shape and size have been proposed by several authors: D. Janssens has studied graph rewriting with node embedding rules [Jan83]; the Encapsulate Field refactoring in Example 6 could be defined in this way. D. Plump and A. Habel have proposed rules where variables in the pattern and the replacement graph can be substituted with isomorphic graphs [PH96]. There, variables can be substituted by arbitrary graphs, which is rather powerful, but difficult to use (and to implement). The author has later proposed substitutions with context-free (hyperedge replacement) languages [Hof01]. This turned out to be too restricted, so that we now propose contextual hyperedge replacement. The Pull-Up Method refactoring in Example 5 is a candidate for substitutive graph rewriting. In [Hof13] we have shown that embedding and substitutive rules are special cases of rules with contextual refinement.
On the other hand, the core of standard graph rewriting theory [CEH+97], with its results on parallel and sequential independence, the critical pair lemma [Plu93], etc., has been extended considerably over the years. The framework now covers graphs with attributes and subtyping, rules with positive and negative application conditions [EEPT06], and, as of recently, also nested application conditions [EHL+10, EGH+12].
Future work should attempt to integrate the extensions of the standard theory into rule refinement, as all these concepts are supported by the graph rewriting tool GRGEN [BGJ06] as well. This should be straightforward for attributes and subtyping. Application conditions require more work, in particular when conditions shall be translated to incremental rules. Obviously, application conditions are useful for modeling complex operations like refactorings: (i) The definition of program graphs in Example 1 could be more precise if the choice of a contextual node could be subject to a condition; e.g., the rule impl should require that the signature being implemented is contained in a super-class of the body. (See [HM10] for a definition of program graphs using application conditions.) (ii) The Pull-up Method refactoring in Example 5 should require that
the method body to be pulled up does not access variables or methods outside the name space of the superclass. (iii) The Encapsulate Field refactoring in Example 6 should be required to encapsulate all non-local accesses of a variable. For some of the conditions mentioned here, application conditions need to specify the (non)-existence of paths in a graph. This cannot be done by nested application conditions, but only if the conditions allow recursive refinement, as studied by H. Radke in [HR10]. But this is not (yet?) integrated into the standard theory.
Our ultimate goal is to provide support for analyzing GRGEN rules, e.g., for the existence of critical pairs. The negative result shown in [Hof13, Thm. 3] indicates that considerable restrictions have to be made to reach this aim. Our idea now is to restrict rewriting with contextual refinement to graphs that are shaped according to a contextual grammar like that for program graphs.
Acknowledgments.
The author thanks Annegret Habel and Rachid Echahed for their encouragement, and the reviewers for their detailed constructive comments.
Bibliography
A Double-Pushout Rewriting
The standard theory of graph rewriting is based on so-called spans of (injective) graph morphisms [EEPT06], where a rule consists of two morphisms from a common interface $I$ to a pattern $P$ and a replacement $R$. An alternative proposed in [EHP09] uses so-called co-spans (or joins) of morphisms where the pattern and the replacement are both included in a common supergraph, which we call the body of the rule.
Rewriting is defined by double pushouts as below:
$$
\begin{array}{cccccc}
\check{r}: & P & \hookleftarrow & I & \hookrightarrow & R \\
& {\scriptstyle m}\downarrow & & \downarrow & & \downarrow \\
& G & \hookleftarrow & C & \hookrightarrow & H
\end{array}
\qquad
\begin{array}{cccccc}
\hat{r}: & P & \hookrightarrow & B & \hookleftarrow & R \\
& {\scriptstyle m}\downarrow & & \downarrow & & \downarrow \\
& G & \hookrightarrow & U & \hookleftarrow & H
\end{array}
$$
Intuitively, rewrites are constructed via a match morphism $m: P \rightarrow G$ in a source graph $G$; for a span rule $\check{r}$, removing the match of the obsolete pattern items $P \setminus I$ yields a context graph $C$ to which the new items $R \setminus I$ of the replacement are then added; for a co-span rule $\hat{r}$, the new items $B \setminus P$ are added first, yielding the united graph $U$, before the obsolete pattern items $B \setminus R$ are removed. The constructions work if the matches $m$ satisfy certain gluing conditions.
The main result of [EHP09] says that \( \hat{r} \) is the pushout of \( \check{r} \), making these rules, their rewrite steps, and gluing conditions dual to each other. Therefore we feel free to use the more intuitive gluing condition for \( \hat{r} \) together with a rule \( \check{r} \).
The following definition and theorem adapt well-known concepts of [EEPT06] to our notion of rules.
**Definition 12 (Sequential Rule Composition)** Let \( r_1 : (P_1 \hookrightarrow B_1 \hookleftarrow R_1) \) and \( r_2 : (P_2 \hookrightarrow B_2 \hookleftarrow R_2) \) be rules, and consider a graph \( D \) with a pair \( \delta : (R_1 \leftarrow D \rightarrow P_2) \) of injective morphisms.
1. Then \( \delta \) is a sequential dependency of \( r_1 \) and \( r_2 \) if the morphism \( D \rightarrow R_1 \) does not map \( D \) into \( P_1 \) (which implies that \( D \neq \emptyset \)).
2. The sequential composition \( r_1 \circ_{\delta} r_2 : (P_\delta \hookrightarrow B_\delta \hookleftarrow R_\delta) \) of \( r_1 \) and \( r_2 \) along \( \delta \) is the rule constructed as in the commutative diagram of Fig. 14, where all squares are pushouts.
3. Two rewrite steps \( G \Rightarrow_{r_1} H \Rightarrow_{r_2} K \) are \( \delta \)-related if \( \delta \) is the pullback of the embedding \( R_1 \rightarrow H \) and of the match \( P_2 \rightarrow H \).
**Proposition 1** Let \( r_1 \) and \( r_2 \) be rules with a dependency \( \delta \) and a sequential composition \( r_\delta \) as in Def. 12.
Then there exist \( \delta \)-related rewrite steps \( G \Rightarrow_{r_1} H \Rightarrow_{r_2} K \) if and only if \( G \Rightarrow_{r_\delta} K \).
**Proof.** Straightforward use of the corresponding result for “span rules” [EEPT06, Thm. 5.23] and of the duality to “co-span rules” [EHP09].
---
7 A pullback of a pair of morphisms \( B \rightarrow D \leftarrow C \) with the same codomain is a pair of morphisms \( B \leftarrow A \rightarrow C \) that is commutative, i.e., \( A \rightarrow B \rightarrow D = A \rightarrow C \rightarrow D \), and universal, i.e., for every pair of morphisms \( B \leftarrow A' \rightarrow C \) so that \( A' \rightarrow B \rightarrow D = A' \rightarrow C \rightarrow D \), there is a unique morphism \( A' \rightarrow A \) so that \( A' \rightarrow A \rightarrow B = A' \rightarrow B \) and \( A' \rightarrow A \rightarrow C = A' \rightarrow C \). See [EEPT06, Def. 2.2].
Learning input-aware performance models of configurable systems: An empirical evaluation
Luc Lesoil, Helge Spieker, Arnaud Gotlieb, Mathieu Acher, Paul Temple, Arnaud Blouin, Jean-Marc Jézéquel
HAL Id: hal-04271476
https://hal.science/hal-04271476
Submitted on 6 Nov 2023
Distributed under a Creative Commons Attribution 4.0 International License
Luc Lesoil\textsuperscript{a}, Helge Spieker\textsuperscript{b}, Arnaud Gotlieb\textsuperscript{b}, Mathieu Acher\textsuperscript{a,}\textsuperscript{*}, Paul Temple\textsuperscript{a}, Arnaud Blouin\textsuperscript{a}, Jean-Marc Jézéquel\textsuperscript{a}
\textsuperscript{a}Univ Rennes, Inria, INSA Rennes, CNRS, IRISA, France
\textsuperscript{b}Simula Research Laboratory, Oslo, Norway
Abstract
Modern software-based systems are highly configurable and come with a number of configuration options that impact the performance of the systems. However, selecting inappropriate values for these options can cause long response time, high CPU load, downtime, RAM exhaustion, resulting in performance degradation and poor software reliability. Consequently, considerable effort has been carried out to predict key performance metrics (execution time, program size, energy consumption, etc.) from the user’s choice of configuration options values. The selection of inputs (e.g., JavaScript scripts embedded in a web page interpreted by Node.js or input videos encoded with x264 by a streaming platform) also impacts software performance, and there is a complex interplay between inputs and configurations. Unfortunately, owing to the huge variety of existing inputs, it is yet challenging to automate the prediction of software performance whatever their configuration and input. In this article, we empirically evaluate how supervised and transfer learning methods can be leveraged to efficiently learn performance models based on configuration options and input data. Our study over 1,941,075 data points empirically shows that measuring the performance of configurations on multiple inputs allows one to reuse this knowledge and train performance models robust to the change of input data. To the best of our knowledge, this is the first domain-agnostic empirical evaluation of machine learning methods addressing the input-aware performance prediction problem.
Keywords: Performance Prediction, Software Variability, Input Sensitivity, Configurable Systems, Learning Models
\textsuperscript{*}Corresponding author. Email address: mathieu.acher@irisa.fr (Mathieu Acher)
1. Introduction
Most modern software systems are widely configurable, featuring many configuration options that users can set or modify according to their needs, for instance, to maximise some performance metrics (e.g., execution time, energy consumption). As a software system matures, it diversifies its user base and adds new features to satisfy new needs, increasing its overall number of options. However, manually quantifying the individual impact of each option and their interactions quickly becomes tedious, costly and time-consuming, which reinforces the need to automate how to study and combine these options together. Software reliability can be largely degraded if inappropriate configuration options are selected or if ageing-related bugs such as configuration-dependent memory leaks remain undetected [74].
To address these issues, researchers typically apply machine learning (ML) techniques [21, 63] to learn performance models from the selection of configuration options and/or software modules. In turn, with an accurate predictive performance model, it becomes possible to predict the performance of any configuration, to find an optimal configuration, or to identify, debug, and reason about influential options of a system [52, 26, 66, 21, 44, 45, 60, 61, 30]. A recent survey [52] synthesized the large effort conducted in the software engineering, software variability, and software product line engineering communities.
However, the performance variability of a given software system also obviously depends on its input data [2, 72, 38, 77, 8], e.g., a video compressed by a video encoder [38] such as x264, a program analyzed by a compiler [8, 11] such as gcc, a database queried by a DBMS [77] such as SQLite. All these kinds of inputs might interact with the configuration space of the software [11, 3, 42, 34]. For instance, an input video with fixed and high-resolution images fed to x264 could reach high compression ratios if configuration options like mbtree are activated. The same option will not be suited for a low-resolution video depicting an action scene [38] with lots of changes among the different pictures, leading to different performance distributions, as for input videos in Figure 1. The interplay between input data and configurations has not received much attention in the literature [52]. It is a threat of practical interest, since performance models of configurable systems or software product lines can become inaccurate and outdated whenever new input data is processed.
Recent works empirically show the significance of inputs (also called workloads) when predicting the performance of configurable systems and software product lines. Pereira et al. [3] showed that the performance distribution of 1152 configurations of x264 heavily depends on the inputs (19 videos) processed. Practically, a good configuration can be a bad one depending on the processed video; some configuration options have varying influence and importance depending on videos; a performance prediction model can be inaccurate if blindly reused whatever the video. Recent empirical results of Mühlbauer et al. [42] and Lesoil et al. [34] over different software systems, configurations and inputs demonstrate that inputs can induce substantial performance variations and interact with configuration options, often in non-monotonous ways. As a result, inputs should be considered when building performance prediction models to maintain and improve representativeness and reliability.
Videos 1 and 2 are extracted from our dataset (Animation_1080P-5083 and Animation_1080P-646f).
Figure 1: The performance prediction problem: how to predict software performance considering both configurations and inputs?
Measuring all configurations for all possible inputs of a configurable system is the most obvious path to resolve the issue. Given a potentially infinite input space, it is, however, either too costly to be feasible in practice or simply impossible. ML techniques are usually employed to measure only a sample of configurations and then use these configurations’ measurements to build a performance model capable of predicting the performance of other configurations (i.e., configurations not measured before). However, these measurements are obtained on a specific input and are already costly to compute. Systematically repeating this process for many inputs would explode the budgets of end-users and organizations. The inputs add a new dimension to the problem of learning performance models. The combined problem space (configuration and input dimension) requires substantially more observations and measurements. It further increases the computational cost, since it requires running configurations over many input samples. The available budget end-users can dedicate to the measurements of configurations over inputs is limited by construction. Hence, the challenge is to learn an accurate performance model, aware of configurations and inputs, with the lowest budget.
Several input-aware approaches can be envisioned. The first is to learn from scratch a performance model, whenever an input is fed to a configurable system. Many works target the scenario where users build their own performance models for their own inputs and workloads [21, 60, 52, 3]. Unfortunately, users need to measure a sample of configurations for building a new prediction model each time a new input should be processed. The computational cost can be
prohibitive as it occurs in an *online* setting (at runtime). There is no reuse of past observations, knowledge, and performance models. A second approach, at the opposite end of the spectrum, is to pre-train in an *offline* setting a set of performance models for different inputs and configurations (as, e.g., proposed in [11]). The upfront cost can be high but can pay off since these models are systematically reused when a new input comes in. In the example of a configurable video encoder, performance models could be learned offline and reused each time a new video (input) is fed. This is advantageous since users typically have only a small budget and cannot make additional measurements at run-time. The counterpart is that pre-trained models can have forgotten some inputs and be (much) less accurate than a performance model trained specifically on an input. At least, it is a hypothesis worth studying. In between, a third approach is to use both online and offline settings through transfer learning [52, 26, 4, 37, 68]. Certain transfer learning methods were originally intended to handle changes in computing environments, not actual inputs’ changes, necessitating the development of new techniques that could effectively leverage specific input characteristics [25, 03, 26, 43, 37, 34, 24]. The principle is to adapt existing performance models (pre-trained in an offline setting over multiple inputs and configurations) thanks to additional measurements gathered over a specific input to process. The hope is to transfer the model with very few measurements at run time.
In this article, we study and compare the cost-effectiveness of these three input-aware approaches over a large dataset comprising 8 software systems, hundreds of configurations and inputs, and dozens of performance properties, spanning a total of 1,941,075 configurations’ measurements. The distinction between offline and online learning has not received much attention yet (e.g., most works only apply either "offline" or "online" learning), certainly because of the lack of consideration of input data that can significantly alter the accuracy of predictive performance models of configurations, as is empirically shown in the rest of the article. We also consider the peculiarities of inputs (i.e., their properties) to guide the transfer or reuse of pre-trained models. To the best of our knowledge, our work is the first domain-agnostic empirical evaluation of ML methods addressing the input-aware performance prediction problem in both online and offline settings.
Our contributions are as follows:
1. We perform an extended *comparative study* of performance model training approaches, including supervised learning as well as transfer learning. We also present their costs and error levels when addressing the performance prediction problem;
2. We provide guidelines to the user who faces a performance prediction problem and propose a learning-based solution based on her constraints and resources, i.e., based on a trade-off between costs in offline and online settings.
3. We publish the code, the dataset, and the results of this paper online as a basis for future work on predictive performance models (see the companion repository at https://github.com/simula-vias/).
2. Problem
As measuring the performance of each software configuration and input properties takes time and is computationally costly, we define three different scenarios based on user personas that have distinct levels of resources (i.e., time and computation power). These scenarios lead to different learning strategies: a model pre-trained on multiple inputs used only as-is (offline learning); a model trained on a single input whenever an end-user processes an input (supervised online learning); a model pre-trained on multiple inputs but that will be adapted via transfer learning by an end-user. These three learning strategies have pros and cons and are further evaluated in the rest of the article (see Section 4).
2.1. User Persona (UP)
Performance prediction of software configurations has several interests for organizations, individual users and developers of configurable systems [21, 52, 25, 26, 68, 67]:
- prediction is interesting per se since users know the performance value they will get and this quantitative information is actionable. It allows users to determine whether it reaches a certain limit (or is within acceptable boundaries) and make informed decisions. For instance, users could know that video encoding with configuration $c_1$ will take 56 s (less than 1 min, acceptable) while with configuration $c_2$ it will take 589 s (more than 10 min, unacceptable).
- prediction can help explore tradeoffs by (de)activating some options and seeing the concrete, quantitative effect on performance. Users are usually not optimizing w.r.t. one dimension (e.g., execution time) but also have technical constraints related to output quality (e.g., users do not want to alter video quality much and avoid using mbtree in x264) or other metrics in mind (e.g., size of the output). Developers can also explore the configuration space to pinpoint configurations with inefficient performance and hopefully improve them;
- interpretable information can be extracted out of prediction models, like feature importance or interactions across features [40, 60, 12]. This information is useful for users and developers in charge of configuring, maintaining or debugging configurable software systems.
In these three cases, inputs can dramatically alter performance distributions and thus lead to inaccurate performance prediction if the specificities of the input are not taken into account [41, 34]. Hence, UPs can adopt different strategies to adapt the performance model, with different computational costs and quality of performance predictions.
Let us consider UP $A$, who is in a rush and has to quickly deploy configured software solutions to its customers. Luckily, some already trained models were made available online and can be used to make predictions, although they are not directly contextualized for UP $A$'s application. Because customer needs must be met quickly, UP $A$ will directly reuse one of the prediction models as-is, i.e., no adaptation will be made whatsoever. Consequently, the computed performance predictions can be of poor quality, something that remains acceptable for UP $A$. Typical examples of UP $A$ include start-up company engineers or fast prototyping developers, and R&D software developers that can reuse pre-trained models and want to quickly reason over performance prediction of their configurable systems.
In contrast, UP $B$ is not in a rush and has high expectations regarding customer satisfaction. $B$ typically wants to retrieve the best predictions so that she/he can recommend configurations fitting the users’ needs. To do so, $B$ creates a prediction model on-demand specifically tailored to the provided input. Typically UP $B$ corresponds to software engineers from large companies, which face high expectations in software capitalization and quality of service and can absorb the cost of building a performance model per input and workload.
Ultimately, UP $C$ is committed to high standards, but $C$ wants to quickly deliver high-quality software configurations. To do so, $C$ wants to find a trade-off between the two previously-identified personas. $C$ is likely to spend effort finding a pre-trained performance prediction model (or training it by using multiple inputs) so that the model can adapt to various cases. Note that in this case, even though most of the training cost is already high, $C$ wants to tailor the model to customers’ specific inputs to provide high-quality predictions. Examples of UP $C$ include organizations or individuals capable of adapting pre-trained models based on the additional measurement of configurations over specific inputs.
2.2. Prediction Strategies
Based on these different UPs, we can distinguish several strategies for training and obtaining a performance prediction model.
**Offline learning (rely on a pre-trained model).** UP $A$ relies on a previously trained model, that is, a model which provides predictions before $A$ comes up with an input sample. In this case, $A$ is only using a pre-trained model and neither fine-tunes the model, nor retrains it, nor applies any kind of transfer learning. The model is trained on a combination of multiple software configurations and multiple input samples. We follow an input-aware setup similar to the technique proposed in [11] for a compiler of the PetaBricks language. In practice (and it is the main advantage of the method), UP $A$ does not need to compute new configurations’ measurements. $A$ simply passes an input sample to the trained model that provides a prediction by computing and leveraging input properties (see hereafter, in Section 3.1.2). Eventually,
\( \mathcal{A} \) sends the predictions to the customers. This “pre-trained model” setting advantageously reduces the computational load over UP \( \mathcal{A} \) that does not need to measure configurations. To exploit the model, only the input’s properties need to be computed and concatenated to the configurations’ descriptions used for training. (Inputs’ properties are specific to an application domain and many examples are given in the next section). The model can then be queried and all the retrieved predictions can be exploited directly. UP \( \mathcal{A} \) does not control the training process and the quality of the training set (consisting of a selection of software configurations and inputs) meaning that the selections may be poorly distributed over the input space. Similarly, while querying the model, the provided input may be out of distribution, which can possibly result in weak predictions and choices.
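For illustration, a minimal sketch (not from the paper) of how a pre-trained model could be queried for a new input under this strategy; `pretrained_model` is assumed to be any fitted scikit-learn-style regressor trained offline on rows formed by configuration options followed by input properties:

```python
import numpy as np

def predict_for_new_input(pretrained_model, config_matrix, input_properties):
    """Offline strategy: query a pre-trained model for a new input.

    No new performance measurement is made; only the new input's property
    vector is computed and appended to every configuration row.
    """
    props = np.tile(input_properties, (config_matrix.shape[0], 1))
    features = np.hstack([config_matrix, props])
    return pretrained_model.predict(features)
```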
**Supervised online learning (train a model on demand).** Related to UP \( \mathcal{B} \), the goal is to control as much as possible the quality of the prediction. In this setup, \( \mathcal{B} \) does not rely on a pre-trained model (there is no cost in the offline setting) but rather builds a performance prediction model on-demand, in an online setting. For that, \( \mathcal{B} \) has a pool of preselected software configurations ready to be executed with the desired inputs (i.e., those coming from the customers). Another option is to have access to a software configuration sampling procedure. As soon as the input comes up, the first step consists in evaluating the performance of each selected software configuration using that input. Then, the prediction model is built and further predictions can be queried. Note that, unlike the previous strategy, only the desired input sample matters here, and thus there is no need to discriminate among inputs and compute input properties. The learned model is thus specific to the provided input; yet, predictions are expected to be more accurate than those from UP \( \mathcal{A} \) (as the input diversity dimension of the problem disappears). Yet, it supposes that UP \( \mathcal{B} \) has enough resources available to perform both the measurements and the training of the system before being able to provide predictions. Thus, in an online setting where predictions must come almost instantaneously, this strategy is probably inadequate. Also, the model is trained on-demand and for a specific input only. If multiple requests from different customers arrive at the same time, then the model will be re-trained again and again. This prevents sharing knowledge from different models and prevents capitalizing on previous training. Traditional statistical learning techniques (e.g., [21, 60, 52, 3]) can be used. It boils down to addressing a regression problem each time a new workload or input is considered.
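A minimal sketch of the supervised online strategy (illustrative only; `measure` is a hypothetical stand-in for running the configurable system on the user's input, and `config_pool` is assumed to be a 2-D NumPy array of candidate configurations):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_online(measure, config_pool, budget, seed=0):
    """Supervised online strategy: sample `budget` configurations, measure
    them on the user's input, and fit a regressor from scratch for this
    input only."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(config_pool), size=budget, replace=False)
    X_train = config_pool[idx]
    y_train = np.array([measure(c) for c in X_train])
    model = GradientBoostingRegressor(random_state=seed).fit(X_train, y_train)
    return model  # predicts the performance of configurations not measured yet
```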
**Transfer learning (adapt pre-trained model).** Finally, UP \( \mathcal{C} \) wants to answer best to his customers but cannot afford on-demand full training prediction models. A suitable strategy would be to leverage the bigger cost that can be left to organizations that can provide general prediction models, and adapt, on the fly, a model to the specific input that is provided. This way, the out-of-distribution and quality of the selected software configurations and inputs for training problems would be mitigated. The adaptation would then require the general model to be retrieved so that parameters can be modified. The idea is not to retrain completely though. Two different methods can be
used to adapt models, the first one is fine-tuning, such that the model weights can be adjusted quickly to the incoming input; the second one is “transfer learning” \([52, 26, 4, 37, 68]\) to find a transformation between the data distribution the model was trained for, and the data provided by \(C\)’s customers. Transfer learning can be applied for several use cases without necessarily making changes to the original model, thereby providing additional flexibility.
In the context of this paper, we focus on transfer learning as it addresses the out-of-distribution problem. Some transfer learning techniques have not been designed to operate over actual inputs’ changes, but rather changes of computing environments \([25, 68, 26, 43, 67, 33, 24]\) (e.g., hardware or versions). Hence, we had to design transfer learning techniques capable of leveraging the specifics of inputs. The idea is to compute a transformation allowing the transfer of the prediction capabilities from the inputs used for training to the one that comes from \(C\) and vice-versa. For that, \(C\) relies on a pre-trained model as does \(UP\ A\). Yet, as the input comes for the customer, the input’s properties are sent to the model for prediction and in the meantime, \(C\) actually measures the performances of the configurations that are used for training on the new incoming input. Measured performances and predictions can then be compared to retrieve a mathematical transformation that can be applied to the pre-trained model to minimize prediction error. As said previously, the main advantage is to start from a rather general prediction model and simply adapt it. Yet it requires retrieving the model along with the software configurations that were used for training. In the end, the effort is split among online and offline settings, and between (1) the creation of the original, general model and (2) \(UP\ C\) that has to compute the adaptation. Table 1 sums up how efforts are split between online and offline settings, UPs, and model providers for the three different prediction strategies (offline learning, supervised online learning, and transfer learning).
<table>
<thead>
<tr>
<th>Approach</th>
<th>Description of the approach</th>
<th>Offline cost (Offshore Organization)</th>
<th>Online measurement cost (User)</th>
<th>Input properties</th>
<th>User Persona</th>
</tr>
</thead>
<tbody>
<tr>
<td>Supervised online learning</td>
<td>Train performance model on demand, from scratch, each time a new input is fed to the configurable system</td>
<td>None</td>
<td>High</td>
<td>No</td>
<td>B</td>
</tr>
<tr>
<td>Offline learning</td>
<td>Use a pre-trained model over measurements of multiple configurations and inputs. Input properties are used to make the prediction (in an online setting).</td>
<td>High</td>
<td>None</td>
<td>Yes</td>
<td>A</td>
</tr>
<tr>
<td>Transfer learning</td>
<td>Adapt a pre-trained model for a new targeted input. It requires to gather fresh measurements of some configurations over the input (in an online setting).</td>
<td>Medium</td>
<td>Medium</td>
<td>Yes</td>
<td>C</td>
</tr>
</tbody>
</table>
Table 1: Comparison of the different approaches to learn input-aware performance models of configurable systems.
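As a purely illustrative sketch of the transfer-learning strategy summarised above (the concrete transformation used in the study may differ), a pre-trained model can be corrected with a simple linear shift learned from a few fresh measurements on the target input; `pretrained_model` is assumed to be any fitted scikit-learn-style regressor:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adapt_pretrained(pretrained_model, X_measured, y_measured, X_target):
    """Transfer-learning strategy (illustrative): learn a linear mapping from
    the pre-trained model's predictions to the values measured on the new
    input, then apply it to the predictions for the remaining configurations.
    """
    source_pred = pretrained_model.predict(X_measured).reshape(-1, 1)
    shift = LinearRegression().fit(source_pred, y_measured)
    target_pred = pretrained_model.predict(X_target).reshape(-1, 1)
    return shift.predict(target_pred)
```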
2.3. Research Questions (RQs)
Accounting for the variety of UPs and prediction strategies, we spell out the following three research questions (RQs):
**RQ1. How do different machine learning algorithms compare for establishing a relevant performance prediction model?** The production
of a relevant predictive performance model involves the selection of the most appropriate algorithm for that task. To address this question, we quantify the errors and the benefits of tuning hyperparameters of several relevant algorithms used in the literature. We also compare these results with a non-learning approach to the problem, used as a baseline for comparison.
**RQ2. How to select an appropriate set of inputs for training a performance prediction model?** Prior to the prediction, we have to select a list of inputs whose measurements will form the training dataset. We call this process *input selection*. **RQ2** investigates what choice of input selection technique (e.g., all inputs, random selection of inputs, most diverse inputs) leads to the best results in terms of performance prediction. Depending on the offline budget, we propose and compare various input selection techniques.
**RQ3. How does the number of measured configurations affect the performance prediction models?** Since inputs and configurations interact with each other to change software performance, input selection is sensitive to the sampling of configurations i.e., in the way we select the configurations used to train the model. To answer **RQ3**, we train performance models fed with different numbers of inputs and configurations.
### 3. Experimental protocol
To respond to these three research questions, we designed an experimental protocol based on the following data (Section 3.1) and prediction models (Section 3.2). This section also provides details about RQ1 (Section 3.3), RQ2 (Section 3.4) and RQ3 (Section 3.5).
#### 3.1. Data
**3.1.1. Dataset**
We reused the measurements from 8 configurable systems and their inputs, as they are introduced in [34] and referenced in the companion repository[^data]. Different elements are shown in Table 2. The total number of measurements taken for a software system is equal to the number of configurations multiplied by the number of inputs and the number of performance properties. For instance, 201 configurations of x264 have been systematically measured along five performance properties and using 1397 videos coming from the YouTube User Generated Content (YUGC) dataset [71], for a total of 1,403,985 measures for x264.
**3.1.2. Using input properties to discriminate inputs**
To differentiate the inputs directly in the learning process, we computed and added input properties [11] that describe specific characteristics of input data. The input properties are preprocessed into an input feature vector and concatenated with the configuration features.
[^data]: Our data and measurement protocol are available and open
Table 2: An overview of the considered systems in their number of configurations (#Configs), configuration options (#Options), inputs (#Inputs), and input properties (#Properties). It should be noted that some options have numerical values. The last column states the performance properties that were measured.
<table>
<thead>
<tr>
<th>System</th>
<th>#Configs</th>
<th>#Options</th>
<th>#Inputs</th>
<th>#Input</th>
<th>Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>gcc</td>
<td>80</td>
<td>5</td>
<td>30</td>
<td>7</td>
<td>size, ctime, exec</td>
</tr>
<tr>
<td>imagemagick</td>
<td>100</td>
<td>5</td>
<td>1000</td>
<td>5</td>
<td>size, time</td>
</tr>
<tr>
<td>lingeling</td>
<td>100</td>
<td>10</td>
<td>351</td>
<td>4</td>
<td>#conf, #reduc</td>
</tr>
<tr>
<td>nodeJS</td>
<td>50</td>
<td>6</td>
<td>1939</td>
<td>6</td>
<td>#operations/s</td>
</tr>
<tr>
<td>poppler</td>
<td>16</td>
<td>5</td>
<td>1480</td>
<td>6</td>
<td>size, time</td>
</tr>
<tr>
<td>SQLite</td>
<td>50</td>
<td>3</td>
<td>150</td>
<td>8</td>
<td>15 query times q1-q15</td>
</tr>
<tr>
<td>x264</td>
<td>201</td>
<td>23</td>
<td>1397</td>
<td>7</td>
<td>size, time, cpu, fps, kbs</td>
</tr>
<tr>
<td>xz</td>
<td>30</td>
<td>4</td>
<td>48</td>
<td>2</td>
<td>size, time</td>
</tr>
</tbody>
</table>
Input properties and configuration options jointly form the feature vector that is passed as the input to the machine learning model.
The encoding into a single vector allows statistical learning techniques to operate over a unified encoding to predict performance. Though both are encoded as features, configuration options and input characteristics refer to and describe different entities (i.e., software configurations and inputs, respectively). Configuration options directly come from the software and the development activity. They are used to differentiate every single configuration that can be built. Options have been made explicit, traced back and implemented in the code by developers and/or domain experts. On the other hand, characteristics describing inputs are not necessarily made explicit and usually require domain experts to model features that should be both descriptive about the content, helpful for discriminating one input from the other, and also informative for predicting performance. They might not be as differentiating as the ones for configurations, as different inputs may result in the same characteristics. In the end, paired with the configuration options, we expect that this feature vector allows us to observe very similar performances from the systems. Some learning approaches can leverage these input properties as predictors (or features) when building or adapting prediction models. Based on domain knowledge, we list hereafter what input properties have been computed for the different sorts of inputs (see also Table 2): for the .c scripts compiled by gcc, the size of the file, the number of imports, methods, literals, for and if loops, and the number of lines of code (LOCs); for the images fed to imagemagick, the image size, width and height, category (describing the image content, e.g., ostrich, dragonfly, or koala), and its averaged (r, g, b) pixel value; for SAT formulæ processed by lingeling, the size of the .cnf file, the number of variables, of 'or' operators, and of 'and' operators; for the test suite of nodejs, the size of the .js script, the LOCs, number of functions, variables, if conditions, and for loops; for the .pdf files processed by poppler, the page height and width, the image and pdf sizes, the
number of pages and images per input pdf; for databases queried by SQLite, the number of lines for eight different tables of the database; for input videos encoded by x264, the spatial, temporal, and chunk complexity, the resolution, the encoded frames per second, the CPU usage, the width and height of videos; for the system files compressed by xz, the format and the size.
3.1.3. Separation Training-Test
We randomly split each set of configurations into a training set and a test set. We repeat this with varying proportions of configurations in the training set – we start with 10% of configurations dedicated to training and then 20%, 30%, . . . , up to 90%. The training set is available for data selection and training of the models, whereas the test set is purely dedicated to evaluating the trained models. To avoid biasing the results with different samplings of configurations, we fix the random seeds so the different techniques work with the same training and test sets. Note that every training is done independently from one model to another and from scratch so that all training procedures do not share any information and start from the same point.
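For illustration, the split protocol can be expressed with scikit-learn as follows (a sketch; the variable names are assumptions):

```python
from sklearn.model_selection import train_test_split

# `configs` and `perf` stand for the configuration matrix and the measured
# performance values of one (system, input, property) combination.
def split(configs, perf, train_fraction, seed=42):
    """Fixed-seed split; `train_fraction` varies from 0.1 to 0.9 so that
    every technique sees exactly the same training and test sets."""
    return train_test_split(configs, perf,
                            train_size=train_fraction,
                            random_state=seed)
```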
3.2. Performance Prediction Models
In this section, we define the different predictive performance models used in the rest of the paper.
3.2.1. Strategies and baselines
In addition to the three learning strategies that we described in Section 2.2 (supervised online learning, offline learning, transfer learning), we consider a baseline that does not learn the specifics of the inputs. The approach, called **Average**, simply computes the average of configurations’ measurements observed on the input. This is supposed to mimic the behaviour of an end-user that might first consider a configuration of a system (e.g., the default configuration [76]) and then slightly explore the configuration space to obtain an average of the performance values. This way, the end-user may measure various configurations and approximate the trend of performance values. Average is a better approximation than the systematic use of a single point (e.g., the default value) when it comes to predicting the performance of any configuration, especially when inputs change.
For x264, input properties were already computed in [71].
An advantage of this approach is that there is no cost for an offshore organization (i.e., the ones that provide pre-trained models). Instead, the baseline applies on demand for every new input (in an online setting). Once the input is received, a set of randomly sampled configurations is executed over the input to retrieve performance measures. The average of these performances is returned, and these are the values that are communicated to the users in the sense that they can expect such performance on average. Obviously, such an averaged configuration does not account for possible variations in the performance of configurable systems. In particular, the average value remains fixed regardless of the configuration we wish to predict performance for, which could possibly lead to a high degree of inaccuracy.
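A minimal sketch of this baseline (illustrative; `measure` is a hypothetical stand-in for running the configurable system on the incoming input):

```python
import numpy as np

def average_baseline(measure, config_pool, budget, seed=0):
    """Average baseline: measure a random sample of configurations on the
    incoming input and return their mean, used as the prediction for any
    configuration."""
    rng = np.random.default_rng(seed)
    sample = rng.choice(len(config_pool), size=budget, replace=False)
    values = [measure(config_pool[i]) for i in sample]
    return float(np.mean(values))  # constant whatever configuration is queried
```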
### 3.2.2. Learning Algorithm
Until now, we described the learning strategies but did not mention which learning algorithms we were using. We chose 4 different algorithms, namely:
1. **OLS Regression** [57] from Scikit-learn [51], estimating the performance value with a weighted sum of variables. It is not designed for complex cases;
2. **Decision Tree** [55] from Scikit-learn, using decision rules to separate the configurations into sets and then predicting the performance separately for each set;
3. **Random Forest** [49] from Scikit-learn, an ensemble algorithm based on bagging, combining the knowledge of different decision trees to make its prediction;
4. **Gradient Boosting Tree** [18] from XGBoost [10], also derived from Decision Tree. Unlike Random Forest, which trains different trees independently, gradient boosting improves the model step by step, each new tree specialising its decision rules on the errors of the previous ones.
These are prevalent algorithms and are commonly used for tabular data. We do not include deep learning or neural networks since we do not have lots of measurements in an online setting, which is usually a requirement, and since these methods are still commonly outperformed by random forests or gradient boosting [58, 20]. We train each machine learning algorithm with its default parameters, if not stated otherwise. The optimal parameters for an algorithm depend on the dataset, its size, and the performance property. As a result, they need to be adjusted on a per-case basis.
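For reference, the four learners listed above can be instantiated with their default parameters as follows (a sketch assuming scikit-learn and the xgboost package are installed):

```python
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

# The four learners compared in RQ1, with default parameters.
LEARNERS = {
    "OLS Regression": LinearRegression(),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Gradient Boosting Tree": XGBRegressor(random_state=0),
}
```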
3.3. Selecting Algorithms (RQ1)
First, we address RQ1: How do different machine learning algorithms compare for establishing a relevant performance prediction model? We decompose it into three parts, that jointly answer the research question.
3.3.1. Why use machine learning?
Our first goal is to state whether machine learning is suited to address the performance prediction problem. To assess the benefit of using machine learning, we compare the Average baseline to different learning algorithms. We implement them in an online setting, i.e., we use the supervised online approach; given the input of the user, we want to estimate its performance distribution. This is a prediction for one input at a time.
3.3.2. Which machine learning algorithm to use?
This evaluation is also the opportunity to compare these learning algorithms and search for the one that outperforms the others. We also study the evolution of their prediction errors with increasing training sizes. We consider those listed in Section 3.2.2. After training them on the training set, we predict the performance distribution of the test set and compute the prediction error. We repeat it for all combinations of system, inputs, and performance properties. As prediction error, we rely on the Mean Absolute Percentage Error [40]. In Figure 3, we display the average MAPE values for various training sizes.
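A minimal sketch of the error metric, equivalent in spirit to scikit-learn's `mean_absolute_percentage_error` (which returns a fraction rather than a percentage):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent (assumes y_true != 0)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```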
3.3.3. What is the benefit of hyperparameter tuning?
Finally, we want to estimate how much accuracy we could expect to gain when we tune the hyperparameters of learning algorithms. To do so, we rely on a grid search [5] for hyperparameter tuning. We compute the training duration and prediction errors of these algorithms, with and without tuning their hyperparameters, and report the average difference for both.
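An illustrative grid-search setup (the hyperparameter grid shown here is an assumption of the sketch; the actual grids of the study are in the companion repository):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid,
                      scoring="neg_mean_absolute_percentage_error", cv=3)
# search.fit(X_train, y_train); search.best_params_ then gives the tuned values.
```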
3.4. Selecting Inputs (RQ2)
RQ1 studies the effectiveness of machine learning. However, different ways of selecting inputs during the data collection or a different number of inputs could alter the accuracy of the final performance model. Then, we address RQ2: How to select an appropriate set of inputs for training a performance prediction model? We separate this into three questions.
3.4.1. How many inputs do we need to learn an accurate predictive performance model?
From the perspective of a user in charge of the training, adding an input to measure incurs a computational cost and should be justified by an improvement of the model. So, what is the effect of adding new inputs on the accuracy of predictive performance models? How many inputs do we need to reach a decent level of accuracy? We aim at minimising the number of inputs used in the training while maximising the accuracy of the obtained model. To do so, in
this part of the evaluation, we train different performance models using various numbers of inputs and compare their prediction errors.
Then, once the number of inputs is fixed, we search if there is any benefit in precisely and methodologically selecting the inputs for data collection and model training. Does it bring any improvement over the random selection of inputs? Do the different inputs used in the training set change the final predictive performance model accuracy? Depending on the offline budget of the user, we have different objectives.
3.4.2. How to select the input data for an offline setting?
If the offline budget is restricted, a.k.a. the offline setting, the goal is to constitute a representative set of inputs to learn from, in order to build a performance model that will generalise as much as possible. We care about selecting diverse and representative inputs, in order to predict accurate results whatever the input data. For this setting, we compare the following input selections:
1. **Random** - Using a uniform distribution to decide which inputs should be included in the training;
2. **K-means** - Based on input properties, we apply K-means clustering to differentiate clusters of inputs with distinct characteristics. To increase the diversity in the selection, we pick the inputs closest to the center of the clusters;
3. **HDBScan** - Similar to the previous technique but with another clustering algorithm, namely HDBScan\[^5\], using density-based instead of means-based clustering;
4. **Submodular (Selection)** - This technique computes a similarity matrix between the different inputs of a software system and optimises facility location functions \[^6\] to choose a representative set of inputs.
For this offline setting, we implement the supervised offline approach with a Gradient Boosting (best in Section 4.1.2). Once the input selection technique chose the input, we include all related measurements in the training set. The test set is then composed of the measurements of all other inputs, not selected.
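A minimal sketch of the K-means-based selection described above, assuming `input_properties` is a 2-D NumPy array with one row per input:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_inputs_kmeans(input_properties, k, seed=0):
    """Pick k diverse inputs: cluster the input-property vectors with K-means
    and keep, per cluster, the input closest to the cluster centre."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(input_properties)
    chosen = []
    for c, centre in enumerate(km.cluster_centers_):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(input_properties[members] - centre, axis=1)
        chosen.append(int(members[np.argmin(dists)]))
    return chosen  # indices of the selected inputs
```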
To avoid biasing the machine learning model with different scales of performance distribution, we choose to standardise all performance properties. But it has a drawback: since their values are close to zero, it artificially increases the MAPE values. To overcome this, we switch to the Mean Absolute Error (MAE) \[^7\]. Since the performance property is standardised, we assume that models with MAE values inferior to 0.2 are good – way better than the expected average distance \( \frac{2}{\sqrt{\pi}} \approx 1.13 \) \[^8\] between two points drawn at random from a standard normal distribution. We repeat the prediction 20 times and depict the average MAE (y-axis, left) in Figure 4 for different input selection techniques (lines) and number of inputs (x-axis) on a per-system basis. We added the number of training samples (y-axis, right).
Except for gcc and xz, the x-axes are in log scale.
---
\[^5\]: We rely on this implementation: https://hdbscan.readthedocs.io/ [39]
\[^6\]: We rely on this implementation: https://apricot-select.readthedocs.io/ [56]
3.4.3. How to select the input data for an online setting?
If the offline budget is low, a.k.a., in the online setting, then we must be efficient and focus on predicting the performance distribution for the current input of the user. For this setting, we implement a transfer learning approach: the input of the user becomes the target input, and the candidate input becomes the source input. In this online setting, the goal of input selection becomes to find one good source input that is as close as possible to the current input of the user - in terms of characteristics and performance. The choice of a good source should improve the performance prediction of the transfer learning approach [32]. We propose the following input selections:
1. **Random** - Same baseline as in the offline setting;
2. **Closest (Input) Properties** - Two inputs sharing common characteristics might also share common performance distributions. Following this reasoning, to improve the performance prediction, we have to select an input whose properties are similar to the current input’s properties. To do so, we compute the MAE between the properties of the current input and the properties of all the candidate inputs. We pick the input obtaining the smallest MAE value;
3. **Closest Performance** - We use the few measurements already measured on the current input. For these, we compute the Spearman correlation [27] between the performance distribution of the current input and all the candidate inputs. Finally, we select the candidate with the highest correlation;
4. **Input Clustering** (& Random) - With the help of a K-Means algorithm, we form different clusters of inputs based on their properties. We randomly pick a candidate input in the cluster of the current input.
For this question, we use Gradient Boosting. We display the median MAPE results over 10 predictions for all software systems, performance properties and number of inputs in Table 3.
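A minimal sketch of the Closest Performance selection described above; the data layout (dictionaries of performance vectors over the same measured configurations) is an assumption of the example:

```python
import numpy as np
from scipy.stats import spearmanr

def closest_performance_source(target_perf, candidate_perfs):
    """Pick the candidate (source) input whose performance distribution over
    the few already-measured configurations correlates best (Spearman) with
    the target input's measurements.

    `target_perf`: performances of the measured configurations on the target
    input; `candidate_perfs`: dict {input_id: performances of the *same*
    configurations on that candidate input}.
    """
    best, best_rho = None, -np.inf
    for input_id, perf in candidate_perfs.items():
        rho, _ = spearmanr(target_perf, perf)
        if rho > best_rho:
            best, best_rho = input_id, rho
    return best
```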
3.5. Selecting Configurations (RQ3)
In this section, we vary the budgets of (1) inputs and (2) configurations used to constitute the training set fed to the model. **RQ3** - How does the number of measured configurations affect the performance prediction models? In this research question, we compare the accuracy of models according to the numbers of inputs and configurations used during their training, answering these questions:
3.5.1. What is the best trade-off between selecting inputs and sampling configurations?
For a fixed number of configurations, we study the evolution of the accuracy with the number of considered inputs. Is it better to measure lots of configurations or numerous inputs? Since the evaluation is designed to improve the generalization of the model, it mostly relates to models trained in an offline setting. Therefore, we implement the offline approach, fix the algorithm to Gradient Boosting and use the random baseline as input selection technique. We repeat the experiment 20 times. In Figure 5, we depict the MAE (colour) for various numbers of inputs (x-axis) and configurations (y-axis).
We are then interested in the impact of the number of configurations on the accuracy of the three input-aware approaches (transfer learning vs. supervised online; offline learning vs. supervised online).
3.5.2. Is it better to use the transfer learning or the supervised online approach?
Depending on the online budget of the user, it may or may not be worthwhile to transfer the knowledge from one input to another. If the online budget is high, we expect there is no need to transfer, i.e., we do not use transfer learning. If this budget is low, we can benefit from the transfer learning approach. How much online budget justifies the decision for transfer learning? To answer this, we implement both approaches with different numbers of configurations on the target input. We use Gradient Boosting and predict the performance across all systems and performance properties. Because outliers drastically increase the average value, we compute the median MAPE instead of the average. In Figure 6, we display the MAPE value (y-axis) for different budgets of configurations (x-axis).
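For illustration, the sketch below shows one possible model-shifting variant of the transfer approach; it is an assumption for the example rather than a description of our exact implementation. A Gradient Boosting model is trained on the source input, and a simple linear shift maps its predictions onto the target input using the few configurations measured online.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def transfer_predict(X_source, y_source, X_target_measured, y_target_measured, X_target_new):
    # Model learned offline on the (fully measured) source input.
    source_model = GradientBoostingRegressor().fit(X_source, y_source)
    # Shift learned online from the few measurements available on the target input.
    shift = LinearRegression().fit(
        source_model.predict(X_target_measured).reshape(-1, 1), y_target_measured)
    # Predictions for the not-yet-measured configurations of the target input.
    return shift.predict(source_model.predict(X_target_new).reshape(-1, 1))
```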
3.5.3. Should we train performance models offline or online?
To answer this question, we compare the supervised offline approach to the supervised online approach w.r.t. MAE. We predict performance with both approaches and compute the MAE value for different training sizes – varying from 10% to 90% of the configurations available per input. Figure 7 displays the results for both approaches, i.e., the average MAE over all software systems and performance properties.
4. Evaluation
We report the results following the protocol of Section 3.
4.1. Selecting Algorithms (RQ$_1$)
4.1.1. Is using ML relevant in the context of predictive performance modelling?
Figure 3 shows the benefits of using ML as compared to the Average baseline: ML techniques clearly outperform the average performance value predicted by the baseline for all training sizes. The key indicator to study is the evolution of errors with increasing training proportions; while the baseline’s accuracy does not progress with additional measurements, the learning algorithms improve their predictions, from 12% down to 3% error.
4.1.2. Which ML algorithm to use?
While ML is generally beneficial, the prediction quality varies between the different algorithms. For instance, OLS Regression generally leads to poor results, e.g., 22% error with 10% of the configurations. Unlike OLS Regression, tree-based learning algorithms take advantage of the addition of new measurements. For a budget of 50% of the configurations, their median prediction drops below 5% error, which is encouraging. This result of 5% is only valid on average; the prediction is better for a few software systems, e.g., imagemagick or x264, but does not hold for others, e.g., lingeling or poppler. Though there is no big difference between these three learning algorithms, we observe slightly better predictions for Random Forest compared to Decision Trees, and for Gradient Boosting compared to Random Forest.
Figure 3: Which (learning) algorithm to use?
Figure 4: Offline setting with Gradient Boosting (best) and different input selection techniques: Influence of input selection and the number of inputs (lower MAPE = better).
4.1.3. What is the benefit of hyperparameter tuning?
Our results show that hyperparameter search improves the MAPE on average by $8 \pm 16\%$, within a range of 3% to 37%, but also requires on average 120 times more training time when doing a grid search over the estimator parameters. Note that this is an improvement in percent, not percentage points. While the exact overhead of hyperparameter search depends on the number of hyperparameters of the model and the search method, we note that the overall benefit on our datasets is limited. This is especially confirmed by the observation that the best-found hyperparameters change depending on both the system and the size of the training dataset. For simplicity of the setup, the rest of the evaluation uses the default parameters of each model, if not otherwise noted.
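As an indication, such a hyperparameter search can be reproduced with a standard grid search; the grid below is only an example and not necessarily the one used in our setup (the MAPE scorer requires a recent scikit-learn version).

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Example grid over a few Gradient Boosting parameters.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1, 0.2],
}
search = GridSearchCV(GradientBoostingRegressor(), param_grid,
                      scoring="neg_mean_absolute_percentage_error", cv=5)
# search.fit(X_train, y_train); search.best_params_ then holds the tuned setting.
```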
**RQ$_1$** Machine learning is well-suited to address the predictive performance modelling problem over configurations and inputs: it outperforms the *Average* baseline as it makes more accurate predictions. Our results also show that tree-based learning should be favoured over OLS Regression. These tree-based algorithms reach decent error levels with reasonable online budgets, between 5% and 10% relative error in most cases, potentially improved by a further 8% when tuning hyperparameters.
4.2. Selecting Inputs (RQ$_2$)
4.2.1. How many inputs are needed to learn an accurate predictive performance model?
The cost of measuring inputs differs across software systems: measuring 5 inputs represents $5 \times 201 = 1005$ configurations for *x264* but only $5 \times 30 = 150$ for *xz*. Figure 4 shows that the performance model reaches its lowest error threshold when considering about 20 inputs. Results are thus easier to interpret on systems with more inputs, as compared to *gcc* and *xz*. There are, however, more difficult cases in our experiment, *e.g.*, *Node.js* and *poppler*, that might require more inputs to improve the accuracy of the prediction. To get a consistent prediction, we recommend that users measure at least 25 inputs with a sufficient number of configurations per input.
4.2.2. How to select the input data for an offline setting?
In an offline setting, we seek to train a generalized model for all inputs; the selected inputs are supposed to be representative of the diverse set of inputs to be expected during deployment. Figure 4 presents our results. As expected, with an increased number of selected inputs, the influence of the input selection decreases; the input selection is especially important for small numbers of inputs. The evaluation shows that our techniques kmeans, submodular and hdbscan fail to beat the random baseline when selecting the inputs prior to the training of the model, and no input selection technique clearly outperforms the others. These results could be explained by multiple factors: (1) the input properties processed by the input selections are not sufficient to differentiate the inputs; (2) our selection techniques focus on picking diverse profiles of inputs, while it may be more efficient to select a set of average-like inputs. Based on this result, we advise keeping it simple and adopting the random baseline.
4.2.3. How to select the input data for an online setting?
In an online setting, we specifically build a model for the current input and select a similar input to transfer the knowledge from. Table 3 details the results for the different input selections, as the median MAPE over 10 predictions for all software systems, performance properties, and numbers of inputs. Unlike the offline setting, our input selection techniques were able to beat the random baseline, the best being Closest Performance with an average MAPE around 3.8, followed by Closest Properties (4.2) and Input Clustering (4.7). Wilcoxon signed-rank tests [59] (with a significance level of 0.05) confirm that predictions related to the different input selections are significantly different from those using the random baseline: $p = 0.0$ for Closest Performance, $p = 1 \times 10^{-184}$ for Closest Properties and $p = 1 \times 10^{-27}$ for Input Clustering. Therefore, and to continue to provide guidance for users, we advise using Closest Performance to select the input in an online setting. Beyond the raw comparison of error values, beating the random baseline with the Closest Properties technique is a strong result: it empirically validates that these input properties are valuable to compute and should be included in the models to improve the prediction. Besides, the evaluation shows that training times are negligible.
Table 3: Online Setting - Influence of input selection
<table>
<thead>
<tr>
<th>Input Selection</th>
<th>MAPE (%)</th>
<th>Training Time (sec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Random</td>
<td>5.22</td>
<td>0.02</td>
</tr>
<tr>
<td>Closest Properties</td>
<td>4.17</td>
<td>0.05</td>
</tr>
<tr>
<td>Closest Performance</td>
<td>3.82</td>
<td>0.07</td>
</tr>
<tr>
<td>Input Clustering</td>
<td>4.70</td>
<td>0.02</td>
</tr>
</tbody>
</table>
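As a side note, the significance tests above can be reproduced with a paired Wilcoxon signed-rank test over the per-prediction errors; the error arrays below are placeholders for illustration.

```python
from scipy.stats import wilcoxon

def significantly_different(errors_technique, errors_random, alpha=0.05):
    """Paired comparison of prediction errors: selection technique vs. random baseline."""
    _, p_value = wilcoxon(errors_technique, errors_random)
    return p_value < alpha, p_value
```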
**RQ2** In an offline setting, the results show that diversification of inputs (rather than configurations) should be prioritized, using a uniform distribution to select inputs, *i.e.*, the random baseline. In an online setting, the Closest Performance technique gains about 1.4 points of error compared to a random selection of inputs. Our results empirically validate the important role of input properties when predicting software performance, *i.e.*, pretrained performance models (offline) can be reused under the condition that input properties are computed and leveraged online.
4.3. Selecting Configurations (**RQ3**)
4.3.1. What is the best trade-off between selecting inputs and sampling configurations?
According to the results of Figure 5, diversifying the inputs is more effective than selecting different configurations to train an input-aware performance model. But for a fixed budget of inputs, there is a slight improvement in accuracy when increasing the number of configurations. As a result, both should be combined to obtain the best possible model. Overall, the ideal budget – both in terms of inputs and configurations – highly depends on the expected level of error combined with the difficulty of learning a predictive performance model for the software under test. For instance, predicting the performance of imagemagick is relatively easy, with an average MAE value of 0.06. For this software system, picking 25 inputs is almost a waste of resources; 5 inputs with 30% of the configurations is already enough. Other systems are harder to learn, *e.g.*, lingeling with an average MAE of 0.76. For these, we recommend increasing the number of inputs and configurations. The MAE obtained on the training set should be used as a proxy to estimate the difficulty of predicting the performance of the software under test: the greater its value, the greater the budget needed to learn an accurate model.
4.3.2. Is it better to use transfer learning or a supervised online approach?
With the input selection technique set to Closest Performance and whatever the percentage of configurations used in the training, the transfer approach always outperforms the supervised online approach, as shown in Figure 6. This strong result demonstrates the importance of capitalizing on the existing measurements, measured in an offline setting. Hence, if we are able to create representative sets of inputs for each software system (thanks to Closest Performance), then transfer learning becomes the best approach to use.
However, the strategy of picking the source input among all the inputs of our dataset is not always possible, as it requires a high offline budget. We consider another scenario where we put ourselves in the situation of a user with a low offline budget, i.e., not able to select an ideal source input. We thus add the comparison of transfer learning with a random input selection baseline. Even in this case, transfer learning still outperforms the supervised approach when fewer than 55% of the configurations are used for training. But this transfer with random selection has an expiration date: after a training proportion of 55% (represented by the arrow on the graph), it leads to negative transfer; the added measurements (of the source) become noisy data interfering with the training of the target model.
4.3.3. Should we train performance models offline or online?
Figure 7 shows the evolution of the two supervised approaches, offline and online, depending on how many configurations are used as part of the training. The first point we notice is the slow progression of the supervised offline approach, only from 0.37 to 0.30 between 10% and 90% of configurations.
A possible explanation is that generalizing over the input dimension is so hard that it hides the benefit of adding configurations, whereas, when considering only one input at a time, the differences between performance distributions do not hinder the training of the machine learning model. Nevertheless, comparing the raw numbers provides a straight answer to the initial question: unless the online budget is really low, if the choice between a supervised offline and a supervised online approach occurs, one should definitely prefer the online approach. This finding has to be contextualized w.r.t. cost and effectiveness. From the point of view of the final user, computing measurements in an online setting to build a performance model will always take longer than an offline prediction using an already-trained model. Yet, when users have an online budget (even a small one), they should always prefer the online approach over the offline approach. Stated differently, the supervised offline approach should be adopted for lack of a better solution, as the last approach to implement when users cannot afford to measure configurations in an online setting.
**RQ3** Supervised online learning quickly outperforms the offline learning version. With more than 20% of configurations for training, online learning already shows a lower MAE on average than its offline counterpart. We come up with the following high-level recommendation: 1) if the online budget is low, transfer learning should be used; 2) with a substantial online budget (more than 55% of the configurations in our experiments), it might be better to use the supervised online approach; 3) offline learning without transfer should be avoided, except for very small online configuration budgets.
5. Discussion
Each part of the evaluation provides a recommendation for the three user profiles defined in Section 2.1, depending on the trade-off between their offline
and their online budgets. This discussion summarizes our findings while answering RQ$_1$ to RQ$_3$ and turns these findings into recommendations and actionable rules, thus guiding the user to solve the performance prediction problem. **How can we help users predict their software performance, whatever the input data and the configuration?**
Depending on the available online budget, we distinguish the following cases:
- **If the user has a high online budget** (e.g., user persona \(C\)), we recommend using the supervised online approach (Sec. 4.3.3) with a Gradient Boosting Tree implementation (Sec. 4.1.2) and tuned hyperparameters (Sec. 4.1.3). In that case, users can expect low prediction errors. Assuming the configurations’ measurements have been collected, learning a performance model from scratch for each unique input is the ideal scenario, as is possible with a large online budget.
- **If the user has a low online budget**, we can also recommend the supervised online approach, but cannot promise outstanding performance estimations. Hence, if a representative set of inputs has already been measured, *i.e.*, with a big offline budget (as for user persona \(B\)), we rather recommend using the transfer learning approach (Sec. 4.3.2) with a Gradient Boosting algorithm and the Closest Performance input selection technique (Sec. 4.2.3). Our experiments show that the performance predictions of user persona \(B\), who relies on inputs measured in an offline setting, will outperform the online predictions of user persona \(C\) for reasonable budgets of configurations;
- **If the user has no online budget** (e.g., user persona \(A\)), then the available offline budget is key. If the offline budget is low, there is no silver bullet: since we cannot guarantee low errors with our models, it is probably better to avoid predicting than to provide a poor estimation of software performance.
If the user has access to a diverse set of configurations’ measurements over different inputs (high offline budget), then we can advise using the supervised offline approach (Sec. 4.3.3) implementing a Gradient Boosting algorithm with a random selection of inputs (see Sec. 4.2.2).
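Purely as an illustration, these recommendations can be read as a small decision procedure; the sketch below is an assumption-laden paraphrase of the rules above (and of Figure 8), and its thresholds should not be taken as universal.

```python
def recommend(has_online_budget: bool, online_budget_is_high: bool,
              offline_budget_is_high: bool) -> str:
    """Rules of thumb derived from the discussion above (see also Figure 8)."""
    if has_online_budget and online_budget_is_high:
        return "supervised online learning (Gradient Boosting, tuned hyperparameters)"
    if has_online_budget and offline_budget_is_high:
        return "transfer learning (Gradient Boosting, Closest Performance source selection)"
    if has_online_budget:
        return "supervised online learning (expect higher prediction errors)"
    if offline_budget_is_high:
        return "supervised offline learning (Gradient Boosting, random input selection)"
    return "avoid predicting: no model with guaranteed low error can be trained"
```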
Figure 8 summarizes these rules of thumb into a flow diagram. We also depict likely locations for user personas \(A\), \(B\) and \(C\) at the end of the decision process, based on our previous recommendations, as well as the observed relative errors from our experiments. These observed errors serve as a rule-of-thumb, but are, of course, not directly transferable to other systems and setups.
As a limitation of our work, we highlight that it is difficult to learn the performance distribution for a few software systems, *e.g.*, lingeling, poppler, and even Node.js. For these systems, the prediction errors are above 20% when implementing a supervised offline approach with tight budgets of configurations or inputs. It is worth noting that an additional measurement effort is required for these systems, *i.e.*, measuring more than the general threshold of 25 inputs. The required number of inputs on a per-system basis
is documented in the companion repository [7]. This is potentially related to the impact of individual features on the performance metric, i.e., the spread of the correlation, which was investigated in concurrent work for the same dataset as used in this paper [34]. For the difficult-to-learn software systems, multiple features have a high impact on the performance metric [34, Table 4], leading to a more difficult objective landscape for the regression task.
Our results further relate to the existing body of work and confirm some previously reported results. For example, BEETLE [32] highlights the importance of selecting the right input-specific source(s) for transfer learning to maximize the accuracy and mitigate the risk of negative transfer, a problem similar to the input selection we consider in RQ2. In [26], it is discussed in which scenarios transfer learning is applicable and when it might lead to negative transfer. The applicability is found to be hindered by a higher severity of the change between the original training environment and the target prediction environment to which the model is transferred. This problem of negative transfer in variability modelling was similarly confirmed for the Linux kernel in [37]. The negative transfer problem underlines the importance of establishing a training set that is as broad and diverse as possible, in order to best fit the data distribution that the model will encounter during deployment. The closer the target environment (whether new inputs, configurations, or other variability aspects) is to the environments from which the training data was collected, the smaller the risk of negative transfer and the better the performance prediction quality.
6. Threats to Validity
A first threat to validity is linked to the data we are using; since we rely on an existing dataset [34], we are exposed to the same threats to validity. In particular, an error in the measurement protocol could invalidate our results. Besides, we do not consider all the possible configuration options of the software systems. The fact that the dataset includes multiple software systems (8 in total) and a substantial number of performance measurements (roughly two million when considering the different performance properties) is supposed to alleviate these threats. A second threat to validity relates to the input properties computed in Section 3.1.2. Since we are not domain experts of each of the eight software systems considered in this experiment, we cannot validate the construction of such properties, i.e., it is likely that there is an opportunity to craft more expressive input properties. To the best of our knowledge, which input properties to use in order to improve the performance prediction remains an open question [11, 75]. Furthermore, we neglect their computational cost. As we mentioned, being able to report the properties precisely can be a tedious problem: measures need to be precise, external factors need to be mitigated as much as possible to reduce potential interactions with the measurements, etc. Being meticulous about these aspects may drastically increase the cost of obtaining such measurements and ultimately threaten the results of Section 4.2.2 or overestimate the benefit of the supervised offline approach. Another threat to validity is related to the randomness in machine learning methods, which can alter their predictions. To reduce these stochastic effects, we (1) fix the random seed to feed the same training and test sets to all models and obtain comparable results, and (2) repeat the experiments 20 times. Finally, we acknowledge that we relied on already existing libraries and implementations. These can be buggy or present some inaccuracies that may favour our results. We chose to use ML implementations coming from scikit-learn, which is one of the most popular Python ML libraries at the moment. Its community is active and sensitive to these aspects, so we can assume that if such a problem existed, it would have been discovered and fixed quickly.
7. Related Work
Machine learning and configurable systems. Machine learning techniques have been widely considered in the literature to learn software configuration spaces [52, 54, 43, 25, 26, 68, 44, 45, 48, 19, 12, 16, 23]. Several works have proposed to predict the performance of configurations, with various use-cases in mind for developers and users of configurable systems [60, 12, 63, 62]: the maintenance and interpretability of configuration spaces, the exploration of trade-offs in the configuration space, the automated specialization of configurable systems, or simply making informed decisions when choosing a suited configuration. The selection of an optimal configuration [18, 19, 17] is also an extensive line of research. We do not target the problem of finding an optimal configuration in this article. Though prediction models can be leveraged, more targeted and effective techniques have been proposed to find an optimal configuration [47]. Most of the studies support learning models restricted to specific static settings, such that a new prediction model has to be learned from scratch once the environment changes. The variability of input data exacerbates the problem and questions the generalization of configuration knowledge, e.g., a configuration is only optimal for a given input.
Input sensitivity of configurable systems. Input sensitivity has been partly considered in some specific research works. Let us take video encoding [38] as an example: Pereira et al. [3] study the effect of sampling training data from the configuration space on x264 configuration performance models for 19 input videos and two performance properties. Netflix conducted a large-scale study comparing the compression performance of x264, x265, and libvpx [1]: 5,000 12-second clips from the Netflix catalogue were used, covering a wide range of genres and signal characteristics. However, only two configurations were considered and the focus of the study was not on predicting performance. Our study covers many more inputs, systems, and performance properties. Valov et al. [67] proposed a method to transfer the Pareto frontiers (encoding time and size) of performances across heterogeneous hardware environments. Yet, the input (video) remains fixed, which is an immediate threat to validity. In fact, this threat is shared by numerous studies on configurable systems that consider configurations with the same input video (see [52] for the references). In response, we carefully assess numerous combinations of learning approaches, algorithms, and input selections to deal with input sensitivity. Input sensitivity is both the root cause behind the need for input-aware performance models and the reason why these models may fail at predicting software performance whatever their input is. If there were no interaction at all between inputs and configurations, a simple performance model for all inputs would suffice. Based on this work combined with [34], our conjecture is as follows: the more input-sensitive a software system is, the more difficult (and costly) it is to train an efficient input-aware performance model.
The input sensitivity issue has also been identified, and sometimes dealt with, in other domains: SAT solvers [75, 17], compilation [53, 11], data compression [29], database management [69, 14], cloud computing [15, 36, 12], etc. These works purposely leverage the specifics of their domain. However, it is unclear how the proposed techniques could be adapted to any domain and all software systems [42]. Thus, we favour a generic, domain-agnostic approach (e.g., transfer learning) as part of our study. Importantly, most of these works pursue the objective of optimizing the performance of a software system according to a given input (workload). In contrast, we consider the problem of predicting the performance of any configuration. Our key goal is to investigate how configuration knowledge can be generalized or transferred among inputs.
Transfer learning. Transfer learning has been considered for configurable software systems, with the idea of transferring knowledge across different computing environments, etc. The promise is to reduce measurement efforts and costs over configurations. Jamshidi et al. define Learning to Sample (L2S) [20], which combines an exploitation of the source and an exploration of the target to sample a list of configurations. Like many other transfer learning works [4], L2S is applied to transfer performance across executing environments (e.g., hardware changes), not input changes. L2S could be adapted as part of transfer learning (see Figure 2). However, L2S is highly sensitive to the selection of a source (an input) for a given target (another input). Martin et al. develop TEAMs [37], a transfer learning approach predicting the performance distribution of the Linux kernel using the measurements of its previous releases. Valov et al. showed that linear models are effective for transferring knowledge across different hardware environments [68]. However, inputs can significantly alter performance distributions, e.g., Pearson correlations can be close to 0 for some pairs of inputs, systems, and performance properties; there is not necessarily a linear relationship, as there is for hardware changes. We assess model shifting as part of transfer learning. There are many studies in the software engineering literature applying transfer learning for defect prediction [7, 35, 46, 9, 70]. They handle a classification problem instead of a regression problem as in our case. Additionally, while researchers commonly utilize software quality metrics as predictive features for cross-software defects, our approach differs as we leverage configuration options to forecast performance. Beyond software systems, transfer learning is subject to intensive research in many domains (e.g., image processing, natural language processing) [50, 73, 78]. Different kinds of data, assumptions, and tasks have been considered. The interplay between configuration options and inputs calls for tackling a regression problem over tabular data, which differs from images or textual content; some techniques are simply not applicable in our context. Another specificity of our problem is the open question of how to select and adapt the source for a given target (here: a new input fed to a configurable system). Overall, we design transfer learning techniques that leverage characteristics of inputs and that can operate over tabular data.
In this paper, transfer learning techniques targeting the interplay between inputs and configurations work best if the source and the target inputs are close to each other (in terms of performance profiles [11]). In this specific context, the most important part is neither the ML algorithm used nor the way to transfer the knowledge, but simply to associate the right source input with the target input under prediction. Two insights follow from this finding: 1. to be optimal, transfer learning might require an additional offline effort (i.e., measuring potential source inputs), even if the technique is supposed to be labelled as online, so that the best source input can be picked among a sufficient set of inputs; 2. more than comparing TL techniques with each other, future efforts should focus on the optimal association of inputs, i.e., how to find the best source input given the current target input. Our current proposition, deriving input properties as metrics between inputs to select the best source, can be seen as an extension of the bellwether effect (e.g., used by BEETLE [32]), which states that there exists a unique source input leading to superior transfer results whatever the target.
Selection problem. The automated algorithm selection problem is subject to intensive research [28, 22, 75, 65]: given a computational problem, a set of algorithms, and a specific problem instance to be solved, the problem is to determine which of the algorithms can be selected to perform best on that instance. Techniques have substantially improved the state of the art in solving many prominent artificial intelligence problems, such as SAT, CSP, QBF, ASP, or scheduling problems [28]. For instance, SATZilla uses machine learning to select the most effective algorithm from a portfolio of SAT solvers for a given SAT formula [75]. There are several differences in our work. First, we target the problem of predicting the performance of any configuration as opposed to finding an optimal one. Second, in our case, the set comprises all (valid) configurations of a single, parameterized, configurable system, whereas in the automated algorithm selection setting, the set of algorithms comes from different individual software implementations and systems. As stated in [28] (Section 6), our problem differs and is still open because (1) the space of valid configurations to select from is typically very large; (2) learning the mapping from instance features (i.e., inputs’ properties) to configurations is challenging. We precisely address this problem in this article, considering a large dataset and multiple learning approaches.
8. Conclusion
Due to the interactions between inputs and configurations, predicting the performance property of a configurable software system whatever the input data is non-trivial and yet of practical importance. In particular, performance models trained on a single input can quickly become inaccurate and useless when used over other inputs. This lack of generalizability and practicality suggests investigating solutions for learning input-aware performance models. In this article, we empirically evaluated the effectiveness of different learning strategies (offline learning, supervised online learning, transfer learning) and user personas when addressing this problem. We leveraged a large dataset comprising 8 software systems, hundreds of configurations and inputs, and dozens of performance properties, spanning a total of 1,941,075 configurations’ measurements.

Our study empirically shows that measuring the performance of configurations on multiple inputs leads to 1) learning the complexity of predictive performance models; 2) training models which are robust to the change of input data. Offline learning can build configuration knowledge that pays off and benefits online learning when a new input needs to be processed. We emphasize the need to compute relevant input properties (e.g., video characteristics) as part of the learning to discriminate the different inputs fed to the software system. As future work, we plan to consider input-aware optimization methods. The problem would differ: instead of transferring the whole performance distribution across inputs and configurations, optimization pursues the goal of finding a single optimal point, typically through the transfer of some configuration knowledge across inputs.
Acknowledgements.
This research was funded by the ANR-17-CE25-0010-01 VaryVary project and the associated Inria/Simula team Resilient Software Science (RESIST_EA), https://gemoc.org/resist/.
References
Trade-Offs in Continuous Integration: Assurance, Security, and Flexibility
Michael Hilton¹,², Nicholas Nelson¹, Timothy Tunnell³, Darko Marinov², Danny Dig¹
¹Oregon State University, USA
²Carnegie Mellon University, USA
³University of Illinois at Urbana-Champaign, USA
mhilton@cmu.edu, {nelsonni,digd}@oregonstate.edu, {tunnell2,marinov}@illinois.edu
ABSTRACT
Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI being one of the most widely used processes in software engineering, we do not know what motivates developers to use CI, and what barriers and unmet needs they face. Without such knowledge developers make easily avoidable errors, tool builders invest in the wrong direction, and researchers miss many opportunities for improving the practice of software engineering.
In this paper, we present a qualitative study of the barriers and needs developers face when using CI. We conduct 16 semi-structured interviews with developers from different industries and development scales. We triangulate our findings by running two surveys. The Focused Survey samples 51 developers at a single company. The Broad Survey samples a population of 523 developers from all over the world. We find that when using and implementing CI, developers face trade-offs between speed and certainty (Assurance), between better access and information security (Security), and between more configuration options and greater ease of use (Flexibility). We present implications of these trade-offs for developers, tool builders, and researchers.
CCS CONCEPTS
-Software and its engineering →Agile software development;
-Software testing and debugging;
KEYWORDS
Continuous Integration, Automated Testing
1 INTRODUCTION
Continuous integration (CI) systems automate the compilation, building, and testing of software. CI usage is widespread throughout the software development industry. For example, the “State of Agile” industry survey [51], with 3,880 participants, found half of the respondents use CI. The “State of DevOps” report [33], a survey of over 4,600 technical professionals from around the world, finds CI to be an indicator of “high performing IT organizations”. We previously reported [19] that 40% of the 34,000 most popular open-source projects on GitHub use CI, and the most popular projects are more likely to use CI (70% of the top 500 projects).
Despite the widespread adoption of CI, there are still many unanswered questions about CI. In one study, Vasilescu et al. [50] show that CI correlates with positive quality outcomes. In our previous work [19], we examine the usage of CI among open-source projects on GitHub, and show that projects that use CI release more frequently than projects that do not. However, these studies do not present what barriers and needs developers face when using CI, or what trade-offs developers must make when using CI.
To fill in the gaps in knowledge about developers’ use of CI, we ask the following questions: What needs do developers have that are unmet by their current CI system(s)? What problems have developers experienced when configuring and using CI system(s)? How do developers feel about using CI? Without answers to these questions, developers can potentially find CI more obstructive than helpful, tool builders can implement unneeded features, and researchers may not be aware of areas of CI usage that require further examination and solutions that can further empower practitioners.
To answer these questions, we employ complementary established research methodologies. Our primary methodology is interviews with 16 software developers from 14 different companies of all sizes. To triangulate [15] our findings, we deploy two surveys. The Focused Survey samples 51 developers at Pivotal¹. The Broad Survey samples 523 participants, of which 95% are from industry, and 70% have seven or more years of software development experience. The interviews provide the content for the surveys, and the Focused Survey provides depth, while the Broad Survey provides breadth. Analyzing all this data, we answer four research questions:
RQ1: What barriers do developers face when using CI? (see §4.1)
RQ2: What unmet needs do developers have with CI tools? (see §4.2)
RQ3: Why do developers use CI? (see §4.3)
RQ4: What benefits do developers experience using CI? (see §4.4)
Based on our findings, we identify three trade-offs developers face when using CI. Other researchers [32, 34, 53] have identified similar trade-offs in different domains. We name these trade-offs Assurance, Security, and Flexibility.
Assurance describes the trade-off between increasing the added value that extra testing provides, and the extra cost of performing that testing. Rothermel et al. [34] identify this trade-off as a motivation for test prioritization.
Security describes the trade-off between increased security measures, and the ability to access and modify the CI system as needed. Post and Kagan [32] found a third of knowledge workers report security restrictions hinder their ability to perform their jobs. We observe this issue also applies to CI users.
¹pivotal.io
Flexibility describes the trade-off that occurs when developers want systems that are both powerful and highly configurable, yet at the same time, they want those systems to be simple and easy to use. Xu et al. [53] identify the costs of over-configurable systems and found that these systems severely hinder usability. We also observe the tension from this trade-off among developers using CI.
In the context of these three trade-offs, we present implications for three audiences: developers, tool builders, and researchers. For example, developers face difficult choices about how much testing is enough, and how to choose the right tests to run. Tool builders should create UIs for CI users to configure their CI systems, but these UIs should serialize configurations out to text files so that they can be kept in version control. Researchers have much to bring to the CI community, such as helping with fault localization and test parallelization when using CI, and examining the security challenges developers face when using CI.
This paper makes the following contributions:
(1) We conduct exploratory semi-structured interviews with 16 developers, then triangulate these findings with a Focused Survey of 51 developers at Pivotal and a Broad Survey of 523 developers from all over the world.
(2) We provide an empirically justified set of developers’ motivations for using CI.
(3) We expose gaps between developers’ needs and existing tooling for CI.
(4) We present actionable implications that developers, tool builders, and researchers can build on.
The interview script, code set, survey questions, and responses can be found at http://cope.eecs.oregonstate.edu/CI_Tradeoffs.
2 BACKGROUND
The idea of Continuous Integration (CI) was first introduced [6] in the context of object-oriented design: “At regular intervals, the process of continuous integration yields executable releases that grow in functionality at every release...” This idea was then adopted as one of the core practices of Extreme Programming (XP) [3].
The core premise of CI, as described by Fowler [14], is that the more often a project integrates, the better off it is. CI systems are responsible for retrieving code, collecting all dependencies, compiling the code, and running automated tests. The system should output “pass” or “fail” to indicate whether the CI process was successful.
We asked our interview participants to describe their CI usage pipeline. While not all pipelines are the same, they generally share some common elements.
Changesets are a group of changes that a developer makes to the code. They may be a single commit, or a group of commits, but they should be a complete change, so that after the changeset is applied, it should not break the program.
When a CI system observes a change made by developers, this triggers a CI event. How and when the CI is triggered is based on how the CI is configured. One common way to trigger CI is when a commit is pushed to a repository.
For the CI to test the code without concern for previous data or external systems, it is important that CI runs in a clean environment. The automated build script should be able to start with a clean environment and build the product from scratch before executing tests. Many developers use containers (e.g., Docker) to implement clean environments for builds.
An important step in the CI pipeline is confirming that the changeset was integrated correctly into the application. One common method is a regression test suite, including unit tests and integration tests. The CI system can also perform other analyses, such as linting or evaluating test coverage.
The last step is to deploy the artifact. We found some developers consider deployment to be a part of CI, and others consider continuous deployment (CD) to be a separate process.
3 METHODOLOGY
Inspired by established guidelines [24, 28, 31, 41, 48], the primary methodologies we employ in this work are interviews with software developers and two surveys of software developers to triangulate [15] our findings.
Interviews are a qualitative method and are effective at discovering the knowledge and experiences of the participants. However, they often have a limited sample size [41]. Surveys are a quantitative technique that summarizes information over a larger sample size and thus provides broader results. Together, they provide a much clearer picture than either can provide alone.
We first use interviews to elicit developers’ experiences and expectations when working with CI, and we build a taxonomy of barriers, unmet needs, motivations, and experiences. We build a survey populating the answers to each question with the results of the interviews. We deploy this survey at Pivotal, a software and services company that also develops a CI system, Concourse. To gain an even broader understanding, we also deploy another survey via social media. The interview script, code set, survey questions, and the responses can be found on our companion site.
3.1 Interviews
We used semi-structured interviews “which include a mixture of open-ended and specific questions, designed to elicit not only the information foreseen, but also unexpected types of information” [41]. We developed our interview script by performing iterative pilots.
We initially recruited participants from previous research, and then used snowball sampling to reach more developers. We interviewed 16 developers from 14 different companies, including large software companies, CI service companies, small development companies, a telecommunications company, and software consultants. Our participants had over eight years of development experience on average. We assigned each participant a subject number (Table 1). They all used CI, and a variety of CI systems, including Concourse, Jenkins, TravisCI, CruiseControl.NET, CircleCI, TeamCity, XCode Bots, Buildbot, Wercker, and appVeyor, as well as proprietary CI systems. Each interview lasted between 30 and 60 minutes, and the participants were offered a US$50 Amazon gift card for participating.
---
1 docker.io 2 concourse.ci 3 jenkins.io 4 travis-ci.org 5 cruisecontrolnet.org 6 circleci.com 7 jetbrains.com/teamcity 8 developer.apple.com/xcode 9 buildbot.net 10 wercker.com 11 appveyor.com
The interviews were based on the research questions presented in Section 1. The following are some examples of the questions that we asked in the interview:
- Tell me about the last time you used CI.
- What tasks prompt you to interact with your CI tools?
- Comparing projects that do use CI with those that don’t, what differences have you observed?
- What, if anything, would you like to change about your current CI system?
We coded the interviews using established guidelines from the literature [35] and followed the guidance from Campbell et al. [7] on specific issues related to coding semi-structured interview data, such as segmentation, codebook evolution, and coder agreement.
The first author segmented the transcript from each interview by units of meaning [7]. The first two authors then collaborated on coding the segmented interviews, using the negotiated agreement technique to achieve agreement [7]. Negotiated agreement is a technique where both researchers code a single transcript and discuss their disagreements in an effort to reconcile them before continuing on. We coded the first eight interviews together using this negotiated agreement technique. Because agreement is negotiated along the way, there is no inter-rater agreement number. After the eighth interview, the first and second author independently coded the remaining interviews. Our final codebook contained 25 codes divided into 4 groups: demographics, systems/tools, process, and human CI interaction. The full codeset is available on our companion site.
3.2 Survey
We created a survey with 21 questions to quantify the findings from our semi-structured interviews. The questions for the survey were created to answer our research questions, focusing on what benefits, barriers, and unmet needs developers have when using CI.
The survey consisted of multiple choice questions, with a final open-ended text field to allow participants to share any additional information about CI. The answers for these multiple choice questions were populated from the answers given by interview participants. We ensured completeness by including an “other” field where appropriate. To prevent biasing our participants, we randomized the order of answers in multiple-choice questions.
Focused Population We deployed our survey to a focused population of developers at Pivotal. Pivotal embraces agile development and also sponsors the development of Concourse CI. We sent our survey via email to 294 developers at Pivotal, and we collected 51 responses for a response rate of 17.3%. All respondents from Pivotal reported using CI.
Broad Population We believe there are many voices among software developers, and we wanted to hear from as many of them as possible. We chose our sampling method for the survey to reach as many developers as possible. We recruited participants by advertising our survey on social media (Facebook, Twitter, and Reddit). As with all survey approaches, we were forced to make certain concessions [5]. When recruiting participants online, we can reach larger numbers of respondents, but in doing so, results suffer self-selection bias. To maximize participation, we followed guidelines from the literature [42], including keeping the survey as short as possible, and raffling one US$50 Amazon gift card to survey participants.
We collected 523 complete responses, and a total of 691 survey responses, from over 30 countries. Over 50% of our participants had over 10 years of software development experience, and over 80% had over 4 years experience.
Table 1: Interview Participants (subjects S1–S16)
4 Analysis of results
4.1 Barriers
We answer *What barriers do developers face when using CI? (RQ1)*
We collected from our interview participants a list of barriers that prevent or hinder the adoption and use of CI. We asked our survey participants to select up to three problems that they had experienced. If they had experienced more than three, we asked them to choose the three most common.
<table>
<thead>
<tr>
<th>Table 2: Barriers developers encounter when using CI</th>
</tr>
</thead>
<tbody>
<tr>
<td>Barrier</td>
</tr>
<tr>
<td>B1 Troubleshooting a CI build failure</td>
</tr>
<tr>
<td>B2 Overly long build times</td>
</tr>
<tr>
<td>B3 Automating the build process</td>
</tr>
<tr>
<td>B4 Lack of support for the desired workflow</td>
</tr>
<tr>
<td>B5 Maintaining a CI server or service</td>
</tr>
<tr>
<td>B6 Setting up a CI server or service</td>
</tr>
<tr>
<td>B7 Lack of tool integration</td>
</tr>
<tr>
<td>B8 Security and access controls</td>
</tr>
</tbody>
</table>
**B1 Troubleshooting a CI build failure.** When a CI build fails, some participants begin the process of identifying why the build failed. Sometimes, this can be fairly straightforward. However, for some build failures on the CI server, where the developer does not have the same access as they have when debugging locally, troubleshooting the failure can be quite challenging. S4 described one such situation:
If I get lucky, I can spot the cause of the problem right from the results from the Jenkins reports, and if not, then it becomes more complicated.
One way tool makers have tried to help developers is via better logging and storing test artifacts to make it easier to examine failures. One participant described how they use Sauce Labs\(^{13}\), a service for automated testing of web pages, in conjunction with their CI. When a test fails on Sauce Labs, there is a recording that the developers can watch to determine exactly how their test failed. Another participant described how Wercker saves a container from each CI run, so one can download the container and run the code in the container to debug a failed test.
**B2 Overly long build times.** Because CI must confirm that the current changeset is integrated correctly, it must build the code and run automated tests. This is a blocking step for developers, because they do not want to accept the changeset until they can be certain that it will not break the build. If this blocking step becomes too long, it reduces developers’ productivity. Many interview participants reported that their build times slowly grow over time, e.g., according to S10:
Absolutely [our build times grow over time]. Worst case scenario it creeps with added dependencies, and added sloppy tests, and too much I/O. That’s the worst case scenario for me, when it is a slow creep.
Other participants told us they had seen build times increase because of bugs in their build tools, problems with caching, dependency issues during the build process, and adding different styles of tests (e.g., acceptance tests) to the CI builds.

To dig a little deeper, we examined in-depth what developers meant by overly long build times. S9 said:
My favorite way of thinking about build time is basically, you have tea time, lunch time, or bedtime. Your builds should run in like, 5-ish minutes, however long it takes to go get a cup of coffee, or in 40 minutes to 1.5 hours, however long it takes to go get lunch, or in 8-ish hours, however long it takes to go and come back the next day.
Fowler [14] suggests most projects should try to follow the XP guideline of a 10-minute build. When we asked our Broad Survey participants what is the maximum acceptable time for a CI build to take, the most common answer was also 10 minutes, as shown in Figure 1.
Many of our interview participants reported having spent time and effort reducing the build time for their CI process. S15 said:
[When the build takes too long to run], we start to evaluate the tests, and what do we need to do to speed up the environment to run through more tests in the given amount of time. ... Mostly I feel that CI isn’t very useful if it takes too long to get the feedback.
When we asked our survey participants, 96% of Focused Survey participants and 78% of Broad Survey participants said they had actively worked to reduce their build times. This shows long build times are a common barrier faced by developers using CI.
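One common way teams reduce CI wall-clock time is to split the test suite across parallel workers. The sketch below (hypothetical test names and timings, not any particular CI product's API) greedily assigns each test to the currently least-loaded shard:

```python
import heapq

def shard_tests(test_durations, num_shards):
    """Greedily assign tests to shards so total runtimes are balanced.

    test_durations: dict mapping test name -> historical duration (seconds).
    Returns a list of (shard_tests, shard_runtime) tuples.
    """
    # Min-heap of (current_total_runtime, shard_index)
    heap = [(0.0, i) for i in range(num_shards)]
    shards = [[] for _ in range(num_shards)]
    # Placing the longest tests first gives a better greedy balance
    for test, duration in sorted(test_durations.items(), key=lambda kv: -kv[1]):
        total, idx = heapq.heappop(heap)
        shards[idx].append(test)
        heapq.heappush(heap, (total + duration, idx))
    totals = {idx: total for total, idx in heap}
    return [(shards[i], totals[i]) for i in range(num_shards)]

# Hypothetical timings collected from previous CI runs
durations = {"test_api": 420.0, "test_ui": 310.0, "test_db": 95.0,
             "test_auth": 80.0, "test_utils": 15.0}
for tests, runtime in shard_tests(durations, num_shards=2):
    print(runtime, tests)
```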
**B3 Automating the build process.** CI systems automate the manual process that developers previously followed when building and testing their code. The migration of these manual processes to automated builds requires that developers commit time and resources before the benefits of CI can be realized.
**B4 Lack of support for the desired workflow.** Interview participants told us that CI tools are often designed with a specific workflow in mind. When using a tool to implement a CI process, it can be difficult to use if one is trying to use a different workflow than the one for which the tool was designed. For example, when asked how easy it is to use CI tools, S2 said:
Umm, I guess it really depends on how well you adopt their workflow. For me that’s been the most obvious thing. ... As soon as you want to adopt a slightly different branching strategy or whatever else, it’s a complete nightmare.
**B5 Maintaining a CI server or service.** This barrier is similar to N1 Easier configuration of CI servers or services; see section 4.2.
**B6 Setting up a CI server or service.** For our interview participants, setting up a CI server was not a concern when writing open-source code, as they can easily use one of several CI services available for free to open-source projects. We found that large commercial projects, while very complex, often have the resources to hire dedicated personnel to manage their CI pipeline. However, developers on small proprietary projects do not have the resources to afford CI as a service, nor do they have the hardware and expertise needed to set up CI locally. S9, who develops an app available on the Apple App Store, said:
[Setup] took too much time. All these tools are oriented to server setups, so I think it’s very natural if you are running them on a server, but it’s not so natural if you are running them on your personal computer. ... this makes a lot of friction if you want to set [CI] up on your laptop.
Additionally, in the comments section of our survey, we received several comments on this issue, for example:
[We need] CI for small scale individual developers! We need better options IMO.
While some of these concerns can be addressed by tool builders creating tools targeted at smaller-scale developers, more research is needed to determine how project size affects the usage of CI.
**B7 Lack of tool integration.** This barrier is similar to N2 Better tool integration; see section 4.2.
**B8 Security and access controls.** Because CI pipelines have access to the entire source code of a given project, security and access controls are vitally important. For CI pipelines that exist entirely inside of a company firewall, this may not be as much of a concern, but for projects using CI as a service, this can be a major issue. For
---
\(^{13}\)saucelabs.com
developers working on company driven open-source projects, this can also be a concern. S9 said:
*depending on your project, you may have an open-source project, but secrets living on or near your CI system.*
Configuring the security and access controls is vital to protecting those secrets. S16, who uses CI as a service, described how their project uses a secure environment variable (SEV) to authenticate a browser-based testing service with their CI. Maintaining the security of SEVs is a significant concern in their project.
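As a simple illustration of the pattern S16 described, a build or test script can read credentials from an environment variable injected by the CI system rather than committing them to the repository. This is a generic sketch (the variable name is hypothetical), not the configuration of any particular CI service:

```python
import os
import sys

def get_secret(name):
    """Fetch a secret injected by the CI system; fail fast if it is absent."""
    value = os.environ.get(name)
    if not value:
        sys.exit(f"Required secret {name!r} is not set; "
                 "configure it as a secure environment variable in the CI settings.")
    return value

if __name__ == "__main__":
    # Hypothetical credential for a browser-based testing service
    sauce_key = get_secret("BROWSER_TESTING_ACCESS_KEY")
    print("Credential loaded; length:", len(sauce_key))
```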
**Observation**
Developers encountered increased complexity, increased time costs, and new security concerns when working with CI. Many of these issues are side-effects of implementing new CI features such as more configurability, more rigorous testing, and greater access to the development pipeline.
### 4.2 Needs
We next answer *What unmet needs do developers have with CI tools? (RQ2)* In addition to describing problems they encounter when using CI, our interview participants also described gaps where CI was not meeting their needs.
#### Table 3: Developer needs unmet by CI
<table>
<thead>
<tr>
<th>Need</th>
<th>Broad</th>
<th>Focused</th>
</tr>
</thead>
<tbody>
<tr>
<td>N1 Easier configuration of CI servers or services</td>
<td>52%</td>
<td>32%</td>
</tr>
<tr>
<td>N2 Better tool integration</td>
<td>38%</td>
<td>17%</td>
</tr>
<tr>
<td>N3 Better container/virtualization support</td>
<td>37%</td>
<td>27%</td>
</tr>
<tr>
<td>N4 Debugging assistance</td>
<td>30%</td>
<td>30%</td>
</tr>
<tr>
<td>N5 User interfaces for modifying CI configurations</td>
<td>29%</td>
<td>20%</td>
</tr>
<tr>
<td>N6 Better notifications from CI servers or services</td>
<td>22%</td>
<td>25%</td>
</tr>
<tr>
<td>N7 Better security and access controls</td>
<td>16%</td>
<td>32%</td>
</tr>
</tbody>
</table>
**N1 Easier configuration of CI servers or services.** While many CI tools offer a great deal of flexibility in how they can be used, this flexibility can require a large amount of configuration even for a simple workflow. From our interviews, we find that developers for large software companies rely on CI engineers to ensure that the configuration is correct, and to help instantiate new configurations. Open-source developers often use CI as a service, which allows for a much simpler configuration. However, for developers trying to configure their own CI server, this can be a substantial hurdle. S8, who was running his own CI server, said:
*The configuration and setup is costly, in time and effort, and yeah, there is a learning curve, on how to setup Jenkins, and setup the permissions, and the signing of certificates, and all these things. At first, when I didn’t know all these tools, I would have to sort them out, and at the start, you just don’t know.*
**N2 Better tool integration.** Our interview participants told us that they would like their CI system to better integrate with other tools. For example, S3 remarked:
*It would also be cool if the CI ran more analysis on the code, rather than just the tests. Stuff like Lint, FindBugs, or it could run bug detection tools. There are probably CIs that already do that, but ours doesn’t.*
Additionally, in our survey responses, participants added in the “other” field both technical problems, such as poor interoperability between node.js and Jenkins, as well as non-technical problems, such as “The server team will not install a CI tool for us”.
**N3 Better container/virtualization support.** One core concept in CI is that each build should be done in a clean environment, i.e., it should not depend on the environment containing the output from any previous builds. Participants told us that this was very difficult to achieve before software-based container platforms, e.g., Docker. However, there are still times when the build fails, and in doing so, breaks the CI server. S15 explained:
*...there will be [CI] failures, where we have to go through and manually clean up the environment.*
S3 had experienced the same issues and had resorted to building Docker containers inside other Docker containers to ensure that everything was cleaned up properly.
**N4 Debugging assistance.** When asked about how they debug test failures detected by their CI, most of our participants told us that they get the output logs and start their search there. These output logs can be quite large in size though, with hundreds of thousands of lines of output, from thousands of tests. This can create quite a challenge when trying to find a specific failure. S7 suggested that they would like their CI server to sift the output from the previous run and hide all the output which remained unchanged. S15, who worked for a large company, had developed an in-house tool to do exactly this, to help developers find errors faster by filtering the output to only show changes from the previous CI run.
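A rough sketch of the log-filtering idea S7 and S15 described, using Python's standard difflib to keep only the lines that did not appear in the previous run's log (the log file names are hypothetical):

```python
import difflib

def new_log_lines(previous_log, current_log):
    """Return only the lines of current_log that were not in previous_log."""
    diff = difflib.ndiff(previous_log, current_log)
    # Lines prefixed with "+ " are present only in the current run
    return [line[2:] for line in diff if line.startswith("+ ")]

if __name__ == "__main__":
    with open("ci_run_41.log") as f:
        previous = f.readlines()
    with open("ci_run_42.log") as f:
        current = f.readlines()
    for line in new_log_lines(previous, current):
        print(line, end="")
```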
**N5 User interfaces for modifying CI configurations.** Many participants described administering their CI tools via configuration scripts. However, participants expressed a desire to make these configuration files editable via a user interface, which they felt would be easier. S3 said:
*Most of the stuff we are configuring could go in a UI. ... We are not modifying heavy logic. We just go in a script and modify some values. ... So all of the tedious stuff you modify by hand could go into a UI.*
Additionally, multiple participants also added “Bad UI” as a free-form answer to the question about problems experienced with CI. Developers want to be able to edit their configuration files via user interfaces, but they also want to be able to commit these configurations to their repository. Our interview participants told us they want to commit the configurations, because then when they fork a repository, the CI configurations are included with the new fork as well.
**N6 Better notifications from CI servers or services.** Almost all participants had the ability to setup notifications from their CI server, but very few found them to be useful. When asked about notifications from his CI, S7 said that he will routinely receive up to 20 emails from a single pull request, which he will immediately delete. Other participants did in fact find the notifications useful, though, including S10 who reads through them every morning, to refresh his memory of where he left off the day before.
**N7 Better security and access controls.** This need is similar to B8 Security and access controls; see section 4.1.
### 4.3 Motivations
We next answer *Why do developers use CI? (RQ3)*. We identified developer motivations from the interviews.
**Table 4: Developers' motivation for using CI**
<table>
<thead>
<tr>
<th>Motivation</th>
<th>Broad</th>
<th>Focused</th>
</tr>
</thead>
<tbody>
<tr>
<td>M1 CI helps catch bugs earlier</td>
<td>75%</td>
<td>86%</td>
</tr>
<tr>
<td>M2 CI makes us less worried about breaking our builds</td>
<td>72%</td>
<td>82%</td>
</tr>
<tr>
<td>M3 CI provides a common build environment</td>
<td>70%</td>
<td>78%</td>
</tr>
<tr>
<td>M4 CI helps us deploy more often</td>
<td>68%</td>
<td>75%</td>
</tr>
<tr>
<td>M5 CI allows faster iterations</td>
<td>57%</td>
<td>76%</td>
</tr>
<tr>
<td>M6 CI makes integration easier</td>
<td>57%</td>
<td>75%</td>
</tr>
<tr>
<td>M7 CI can enforce a specific workflow</td>
<td>40%</td>
<td>51%</td>
</tr>
<tr>
<td>M8 CI allows testing across multiple platforms</td>
<td>29%</td>
<td>73%</td>
</tr>
</tbody>
</table>
**M1 CI helps catch bugs earlier.** Preventing the deployment of broken code is a major concern for developers: finding and fixing bugs in production can be an expensive and stressful endeavor, and Kerzazi and Adams [22] reported that 50% of all post-release failures were because of bugs. Indeed, many interview participants said that one of the biggest motivations for using CI was that it identifies bugs early on, keeping them out of the production code. For example, S3 said:
> [CI] does have a pretty big impact on [catching bugs]. It allows us to find issues even before they get into our main repo, ... rather than letting bugs go unnoticed, for months, and letting users catch them.
**M2 Less worry about breaking the build.** Kerzazi et al. [23] reported that for one project, up to 2,300 man-hours were lost over a six month period due to broken builds. Not surprisingly, this was a common theme among interview participants. For instance, S3 discussed how often this happened before CI:
> ...and since we didn’t have CI it was a nightmare. We usually tried to synchronize our changes, ... [but] our build used to break two or three times a day.
S2 talked about the repercussions of breaking the build:
> [When the build breaks], you gotta wait for whoever broke it to fix it. Sometimes they don’t know how, sometimes they left for the day, sometimes they have gone on vacation for a week. There were a lot of points at which all of us, a whole chunk of the dev team was no longer able to be productive.
**M3 Providing a common build environment.** One challenge developers face is ensuring that the environment contains all dependencies needed to build the software. By starting the CI process with a clean environment, fetching all the dependencies, and then building the code each time, developers can be assured that they can always build their code. Several developers told us that in their team if the code does not build on the CI server, then the build is considered broken, regardless of how it behaves on an individual developer’s machine. For example, S5 said:
> ...If it doesn’t work here (on the CI), it doesn’t matter if it works on your machine.
**M4 CI helps projects deploy more often.** Our previous work [19] found that open-source projects that use CI deploy twice as often as projects that do not use CI. In our interviews, developers told us that they feel that CI helped them deploy more often. Additionally, developers told us that CI enabled them to have shorter development cycles than they otherwise would have, even if they did not deploy often for business reasons. For example, S14 said:
> [Every two weeks] we merge into master, and consider that releasable. We don’t often release every sprint, because our customer doesn’t want to. Since we are services company, not a products company, it’s up to our customer to decide if they want to release, but we ensure every two weeks our code is releasable if the customer chooses to do so.
**M5 CI allows faster iterations.** Participants told us that running CI for every change allows them to quickly identify when the current changeset will break the build, or will cause problems in some other location(s) of the codebase. This speed allows developers to make large changes quickly, without introducing a large number of bugs into the codebase. S15 stated:
> We were able to run through up to 10 or 15 cycles a day, running through different tests, to find where we were, what solutions needed to be where. Without being able to do that, without that speed, and that feedback, there is no way we could have accomplished releasing the software in the time frame required with the quality we wanted.
**M6 CI makes integration easier.** Initially, CI was presented as a way to avoid painful integrations [14]. However, while developers do think CI makes integration easier, it is not the primary reason that motivates developers to use CI. Many developers see their VCS, not the CI, as the solution to difficult integrations.
**M7 Enforcing a specific workflow.** Prior to CI, there was no common way for tools to enforce a specific workflow (e.g., ensuring all tests are run before accepting changes). This is especially a concern for distributed teams, where it is harder to overcome tooling gaps through informal communication channels. However, with CI, not only are all the tests run on every changeset, but everyone knows what the results are. Everyone on the team is aware when a change breaks the tests or the build, without having to download the code and check the test results on their own machine. This can help find bugs faster and increase team awareness, both of which are important parts of code review [2].
S16 told us that he was pretty sure that before they added CI to their project, contributors were not running the tests routinely.
**M8 Test across all platforms.** CI allows a system to be tested on all major platforms (Windows, Linux, and OS X), without each environment being set up locally by each developer, e.g., S16 stated:
> We are testing across more platforms now, it is not just OS X and Linux, which is mostly what developers on projects run. That has been useful.
Nevertheless, one survey participant responded to our open-ended question at the end of the survey:
Simplifying CI across platforms could be easier. We currently want to test for OS X, Linux and Windows and need to have 3 CI services.
While this is a benefit already realized for some participants, others see this as an area in which substantial improvements could be made to CI to provide additional support.
### Observation
Developers use CI to guarantee quality, consistency, and visibility across different environments. However, adding and maintaining automated tests causes these benefits to come at the expense of increased time and effort.
### 4.4 Experiences
We next answer the research question *What benefits do developers experience using CI? (RQ4)*
Devanbu et al. [11] found that developers have strongly held beliefs, often based on personal experience more than research results, and that practitioner beliefs should be given due attention. In this section we present developers’ beliefs, gathered from interviews, about using CI. Our results show developers are very positive about the use of CI.
**E1 Developers believe projects with CI give more value to automated tests.** Several participants told us that before using CI, although developers would write unit tests, they often would not be run, and developers did not feel that writing tests was worth the effort. S11 related:
> Several situations I have been in, there is no CI, but there is a test suite, and there is a vague expectation that someone is running this test sometimes. And if you are the poor schmuck that actually cares about tests, and you are trying to run them, and you can’t get anything to pass, and you don’t know why, and you are hunting around like “does anyone else actually do this?”
However, due to the introduction of CI, developers were able to see their tests being run for every changeset, and the whole team becomes aware when the tests catch an error that otherwise would have made it into the product. S16 summarized this feeling:
> [CI] increases the value of tests, and makes us more likely to write tests, to always have that check in there. [Without CI, developers] are not always going to run the tests locally, or you might not have the time to, if it is a larger suite.
**E2 Developers believe projects with CI have higher quality tests.** Interview participants told us that because projects that use CI run their automated tests more often, and the results are visible to the entire team, this motivates developers to write higher quality tests.
**E3 Developers believe projects that use CI have higher code quality.** Developers believe that using CI leads to higher code quality. By writing a good automated test suite, and running it after every change, developers can quickly identify when they make a change that does not behave as anticipated, or breaks some other part of the code. S10 said:
> CI for me is a very intimate part of my development process. ... I lean on it for confidence in all areas. Essentially, if I don’t have some way of measuring my test coverage, my confidence is low. ... If I don’t have at least one end-to-end test, to make sure it runs as humans expect it to run, my confidence is low.
**E4 Developers believe projects with CI are more productive.** According to our interview participants, CI allows developers to focus more on being productive, and to let the CI take care of boring, repetitive steps, which can be handled by automation. S2 said:
> It just gets so unwieldy, and trying to keep track of all those bits and pieces that are moving around, ... [CI makes it] easier for them to just focus on what they need to do.
Another reason interview participants gave for being more productive with CI was that CI allows for faster iterations (see M5).
5 DISCUSSION
In this section, we discuss the trade-offs developers face when using CI, the implications of those trade-offs, and the differences between our two surveys.
5.1 CI Trade-Offs
As with any technology, developers who use CI should be aware of the trade-offs that arise when using that technology. We will look into three trade-offs that developers should be aware of when using CI: Assurance, Security, and Flexibility.
Assurance (Speed vs Certainty): Developers must consider the trade-off between speed and certainty. One of the benefits of CI is that it improves validation of the code (see M1 and M2).
However, the certainty that code is correct comes at a price. Building and running all these additional tests causes the CI to slow down, which developers also considered a problem (see B2). Ensuring that their code is correctly tested, while keeping build times manageable, is a trade-off developers must be aware of. Rothermel et al. [34] also identify this trade-off in terms of running tests as a motivation for test prioritization.
Security (Access vs Information Security): Information security should be considered by all developers. Developers are concerned about security when using CI (see B8, N7). This is important because a CI pipeline should protect the integrity of the code passing through the pipeline, protect any sensitive information needed during the build and test process (e.g., credentials to a database), as well as protect the machines that are running the CI system.
However, limiting access to the CI pipeline conflicts with developers’ need for better access (see B1, N4). During our interviews, developers reported that troubleshooting CI build failures was often difficult because they did not have the same access to code running on a CI system, as they did when running it locally on their own machine. Providing more access may make debugging easier, but poses challenges when trying to ensure the integrity of the CI pipeline. Post and Kagan [32] examine this trade-off for knowledge workers, and found security restrictions hinder a third of workers from being able to perform their jobs.
Flexibility (Configuration vs Simplicity): Another trade-off that developers face is between the flexibility and power of highly configurable CI systems, and the ease of use that comes from simplicity. Developers wish to have more flexibility in configuring and using their CI systems (see B4, B7, N2, and N3). More flexibility increases the power of a CI system, while at the same time also increasing its complexity.
However, the rising complexity of CI systems is also a concern for developers (see B5, B6, N1, and N5). Developers’ need for more flexibility directly opposes their desire for more simplicity. Xu et al. [53] examined over-configurable systems and also found that such systems severely hinder usability.
5.2 Implications
Each of these three trade-offs leads to direct implications for developers, tool builders, and researchers.
Assurance (Speed vs Certainty)
Developers should be careful to only write tests that add value to the project. Tests that do not provide value still consume resources every CI build, and slow down the build process. As more tests are written over time, build times trend upward. Teams should schedule time for developers to maintain their test suites, where they can perform tasks such as removing unneeded tests [40], improving the test suite by filling in gaps in coverage, or increasing test quality.
Developers face difficult choices about the extent to which each project should be tested, and to what extent they are willing to slow down the build process to achieve that level of testing. Some projects can accept speed reductions because of large, rigorous tests. However, for other projects, it may be better to keep the test run times faster, by only executing some of the tests. While this can be done manually, developers should consider using advanced test selection/minimization approaches [4, 12, 16, 20, 54].
Tool builders can support developers by creating tools that allow developers to easily run subsets of their testing suites [54]. Helping developers perform better test selection can trade some certainty for speed gains.
Researchers should investigate the trade-offs between speed and certainty. Are there specific thresholds where the build duration matters more than others? Our results suggest that developers find it important to keep build times under 10 minutes. Researchers should find ways to give the best possible feedback to developers within 10 minutes. Another avenue for researchers is to build upon previous work [13] using test selection and test prioritization to make the CI process more cost effective.
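As one concrete instance of the test prioritization idea referenced above, the following sketch orders tests so that historically failure-prone and fast tests run first, which tends to surface failures earlier in a time-boxed CI run (the failure rates and durations are hypothetical placeholders):

```python
def prioritize_tests(stats):
    """Order tests by expected failures surfaced per second of runtime.

    stats: dict mapping test name -> (historical_failure_rate, duration_seconds).
    """
    def score(item):
        _, (failure_rate, duration) = item
        return failure_rate / max(duration, 1e-6)
    return [name for name, _ in sorted(stats.items(), key=score, reverse=True)]

# Hypothetical per-test statistics from recent CI history
history = {"test_checkout": (0.20, 30.0),
           "test_login":    (0.05, 2.0),
           "test_search":   (0.01, 10.0)}
print(prioritize_tests(history))
# -> ['test_login', 'test_checkout', 'test_search']
```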
Security (Access vs Information Security)
Developers should be cognizant of the security concerns that extra access to the CI pipeline introduces. This is especially a concern for developers inside companies where some or all of their code is open source. One interview participant told us that they navigate the dichotomy between security and openness by maintaining an internal CI server that operates behind their company firewall and using Travis CI externally. They cannot expose their internal CI due to confidentiality requirements, but they use external CI to be taken seriously and maintain a positive relationship with the developer community at large.
Tool builders should provide developers with the ability to have more access to the build pipeline, without compromising the security of the system. One way of accomplishing this could be to provide fine-grained account management with different levels of access, e.g., restricting less trusted accounts to view-only access of the build pipeline.
Flexibility (Configuration vs Simplicity)
Developers, when considering adding complexity to their CI pipeline, should weigh the power that additional configuration brings against the simplicity that is lost (see Section 5.1). Tool builders must contend with developers who want expanded configurability and developers who want changes made through a UI to be captured by version-control systems. Tool builders should create tools that allow for UI changes to configurations, but also output those configurations in simple text files that can be easily included in version control. Researchers should collect empirical evidence that helps developers, who wish to reduce complexity by prioritizing convention over configuration, to establish those conventions based on evidence, not on arbitrary decisions. Researchers should develop a series of empirically justified “best practices” for CI processes. Also, developers who use CI believe strongly that CI improves test quality, and that CI makes them more productive. Researchers should evaluate whether these claims are indeed true.
5.3 Focused (Pivotal) vs Broad Survey Results
We deployed the Focused Survey at a single company (Pivotal), and the Broad Survey to a large population of developers using social media. After performing both surveys, we discussed the findings with a manager at Pivotal, and these discussions allowed us to develop a deeper understanding of the results.
Flaky Tests The survey deployed at Pivotal contained 4 additional questions requested by Pivotal. One question asked developers to report the number of CI builds failing each week due to true test failures. Another question asked developers to estimate the number of CI builds failing due to non-deterministic (flaky) tests [27]. Figure 6 shows the reported number of CI build failures because of flaky tests, as well as failures due to true test failures. There was no significant difference between the two distributions (Pearson’s Chi-squared test, p-value = 0.48), suggesting that developers experienced similar numbers of flaky and true CI failures per week.
However, for the largest category, >10 fails a week, there were twice as many flaky failures as true failures.
When we discussed our findings with the manager at Pivotal, he indicated this was the most surprising finding. He related that at Pivotal, they have a culture of trying to remove flakiness from tests whenever possible. That claim was supported by our survey response, where 97.67% of Pivotal participants reported that when they encounter a flaky test, they fix it. Nevertheless, our participants reported that CI failures at Pivotal were just as likely to be caused by flaky tests as by true test failures.
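For readers who want to reproduce this kind of comparison, the sketch below runs a Pearson's chi-squared test over two categorical distributions of weekly failure counts. The counts are hypothetical placeholders, not our survey data:

```python
from scipy.stats import chi2_contingency

# Hypothetical respondent counts per answer bucket ("0", "1-5", "6-10", ">10")
flaky_failures = [10, 22, 9, 8]
true_failures  = [12, 25, 7, 4]

chi2, p_value, dof, _ = chi2_contingency([flaky_failures, true_failures])
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.2f}")
# A large p-value would indicate no significant difference between the distributions.
```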
Build Times Focused Survey respondents indicated that their CI build times typically take “greater than 60 minutes”. This is in contrast with the “5-10 minutes” average response from respondents in the Broad Survey. This difference can also be observed in the acceptable build time question, in which Focused Survey respondents selected “varies by project” most often compared to the Broad Survey respondents that selected “10 minutes” as the most commonly acceptable build time.
Pivotal management promotes the use of CI, and its accompanying automation, for as many aspects of their software development as possible. According to the manager at Pivotal, the difference in responses for actual and acceptable build times can be explained by the belief that adhering to test-driven development results in significantly more unit tests, but for Pivotal, the extra testing is worth the longer CI build times. The manager also suggested that the addition of multiple target platforms in CI builds will also necessarily increase build times. Therefore, at Pivotal, while they seek to reduce those times whenever possible, they accept longer build times when necessary.
Maintenance Costs Focused Survey respondents reported experiencing “troubleshooting a CI build failure”, “overly long CI build times”, and “maintaining a CI server or service” more often than the Broad Survey respondents. When asked about this difference, the manager at Pivotal indicated that they actively promote a culture of process ownership within their development teams, so the developers are responsible for maintaining and configuring the CI services that they use. They also said that the CI systems they use are more powerful and complex than other CI systems, resulting in a more complicated setup but providing more control over the build process.
6 THREATS TO VALIDITY
**Reproducibility** Can others replicate our results? Qualitative studies in general are very difficult to replicate. We address this threat by conducting interviews, a focused survey at a single company, and a large-scale survey of a broad range of developers. The interview script, code set, survey questions, and raw data can be found on our companion site. We cannot publish the transcripts because we promised the interview participants that we would not release them.
**Construct** Are we asking the right questions? To answer our research questions, we used semi-structured interviews [41], which explore themes while also letting participants bring up new ideas throughout the process. By allowing participants to have the freedom to bring up topics, we avoid biasing the interviews with our preconceived ideas of CI.
**Internal** Did we skew the accuracy of our results with how we collected and analyzed information? Interviews and surveys can be affected by bias and inaccurate responses. These could be intentional or unintentional. We gave interviewees gift cards for their participation and offered the survey participants the chance to win a gift card, which could bias our results.
To mitigate these concerns, we followed established guidelines in the literature [31, 39, 42] for designing and deploying our survey. We ran iterative pilots for both the interviews and the surveys, and we kept the surveys as short as possible.
**External** Do our results generalize? Interviewing a limited number of developers cannot capture the views of the entire developer population. To mitigate this, we attempted to recruit as diverse a population as possible, spanning 14 different companies and a wide variety of company sizes and domains. We then validate our responses using the Focused Survey with 51 responses, and the Broad Survey with 523 responses from over 30 countries. Because Pivotal is a company which builds a CI tool, the results could be biased in favor of CI. To mitigate this, we widely recruited participants for the Broad Survey. However, because we recruited participants for the Broad Survey by advertising online, our results may be affected by self-selection bias.
7 RELATED WORK
**Continuous Integration Studies** Vasilescu et al. [50] performed a preliminary quantitative study of quality outcomes for open-source projects using CI. Our previous work [19] presented a quantitative study of the costs, benefits, and usage of CI in open-source software. These studies do not examine barriers or needs when using CI, nor do they address the trade-offs developers must contend with. In contrast to these studies, we develop a deep understanding of the barriers and unmet needs of developers through interviews and surveys. We also discover trade-offs users face when using CI.
Debbiche et al. [10] present a case study of challenges faced by a telecommunications company when adopting CI. They present barriers from a specific company, but provide no generalized findings and do not address needs, experiences, or benefits of CI.
Other researchers have studied ways to improve CI. Stähl and Bosch [44] study automated software integration, a key building block for CI. Elbaum et al. [13] examined the use of regression test selection techniques to increase the cost-effectiveness in CI. Vos et al. [52] propose running CI tests even after deployment, to check the production code. Muşlu et al. [29] ran tests continuously in the IDE, even more often than in CI. Staples et al. [45] describe Continuous Validation as a potential next step after CI/CD.
Other work related to CI and automated testing includes generating acceptance tests from unit tests [21], black-box test prioritization [18], ordering of failed unit tests [17], generating automated tests at runtime [1], and prioritizing acceptance tests [43].
**Continuous Delivery** Continuous Delivery (CD), the automated deployment of software, is enabled by the use of CI. Olsson et al. [30] performed a case study of four companies transitioning to continuous delivery. They found some similar barriers when transitioning to CD as we find for CI, including automating the build process (B3), lack of support for desired workflow (B4), and lack of tool integration (B7).
Leppänen et al. [26] conducted semi-structured interviews with 15 developers to learn more about CD. Their paper does not have any quantitative analysis and does not claim to provide generalized findings. Others have studied CD and MySQL schemas [9], CD at Facebook [37], and the tension between release speed and software quality when doing CD [38].
**Developer Studies** We perform a study of developers to learn about their barriers, unmet needs, motivations, and experiences. Many other researchers have also studied developers, e.g., to learn how DevOps handles security [49], developers’ debugging needs [25], and how developers examine code history [8].
**Automated Testing** Previous work has examined the intertwined nature of CI and automated testing. Stolberg [46] and Sumrell [47] both provide experience reports of the effects of automating tests during transitions to CI. Santos and Hindle [36] used Travis CI build status as a proxy for code quality.
8 CONCLUSIONS AND FUTURE WORK
Software teams use CI for many activities, including to catch errors, make integration easier, and deploy more often. Developers also experience being more productive when using CI. Despite the many benefits of CI, developers still encounter a wide variety of problems with CI. We hope that this paper motivates researchers to tackle the hard problems that developers face with CI.
For example, future work should examine the relationship between developers’ desired and actual build times when using CI. Another area that we identified for future work is a deeper analysis into flaky tests. Flaky test identification tools could automatically detect flaky tests to help developers know if CI failures are due to flaky tests or legitimate test failures. CI is here to stay as a development practice, and we need continuous improvement (‘CI’ of a different kind) of CI to realize its full potential.
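A naive starting point for such a flaky-test identification tool is to re-run a failing test several times in the same environment and flag it as flaky when the outcomes disagree; a minimal sketch follows (the test callable is a stand-in for whatever framework a project uses):

```python
def is_flaky(run_test, reruns=5):
    """Re-run a test and report whether its outcomes are inconsistent.

    run_test: a zero-argument callable returning True (pass) or False (fail).
    """
    outcomes = {run_test() for _ in range(reruns)}
    return len(outcomes) > 1  # both pass and fail observed -> likely flaky

if __name__ == "__main__":
    import random
    # Hypothetical nondeterministic test used only for illustration
    def timing_dependent_test():
        return random.random() > 0.3
    print("flaky" if is_flaky(timing_dependent_test) else "consistent")
```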
ACKNOWLEDGMENTS
We thank Martin Fowler, Brian Marick, and Joel Spolsky for promoting the Broad Survey, and Matthew Kocher for all the help with the Focused Survey at Pivotal Labs. We also thank Amin Alipour, Andrew Begel, Souti Chattopadhyay, Mihai Codoban, Matt Hammer, Sean McGregor, Cyrus Omar, Anita Sarma, and the anonymous reviewers for their valuable comments on earlier versions of this paper. This research was partially supported by NSF grants CCF-1421503, CCF-1438982, CCF-1439957, and CCF-1553741.
REFERENCES
[30] Helena Holmström Olsson, Hiva Alahyari, and Jan Bosch. 2012. Climbing the “Stairway to Heaven” – A Multiple-Case Study Exploring Barriers in the Transition from Agile Development towards Continuous Deployment of Software. In Euromicro SEAA.
[47] Megan Sumrell. 2007. From Waterfall to Agile – How Does a QA Team Transition? In AGILE.
NEURAL PROGRAM REPAIR BY JOINTLY LEARNING TO LOCALIZE AND REPAIR
Marko Vasic\textsuperscript{1,2}, Aditya Kanade\textsuperscript{1,3}, Petros Maniatis\textsuperscript{1}, David Bieber\textsuperscript{1}, Rishabh Singh\textsuperscript{1}
\textsuperscript{1}Google Brain, USA \textsuperscript{2}University of Texas at Austin, USA \textsuperscript{3}IISc Bangalore, India
vasic@utexas.edu \{akanade, maniatis, dbieber, rising\}@google.com
ABSTRACT
Due to its potential to improve programmer productivity and software quality, automated program repair has been an active topic of research. Newer techniques harness neural networks to learn directly from examples of buggy programs and their fixes. In this work, we consider a recently identified class of bugs called variable-misuse bugs. The state-of-the-art solution for variable misuse enumerates potential fixes for all possible bug locations in a program, before selecting the best prediction. We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs. We present multi-headed pointer networks for this purpose, with one head each for localization and repair. The experimental results show that the joint model significantly outperforms an enumerative solution that uses a pointer based model for repair alone.
1 INTRODUCTION
Advances in machine learning and the availability of large corpora of source code have led to growing interest in the development of neural representations of programs for performing program analyses. In particular, different representations based on token sequences (Gupta et al., 2017; Bhatia et al., 2018), program parse trees (Piech et al., 2015; Mou et al., 2016), program traces (Reed & de Freitas, 2015; Cai et al., 2017; Wang et al., 2018), and graphs (Allamanis et al., 2018) have been proposed for a variety of tasks including repair (Devlin et al., 2017b; Allamanis et al., 2018), optimization (Bunel et al., 2017), and synthesis (Parisotto et al., 2017; Devlin et al., 2017a).
In recent work, Allamanis et al. (2018) proposed the problem of variable misuse (VAR\textsc{MISUSE}): given a program, find program locations where variables are used, and predict the correct variables that should be in those locations. A VAR\textsc{MISUSE} bug exists when the correct variable differs from the current one at a location. Allamanis et al. (2018) show that variable misuses occur in practice, e.g., when a programmer copies some code into a new context, but forgets to rename a variable from the older context, or when two variable names within the same scope are easily confused. Figure 1a shows an example derived from a real bug. The programmer copied line 5 to line 6, but forgot to rename \texttt{object\_name} to \texttt{subject\_name}. Figure 1b shows the correct version.
Allamanis et al. (2018) proposed an enumerative solution to the VAR\textsc{MISUSE} problem. They train a model based on graph neural networks that learns to predict a correct variable (among all type-correct variables) for each variable-use location under consideration.
```python
1 def validate_sources(sources):
2 object_name = get_content(sources, 'obj')
3 subject_name = get_content(sources, 'subj')
4 result = Result()
5 result.objects.append(object_name)
6 result.subjects.append(object_name)
7 return result
```
(a) An example of VAR\textsc{MISUSE} shown in red text. At test time, one prediction task is generated for each of the variable-use locations (Blue boxes).
```python
1 def validate_sources(sources):
2 object_name = get_content(sources, 'obj')
3 subject_name = get_content(sources, 'subj')
4 result = Result()
5 result.objects.append(object_name)
6 result.subjects.append(subject_name)
7 return result
```
(b) The corrected version of Figure 1\textsuperscript{a}. If used at train time, one example would be generated for each of the variable-use locations (Blue boxes).
Figure 1: Enumerative solution to the VAR\textsc{MISUSE} problem.
This enumerative approach has some key drawbacks. First, it turns repair into an enumeration of independent prediction problems, where important shared context among the dependent predictions is lost. Second, in the training process, the synthetic bug is always only at the position of the slot. If, for example, the program in Figure 1b were used for training, then five training examples, one corresponding to each identifier in a blue box (a variable read, in this case), would be generated. In each of them, the synthetic bug is exactly at the slot position. However, during inference, the model generates one prediction problem for each variable use in the program. In only one of these prediction problems does the slot coincide with the bug location; in the rest, the model now encounters a situation where there is a bug somewhere else, at a location other than the slot. This differs from the cases it has been trained on. For example, in Figure 1a the prediction problem corresponding to the slot on line 5 contains a bug elsewhere (at line 6) and not in the slot. Only the problem corresponding to the slot on line 6 would match how the model was trained. This mismatch between training and test distributions hampers the prediction accuracy of the model. In our experiments, it leads to an accuracy drop of 4% to 14%, even in the non-enumerative setting, i.e., when the exact location of the bug is provided. Since the enumerative approach uses the prediction of the same variable as the original variable for declaring no bugs at that location, this phenomenon contributes to its worse performance.
Another drawback of the enumerative approach is that it produces one prediction per slot in a program, rather than one prediction per program. Allamanis et al. (2018) deal with this by manually selecting a numerical threshold and reporting a bug (and its repair) only if the predicted probability for a repair is higher than that threshold. Setting a suitable threshold is difficult: too low a threshold can increase false positives and too high a threshold can cause false negatives.
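To make the enumerative setup concrete, the sketch below generates one prediction problem per variable use in a token sequence, masking the use under consideration with a slot. The tokenization and variable set are simplified stand-ins for the pipeline described by Allamanis et al. (2018), not their actual implementation:

```python
def enumerate_slot_problems(tokens, variables):
    """Yield one prediction problem per variable-use location.

    Each problem is the token sequence with that use replaced by '<SLOT>',
    paired with the index of the masked position and its original variable.
    """
    for i, tok in enumerate(tokens):
        if tok in variables:
            masked = tokens[:i] + ["<SLOT>"] + tokens[i + 1:]
            yield i, tok, masked

tokens = ["result", ".", "objects", ".", "append", "(", "object_name", ")",
          "result", ".", "subjects", ".", "append", "(", "object_name", ")"]
variables = {"object_name", "subject_name", "result", "sources"}
for index, original, problem in enumerate_slot_problems(tokens, variables):
    print(index, original)
```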
In order to deal with these drawbacks, we present a model that jointly learns to perform: 1) classification of the program as either faulty or correct (with respect to VARMISUSE bugs), 2) localization of the bug when the program is classified as faulty, and 3) repair of the localized bug. One of the key insights of our joint model is the observation that, in a program containing a single VARMISUSE bug, a variable token can only be one of the following: 1) a buggy variable (the faulty location), 2) some occurrence of the correct variable that should be copied over the incorrect variable into the faulty location (a repair location), or 3) neither the faulty location nor a repair location. This arises from the fact that the variable in the fault location cannot contribute to the repair of any other variable – there is only one fault location – and a variable in a repair location cannot be buggy at the same time.
This observation leads us to a pointer model that can point at locations in the input (Vinyals et al., 2015) by learning distributions over input tokens. The hypothesis that a program that contains a bug at a location likely contains ingredients of the repair elsewhere in the program (Engler et al., 2001) has been used quite effectively in practice (Le Goues et al., 2012). Mechanisms based on pointer networks can play a useful role to exploit this observation for repairing programs.
We formulate the problem of classification as pointing to a special no-fault location in the program. To solve the joint prediction problem of classification, localization, and repair, we lift the usual pointer-network architecture to multi-headed pointer networks, where one pointer head points to the faulty location (including the no-fault location when the program is predicted to be non-faulty) and another to the repair location. We compare our joint prediction model to an enumerative approach for repair. Our results show that the joint model not only achieves a higher classification, localization, and repair accuracy, but also results in a high true positive score.
Furthermore, we study how a pointer network on top of a recurrent neural network compares to the graph neural network used previously by Allamanis et al. (2018). The comparison is performed for program repair given an a priori known bug location, the very same task used by that work. Limited to only syntactic inputs, our model outperforms the graph-based one by 7 percentage points. Although encouraging, this comparison is only limited to syntactic inputs; in contrast, the graph model uses both syntax and semantics to achieve state-of-the-art repair accuracy. In future work we plan to study how jointly predicting bug location and repair might improve the graph model when bug location is unknown, as well as how our pointer-network-based model compares to the graph-based one when given semantics, in addition to syntax; the latter is particularly interesting, given the relatively simpler model architecture compared to message-passing networks (Gilmer et al., 2017).
In summary, this paper makes the following key contributions: 1) it presents a solution to the general variable-misuse problem in which enumerative search is replaced by a neural network that jointly localizes and repairs faults; 2) it shows that pointer networks over program tokens provide a suitable framework for solving the \texttt{VARMISUSE} problem; and 3) it presents extensive experimental evaluation over multiple large datasets of programs to empirically validate the claims.
2 RELATED WORK
Allamanis et al. (2018) proposed an enumerative approach for solving the \texttt{VARMISUSE} problem by making individual predictions for each variable use in a program and reporting back all variable discrepancies above a threshold, using a graph neural network on syntactic and semantic information. We contrast this paper to that work at length in the previous section.
Devlin et al. (2017b) propose a neural model for semantic code repair where one of the classes of bugs they consider is \texttt{VARREPLACE}, which is similar to the \texttt{VARMISUSE} problem. This model also performs an enumerative search as it predicts repairs for all program locations and then computes a scoring of the repairs to select the best one. As a result, it also suffers from a similar training/test data mismatch issue as Allamanis et al. (2018). Similar to us, they use a pooled pointer model to perform the repair task. However, our model uses multi-headed pointers to perform classification, localization, and repair jointly.
DeepFix (Gupta et al., 2017) and SynFix (Bhatia et al., 2018) repair syntax errors in programs using neural program representations. DeepFix uses an attention-based sequence-to-sequence model to first localize the syntax errors in a C program, and then generates a replacement line as the repair. SynFix uses a Python compiler to identify error locations and then performs a constraint-based search to compute repairs. In contrast, we use pointer networks to perform a fine-grained localization to a particular variable use, and to compute the repair. Additionally, we tackle variable misuses, which are semantic bugs, whereas those systems fix only syntax errors in programs.
The DeepBugs (Pradel & Sen, 2018) paper presents a learning-based approach to identifying name-based bugs. The main idea is to represent program expressions using a small set of features (e.g., identifier names, types, operators) and then compute their vector representations by concatenating the individual feature embeddings. By injecting synthetic bugs, the classifier is trained to predict program expressions as buggy or not for three classes of bugs: swapped function arguments, wrong binary operator, and wrong operand in a binary operation. Similar to previous approaches, it is also an instance of an enumerative approach. Unlike DeepBugs, which embeds a single expression, our model embeds the full input program (up to a maximum prefix length) and performs both localization and repair of the \texttt{VARMISUSE} bugs in addition to the classification task. Moreover, our model implements a pointer mechanism for representing repairs that often requires pointing to variable uses in other parts of the program that are not present in the same buggy expression.
Sk\_p (Pu et al., 2016) is another enumerative neural program repair approach to repair student programs using an encoder-decoder architecture. Given a program, for each program statement \( s_i \), the decoder generates a statement \( s'_i \) conditioned on an encoding of the preceding statement \( s_{i-1} \) and the following statement \( s_{i+1} \). Unlike our approach, which can generate \texttt{VARMISUSE} repairs using a pointer mechanism, the Sk\_p model would need to predict full program statements for repairing such bugs. Moreover, similar to the DeepBugs approach, it would be difficult for the model to predict repairs that include variables defined two or more lines above the buggy variable location.
Automated program repair (APR) has been an area of active research in software engineering. The traditional APR approaches (Gazzola et al., 2018; Monperrus, 2018; Motwani et al., 2018) differ from our work in the following ways: 1) They require a form of specification of correctness to repair a buggy program, usually as a logical formula/assertion, a set of tests, or a reference implementation; 2) They depend on hand-designed search techniques for localization and repair; 3) The techniques are applied to programs that violate their specifications (e.g., a program that fails some tests), which means that the programs are already known to contain bugs. In contrast, a recent line of research in APR is based on end-to-end learning, of which ours is an instance. Our solution (like some other
learning-based repair solutions) has the following contrasting features: 1) It does not require any specification of correctness, but learns instead to fix a common class of errors directly from source-code examples; 2) It does not perform enumerative search for localization or repair—we train a neural network to perform localization and repair directly; 3) It is capable of first classifying whether a program has the specific type of bug or not, and subsequently localizing and repairing it. The APR community has also designed some repair benchmarks, such as ManyBugs and IntroClass (Goues et al., 2015), and Defects4J (Just et al., 2014), for test-based program repair techniques. The bugs in these benchmarks relate to the expected specification of individual programs (captured through test cases) and the nature of bugs vary from program to program. These benchmarks are therefore suitable to evaluate repair techniques guided by test executions. Learning-based solutions like ours focus on common error types, so it is possible for a model to generalize across programs, and work directly on embeddings of source code.
3 Pointer Models for Localization and Repair of VARMISUSE
We use pointer-network models to perform joint prediction of both the location and the repair for VARMISUSE bugs. We exploit the property of VARMISUSE that both the bug and the repair variable must exist in the original program.
The model first uses an LSTM (Hochreiter & Schmidhuber, 1997) recurrent neural network as an encoder of the input program tokens. The encoder states are then used to train two pointers: the first pointer corresponds to the location of the bug, and the second pointer corresponds to the location of the repair variable. The pointers are essentially distributions over program tokens. The model is trained end-to-end using a dataset consisting of programs assumed to be correct. From these programs, we create both synthetic buggy examples, in which a variable use is replaced with an incorrect variable, and bug-free examples, in which the program is used as is. For a buggy training example, we capture the location of the bug and other locations where the original variable is used as the labels for the two pointers. For a bug-free training example, the location pointer is trained to point to a special, otherwise unused no-fault token location in the original program. In this paper, we focus on learning to localize a single VARMISUSE bug, although the model can naturally generalize to finding and repairing more bugs than one, by adding more pointer heads.
3.1 Problem Definition
We first define our extension to the VARMISUSE problem, which we call the VARMISUSEREPAIR problem. We define the problem with respect to a whole program’s source code, although it can be defined for different program scopes: functions, loop bodies, etc. We consider a program $f$ as a sequence of tokens $f = \langle t_1, t_2, \ldots, t_n \rangle$, where tokens come from a vocabulary $\mathbb{T}$, and $n$ is the number of tokens in $f$. The token vocabulary $\mathbb{T}$ consists of both keywords and identifiers. Let $\mathbb{V} \subseteq \mathbb{T}$ denote the set of all tokens that correspond to variables (uses, definitions, function arguments, etc.).
For a program $f$, we define $V^f_{\text{def}} \subseteq \mathbb{V}$ as the set of all variables defined in $f$, including function arguments; this is the set of all variables that can be used within the scope, including as repairs for a putative VARMISUSE bug. Let $V^f_{\text{use}} \subseteq \mathbb{V} \times \mathbb{L}$ denote the set of all (token, location) pairs corresponding to variable uses, where $\mathbb{L}$ denotes the set of all program locations.
Given a program $f$, the goal in the VARMISUSEREPAIR problem is to either predict if the program is already correct (i.e., contains no VARMISUSE bug) or to identify two tokens: 1) the location token $(t_i, l_i) \in V^f_{\text{use}}$, and 2) a repair token $t_j \in V^f_{\text{def}}$. The location token corresponds to the location of the VARMISUSE bug, whereas the repair token corresponds to any occurrence of the correct variable in the original program (e.g., its definition or one of its other uses), as illustrated in Figure 3.1. In the example from Figure 3.1, $\mathbb{T}$ contains all tokens, including variables, literals, keywords; $\mathbb{V}$ contains sources, object_name, subject_name, and result; $V^f_{\text{use}}$ contains the uses of variables, e.g., sources at its locations on lines 2 and 3, and $V^f_{\text{def}}$ is the same as $\mathbb{V}$ in this example.\(^1\)
\(^1\)Note that by a variable use, we mean the occurrence of a variable in a load context. However, this definition of variable use is arbitrary and orthogonal to the model. In fact, we use the broader definition of any variable use, load or store, when comparing to Allamanis et al. (2018), to match their definition and enable a fair comparison to their results (see Section 4).
def validate_sources(sources):
object_name = get_content(sources, 'obj')
subject_name = get_content(sources, 'subj')
result = Result()
result.objects.append(object_name)
result.subjects.append(subject_name)
return result
3.2 Multi-headed Pointer Models
We now define our pointer network model (see Figure 2b).
Given a program token sequence \( f = (t_1, t_2, \ldots, t_n) \), we embed the tokens \( t_i \) using a trainable embedding matrix \( \phi : \mathbb{T} \rightarrow \mathbb{R}^d \), where \( d \) is the embedding dimension. We then run an LSTM over the token sequence to obtain hidden states (dimension \( h \)) for each embedded program token.
\begin{align*}
[e_1, \ldots, e_n] &= [\phi(t_1), \ldots, \phi(t_n)] \\
[h_1, \ldots, h_n] &= \text{LSTM}(e_1, \ldots, e_n)
\end{align*}
Let \( m \in \{0, 1\}^n \) be a binary vector such that \( m[i] = 1 \) if the token \( t_i \in V^f_{\text{def}} \) or \( (t_i, \cdot) \in V^f_{\text{use}} \), otherwise \( m[i] = 0 \). The vector \( m \) acts as a masking vector to only consider hidden states that correspond to states of the variable tokens. Let \( H \in \mathbb{R}^{h \times n} \) denote a matrix consisting of the hidden-state vectors of the LSTM obtained after masking, i.e., \( H = m \odot [h_1, \ldots, h_n] \). We then perform attention over the hidden states using a mechanism similar to that of Rocktaschel et al. (2016) as follows:
\[
M = \tanh(W_1 H + W_2 h_n \otimes 1_n)
\]
where \( W_1, W_2 \in \mathbb{R}^{h \times h} \) are trainable projection matrices and \( 1_n \in \mathbb{R}^n \) is a vector of ones used to obtain \( n \) copies of the final hidden state \( h_n \).
We then use another trained projection matrix \( W \in \mathbb{R}^{h \times 2} \) to compute the attention matrix \( \alpha \in \mathbb{R}^{2 \times n} \) as follows:
\[
\alpha = \text{softmax}(W^T M)
\]
The attention matrix \( \alpha \) corresponds to two probability distributions over the program tokens. The first distribution \( \alpha^T[0] \in \mathbb{R}^n \) corresponds to location token indices and the second distribution \( \alpha^T[1] \in \mathbb{R}^n \) corresponds to repair token indices. We experiment with and without using the masking vector on the hidden states, and also using masking on the unnormalized attention values.
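To make the shapes concrete, the following is a minimal NumPy sketch of the pointer heads described above; the LSTM encoder is abstracted away (the matrix `hidden` stands in for the states \( h_1, \ldots, h_n \)), and all weights are random placeholders rather than trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pointer_heads(hidden, mask, rng):
    """hidden: (h, n) encoder states; mask: (n,) 0/1 vector marking variable tokens."""
    h, n = hidden.shape
    W1 = rng.normal(size=(h, h)) * 0.1
    W2 = rng.normal(size=(h, h)) * 0.1
    W  = rng.normal(size=(h, 2)) * 0.1
    H = hidden * mask                   # zero out states of non-variable tokens
    h_n = hidden[:, -1:]                # final hidden state, broadcast over n columns
    M = np.tanh(W1 @ H + W2 @ h_n)      # (h, n)
    alpha = softmax(W.T @ M, axis=1)    # (2, n): row 0 = location dist., row 1 = repair dist.
    return alpha

rng = np.random.default_rng(0)
hidden = rng.normal(size=(16, 10))      # 10 program tokens, hidden size 16 (made-up sizes)
mask = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 0], dtype=float)
alpha = pointer_heads(hidden, mask, rng)
print(alpha.shape, alpha.sum(axis=1))   # (2, 10); each row sums to 1
```

Each row of the resulting \( \alpha \) is a distribution over the \( n \) input tokens, which is what lets the model point back into the program rather than into a fixed output vocabulary.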
3.3 Training the Model
We train the pointer model on a synthetically generated training corpus consisting of both buggy and non-buggy Python programs. Starting from a publicly available dataset of Python files, we construct the training, validation, and evaluation datasets in the following manner. We first collect the source code for each program definition from the Python source files. For each program definition \( f \), we collect the set of all variable definitions \( V^f_{\text{def}} \) and variable uses \( V^f_{\text{use}} \). For each variable use \( (v_u, i_u) \in V^f_{\text{use}} \), we replace its occurrence by another variable \( v_d \in V^f_{\text{def}} \) to obtain an example
ensuring the following conditions: 1) \( v_d \neq v_u \), and 2) \(|V^f_{\text{def}}| > 1\), i.e., there are at least two possible variables that could fill the slot at \( i_u \).
Let \( i \) denote the token index in the program \( f \) of the variable \( v_u \) chosen to be replaced by another (incorrect) variable \( v_d \) in the original program. We then create two binary vectors \( \text{Loc} \in \{0, 1\}^n \) and \( \text{Rep} \in \{0, 1\}^n \) in the following manner:
\[
\text{Loc}[m] = \begin{cases} 1, & \text{if } i = m \\ 0, & \text{otherwise} \end{cases} \quad \text{Rep}[m] = \begin{cases} 1, & \text{if } v_u = t_m \\ 0, & \text{otherwise} \end{cases}
\]
\( \text{Loc} \) is a location vector of length \( n \) (the program length) which is 1 at the location containing the bug and 0 elsewhere. \( \text{Rep} \) is a repair vector of length \( n \) which is 1 at all locations containing the variable \( v_u \) (the correct variable for location \( i \)) and 0 elsewhere.
For each buggy training example in our dataset, we also construct a non-buggy example where the replacement is not performed. This is done to obtain a 50-50 balance in our training datasets for buggy and non-buggy programs. For non-buggy programs, the target location vector \( \text{Loc} \) has a special token index 0 set to 1, i.e. \( \text{Loc}[0] = 1 \), and the value at all other indices is 0.
We use the following loss functions for training the location and repair pointer distributions.
\[
L_{\text{loc}} = -\sum_{i=1}^{n} (\text{Loc}[i] \times \log(\alpha^T[0][i])) \quad L_{\text{rep}} = -\log\left(\sum_{i=1}^{n} \text{Rep}[i] \times \alpha^T[1][i]\right)
\]
The loss function for the repair distribution adds up the probabilities of target pointer locations. We also experiment with an alternative loss function \( L'_{\text{rep}} \) that computes the maximum of the probabilities of the repair pointers instead of their addition.
\[
L'_{\text{rep}} = -\log\left(\max_{1 \le i \le n} \text{Rep}[i] \times \alpha^T[1][i]\right)
\]
The joint model optimizes the additive loss \( L_{\text{joint}} = L_{\text{loc}} + L_{\text{rep}} \). The enumerative solution discussed earlier forms a baseline method. We specialize the multi-headed pointer model to produce only the repair pointer and use it within the enumerative solution for predicting repairs.
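As a concrete illustration of the training targets and losses, the small sketch below builds hypothetical Loc and Rep vectors for a 10-token program and evaluates the losses on an untrained, uniform pointer output; the token indices are made up for illustration and `alpha` plays the role of the model output from the previous section.

```python
import numpy as np

# Hypothetical example: a 10-token program where token 4 holds the injected bug
# and the correct variable also occurs at tokens 2 and 7.
n = 10
loc = np.zeros(n); loc[4] = 1.0
rep = np.zeros(n); rep[[2, 7]] = 1.0
alpha = np.full((2, n), 1.0 / n)                 # a uniform (untrained) pointer output

eps = 1e-12
l_loc = -np.sum(loc * np.log(alpha[0] + eps))    # cross-entropy against the one-hot Loc vector
l_rep = -np.log(np.sum(rep * alpha[1]) + eps)    # -log of the total mass on repair locations
l_joint = l_loc + l_rep
print(l_loc, l_rep, l_joint)
```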
4 Evaluation
In our experimental evaluation, we evaluate three research questions. First, is the joint prediction model \textsc{VarmisuseRepair} effective in finding \textsc{Varmisuse} bugs in programs and how does it compare against the enumerative solution (Section 4.1)? Second, how does the presence of as-yet-unknown bugs in a program affect the bug-finding effectiveness of the \textsc{Varmisuse} repair model even in the non-enumerative case (Section 4.2)? Third, how does the repair pointer model compare with the graph-based repair model by Allamanis et al. (2018) (Section 4.3)?
Benchmarks We use two datasets for our experiments. Primarily, we use \textsc{Eth-Py150}², a public corpus of GitHub Python files extensively used in the literature (Raychev et al., 2016; Vechev & Yahav, 2016). It consists of 150K Python source files, already partitioned by its publishers into training and test subsets containing 100K and 50K files, respectively. We split the training set into two sets: training (90K) and validation (10K). We further process each dataset partition by extracting unique top-level functions, resulting in 394K (training), 42K (validation), and 214K (test) unique functions. For each function, we identify \textsc{Varmisuse} slots and repair candidates. For each function and slot pair, we generate one bug-free example (without any modification) and one buggy example by replacing the original variable at the slot location by a randomly chosen incorrect variable. More details about the data generation are presented in the appendix (Section A). Because of the quadratic complexity of evaluating the enumerative model, we create a smaller evaluation set by sampling 1K
²https://www.sri.inf.ethz.ch/py150
test files, which results in 12,218 test examples (half of which are bug-free). For the evaluation set, we construct one bug-free and one buggy example per function using the procedure defined before, but now randomly selecting a single slot location in the function rather than creating an example for each location, as we do for the training dataset. All our evaluation results on the ETH dataset use this filtered evaluation set. Note that the inputs to the enumerative model and the joint model are different; the joint model accepts a complete program, while the enumerative model accepts a program with a hole that identifies a slot. For this reason, training, validation, and test datasets for the enumerative approach are constructed by inserting a hole at variable-use locations.
Our second dataset, MSR-VarMisuse, is the public portion of the dataset used by Allamanis et al. (2018), available at https://aka.ms/iclr18-prog-graphs-dataset. It consists of 25 C# GitHub projects, split into four partitions: train, validation, seen test, and unseen test, consisting of 3738, 677, 1807, and 1185 files, respectively. The seen test partition contains (different) files from the same projects that appear in the train and validation partitions, whereas the unseen test partition contains entire projects that are disjoint from those in the train and validation partitions.
Note the differences between the two datasets: ETH-Py150 contains Python examples with a function-level scope, slots are variable loads, and candidates are variables in the scope of the slot (Python is dynamically typed, so no type information is used); in contrast, MSR-VarMisuse contains C# examples that are entire files, slots are both load and store uses of variables, and repair candidates are all variables in the slot’s scope with an additional constraint that they are also type-compatible with the slot. We use the ETH-Py150 dataset for most of our experiments because we are targeting Python, and we use MSR-VarMisuse when comparing to the results of [Allamanis et al. (2018)]. The average number of candidates per slot in the ETH-Py150 dataset is about 9.26, while in MSR-VarMisuse it is about 3.76.
4.1 JOINT MODEL VS. ENUMERATIVE APPROACH
We first compare the accuracy of the joint model (Section 3.2) to that of an enumerative repair model, similar in spirit (but not in model architecture) to that by [Allamanis et al. (2018)]. For the enumerative approach, we first train a pointer network model $M_r$ to only predict repairs for a given program and slot. At test time, given a program $P$, the enumerative approach first creates $n$ variants of $P$, one per slot. We then use the trained model $M_r$ to predict repairs for each of the $n$ variants and combine them into a single set. We go through the predictions in decreasing order of probability, until a prediction modifies the original program. If no modifications happen, then it means that the model classifies the program under test as a bug-free program. We define two parameters to filter the predictions: 1) $\tau$: a threshold value for probabilities to decide whether to return the predictions, and 2) $k$: the maximum number of predictions the enumerative approach is allowed to make.
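The following schematic sketch shows the shape of this enumerative procedure; `repair_model` is a stand-in for the repair-only pointer network $M_r$ and is assumed to return (candidate, probability) pairs for a program with a hole at a given slot. The names and data layout are illustrative, not the exact evaluation code.

```python
def enumerative_predict(program_tokens, slots, repair_model, tau=0.5, k=None):
    """Run the repair-only model once per slot and report the first program-modifying prediction."""
    predictions = []
    for slot in slots:
        for cand, prob in repair_model(program_tokens, slot):
            predictions.append((prob, slot, cand))
    predictions.sort(reverse=True)                 # decreasing order of probability
    if k is not None:
        predictions = predictions[:k]              # at most k predictions allowed
    for prob, slot, cand in predictions:
        if prob >= tau and cand != program_tokens[slot]:
            # first prediction that modifies the original program: report bug and repair
            return {"buggy": True, "location": slot, "repair": cand}
    return {"buggy": False}
```

The outer loop over slots is what makes the enumerative approach expensive at inference time: every slot requires its own model invocation.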
The results for the comparison for different $\tau$ and $k$ values are shown in Table 1. We measure the following metrics: 1) **True Positive**, the percentage of the bug-free programs in the ground truth classified as bug free; 2) **Classification Accuracy**, the percentage of total programs in the test set classified correctly as either bug free or buggy; 3) **Localization Accuracy**, the percentage of buggy programs for which the bug location is correctly predicted by the model; and 4) **Localization+Repair Accuracy**, the percentage of buggy programs for which both the location and repair are correctly predicted by the model.
The table lists results in decreasing order of prediction permissiveness. A higher $\tau$ value (and lower $k$ value) reduces the number of model predictions compared to lower $\tau$ values (and higher $k$ values). As expected, higher $\tau$ and lower $k$ values enable the enumerative approach to achieve a higher true positive rate, but lower classification accuracy. More importantly for buggy programs, the localization and repair accuracy drop quite sharply. With lower $\tau$ and higher $k$ values, the true positive rate drops dramatically, while the localization and repair accuracy improve significantly. In contrast, our joint model achieves a maximum localization accuracy of 71% and localization+repair accuracy of 65.7%, an improvement of about 6.4 percentage points in localization and about 9.9 percentage points in localization+repair accuracy, compared to the lowest threshold and highest $k$ values. Remarkably, the joint model achieves such high accuracy while maintaining a high true-positive rate of 84.5% and a high classification accuracy of 82.4%. This shows that the network is able to perform the localization and repair tasks jointly, efficiently, and effectively, without the need for explicit enumeration.
Table 1: The overall evaluation results for the joint model vs. the enumerative approach (with different threshold $\tau$ and top-k $k$ values) on the ETH-Py150 dataset. The enumerative approach uses a pointer network model trained for repair.
<table>
<thead>
<tr>
<th>Model</th>
<th>True Positive</th>
<th>Classification Accuracy</th>
<th>Localization Accuracy</th>
<th>Localization+Repair Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Enumerative</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Threshold ($\tau = 0.99$)</td>
<td>99.9%</td>
<td>53.5%</td>
<td>7.0%</td>
<td>7.0%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0.9$)</td>
<td>99.7%</td>
<td>56.7%</td>
<td>13.4%</td>
<td>13.3%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0.7$)</td>
<td>99.0%</td>
<td>59.2%</td>
<td>18.3%</td>
<td>17.9%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0.5$)</td>
<td>95.3%</td>
<td>63.8%</td>
<td>28.7%</td>
<td>27.1%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0.3$)</td>
<td>81.1%</td>
<td>68.6%</td>
<td>44.2%</td>
<td>39.7%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0.2$)</td>
<td>66.3%</td>
<td>70.6%</td>
<td>54.3%</td>
<td>47.4%</td>
</tr>
<tr>
<td>Threshold ($\tau = 0$)</td>
<td>42.2%</td>
<td>71.1%</td>
<td>64.6%</td>
<td>55.8%</td>
</tr>
<tr>
<td>Top-k ($k = 1$)</td>
<td>91.7%</td>
<td>63.6%</td>
<td>27.2%</td>
<td>24.8%</td>
</tr>
<tr>
<td>Top-k ($k = 3$)</td>
<td>64.9%</td>
<td>70.1%</td>
<td>49.6%</td>
<td>43.2%</td>
</tr>
<tr>
<td>Top-k ($k = 5$)</td>
<td>50.9%</td>
<td>70.9%</td>
<td>58.4%</td>
<td>50.4%</td>
</tr>
<tr>
<td>Top-k ($k = 10$)</td>
<td>43.5%</td>
<td>71.1%</td>
<td>63.6%</td>
<td>54.8%</td>
</tr>
<tr>
<td>Top-k ($k = \infty$)</td>
<td>42.2%</td>
<td>71.1%</td>
<td>64.6%</td>
<td>55.8%</td>
</tr>
<tr>
<td>Joint</td>
<td>84.5%</td>
<td>82.4%</td>
<td>71%</td>
<td>65.7%</td>
</tr>
</tbody>
</table>
Performance Comparison: In addition to achieving better accuracy, the joint model is also more efficient for training and prediction. During training, examples for the pointer model are easier to batch than those for the GGNN model of Allamanis et al. (2018), since different programs lead to different graph structures. Moreover, as discussed earlier, the enumerative approach requires making $O(n)$ predictions at inference time, where $n$ denotes the number of variable-use locations in a program. The joint model, on the other hand, performs a single prediction of the two pointers for a given program.
4.2 EFFECT OF INCORRECT SLOT PLACEMENT
We now turn to quantifying the effect of incorrect slot placement, which occurs frequently in the enumerative approach: $n - 1$ out of $n$ times for a program with $n$ slots. We use the same repair-only model from Section 4.1, but instead of constructing an enumerative bug localization and repair procedure out of it, we just look at a single repair prediction.
We apply this repair-only model to a test dataset in which, in addition to creating a prediction problem for a slot, we also randomly select one other variable use in the program (other than the slot) and replace its variable with an incorrect in-scope variable, thereby introducing a VARMISUSE bug away from the slot of the prediction problem. We generate two datasets: AddBugAny, in which the injected VARMISUSE bug is at a random location, and AddBugNear, in which the injection happens within two variable-use locations from the slot, and in the first 30 program tokens; we consider the latter a tougher, more adversarial case for this experiment. The corresponding bug-free datasets are NoBugAny and NoBugNear with the latter being a subset of the former. We refer to two experiments below: Any (comparison between NoBugAny and AddBugAny) and Near (comparison between NoBugNear and AddBugNear).
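As a rough sketch of the bug-injection step used to build AddBugAny (AddBugNear differs only in constraining how close the injected location must be to the slot), the illustrative helper below overwrites one non-slot variable use with a different in-scope candidate; all names and the token layout are hypothetical, not the exact dataset pipeline.

```python
import random

def add_distractor_bug(tokens, use_locations, slot, candidates, seed=0):
    """Inject a VARMISUSE bug at a variable-use location other than the prediction slot."""
    rng = random.Random(seed)
    other_uses = [loc for loc in use_locations if loc != slot]
    loc = rng.choice(other_uses)
    wrong = rng.choice([c for c in candidates if c != tokens[loc]])
    buggy = list(tokens)
    buggy[loc] = wrong
    return buggy, loc

tokens = ["def", "f", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
buggy, where = add_distractor_bug(tokens, use_locations=[9, 11], slot=11,
                                  candidates=["a", "b"])
print(where, buggy)   # the use at index 9 is overwritten, away from the slot at index 11
```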
Figure 3 shows our results. Figure 3a shows that for Any, the model loses significant accuracy, dropping about 4.3 percentage points for $\tau = 0.5$. The accuracy drop is smaller at higher thresholds $\tau$, but at those thresholds the absolute repair accuracy is already very low. Results are even worse for the more adversarial Near: as shown in Figure 3b, accuracy drops between 8 and 14.6 percentage points across the reporting thresholds $\tau$.
These experiments show that a repair prediction performed on an unlikely fault location can significantly impair repair, and hence the overall enumerative approach, since it relies on repair predictions for both localization and repair. Figure 4 shows some repair predictions in the presence of bugs.
<table>
<thead>
<tr>
<th>Threshold Value</th>
<th>Repair Accuracy (NoBugAny)</th>
<th>Repair Accuracy (AddBugAny)</th>
<th>Accuracy Drop</th>
</tr>
</thead>
<tbody>
<tr>
<td>$\tau = 0$</td>
<td>80.8%</td>
<td>76.2%</td>
<td>4.6%</td>
</tr>
<tr>
<td>$\tau = 0.2$</td>
<td>60.0%</td>
<td>55.5%</td>
<td>4.5%</td>
</tr>
<tr>
<td>$\tau = 0.3$</td>
<td>40.8%</td>
<td>36.6%</td>
<td>4.2%</td>
</tr>
<tr>
<td>$\tau = 0.5$</td>
<td>19.1%</td>
<td>14.8%</td>
<td>4.3%</td>
</tr>
<tr>
<td>$\tau = 0.7$</td>
<td>8.5%</td>
<td>2.5%</td>
<td>3.0%</td>
</tr>
<tr>
<td>$\tau = 0.9$</td>
<td>2.4%</td>
<td>1.0%</td>
<td>1.4%</td>
</tr>
</tbody>
</table>
(a) Testing of the repair-only model on \textit{Any}.
(b) Testing of the repair-only model on \textit{Near} (table below).
Figure 3: The drop in repair-only model accuracy due to incorrect slot placement.
<table>
<thead>
<tr>
<th>Threshold Value</th>
<th>Repair Accuracy (NoBugNear)</th>
<th>Repair Accuracy (AddBugNear)</th>
</tr>
</thead>
<tbody>
<tr>
<td>$\tau = 0$</td>
<td>88.6%</td>
<td>80.2%</td>
</tr>
<tr>
<td>$\tau = 0.2$</td>
<td>81.5%</td>
<td>73.2%</td>
</tr>
<tr>
<td>$\tau = 0.3$</td>
<td>68.7%</td>
<td>59.9%</td>
</tr>
<tr>
<td>$\tau = 0.5$</td>
<td>44.6%</td>
<td>30.0%</td>
</tr>
<tr>
<td>$\tau = 0.7$</td>
<td>24.1%</td>
<td>13.0%</td>
</tr>
<tr>
<td>$\tau = 0.9$</td>
<td>18.1%</td>
<td>6.9%</td>
</tr>
<tr>
<td>$\tau = 0.99$</td>
<td>8.5%</td>
<td>2.7%</td>
</tr>
</tbody>
</table>
4.3 COMPARISON OF GRAPH AND POINTER NETWORKS
We now compare the repair-only model on MSR-VarMisuse, the dataset used by the state-of-the-art \textsc{VarMisuse} localization and repair model by Allamanis et al. (2018). Our approach deviates in three primary ways from that earlier one: 1) it uses a pointer network on top of an RNN encoder rather than a graph neural network, 2) it performs classification, localization, and repair jointly rather than using a repair-only model enumeratively to solve the same task, and 3) it applies to syntactic program information only rather than syntax and semantics. Allamanis et al. (2018) reported in their ablation study that their system, on syntax only, achieved test accuracy of 55.3\% on the “seen” test set; on the same test data we achieve 62.3\% accuracy. Note that although the test data is identical, we trained on the published training dataset, which is a subset of the unpublished dataset used in that ablation study. We get better results even though our training dataset is about 30\% smaller than theirs.
4.4 EVALUATION ON VARIABLE MISUSE IN PRACTICE
In order to evaluate the model on realistic scenarios, we collected a dataset from multiple software projects in an industrial setting. In particular, we identified pairs of consecutive snapshots of functions from development histories that differ by a single variable use. Such before-after pairs
of function versions indicate likely variable misuses, and several instances of them were explicitly marked as \texttt{VARMISUSE} bugs by code reviewers during the manual code review process.
More precisely, we find two snapshots $f$ and $f'$ of the same function such that $V^f_{\text{def}} = V^{f'}_{\text{def}}$ and their variable uses are identical except at a single location $l_i$, where $(t_i, l_i) \in V^f_{\text{use}}$ and $(t'_i, l_i) \in V^{f'}_{\text{use}}$ with $t_i \neq t'_i$ and $t_i, t'_i \in V^f_{\text{def}}$. For each such before-after snapshot pair $(f, f')$, we collected all functions from the same file in which $f$ was present. We expect our model to classify all functions other than $f$ as bug-free. For the function $f$, we want the model to classify it as buggy, localize the bug at $l_i$, and repair it by pointing to the token $t'_i$. In all, we collected 4,592 snapshot pairs. From these, we generated a test dataset of 41,672 non-buggy examples and 4,592 buggy examples. We trained the pointer model on a training dataset from which we excluded the 4,592 files containing the buggy snapshots. The results of the joint model and the best localization and repair accuracies achieved by the enumerative baseline approach are shown in Table 2. The joint model achieved a true positive rate of 67.3\%, classification accuracy of 66.7\%, localization accuracy of 21.9\%, and localization+repair accuracy of 15.8\%. These are promising results on data collected from real developer histories; in aggregate, our joint model could localize and repair 727 variable misuse instances in this dataset. In contrast, the enumerative approach achieved significantly lower values: a true positive rate of 41.7\%, classification accuracy of 47.2\%, localization accuracy of 6.1\%, and localization+repair accuracy of 4.5\%.
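A hedged sketch of the snapshot-pair filter described above: two versions of a function qualify if their token sequences are identical except at one position and both differing tokens are variables defined in the function. The tokens and names below are illustrative only, not the mining pipeline actually used.

```python
def single_var_misuse_pair(tokens_before, tokens_after, defined_vars):
    """Return (location, misused var, corrected var) if the two versions differ by a single variable use."""
    if len(tokens_before) != len(tokens_after):
        return None
    diffs = [i for i, (a, b) in enumerate(zip(tokens_before, tokens_after)) if a != b]
    if len(diffs) != 1:
        return None
    i = diffs[0]
    if tokens_before[i] in defined_vars and tokens_after[i] in defined_vars:
        return i, tokens_before[i], tokens_after[i]
    return None

before = ["result", ".", "objects", ".", "append", "(", "object_name", ")"]
after  = ["result", ".", "objects", ".", "append", "(", "subject_name", ")"]
print(single_var_misuse_pair(before, after, {"result", "object_name", "subject_name"}))
# -> (6, 'object_name', 'subject_name')
```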
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
Model & True Positive & Classification Accuracy & Localization Accuracy & Localization+Repair Accuracy \\
\hline
Joint & 67.3\% & 66.7\% & 21.9\% & 15.8\% \\
Enumerative & 41.7\% & 47.2\% & 6.1\% & 4.5\% \\
\hline
\end{tabular}
\caption{The comparison of the joint model vs the enumerative approach on programs collected in an industrial setting.}
\end{table}
5 CONCLUSION
In this paper, we present an approach that jointly learns to localize and repair bugs. We use a key insight of the \texttt{VARMISUSE} problem, namely that both the bug and its repair must exist in the original program, to design a multi-headed pointer model over a sequential encoding of program token sequences. The joint model is shown to significantly outperform an enumerative approach that uses a model which predicts a repair given a potential bug location. In the future, we want to explore joint localization and repair using other models, such as graph models and combinations of pointer and graph models, possibly using more semantic information about programs.
REFERENCES
A Training Data Generation
For each Python function in the ETH-Py150 dataset, we identify VARMisuse slots and repair candidates. We choose as slots only uses of variables in a load context; this includes explicit reads from variables in right-hand side expressions ($a = x + y$), uses as function-call arguments ($\text{func}(x, y)$), indices into dictionaries and lists even on left-hand side expressions ($\text{sequence}[x] = 13$), etc. We define as repair candidates all variables that are in the scope of a slot, either defined locally, imported globally (with the Python `global` keyword), or as formal arguments to the enclosing function.
For each slot in a function (i.e., each variable-use location), we generate one buggy example, as long as there are at least two repair candidates for a slot (otherwise, the repair problem would be trivially solved by picking the only eligible candidate); we discard slots and corresponding examples with only trivial repair solutions, and we discard functions and corresponding examples with only trivial slots. For each buggy example, we also generate one bug-free example, by leaving the function as is and marking it (by assumption) as correct. Note that this results in duplicate copies of correct functions in the training dataset. To illustrate, using the correct function in Figure 1b, we would generate five bug-free examples (labeling the function, as is, as correct), and one buggy example per underlined slot (inserting an incorrect variable chosen at random), each identifying the current variable in the slot as the correct repair, and the variables sources, object_name, subject_name, and result as repair candidates. Although buggy repair examples are defined in terms of candidate variable names, any mention of a candidate in the program tokens can be pointed to by the pointer model; for example, the repair pointer head, when asked to predict a repair for the slot on line 6, could point to the (incorrect) variable sources appearing on lines 1, 2, or 3, and we don’t distinguish among those mentions of a predicted repair variable when it is the correct prediction.
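The sketch below, using Python's ast module, illustrates the kind of slot/candidate extraction and bug injection described above; it only handles simple Name nodes, treats function arguments and locally assigned names as the candidate set, and is not the exact pipeline used to build the datasets.

```python
import ast
import random

def slots_and_candidates(src):
    """Collect load-context variable uses (slots) and in-scope variables (candidates)."""
    func = ast.parse(src).body[0]            # assume a single top-level function
    candidates = {a.arg for a in func.args.args}
    loads = []
    for node in ast.walk(func):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                candidates.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loads.append(node)
    # keep only loads of variables that are actually defined in the function
    slots = [n for n in loads if n.id in candidates]
    return slots, candidates

def inject_bug(src, seed=0):
    """Replace one randomly chosen slot with a different in-scope variable."""
    rng = random.Random(seed)
    slots, candidates = slots_and_candidates(src)
    slots = [n for n in slots if len(candidates - {n.id}) >= 1]
    slot = rng.choice(slots)
    wrong = rng.choice(sorted(candidates - {slot.id}))
    lines = src.splitlines()
    line = lines[slot.lineno - 1]
    lines[slot.lineno - 1] = (line[:slot.col_offset] + wrong
                              + line[slot.col_offset + len(slot.id):])
    return "\n".join(lines), slot.lineno, slot.id, wrong

example = """def validate_sources(sources):
    object_name = get_content(sources, 'obj')
    subject_name = get_content(sources, 'subj')
    result = Result()
    result.objects.append(object_name)
    result.subjects.append(subject_name)
    return result
"""
buggy, lineno, orig, wrong = inject_bug(example)
print(f"replaced {orig!r} with {wrong!r} on line {lineno}")
print(buggy)
```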
As described in Section 4, the MSR-VarMisuse dataset consists of 25 C# GitHub projects split into train, validation, seen test, and unseen test partitions. The published dataset contains pre-tokenized token sequences and one \textsc{Varmisuse} repair example per slot in every file, along with the associated repair candidates for that slot and the correct variable for it. This dataset defines slots as variable uses in both load and store contexts (e.g., even left-hand side expressions), and repair candidates are variables that are type-compatible with the slot. Every example has at least two repair candidates.
PROVING LISP PROGRAMS USING TEST DATA
Timothy A. Budd
Richard J. Lipton
Yale University
Department of Computer Science
New Haven, Ct.
and
University of California at Berkeley
Computer Science Division
Berkeley, Calif.
1. INTRODUCTION
An idea proposed in [1] is the concept of proving individual programs correct with respect to some larger class of programs. That is, instead of proving a program correct we prove that either a) the program is correct, OR b) no program in this
class realizes the intended function. It is assumed that most programmers at least know if the function they are trying to compute can be realized in some large class of programs, and therefore from a theoretical point of view the introduction of this disjunction may make the task of validating programs vastly easier.
A previous paper has analysed programs written in a decision table format [4]. In this paper we will be concerned with lisp programs composed of CAR, CDR and CONS with lisp predicates composed of CAR, CDR and ATOM. Similar classes of programs have been studied in [5,6,7].
Associated with each S-Expression X we can construct a binary tree as follows: Consider the infinite binary tree where each left arc is marked CAR and each right arc CDR (call this the complete CAR/CDR tree.) Starting with X at the root of the tree, travel down each arc in turn taking the appropriate CAR or CDR. Prune the complete tree each time you reach an atom. The resulting finite binary tree will be called the projection of X (or PROJ[X]). An example is shown in figure 1. Notice that PROJ[X] is a representation of the structure of X, and is invariant under renaming of the atoms of X.
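As an illustration, the following sketch computes a projection for S-expressions encoded as nested Python pairs; the encoding (2-tuples for cons cells, strings for atoms) is an assumption made for the example, not part of the paper.

```python
def proj(x):
    """Keep only the tree shape of an S-expression: every atom becomes the same marker."""
    if not isinstance(x, tuple):        # reached an atom: prune here
        return "atom"
    car, cdr = x
    return (proj(car), proj(cdr))

# Two S-expressions with the same structure but different atoms project equally.
assert proj((("a", "b"), "c")) == proj((("x", "y"), "z"))
print(proj((("a", "b"), "c")))          # -> (('atom', 'atom'), 'atom')
```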
We can define a relation $<$ as follows. Given two S-expressions $X$ and $Y$ we will say $X < Y$ if PROJ[$X$] is the intersection of PROJ[$X$] and PROJ[$Y$]. Using this relation one can show that the set of lisp structures forms a lattice. (The proofs can be adapted from Summers [7], although he defines the projection slightly differently.)
We will make the convention that all S-Expressions (we will use the less clumsy expression point) have unique atoms. Certainly if two programs agree on all such points they must agree on all inputs. Hence we can do this without loss of generality.
We will call a lisp program a Selector program if it is composed of just CAR and CDR. We will call it a Straight line program if it is a selector program or is formed by CONS on either selectors or other straight line programs. We will call it a Predicate program if it has the following form
\[
\text{COND}\big(\ \text{ATOM}(G_1(X)) \rightarrow P_1(X),\ \ T \rightarrow P_2(X)\ \big)
\]
where the G's are selectors and the P's are straight line programs or other predicate programs.
Assume we have a function $F$ which we know can be computed by a program in some schemata class $S$.
We have a program $P$ in $S$ which we wish to show computes $F$. We assume we have some method of verifying that $P(X) = F(X)$ on a finite number of test cases (say by hand calculation.) We wish to show that there exists a finite set of test cases $T$ such that if $P$ correctly computes $F$ on every element of $T$ then either 1) $P$ correctly computes $F$ for all inputs, or 2) no program in the schemata class $S$ correctly computes $F$. This goal is similar to that of mutation analysis [1-4].
Call such a test set Adequate.
We then wish to discover conditions under which we can construct adequate test data.
2. STRAIGHT LINE PROGRAMS
We will say a program $P(X)$ is Well formed if for every occurrence of the construction $\text{CONS}(A,B)$ it is the case that $A$ and $B$ do not share an immediate parent in $X$. The intuitive idea of the definition should be clear: a program is well formed if it is not doing any more work than it needs to. Notice that being well formed is an observable property of programs, independent of testing.
We can define a measure of the complexity of straight line programs by their CONS-depth, where
CONS-depth is defined as follows:
1) The CONS-depth of a selector function is zero.
2) The CONS-depth of a straight line program \( P(X) = \text{CONS}(P_1(X), P_2(X)) \) is \( 1 + \text{MAX} \left( \text{CONS-depth}(P_1(X)), \text{CONS-depth}(P_2(X)) \right) \).
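For concreteness, here is a small sketch of CONS-depth over an assumed toy representation of straight line programs: a selector program is written as a string of CAR/CDR steps (e.g. "CADR"), and CONS(P1, P2) as the tagged tuple ("CONS", P1, P2). The representation is an assumption made for this example.

```python
def cons_depth(p):
    if isinstance(p, str):              # a selector program has CONS-depth zero
        return 0
    _, p1, p2 = p
    return 1 + max(cons_depth(p1), cons_depth(p2))

print(cons_depth("CADR"))                                      # -> 0
print(cons_depth(("CONS", "CADR", ("CONS", "CAR", "CDDR"))))   # -> 2
```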
Lemma 1: If any two selector programs compute identically on any point \( X \), they must compute identically on all points.
Proof: The only power of a selector program is to choose a subtree out of its input and return it. We can view this process as selecting a position in the complete CAR/CDR tree and returning the subtree rooted at that position. Since there is a unique path from the root to this position, there is a unique selector which selects it out. Since atoms are unique, merely by observing the output we can infer which subtree was selected. The result then follows.
Lemma 2: If two well formed programs compute identically on any point then they must have the same CONS-depth.
Proof: Assume we have two programs \( P_1 \) and \( P_2 \) and a point \( X \) such that \( P_1(X) = P_2(X) \) yet the CONS-depth\((P_1) < \text{CONS-depth}(P_2) \). This then
implies that there is at least one subtree in the structure of P₂(X) which was produced by CONS-ing two straight line programs, while the same subtree in P₁(X) was produced by a selector. But then the objects P₂ CONSed must share an immediate parent in X, contradicting the fact that P₂ is well formed.
THEOREM 1: If two well formed straight line programs agree on any point X then they must agree on all points.
PROOF: The proof will be by induction on the CONS-depth. By lemma 2 any two programs which agree at X must have the same CONS-depth. By lemma 1 the theorem is true for programs of CONS-depth zero. Hence we will assume it is true for programs of CONS-depth n and show the case for n+1.
If program P₁ has CONS-depth n+1 then it must be of the form CONS(P₁₁, P₁₂) where P₁₁ and P₁₂ have CONS-depth no greater than n. Assume we have two programs P₁ and P₂ of this form. Then for all Y:
\[ P₁(Y) = P₂(Y) \iff \]
\[ CONS(P₁₁(Y), P₁₂(Y)) = CONS(P₂₁(Y), P₂₂(Y)) \iff \]
\[ P₁₁(Y) = P₂₁(Y) \text{ and } P₁₂(Y) = P₂₂(Y) \]
Hence by the induction hypothesis P₁ and P₂
must agree for all Y.
We define a test point to be Generic if by itself it constitutes an adequate test set as defined in the introduction.
Corollary: For any well formed straight line lisp program, any point with unique atoms for which the program is defined is generic.
3. PREDICATE PROGRAMS
We can view the structure of a predicate program as a binary tree. Associated with each interior node is a predicate and associated with each leaf is a straight line program (see figure.)
We will call a predicate program Well formed if
1) the straight line program associated with each leaf is well formed, and
2) for each leaf, there is at least one input in the space of all possible inputs which passes all the conditions leading to that leaf and causes the associated straight line program to be executed.
Notice that whether a program is well formed or not is an observable fact independent of testing.
For notation we will denote the leaves going from left to right by \( l_i \), \( i = 1, \ldots, n \). Let \( e_i \), \( i = 1, \ldots, n \), be the straight line programs associated with the leaves. We will assume that for no \( i \neq j \) is \( e_i \) equivalent to \( e_j \). Notice that theorem 1 again gives us an effective method to test this.
Given a well formed predicate program \( P \) in \( S \), we construct a set of \( n \) data points \( d_1, \ldots, d_n \) such that \( d_i \) follows the path to leaf \( l_i \) and executes the program \( e_i \) correctly. Call this set \( T_1 \). There is an obvious effective procedure to generate such a test set.
**LEMMA 3:** Given any well formed program \( P' \) in \( S \) which evaluates correctly on each element of \( T \), every straight line leaf program in \( P' \) must be exercised by at least one data point \( d_i \) in \( T \).
**PROOF:** Assume we have a program \( P' \) satisfying the hypothesis but for which the conclusion is false. By the pigeon hole principle there must be at least two points \( d_i \) and \( d_j \) which were evaluated by different leaves in \( P \) but which are evaluated by the same leaf in \( P' \). Let \( f \) denote the straight line program which evaluates these points in \( P' \). Since the \( d \) points are generic this implies that \( e_i \) is equivalent to \( f \). But also \( e_j \) is equivalent to \( f \). Hence \( e_i \) must be equivalent to \( e_j \) which is a contradiction.
Corollary: Given any well formed program $P'$ in $S$ which evaluates correctly on each element of $T$, the leaf programs of $P'$ are simply a permutation of those of $P$.
It might seem that exercising all the paths of $P'$ is sufficient to show it is equivalent to $P$. But this is not the case. We might simply have consistently chosen the right path for the wrong reason. To rule out this possibility requires a more stringent set of test cases. We construct this test set in the following manner.
For each leaf $l_i$ and for each element $d_j$ in $T_1$, construct a point $d_{ij}$ in the following way. Consider the infinite CAR/CDR tree. Color RED each point which is tested and found to be atomic on the path leading to the leaf $l_i$. Color BLUE the points which are tested and found to be non atomic. As long as it is not contained in a subtree rooted at a red point and does not contain a blue point in its subtree, color a point red if it is atomic in $d_j$. As long as it is not contained in a subtree rooted at a red point, color a point blue if it is not atomic in $d_j$. $d_{ij}$ is then the smallest point with unique atoms in which the red colored vertexes are atomic and the blue vertexes non atomic.
Denote by $T$ the set $T_1$ augmented with these points.
THEOREM 2: Any well formed program $P'$ in $S$ which agrees with $P$ on $T$ must agree with $P$ on all points.
PROOF: Assume we have a program $P'$ which satisfies the hypothesis, yet there is a point $X$ such that $P(X)$ and $P'(X)$ differ.
The point $X$ must be evaluated by some leaf $l_i$ in $P$, hence it must satisfy all the constraints associated with that leaf.
This point is also evaluated by a leaf program $e_k$ in $P'$. By lemma 3, some data item $d_j$ in $T$ also executes this leaf program. This implies that no matter what the constraints are on this path in $P'$ (and we make no assumptions about what they might be), they cannot interfere with the constraints along the path leading to $l_i$.
But this then necessarily implies that the point $d_{ij}$ would be evaluated by $e_i$ in $P$ and by $e_k$ in $P'$, where $k \neq i$. Since $d_{ij}$ is also generic, a contradiction is obtained using the earlier theorems.
Corollary: There is an effective procedure to construct an adequate test set for predicate programs.
4. RECURSIVE PROGRAMS
We will define a class of programs \( \mathcal{D}_n \) as follows:
The input to the program shall consist of two sets of variables: Selector variables, denoted \( x_1, \ldots, x_m \) and Constructor variables, denoted \( y_1, \ldots, y_p \).
A program will consist of two parts: a program body and a recursor.
A program body consists of \( n \) statements, each statement composed of two parts. The first part is a predicate of the form \( \text{ATOM}(t(x_i)) \), where \( t \) is a selector function and \( x_i \) a selector variable. The second part is a straight line output function over the selector and constructor variables.
A recursor is divided into two parts. The constructor part is composed of \( p \) assignment statements, one for each of the \( p \) constructor variables, where \( y_i \) is assigned a straight line function of the selector variables and \( y_i \) itself. The selector part is composed of \( m \) assignment statements, one for each of the \( m \) selector variables, where \( x_i \) is assigned a selector function of itself. The following diagram should give a more intuitive picture of this class of programs.
Program \( P(x_1, \ldots, x_m, y_1, \ldots, y_p) = \)
\[
\begin{array}{l}
p_1(x_{i_1}) \rightarrow f_1(x_1, \ldots, x_m, y_1, \ldots, y_p) \\
p_2(x_{i_2}) \rightarrow f_2(x_1, \ldots, x_m, y_1, \ldots, y_p) \\
\quad \vdots \\
p_n(x_{i_n}) \rightarrow f_n(x_1, \ldots, x_m, y_1, \ldots, y_p) \\
y_1 \leftarrow g_1(y_1, x_1, \ldots, x_m) \\
\quad \vdots \\
y_p \leftarrow g_p(y_p, x_1, \ldots, x_m) \\
x_1 \leftarrow h_1(x_1) \\
\quad \vdots \\
x_m \leftarrow h_m(x_m)
\end{array}
\]
Given such a program, execution proceeds as follows: each predicate is evaluated in turn. If any predicate is undefined, so is the result of the execution; otherwise, if any predicate is TRUE, the result of execution is the value of the associated output function. Otherwise, if no predicate evaluates to true, the assignment statements in the recursor are performed and execution continues with these new values.
We will say a variable is a predicate variable if it is tested by at least one predicate. Similarly it is an output variable if it is used in at least one output function. A variable can be both a predicate and an output variable.
We will make the following restrictions on the programs we will consider:
1) every recursion selector and every constructor must be non trivial.
2) every variable is either a predicate or an output variable.
3) there is at least one output variable.
4) (freedom) for any \( 1 \leq k \leq n \) and \( l \geq 0 \) there exists a set of inputs which causes the program to recurse \( l \) times before correctly exiting by output function \( k \).
5) each output function is unique.
6) every constructor variable appears totally in at least one output function.
Given a program P in \( \mathcal{D}_n \), let \( \mathcal{D} \) be the union of \( \mathcal{D}_i \) for \( i = 1, \ldots, n \).
Let us assume we know, on independent grounds, that a correct program \( P^* \) exists in \( \mathcal{D} \), furthermore that no predicate, output function, selector or constructor in \( P^* \) has a depth greater than some constant \( u>3 \).
**GOAL:** We wish to construct a set of test inputs with the property that any program \( P \) in \( \mathcal{D} \) which executes correctly on these values must then be equivalent to \( P^* \). The existence of such a test set would then imply (under the assumption that at least one correct program exists in \( \mathcal{D} \)) that \( P \) is correct.
We will use capital letters from the end of the alphabet (X, Y and Z) to represent vectors of inputs.
Hence we can refer to \( P(X) \) rather than \( P(x_1, \ldots, x_m, y_1, \ldots, y_p) \). Similarly we can abbreviate the simultaneous application of constructor functions by \( C(X) \) and recursion selectors by \( S(X) \).
We will use the initial greek letters to represent positions in a variable, where a position is defined by a finite CAR-CDR path from the root. When no confusion can arise we will frequently refer to "position \( \xi \) in \( X \)" whereby we mean position \( \xi \) in some \( x_i \) in \( X \).
We can form a lattice on the space of inputs by saying \( X \preceq Y \) if and only if all selector variables \( x_i \) in \( X \) are smaller than their respective variables in \( Y \), and similarly for the constructor variables.
We can define the notion of "pruning \( X \) at position \( \xi \)" as follows: we will say \( Y \) is "\( X \) pruned at position \( \xi \)" if \( Y \) is the largest input \( \preceq X \) in which \( \xi \) is atomic. This process can be viewed as simply taking the subtree in \( X \) rooted at \( \xi \) and replacing it by a unique atom.
If a position \( \xi \) (relative to the original input) is tested by some predicate we will say that the position in question has been touched.
The assumption of freedom asserts only the existence of inputs \( X \) which will cause us to recurse a specific number of times and exit by a specific output function.
Our first lemma shows that this can be made constructive.
LEMMA 1. Given $l \geq 0$ and $1 \leq i \leq n$ we can construct an input $X$ such that $P(X)$ is defined and while executing $X \ P$ recurses $l$ times before exiting by output function $i$.
PROOF: Consider $m+p$ infinite trees corresponding to the $m+p$ input variables. Mark in BLUE every position which is touched by a predicate function and found to be non-atomic in order for $P$ to recurse $l$ times and reach the $i^{th}$ predicate. Then mark in RED the point touched by the $i^{th}$ predicate after recursing $l$ times.
The assumption of freedom implies that no blue vertex can appear in the infinite subtree rooted at the red vertex, and that the red vertex can not also be marked blue.
Now mark in YELLOW all points which are touched by constructor functions in recursing $l$ times, and each position touched by the $i^{th}$ output function after recursing $l$ times. The assumption of freedom again tells us that no yellow vertex can appear in the infinite subtree rooted at the red vertex. The red vertex may, however, also be colored yellow, as may the blue vertexes. It is a simple matter to then construct an input $X$ such that
1) all BLUE vertexes are non atomic in $X$,
2) The RED vertex is atomic, and
3) all YELLOW vertexes are contained in \( X \) (they may be atomic).
It is trivial to verify that such an \( X \) satisfies our requirements. \( \triangle \)
Notice that the procedure given in the proof of lemma 1 allows us to find the smallest \( X \) such that the indicated conditions hold. If \( \alpha \) is the position touched by the \( i \)th predicate after recursing \( l \) times call this point the minimal \( \alpha \) point, or \( X_\alpha \).
Freedom implies no point can be twice touched, hence the minimal \( \alpha \) point is a well defined concept.
Given an input \( X \) such that \( P(X) \) is defined, let \( F_X(Z) \) be the straight line function such that \( F_X(X) = P(X) \). Note that by the property of being generic, \( F_X \) is defined by this single point.
LEMMA 2: For any \( X \) for which \( P(X) \) is defined, we can construct an input \( Y \) with the properties that \( P(Y) \) is defined, \( Y \geq X \) and \( F_X \neq F_Y \).
PROOF: There exist some constants \( l \) and \( i \) such that on input \( X \) \( P \) recursed \( l \) times before exiting by output function \( i \). Let the predicate \( P_i \) test variable \( x_j \) and let \( s_j \) be the recursion selector for this variable.
There are two cases, depending upon whether the output function \( f_i \) is constant or not. If \( f_i \) is not a constant then, since $X$ is bounded, there must be a minimal $k > 1$ such that the predicate $p_i(s_j^k(x_j))$ is undefined.
By lemma 1 we can find an input $Z$ which causes $P$ to recurse $k$ times before exiting by output function $i$. Let $Y = X \cup Z$. Since $Y > Z$, $P$ must recurse at least as much on $Y$ as it did on $Z$. Since the final point tested is still atomic, $P(Y)$ will recurse $k$ times before exiting by output function $i$.
It is simple to verify the fact that $F_X \neq F_Y$.
The second case arises when $f_i$ is a constant function. By assumption 6 there is at least one output function which is not a constant function. Let $f_i$ be this function. Let the predicate $p_i$ test variable $x_j$. The same argument as before goes through, with the exception that it may happen by chance that $P(Y) = P(X)$ (i.e. $P(Y)$ returns the constant value). In this case we increment $k$ by 1 and perform the same process, and then it cannot happen that $P(Y) = P(X)$. $\triangle$
**Lemma 3:** If $P$ touches a location $\alpha$, then we can construct two inputs $X$ and $Y$ such that $P(X)$ and $P(Y)$ are defined, and for any $P'$ in $\mathcal{G}$, if $P(X) = P'(X)$ and $P(Y) = P'(Y)$ then $P'$ must touch $\alpha$.
**Proof:** Let $Z$ be the minimal $\alpha$ point. By lemma 2 we can construct an input $X$ such that $P(X)$ is defined, $X > Z$ and $F_X \neq F_Z$. Let $Y$ be $X$ pruned at $\alpha$.
We first assert that $P(Y)$ is defined and $F_Y = F_Z$. To see this we note that every point which was tested by $P$ in computing $P(Z)$ and found to be non-atomic is also non-atomic in $Y$. The position $\alpha$ is atomic in both, and if the output function was defined on $Z$ then it must be defined on $Y$, which is strictly larger.
Now suppose there existed some program $P'$ such that $P'(X)$ and $P'(Y)$ were computed correctly but $P'$ did not touch $\alpha$. We see immediately that this cannot happen, since all other positions are either the same in $X$ and in $Y$ or they exist in $X$ but not in $Y$. Hence if $P'(Y)$ is defined it would imply $F_X = F_Y$, a contradiction. $\triangle$
Define the positions which $P$ touches without going into recursion to be the primary positions of $P$.
Given a program $P$ to test, our first task is then to construct a set of test inputs, using lemma 3, which demonstrate that each of the primary positions must be touched.
Observe that this set contains at most $2n$ elements.
We will say a selector function $f$ factors a selector function $g$ if $g$ is equivalent to $f$ composed with itself some number of times. For example CADR factors CADADADR. We will say that $f$ is a simple factor of $g$ if $f$ factors $g$ and no function factors $f$, other than $f$ itself.
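To make the factoring relation concrete, the following sketch (ours) writes a composite selector by the letters between C and R, so that CADR becomes "AD" and CADADADR becomes "ADADAD"; then $f$ factors $g$ exactly when the letter string of $g$ is that of $f$ repeated some whole number of times.

(defun factors-p (f g)
  ;; F and G are selector strings of #\A and #\D; F factors G when G is F
  ;; concatenated with itself some whole number of times.
  (let ((lf (length f)) (lg (length g)))
    (and (plusp lf)
         (zerop (mod lg lf))
         (loop for i below lg
               always (char= (char g i) (char f (mod i lf)))))))

(defun simple-factor (g)
  ;; The shortest selector that factors G.  Any factor of G is a prefix of G,
  ;; so it suffices to test prefixes in order of increasing length.
  (loop for k from 1 to (length g)
        for candidate = (subseq g 0 k)
        when (factors-p candidate g) return candidate))

;; (factors-p "AD" "ADADAD")  => T       ; CADR factors CADADADR
;; (simple-factor "ADADAD")   => "AD"    ; the simple factor of CADADADR is CADR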
Let us denote by $\sigma_i$, $i=1,\ldots,m$, the simple factors of each of the $m$ recursion selectors. That is, for each $i$ there is a constant $l_i$ such that the recursion selector $s_i = \sigma_i^{l_i}$.
Let $q = \gcd(l_1, \ldots, l_m)$.
Let $S$ be the simultaneous recursion selector where the $i^{th}$ term is $\sigma_i^{l_i/q}$. Hence the recursion selectors of $P$ can be written as $S^q$.
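As a small illustration (our numbers, not taken from the paper): suppose $m = 2$, $s_1 = \sigma_1^4$ and $s_2 = \sigma_2^6$. Then $q = \gcd(4, 6) = 2$ and $S = (\sigma_1^2, \sigma_2^3)$, and indeed $S^q = S^2$ applies $\sigma_1^4$ to the first variable and $\sigma_2^6$ to the second, which are exactly the recursion selectors of $P$.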
We now construct a second set of data points in the following fashion:
For each selector variable $x_i$:
1) $x_i$ is an output variable used in output function $f_j$. Let $d$ be the position first tested by $p_j$ after $P(X)$ has recursed to a depth of at least $u^2$. Then we generate the minimal $d$ point.
2) $x_i$ is not an output variable, but is a predicate variable. Let $d$ be the first time a position with depth greater than $u^2$ is touched in $x_i$. First generate the minimal $d$ point, then using lemma 3 generate two inputs which demonstrate that position $d$ must be touched.
Notice that we have added no more than $3m$ points.
THEOREM 1: If $P'$ is in $\mathcal{G}$ and $P'$ computes correctly on all data points computed so far, then the recursion selectors of $P'$ must be powers of $\sigma_i$.
PROOF: Observe that if $x_i$ is an output variable in $P$, it must appear in the result for at least one input $X$ in our test data, hence if $P'(X)$ is correct, $x_i$ must be an output variable for $P'$ also.
The proof of theorem 1 will then rest on the following two cases.
Case 1: $x_i$ is an output variable. By construction there exists some $X$ in our test data such that $P(X)$ recurses to a depth of at least $3u$ ($< u^2$) before exiting by the $j^{th}$ output function, where $x_i$ is an output variable in $f_j$.
Assume that the $i^{th}$ recursion selector in $P'$ is not a power of $\sigma_i$. Then somewhere before the $i^{th}$ variable has recursed to a depth of $u$ their paths must diverge.
Once the $i^{th}$ variable steps past the points where the paths in the two programs diverge it can never have access to the subtrees used in $P$ by $f_j$ in its output. Hence $P'$ on $X$ must halt before the $i^{th}$ variable has recursed to a depth of $u$.
But if that is the case then its output functions cannot access subtrees rooted any deeper than $2u$. By construction the correct output requires trees which can only be accessed by going at least $3u$ deep, hence a contradiction is obtained.
Case 2: If $x_i$ is not used as an output variable.
Assume the recursion selector of $x_i$ in $P'$ is not a power of $\sigma_i$. Then once the variable $x_i$ has recursed past the depth $u$ in the two programs, it will be in totally different subtrees of its input (see figure 3).
By construction it is required that $P'$ touch a point whose depth is at least $3u$. $P'$ must therefore touch this point before the $i^{th}$ variable diverges from the path taken by $P$, hence before it has reached a depth of $u$. But by definition $P'$ cannot touch any points deeper than $2u$ in this region, hence a contradiction is obtained. $\triangle$
Theorem 1 gives us a way to demonstrate that a program $P'$ must have the same recursion selectors, up to a power, as does $P$. We now wish to derive a slightly stronger result. We will show that there exists a constant $r$ such that the recursion selectors of $P'$ are exactly $S^r$.
Note that by definition we know that $|S^r|$ (that is, the maximum depth of any function in $S^r$) is less than $u$.
**Theorem 2**: If $P'$ is in $\mathcal{G}$ and computes correctly on all the points we have so far computed, then there exists a constant $r$ such that the recursion selectors of $P'$ are exactly $S^r$.
**Proof**: We know by theorem 1 that the recursion selectors of $P'$ must be powers of $\sigma_i$. For each $1 \leq i \leq m$
construct the ratio of the power of $\sigma_i$ in $P'$ to that of $P$. Let $x_i$ be the variable with the smallest such ratio and $x_j$ be the variable with the largest. From the fact that these ratios are different we will obtain a contradiction.
Case 1: $x_i$ is an output variable. By construction there is an input $X$ such that $P'(X)$ must recurse on $X$ to a depth of at least $u^2$ before outputting by an output function which uses $x_i$. This implies that $P'$ must recurse at least $u$ times. Since, in comparison to the program $P$, the variable $x_j$ is gaining at least one level each recursion, we have that either 1) $P'(X)$ is undefined because $x_j$ ran off the end of its input, or 2) $P'(X)$ must halt before it has recursed to a depth of $u(u-1)$ in $x_i$, in which case it cannot have produced the correct output.
The argument in the case where $x_i$ is a predicate variable, but not an output variable is almost the same and is hence omitted. △
By lemma 3 we know that if $P$ touches a location $d$, then we can construct a pair of inputs with the property that any program $P'$ in $\mathcal{G}$ which executes correctly on these two inputs must also touch $d$. We now present the converse lemma.
**Lemma 4:** If $P$ works correctly on the test data so far constructed, and does not touch a location $d$, then we can construct two inputs $X$ and $Y$ with the property that any $P'$ in \( \mathcal{G} \) which executes correctly on all this data must also not touch the position \( d \).
PROOF: Let \( x_i \) be the variable containing \( d \). Let \( v \) be the maximum depth any variable has attained just after the \( i \)th recursion selector passes the depth of \( d \). Let \( X \) be a set of complete trees of depth \( v+2u \), pruned at \( d \).
There are two cases, depending upon whether \( P(X) \) is defined or not.
Case 1: \( P(X) \) is not defined. Assume \( P' \) touches \( d \). Let \( Z \) be the minimal \( d \) point in \( P' \) (we need not be able to construct this point.) We see that \( Z < X \). But this then implies that \( P'(X) \) must be defined, a contradiction.
Case 2: \( P(X) \) is defined. By lemma 2 we can construct an input \( Z > X \) so that \( F_X \neq F_Z \). Let \( Y \) be \( Z \) pruned at \( d \).
Assume \( P(X) = P'(X) \) and \( P(Y) = P'(Y) \) and \( P' \) touches \( d \). If \( P(Y) \) is undefined we are done, since \( P'(Y) \) must be defined. So assume \( P(Y) \) is defined. In this case, since \( P \) does not touch \( d \), \( F_Y = F_Z \neq F_X \). But if \( P' \) touched \( d \), then since \( X < Y \) we would have \( F_X = F_Y \), a contradiction. \( \triangle \)
Next we show that the primary positions of \( P' \) must be exactly those of \( P \).
Let \( \rho_1, \ldots, \rho_n \) be an ordering of the primary positions of \( P \) such that the depth of \( \rho_i \) is less than or equal to the depth of \( \rho_{i+1} \).
We know the recursion selectors of \( P' \) are \( S^r \) where \( |S^r| < u \). This gives us at most \( u \) possibilities. For each possibility we proceed in turn as follows:
Assume position \( \rho_i \) (\( i = 1, \ldots, n \)) is not primary in \( P' \). We can construct a point which is then tested by \( P' \) earlier than \( \rho_i \) by imagining that the root input was actually the result of one recursion, and then looking at the position \( \rho_i \) in relation to the earlier root (see figure 4).
Now one of two cases arises. Either
1) the new position is not touched by \( P \), or
2) the new position corresponds to a position \( \rho_j \) \( j < i \).
In the first case we can construct two inputs which demonstrate that the position in question must not be touched. The second case immediately rules out \( S^r \) as the recursion selector, since by induction \( \rho_j \) is primary to \( P \) and hence \( P' \) would not be an element of \( \mathcal{G} \).
Notice we have increased our test case size by no more than \( 2nu \) elements. The resulting test case then gives us the following theorem.
**Theorem 3:** If \( P'(X) = P(X) \) for \( X \) in our test set, then the primary positions of \( P \) are exactly those of \( P' \).
Notice also that, by the generic property, this implies the following corollary:
THEOREM 4: The output functions of \( P' \) are exactly those of \( P \).
Once we have that the primary positions of \( P' \) are exactly those of \( P \), we can now return to the problem of showing that the selector functions of \( P' \) must be \( S^q \). Consider each of the alternative possibilities for \( S^r \) (no more than \( u \) of them). Since the rates of recursion of \( P \) and \( P' \) differ, one of three cases must arise. Either
1) \( P' \) touches the same point twice (which means \( P' \) is not in \( \mathcal{G} \) and is out of the running),
2) \( P' \) touches a point which \( P \) fails to touch, or
3) \( P \) touches a point which \( P' \) fails to touch.
Since we only need to test for the last two conditions we need to augment our test case with no more than \( 2u \) points. We then have the following theorem:
THEOREM 5: The recursion selectors of \( P' \) must be exactly those of \( P \).
Pushing onward we next want to consider the recursion constructors. Once we have the other elements fixed, however, the constructors come almost for free. All we need do is to construct $p$ data points so that the $i^{th}$ data point causes the program $P$ to recurse once and exit using an output function which uses the $i^{th}$ constructor variable. By the generic property and the fact that the entire $i^{th}$ constructor variable is then open to inspection we have the next theorem.
THEOREM 6: The recursion constructors of $P'$ must be exactly those of $P$.
What remains? Well, the order in which the primary positions are tested is the only thing we have not nailed down. For each primary position $\xi$ add $X_\xi$ to our test data. We leave it to the reader to verify:

THEOREM 7: The order of predicate evaluation in $P'$ is exactly that of $P$.
Counting the size of our test set, we see now that it contains no more than $3(n+m)+2(p+u+nu)$ points. Combining all the theorems proved in this section we then have our main result, which states:

THEOREM: Given a program $P$ in $\mathcal{G}$, there exists a set of no more than $3(n+m)+2(p+u+nu)$ elements such that if $P'$ is any program in $\mathcal{G}$ which computes the same results on this set as $P$ does, then $P'$ must be equivalent to $P$.
COROLLARY: Either $P$ is correct or no program in $\mathcal{G}$ realizes the intended function.
5. AN EXAMPLE
The following example, taken from [6], will be used to illustrate some of the ideas here presented.
The program is given by [6] as follows:
(REVDBL
 (LAMBDA (ARG1)
  (COND
   ((NULL ARG1) NIL)
   (T (APPEND (REVDBL (CDR ARG1))
              (LIST (CAR ARG1) (CAR ARG1)))))))
We will translate it into the following form.
REVDBL(X,Y) = ATOM(X) -> Y
Y <- CONS(CAR(X),CONS(CAR(X),Y))
X <- CDR(X)
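Both forms can be run directly; the sketch below (the function names and the initial accumulator value Y = NIL are our assumptions, since the translated form's starting value is not shown above) checks that the recursive definition and its translation agree.

(defun revdbl (arg1)
  ;; Recursive form, as given by [6].
  (cond ((null arg1) nil)
        (t (append (revdbl (cdr arg1))
                   (list (car arg1) (car arg1))))))

(defun revdbl-iter (x &optional (y nil))
  ;; Translated form: test ATOM(X); otherwise push CAR(X) twice onto the
  ;; accumulator Y and recurse on CDR(X).
  (if (atom x)
      y
      (revdbl-iter (cdr x)
                   (cons (car x) (cons (car x) y)))))

;; (revdbl '(a b c))       => (C C B B A A)
;; (revdbl-iter '(a b c))  => (C C B B A A)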
Using the formula given in the main theorem, we see that a test set exists for this program containing no more than 20 points. However, if one follows the arguments given in this paper, one finds that actually the three points given in figure 5 suffice. This illustrates the point that we have actually been rather liberal in our counting, and usually a much smaller test set can be found than the limit stated in our main result.
$x = (a\ (b\ c)\ d)$, drawn as a CAR-CDR tree with leaves a, b, c, d and NIL.

$P_1, \ldots, P_8$ are predicates; $e_1, \ldots, e_8$ are straight line programs.

Figure 1

Figure 2

Figure 3 (labels: $u$; $\sigma$; operation not a power of $\sigma$; original program)

Figure 4

Figure 5
A BPM-Systems Architecture That Supports Dynamic and Collaborative Processes
Pascal Ravesteijn
*HU University of Applied Sciences, The Netherlands*
Martijn Zoet
*HU University of Applied Sciences, The Netherlands*
ABSTRACT
Business Process Management Systems (BPMSs) are increasingly implemented in and across organizations. However, the current combination of functionality, concepts and characteristics in BPMSs is very much based on an industrial-based view of the economy while western economies are rapidly moving towards an information and service economy in which the ratio of knowledge workers is rising dramatically. Compared to the ‘old’ type of worker the knowledge worker is typically highly educated, used to collaborating with other knowledge workers and less likely to be sensitive to a controlling style of management in the execution of his or her work. While many organizations are initiating business process improvement projects to improve their processes, this is done with BPM-systems that are based on an old paradigm and therefore unable to support dynamic and collaborative processes. In this paper we propose a new architecture for BPM-systems that include functionality to support knowledge workers in their dynamic and collaborative activities and processes.
INTRODUCTION
Lately, Business Process Management (BPM) and Service Oriented Architectures (SOAs) receive much attention from practitioners and scholars alike. Software vendors use the buzz and put new labels on new and existing software products; IT-consultancy companies extend their services with BPM and SOA consultancy and implementation. BPM and SOA are considered promising IS/IT strategies.
From the eighties and nineties, we identify two major business trends that seem to relate to BPM: Total Quality Management (TQM) and Business Process Reengineering (BPR) (Deming 1982, Hammer and Champy 1993). In the same period there was a rise in the implementation and use of new types of information systems like Enterprise Resource Planning (ERP) systems, Workflow Management (WFM) systems, advanced planning systems and more. What started as the automation of a company's internal processes soon focused on digitization of supply chains (Davis and Spekman 2003). Among others, the Internet and associated network standardization made this possible. Since the year 2000 all these trends seem to converge into new types of information systems that some (Smith & Fingar, 2003) call Business Process Management Systems (BPMSs). A BPMS can be defined as "a generic software system that is driven by explicit process designs to enact and manage operational business processes" (Weske et al., 2004). Aalst et al. (2003) find that Business Process Management includes methods,
techniques, and tools to support the design, enactment, management, and analysis of business processes. In this way it can be considered as an extension of classical Workflow Management (WfM) systems and approaches. In these definitions BPM clearly is based on the industrial-based view of the economy in which activities and processes are clearly defined and standardized as much as possible. Based on the current status of many BPMSs it is possible to conclude that a BPMS solution needs to be able to analyse and model processes within and across organizational boundaries, execute the modelled processes, measure their performance and use this as an input to optimization. This in essence means that support of processes by a BPMS starts in design-time.
However in the past century, there has been a shift from the agricultural- and industrial-based economy to a more service- and knowledge-based economy (Takala, Suwansaranyu & Phusavat, 2006). This has led to a dramatic increase in the proportion of knowledge workers in the workforce. The first author who refers to the term knowledge workers is Drucker (1959). He defined knowledge workers as "workers that work with intangible resources". Besides the definition of Drucker, there are more authors that refer to knowledge workers. An example is the definition of Bennet (2003): "knowledge workers are individuals whose work effort is centered around creating, using, sharing and applying knowledge". In 1994 Drucker rephrased his definition of knowledge workers as: "high level employees who apply theoretical and analytical knowledge, acquired through formal education, to developing new products or services". In other words, knowledge work is human mental work performed to generate useful information and knowledge (Davis, 2002).
Based on the above it can be stated that the nature of knowledge work is more complex than the type of work that was typical to the industrial age and therefore also more difficult to manage and control.
Although knowledge work has been an important topic in both practice and science, many organizations are still focusing on creating more efficient business processes by trying to automate tasks, activities and processes with BPM-systems based on the old paradigm. However, as Fingar (2006) stated: "Processes don't do work, people do". Today the missing link in many process improvement initiatives is more attention for the role of knowledge workers within processes, resulting in a task-technology misfit (Goodhue & Thompson, 1995). A clear case for more awareness of the way that knowledge work is carried out is made by Harrison-Broninski (2005) in his seminal work 'Human Interactions: The Heart and Soul of Business Process Management'. In this book Harrison-Broninski states that organizations should be actively engaged in managing the collaboration between knowledge workers within and outside of the organization. The term that he uses for this is Human Interaction Management (HIM). However, because almost all of the BPM-systems on the market today don't offer functionality to support HIM, many organizations are not able to manage, support and control the collaboration between knowledge workers. Therefore in this paper we answer the following research question: What functionality should be added to BPM-systems to support knowledge workers in their dynamic and collaborative activities and processes?
RESEARCH APPROACH
At the start of this research we looked at different types of research approaches as described in the literature. This was done to determine which activities should be undertaken to be able to answer our research question. First we looked at analytic theories that analyze 'what is'. "These theories are the most basic type of theory. They describe or classify specific dimensions or characteristics of individuals, groups, situations, or events by summarizing the commonalities found in discrete observations" (Fawcett & Downs, 1986; Gregor, 2006). The 'analysis and description' theory could be applicable because we want to describe the phenomena of knowledge workers who collaborate and whose actions cannot be supported by the current BPM-systems offering. But because our research goes beyond analysis and description and also explains how and why BPMS does not cover the needed functionality, this research could also be labelled as 'theory for explaining' (Gregor, 2006). Finally we also present a preliminary overview of functionality needed to support collaborative work. In other words we state how to do something, and that is part of the 'theory for design and action'. This type of theory is about methods and justificatory theoretical knowledge that are used in the development of information systems (Gregor 2002a; Gregor & Jones, 2004; Walls et al., 1992). Hevner et al. (2004) in their seminal work on design science state that the design-science paradigm seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artefacts which are then validated by applying them in practice. Because we are not planning to immediately apply our findings in practice we only partially adhere to design science research.
Based on the literature analysis we decided that our research will be based on two major activities. First a literature study is done to explain why the BPM-systems that are currently on the market are not capable of supporting collaborative work. This is done by describing the architecture of existing BPM systems (section 3) and the task characteristics of work executed by knowledge workers (section 4). The second part of the paper consists of describing how interaction between knowledge workers could be supported (by information systems) in such a way that organizations get more in control (section 5), and of a market survey of existing information systems that have the potential to decrease the task-technology gap for knowledge workers (section 6). This is needed to be compliant with governance regulations but also gives business the opportunity to increase productivity of their employees and the organizational processes. Finally we end this paper with conclusions and further research suggestions.
CURRENT BPM-SYSTEMS ARCHITECTURE
Organizations that want to actively engage in managing collaboration between knowledge workers need to create an (or adjust their) organizational design that is able to support knowledge workers in a proper manner. The scientific discipline within the information systems domain that focuses on designing organizations is enterprise architecture (Robinson & Gout, 2007). Enterprise architecture describes in a systematic way the structure of an organization from various perspectives. Perspectives that can be distinguished are (Robinson & Gout, 2007): activity architecture, information architecture, data architecture, software architecture and technical architecture. The first view elaborates on the activities and processes of an organization whereas the information architecture describes the information required and generated during the execution of the activities. Supporting the activities, process and information gathering are the software and data architecture; the latter storing the data in such a manner that it can be used by
the software, information and activity architecture. An overview of the technical solution making all of this possible is shown in the technical architecture.
A BPM-system is a collection of information system technologies to improve the efficiency, effectiveness and governance of business processes (Shaw, 2007). Information systems in this perspective are defined as the combination of the software-, data- and technical architecture. Analysis and research with respect to current, and to be developed, BPM-System Reference Architecture can be conducted in two ways: single system architecture analysis or reference architecture analysis (Yourdon, 1989; Rumbaugh et al., 1991; Kazman et al., 1993). Scholars have defined preferable ways for conducting research with regards to both situations. Single system functionality is primarily analyzed by object oriented or structured analysis of the actual system while reference architectures are often the result of a domain analysis (Kazman et al., 1993). In this paper the focus is on reference architectures therefore domain analysis is the preferred way of conducting research leading to a reference architecture which supports knowledge workers. The domain analysis executed adheres to Arango’s (1988) methodology by first studying existing BPM-system reference architectures after which the bottlenecks/gaps and the sources of these gaps are recognized. The last step is to identify which of the existing architecture can be reused and which additional architecture is needed to close the identified gaps. Providing structure and internal validity the technology-to-performance chain defined by Goodhue and Thompson (1995) is used as a method for analyzing bottlenecks. Reviewing current literature on BPM-systems architecture leads to the identification of three focus areas: service oriented architectures (Baina et al., 2003; Costa et al., 2004, Brahe, 2007), specific process architectures (Anzbock & Dustdar, 2004; Danial & Ward, 2006) and BPMS reference architectures (WFMC, 1999; Glabbeek & Stork, Sheer, & Nuttgens, 2000; Shaw et al., 2007; Weske, 2007).
Service Oriented Architecture (hence SOA) is an overall architecture approach which has not been specifically designed for BPM-systems. It advocates the use of small and reusable information system elements such that software applications can be deployed and maintained in a more agile and flexible manner (Brahe, 2007; Weske, 2007). Research conducted around SOA within the BPM field focuses on making processes flexible and agile and on bridging the gap between BPM technology and service oriented architecture with the use of service composition (Weske, 2007). As SOA is an overall architecture approach which in the BPM domain mainly focuses on the technical architecture layer, it is left out of the scope of the domain analysis. Also out of scope of this review is literature focusing on the technical architecture of business processes for specific domains. Examples of such literature are Anzbock and Dustdar (2004), who describe an architecture for modelling medical e-services, Maanmar (2006), who focuses on a technical architecture for mobile devices, and Danial and Ward (2006), who elaborate on an architecture for e-government solutions.
The last, and with regards to the domain analysis most important, category is literature discussing overall BPM-systems reference frameworks. According to Shaw et al. (2007) there is a limited amount of research available that in a sophisticated manner analyzes BPM-systems reference architectures; the authors concur with this. In the same paper Shaw et al. (2007) propose a BPM-systems reference framework: the BPMS pyramid architecture. Consisting of twelve different building blocks, the framework indicates three different components within a BPM-system. Layer one represents the top of the pyramid (one building block): the enactable
process model. An enactable process model is a model that is designed in a specific language which allows it to be executed by a BPM-system (Warboys et al., 1999). Layers two and three both represent a specific part of the BPM-system namely the logic underlying the process model (five building blocks) and the information system support (six building blocks). The five building blocks representing the logic of the process model describes the formal model, the modelling language used, the modelling grammar, the abstraction level and the real world subjects modelled. Additionally the information system pillar describes the software and technical infrastructure needed to model and execute the business processes.
Based on a knowledge management view of business processes, Jung et al. (2007) propose a reference framework consisting of six elements. The six elements of the architecture are based on the lifecycle phases of a business process (model): creation, modelling, pre analysis, enactment, post analysis and evolution. Data created and/or modified in one of the components is stored in one of three repositories which represent the central part of the architecture. Repository one, see figure 1, stores the information with regards to the actual process model; examples are creation date, author, goal, and version but also the roles, flow, activities and gateways drawn within the process. Actual execution data of a specific process model, e.g. participants, data, throughput time and resources used, is stored in the instance knowledge repository. Additional information about the execution of a specific process retrieved from users is stored in the knowledge repository. Generating information about the process models must happen in a chronological order, meaning that before the enactment part of the architecture can execute a business process it must be modelled such that the repository contains template information.
Figure 1: Knowledge of an enacted business process model (Jung et al., 2007).
Components of process template knowledge:

| Structural elements | Detail elements |
| --- | --- |
| Basic Process Elements | Process Header Information |
| | Composing Activities, Flow & Condition |
| | Participant Related Data Resource |
| | Static Analysis & Simulation |
| | Parameter/Result Evaluation Information |
A third general reference framework is proposed by the Workflow Management Coalition (hence WFMC), which consists of five components: process definition tools, workflow engine, administration and monitoring tools, workflow client applications and invoked applications (WFMC, 2010). Orchestrating the communication between the four components, the workflow engine is the central part of the architecture. It receives the modelled processes from the definition tools, after which it uses the client applications and other workflow engines to monitor and exchange activities. The workflow engine can also invoke third party applications such as business rules engines (WFMC, 2010).
The three reference frameworks discussed, as well as the specific process architectures examined (Anzbock & Dustda, 2004; Danial & Ward, 2006; Maanmar, 2006), have a common denominator in their architecture: an enactable business process model. As stated before, an enactable process model is a business process modelled in a specific language such that it can be executed by a BPM-system (Warboys et al., 1999). To create enactable process models, knowledge is needed about various aspects of the process such as flow, activities, roles etc., see figure 1. When knowledge workers execute a process, many elements of this information are not known upfront, for example which activities are executed, the flow in which they are executed and who will participate. The question thus is: "Can the current reference architectures function without the enacted process models?" For the analysed architectures the answer to this question is no. None of the architectures will function properly without the enacted model. This unfolds the main bottleneck, and cause for task-technology misfit, with current BPM-system reference architectures and their support of work executed by knowledge workers: the architectures are not able to support the ad-hoc activities and therefore processes in which knowledge work is performed. An additional but similar bottleneck is that all architectures assume that the applications used are known upfront.
**BUSINESS PROCESSES AND KNOWLEDGE WORKERS**
The previous section elaborated on existing BPM-system reference architectures and identified the main bottleneck regarding the support of knowledge workers: the use of enacted process models. According to Goodhue and Thompson (1995) and Goodhue, Klein and Salavatore (2000), bottlenecks regarding the use of information systems can be classified into two categories, namely task and technology characteristics. Both characteristics together measure the task-technology fit. This in turn influences the utilization of information systems and the performance of the organization. This section elaborates on the current task-technology misfit by explaining the kind of tasks executed by knowledge workers in comparison to non-knowledge workers, identifying three task characteristics that cause task-technology misfit with current BPM architectures.
*A Business Process ≠ A Business Process*
Within scientific and professional literature many different definitions of business processes exist (Davenport & Short, 1990; Hammer & Champy, 1994; Jeston & Nellis, 2006; Weske, 2007).
Despite the many differences in the definitions used, four characteristics reappear in all of them: (1) the execution of task(s), (2) in a certain sequence, (3) to reach a certain goal and (4) thereby creating value. Depending on the author(s), one or multiple elements are either defined very loosely (Jeston & Nellis, 2006) or very strictly (Bulletpoint, 1996). If every process consists of the execution of tasks in a certain sequence to reach a goal delivering value, what is/are the characteristic(s) that distinguish a traditional process from a dynamic process?
The characteristic separating traditional business processes from dynamic processes is value creation; more specifically, the manner in which value creation is realized. Based on the old paradigm of managing business processes, value is delivered by creating more efficient and effective processes by automating and reordering tasks and creating interlinked chains of processes (Davenport & Short, 1990; Hammer & Champy, 1994; Stabell & Fjledstad, 1998). Additional value realized by this approach is consistency of products/services delivered to customers. To achieve this manner of value creation, organizations create business processes which are translated to enacted models used by BPM systems to execute and monitor the process (Hammer & Champy, 1994; Kettinger et al., 1996; Jeston & Nellis, 2007). The possibility of creating enacted business process models is achieved by the fact that the information about the execution of individual tasks, the sequence of tasks, the goals and perceived value is already known before the process is executed. Davenport (2005) indicated that this information was available for 70 percent of the processes in 1920. By 1980 this information was available for only 30/40 percent of the processes (Takala et al., 2006). Although no specific numbers are available, it is estimated that currently this information is only available for 20 percent of the processes executed in organisations (Fingar, 2006). For the remaining 80 percent of the processes, organizations are not able to produce enough information to create an enacted business process model upfront. These processes are executed by knowledge workers who have to make decisions about the activities to execute, in which order, which resources to use and, very importantly, with whom to collaborate to achieve the most value (Gregerman, 1981; Stabell et al., 1998; Glomseth et al., 2007; Chan, 2009). Examples of processes and occupations with these characteristics are developing new products and services, designing marketing programs, creating strategies, law, engineering, architecture and research (Stabell et al., 1998).
If knowledge workers decide upon the activities that they are going to execute and which resources to use themselves, does this then mean that we can say nothing about the execution of the process? From the paradigm of traditional business processes we cannot, but from the paradigm of value shops, knowledge management and interaction management, insights can be given into the process knowledge workers use to solve challenges/issues. Five high level iterative steps can be distinguished in this process, namely problem-finding and acquisition, problem-solving, choice, execution, and control and evaluation (Stabell et al., 1998; Harrison-Broninski, 2005; Glomseth et al., 2007). During the first step the problem is formulated and overall approaches to solve the problem are formulated. After the overall approach has been formulated, alternative solutions are evaluated; from the solutions an actual choice is made which is executed. The last step is to measure and evaluate the solution implemented and, if needed, go back to problem finding. The activities executed during the five steps are not predefined and the intensity of a step depends on the actual case to be solved. The same applies to the resources used in the different steps (Stabell et al., 1998; Harrison-Broninski, 2005; Glomseth et al., 2007). To illustrate this, imagine a complex medical case in which the patient already has been misdiagnosed and the right diagnosis has not yet been established. In this specific case a medical
specialist is consulted who takes over the case (Abbott, 1988). The specialist looks at the charts, orders additional blood tests (traditional 'standard' processes) and consults with colleagues about the best approach. After the solutions have been proposed, a choice is made about the actual treatment. After the treatment has started, the patient's condition gets worse and the medical specialist starts consulting more colleagues, but also his colleagues start consulting other colleagues, starting the process of problem formulation again. The cycle will stay iterative till the patient receives a treatment that cures him.
**Characteristics of collaboration between knowledge workers**
The previous paragraph described the difference between the old paradigm (hence value chain) and new paradigm (hence dynamic processes). This paragraph will elaborate on the characteristics causing task-technology misfit of tasks executed by knowledge workers supported by current BPM architectures. This misfit can be attributed to the following characteristics: communication, kind of knowledge, optionality and modality.
Communication is defined as the activity of expressing information (to people). Within value chains communication is initiated by the BPM-system; the receiving parties in this case are the employees that have to execute the tasks assigned to them by the system (Weske, 2007). Although sometimes communication between employees is possible and maybe necessary, the act of communication is still initiated and structured by the BPM-system based on the process model. Communication in dynamic processes is initiated by the knowledge workers executing the process. The information systems used to facilitate the act of communication are of secondary importance (Stabell et al., 1998; McDermott, 1999; Harrison-Broninski, 2005). Whereas communication between BPM-systems and employees in a value chain is about procedures and work routines, communication between knowledge workers has additional functions. During communication between knowledge workers, unwritten work routines, personal tools, stories and wisdom about cause-effect relationships are exchanged, thereby facilitating the creation of new knowledge which can be used to solve work related issues (McDermott, 1999). Communication and working with other knowledge workers therefore improves the performance of the individual worker and eventually the team (Gregerman, 1981; McDermott, 1999). From a business process management view it is desirable to capture the (electronic) communication between knowledge workers with regards to a specific case (a story). Reasons for this are the development of best practices, compliance and management/governance of business processes.
Explicit versus tacit knowledge is the second characteristic that differs between the two types of business processes. Within the knowledge management community this distinction is very familiar and many papers discuss the difference and codification of the two types (McDermott, 1999; Wegner & Snyder, 2000; Binney, 2001). Traditional BPM-system architectures are designed to use and manage explicit knowledge by codifying the information into enacted process models. Dynamic processes on the other hand rely far more on tacit knowledge and therefore cannot be codified upfront (Lytle & Coulson, 2009; Burkhard, Horan, & Leih, 2009). An architecture dealing with processes that mainly consist of human interaction needs to be able to codify real-time information related to the process executed, e.g. documents, time stamps, email traffic, communication, internal and external employees involved (Këpuska et al., 2008).
The last distinction between value chain and dynamic processes is the optionality and modality of system use (Binney, 2001). BPM-systems supporting value chains do not provide employees with a choice of which software to use when executing a task. In addition they also have limited options available for presenting information to the employees. With regards to dynamic processes, the modality and optionality in choice of information representation and system use increases. Knowledge workers often have a preferred way of working and of data and information presentation (Binney, 2001; McDermott, 1999). This leads to the use of personal tools and information representations, thereby decreasing the predictability of software use. A typical example of this is a knowledge worker who gets sales data from a central system, copies it to an Excel file, runs the numbers and sends the sales forecast to the management.
Due to the combination of changing task characteristics and the steady state of supporting information technology (BPM systems), a task-technology misfit has emerged. In the remainder of this paper a solution is proposed to realign the two sets of characteristics by proposing a new BPM systems architecture.
**STORIES AND THE HUMAN COLLABORATION BUS**
So far we have described how organizations and their environment are rapidly changing and that the old industrial era paradigms are becoming less able to support, manage and control the activities and processes of companies. As a consequence the attention for process orientation has grown considerably in the last decade, and also the market for software companies offering information systems to analyze, model, execute and control processes is maturing quickly. However even these concepts are still very much based on the notion of being able to determine upfront which tasks, roles and processes are needed in an organization. In this view workers are still little more than part of an engineered system without a free will and with no room for their own interpretations and adaptation of the tasks they are assigned to do. This however will not be tolerated by a growing highly educated workforce that sees work no longer as just a means to pay for the bills but also as part of their way of living, their social environment and thus their identity. Moreover also managers realize that to attain agility in their organizations, employees should be more empowered to work in a more flexible manner without ‘old’ organizational structures and hierarchies hindering the work. In short, the number of knowledge workers is rapidly rising and the way in which they work is totally different and no longer restricted to the boundaries of their company.
To support this new way of working in a manner that realizes both a higher effectiveness of knowledge workers and keeps the organization in control, we propose to add extra functionality to (or on top of) the current business process management systems architecture as described in section 3. Central to the added functionality is the concept of storytelling. Our lives are filled with stories. As a kid we grew up in a world of stories, whether they were out of books or our own (make-believe) stories, and as grown-ups we are constantly part of stories that we also try to capture and record. For example, who doesn't have family albums filled with pictures of life events such as births, weddings, birthdays, Christmas, Thanksgiving etc. And while sometimes we can't choose our stories (such as our family), we often actively create our stories. For instance holidays are planned well in advance and everybody knows their role in the story and its final goal. So while stories are very normal in everyday life, this all of a sudden seems to end when we work, because then we are part of a process that is designed and controlled based on an
engineering perspective. However putting stories in the middle of our concept to support knowledge workers who engage in their dynamic collaborative processes (see Figure 2), helps us to understand various notions (Loggen, 2009, p. 44) such as:
- The story in which knowledge workers participate usually has goals and when met, the story ends (or the story is abandoned earlier).
- Knowledge workers each play certain roles while collaborating and in these roles they interact in various ways and perform activities to develop the story (and reach the goals).
- There are rules (and if people don’t play by the rules a quick reaction can be expected).
- There is power - somebody controls the roles assignments and the evolution of the story.
- Communication within the story has a specific context with a specific language, where specific terms are related to specific concepts. However this communication, and thus the story, can be harshly broken by other emergent events (the financial crisis all of a sudden broke a lot of the rules in business financing and thereby disrupted a lot of collaborations in networked organizations, thus changing the patterns of many stories).
**Figure 2: The concept of story in relation to collaborative processes.**
As can be seen in Figure 2 there are a lot of aspects surrounding our story concept. Not only does a story have objectives that need to be reached by the people that are participating and which are set in a specific context, there also has to be a lead character or group of lead characters and during the story information is used but also created. There are many different ways of supporting a real life collaboration story between knowledge workers but the most important part of this new paradigm is that organizations can no longer push the technologies that are to be used
in these dynamic processes. Even if the collaboration is part of a project within one organisation, knowledge workers will want to use the means that they are comfortable with and that they also use in other stories. This concept of modality (see section 4) means that a large part of the story may be enacted in online environments like Facebook, Google docs, LinkedIn, the Process Factory, Zimbra, Jive, and Zoho, while for information that is part of a specific organization ERP or BPM system could be used together with Microsoft office and different legacy systems. All these different systems need to be able to interact and support the story and at the same time there should be some type of controlling method that enforces the rules of the story, creates a history for auditing and governance purposes, that stores the context of the story and the general storyline. For this control method we propose the concept of the Human Collaboration Bus (HCB) as depicted in Figure 3.
Figure 3: The Human Collaboration Bus concept.
The HCB should not be seen as another software application but as a concept that contains technologies that will be different depending on the story that is told. The only constant in the HCB is the story repository. The story repository is the central storage of all stories that have been told, are told and will be told. Preferably third parties will offer a story repository in the Cloud that can be used by any organization or person that has a role in a specific story (and also by other providers of story repositories when different stories connect and interact); however, a single organization or a network of organizations could also provide a private story repository in support of their knowledge workers collaborating in dynamic processes.
The HCB is central to the integration of all technology and semantic communication between all participants in a story. As we explained, participants in a collaborative story typically will use different tools in communicating and will also typically communicate in terms that are specific to their context (educational level, work domain, country etc.); the HCB connects the tools used and stores the communication and context. An HCB can also (re)use information from systems such as ERP, CRM and others if the story so requires. Depending on the situation the HCB concept can be an add-on to a BPM-system, but it can also be provided separate from it, for instance in the Cloud by a third party. However the HCB will only give full added value if functionality offered by BPM-systems can be used; this is because BPM-systems give access to the structured
processes which will almost always have a role in a story. Also it is practical to reuse functionality that BPM-systems contain to integrate legacy systems, realize orchestration and choreography, monitoring and control, enforce rules etc. Just keep in mind that the flexibility of the collaboration is paramount and that using a BPM-system should not lead to efforts to structure and control the story in design time.
TOOL EVALUATION
The functionality that we envisioned in the last paragraph for the HCB does not yet exist (as far as the researchers know). However, existing software solutions may already offer part of this functionality. To determine whether this is the case we performed a scan of available software in the domains of Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Workflow Management (WfM) / Business Process Management, Project Management, and Collaboration tools. These are all software packages that might already offer functionality that is part of our HCB concept.
For the market scan we designed a five step research approach which consisted of the following steps:
1. The construction of a long-list of possible software solutions that might contain parts of HCB functionality; this was done by studying professional literature, websites (of suppliers & consultancy firms), blogs on collaboration, and short interviews with two Capgemini consultants that specialized in collaboration processes. The result of these activities was a list of 54 software packages (the complete list is available upon request to the authors).
2. Based on the Human Collaboration Bus concept as described in the last paragraph a detailed overview of characteristics of collaboration among knowledge workers and supporting IT functionality was developed and used as input for the construction of a survey. The survey questions were then validated by the consultants that were also involved in step 1.
3. The developed survey (consisting of yes/no questions) was sent to all 54 suppliers on the long list. If no response was received, or if the returned surveys were missing information, we contacted the suppliers with the request to participate or to deliver the missing information. As some suppliers chose not to participate, they were left out of the next steps of our research. Furthermore, we decided not to include suppliers whose solutions did not have at least 50% of the characteristics / functionality mentioned in the survey. This reduced the long list to 16 possible software solutions.
4. For the remaining 16 solutions a more detailed study was performed of the supported characteristics and offered functionality. Each supplier was asked to rate the characteristics / functionality in their software on a scale of 1 to 4 (bad, lacking, sufficient, good). Each package was rated on 31 items divided into four categories (the first three measuring characteristics of collaboration among knowledge workers and the fourth looking at specific software functionality), labelled: collaboration, work processes, management of work, and software functions. Based on the responses we calculated a score for each of the 16 suppliers.
5. The 10 highest scoring solutions from step 4 were then studied in more detail. For this we tried to obtain a trial version of the software to perform live testing. The test consisted of letting bachelor students use the software in their collaborations as part of projects for different courses. At the end of their project we had them report their experiences. Although this last step did provide us with interesting information, we decided that the final top 10 should be based on the more objective scores calculated in step 4 rather than on the more subjective input of the students' experiences.
Based on the market scan we found the following 10 software solutions that in part provide HCB functionality (between brackets the final calculated score is stated, the complete list of characteristics & the scores are available upon request to the authors):
1. Cordys Process Factory (119)
2. Action Base (116)
3. Zoho (109)
4. JIVE (109)
5. eGroupWare (102)
6. Above IT – Zimbra (101)
7. Contact Office (98)
8. HumanEdj (96)
9. Instant Business Network (95)
10. Group Office (93)
Although the software packages in this top 10 provide some of the functionality needed to support knowledge workers in collaborative processes, none provide all of the functions needed. In conclusion, this market scan has shown that there are still many opportunities for software companies to develop new functionality in support of human interaction management.
CONCLUSIONS AND FURTHER RESEARCH
In this paper we have shown that organizations that want to increase the productivity of their knowledge workers and make collaboration more effective and efficient need to change the way they support, manage and control these types of processes. The current industrial paradigm, in which processes and their control mechanisms are structured at design time, is giving way to a new paradigm coined Human Interaction Management, in which humans and their interactions are central.
To support this new paradigm we propose the concepts of storytelling and the Human Collaboration Bus (HCB). Stories are central to our everyday way of life and consist of (lead) characters, roles, rules and goals, which all play a part in a specific context during a certain amount of time. To manage and control the knowledge workers that are embedded in collaborative stories we created the concept of the HCB, which provides a story repository that stores all the characteristics of a specific story (including interactions between stories), offers functionality for interaction between different systems as part of human interactions, and manages the dynamic processes. Ideally the HCB concept is offered via the Cloud by independent third parties, but closed solutions are also possible.
The concepts proposed in this paper are based on conceptual research and have not yet been tested in practice. As the market scan showed, no single tool offered full functionality to support knowledge workers in collaborative processes. However, some software companies are showing promising visions in the way they are developing their software. Future research could therefore consist of combining functionality from different offerings to create full support for collaborative processes of knowledge workers. The resulting functionality could then be used in different research projects within our university domain to further test and validate the Human Collaboration Bus concept.
Commodifying Replicated State Machines with OpenReplica
Deniz Altınbüken, Emin Gün Sirer
Computer Science Department, Cornell University
{deniz,egs}@cs.cornell.edu
Draft: Not for Redistribution
Abstract
This paper describes OpenReplica, an open service that provides replication and synchronization support for large-scale distributed systems. OpenReplica is designed to commodify Paxos replicated state machines by providing infrastructure for their construction, deployment and maintenance. OpenReplica is based on a novel Paxos replicated state machine implementation that employs an object-oriented approach in which the system actively creates and maintains live replicas for user-provided objects. Clients access these replicated objects transparently as if they are local objects. OpenReplica supports complex distributed synchronization constructs through a multi-return mechanism that enables the replicated objects to control the execution flow of their clients, in essence providing blocking and non-blocking method invocations that can be used to implement richer synchronization constructs. Further, it supports elasticity requirements of cloud deployments by enabling any number of servers to be replaced dynamically. A rack-aware placement manager places replicas on nodes that are unlikely to fail together. Experiments with the system show that the latencies associated with replication are comparable to ZooKeeper, and that the system scales well.
1 Introduction
Developing distributed systems is a difficult task, in part because distributed systems comprise components that can and do fail and in part because these distributed components often need to take coordinated action through failures. A typical distributed application maintains state that needs to be replicated and distributed, as well as actively executing threads of control whose behavior needs to be controlled. We term the former process replication and the latter synchronization; together they are known as coordination. The recent emergence of the cloud as a mainstream commercial deployment environment has amplified the need for coordination services, infrastructure software that provides a replication and synchronization framework for distributed applications.
Building coordination services is a difficult task. ZooKeeper [17] and Chubby [4] have recently emerged as the predominant coordination services for large-scale distributed systems. While these two systems differ in the underlying consensus protocol they employ, they both provide the same basic mechanism; namely, ordered updates to a replicated file, with optional callbacks on updates. This approach suffers from three shortcomings. First, a file-based API requires an application to convert its replicated state into a serialized form suitable for storing in a file. Consequently, the serialized form of data stored in the filesystem typically differs from the programmatic view of an object as seen by the developer. Bridging this disconnect requires either costly serialization and deserialization operations or a level of indirection, where the file serves as a membership service to convert between the two views. Second, these services require an application to express its communication and synchronization behavior using an upcall-based API. In essence, these coordination services provide a publish-subscribe system. Consequently, applications need to be rewritten to subscribe to the appropriate upcalls that match their synchronization needs. Further, because these upcall events separate control-flow from data-flow, event handlers typically have to perform expensive additional operations to reestablish the event context, such as reading from the replicated state to determine the type of modification. Finally and most importantly, configuring and maintaining the resulting distributed application is operationally challenging. For example, maintaining replica sets to prevent service degradation often requires manual intervention to spawn new replicas and changes to update configurations. Since the ZooKeeper atomic broadcast protocol does not support dynamic updates to replica sets, migrating services requires a system restart.
In this paper, we present a novel object-oriented, self-configuring and self-maintaining coordination service for large-scale distributed systems, called OpenReplica. OpenReplica is a public web service that instantiates and maintains user-specific coordination instances easily. OpenReplica operates on user-provided objects that define state machines, which it transforms into fault-tolerant replicated state machines (RSMs) [26, 43]. These objects are used to maintain replication and synchronization data and can be crafted to meet an application’s coordination needs. The system maintains a set of live replicas that can provide instant failover and employs consensus to keep the replicas in synchrony as the state of the replicated objects changes through method invocations. OpenReplica can deploy replicated state machines.
The OpenReplica implementation is based on the Paxos protocol [28, 29] for consensus and uses a novel combination of Paxos features to achieve higher performance and dynamicity. To maximize availability and provide flexibility in replica management, we support dynamic view changes where the acceptor and replica sets can be modified at run time. To achieve high performance, we employ a version of the Paxos protocol based on a light-weight implementation built on asynchronous events. And to provide integration with existing naming infrastructure and to enable clients to be directed to up-to-date replicas, OpenReplica implements name servers which can provide authoritative DNS name service as well as optional integration with Amazon’s Route 53 [3].
OpenReplica has been used to build and deploy several fault-tolerant services, including a fault-tolerant logging service, a reliable group membership tracker, a reliable data store, a configuration service as well as reliable data structures, such as binary search trees, red black trees, queues, stacks, linked lists, and synchronization objects, such as distributed locks, semaphores, barriers and condition variables. These implementations show that OpenReplica enables programmers to build and deploy non-trivial fault-tolerant services simply by implementing a local object. The amount of engineering effort that went into these applications is substantially lower than specialized, monolithic systems built around Paxos agreement.
Overall, this paper makes three contributions. First, it presents the OpenReplica service for the construction, deployment and maintenance of Paxos-based replicated state machines (RSMs). Second, it describes the OpenReplica implementation for building practical Paxos RSMs which support high-throughput, dynamic view changes, fault-tolerance through rack-aware replica placement, client synchronization control through a multi-return primitive, and DNS integration. Finally, it compares the performance of OpenReplica to ZooKeeper, known for its high performance implementation, and demonstrates that the system achieves low latency during regular operation, quick recovery in response to failures, and high scalability in the size of the replicated state. Specifically, OpenReplica outperforms ZooKeeper by 15% on latency for 5 replicas and exhibits comparable failure recovery times.
The rest of this paper is structured as follows. Section 2 outlines the OpenReplica approach to replication for general-purpose objects. Section 3 describes the implementation of the system. Section 4 evaluates the performance of the system and provides a comparison to ZooKeeper. Section 5 places OpenReplica in the context of past work on coordination services and Section 6 summarizes our contributions.
2 Approach
OpenReplica is a public web service that instantiates and maintains application-specific coordination services. This section describes the design outline and rationale for the OpenReplica approach to providing coordination services in large-scale distributed systems.
The goals of OpenReplica are as follows:
- **Easy-to-use**: Defining, implementing, deploying and maintaining replicated state machines should be straightforward, even for non-expert programmers.
- **Transparent**: Replication and fault-tolerance techniques must not require disruptive changes to application logic. Rendering parts of an existing application fault-tolerant should not require extensive changes to the code base.
- **Dynamic**: It should be possible to change the location and number of replicas at run-time. The correct operation of the system should not depend on the liveness of statically designated clients.
- **High-Performance**: The resulting fault-tolerant system should exhibit performance that is comparable to state-of-the-art coordination services.
The underlying coordination infrastructure used by OpenReplica tackles these goals with an object-oriented approach centered around a *coordination object* abstraction. A coordination object consists of requisite data and associated methods operating on that data, which, together, define a state machine capable of stopping and restarting client executions. In essence, a coordination object specifies the application functionality to be made fault-tolerant, as well as defining the fault-tolerant synchronization mechanisms required to control the execution of an application. A user defines a coordination object as if it were a local Python object, hands it to OpenReplica, which then instantiates replicas on a set of servers and creates a distributed and fault-tolerant coordination object.
OpenReplica ensures that the coordination object replicas remain in synchrony by using the Paxos protocol to agree on the order of method invocations. There has been much work on employing the Paxos protocol to achieve fault-tolerance in specific settings [34, 36, 4, 32, 10, 20, 1], in which Paxos was monolithically integrated into a specific, static API offered by the system. In contrast, OpenReplica is a general-purpose, open service that enables any object to be made fault-tolerant. We illustrate the overall structure of a coordination object instance in Figure 1 and discuss each component in turn.
**Binary Rewriting**: OpenReplica uses binary rewriting to ensure *single-object semantics*; that is, users have the illusion of a single object both when specifying and invoking a coordination object. The implementation uses binary rewriting on the server side to generate a networked object suitable for replication from an object specification. This process involves the generation of server-side stubs and a control loop that translates local method invocations into Paxos consensus rounds. OpenReplica also uses binary rewriting to generate a client proxy object whose interface is identical to the original object. Underneath the covers, the client proxy translates method invocations into client requests, which comprise a unique client request id, method name, and arguments for invocation. The proxy marshals client requests and sends them to one of the replicas, discovered through a DNS lookup. Depending on the responses returned from the replica, the proxy is also capable of suspending and resuming the execution of the calling thread, thereby enabling a coordination object to control the execution of its callers.
**RSM Synchrony**: OpenReplica uses Paxos to ensure that the coordination object replicas are kept in synchrony. The central task of any RSM protocol is to ensure that all the replicas observe the same sequence of actions. OpenReplica retains this sequence in a data structure called *command history*. The command history consists of numbered *slots* containing client requests, corresponding to method invocations, along with their associated client request id, return value, and a valid bit indicating whether the operation has been executed. To ensure that operations are executed at most once, a replica checks the command history upon receiving a client request and, if the operation has already been executed, responds with the previously computed output. If the request has been assigned to a slot in the command history (i.e., a previous Paxos round has decided on a slot number for that request), but has not been executed yet, it records the client connection over which the output will be returned when the operation is ultimately executed. These two checks ensure that every method invocation will execute at most once, even in the presence of client retransmissions and failures of the previous replicas that the client may have contacted. If the client request does not appear in the command history, the receiving replica locates the earliest unassigned slot and proposes the operation for execution in that slot. This proposal takes place over a Paxos consensus round, which will either uncover that there was an overriding proposal for that slot suggested previously by a different replica (which will, in turn, defer the client request to a later slot in the command history and start the process again), or it will have its proposal accepted. These consensus rounds are independent and concurrent; failures of replicas may lead to unassigned slots, which get assigned by subsequent rounds. Once a client request is assigned to a slot by a replica, that replica can propagate the assignment to other replicas and execute the operation locally as soon as all preceding slots have been decided. The replica then sends the return value back to the client. Note that, while the propagation to other replicas occurs in the background, there is no danger of losing the agreed-upon slot number assignment, as the Paxos protocol implicitly stores this decision in a quorum of acceptor nodes at the time the proposal is accepted. For the same reason, OpenReplica does not require the object state to be written to disk. As long as there are fewer than a threshold of $f$ failures in the system, the state of the object will be preserved.
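As a rough illustration of the command-history bookkeeping described above, the sketch below shows numbered slots carrying a request id, a result, and a valid bit, together with the at-most-once lookup and the search for the earliest unassigned slot; the class and method names are ours and do not reflect OpenReplica's actual code.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Slot:
    request_id: str            # unique client request id
    method: str
    args: tuple
    executed: bool = False     # "valid bit": has the operation run?
    result: Any = None

class CommandHistory:
    """Numbered slots holding client requests in agreed order."""
    def __init__(self):
        self.slots: dict[int, Slot] = {}

    def lookup(self, request_id: str) -> Optional[Slot]:
        """At-most-once check: find a request already in the history."""
        for slot in self.slots.values():
            if slot.request_id == request_id:
                return slot
        return None

    def next_unassigned_slot(self) -> int:
        """Earliest slot number not yet assigned by a Paxos round."""
        n = 1
        while n in self.slots:
            n += 1
        return n
```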
**Dynamic Membership:** Because coordination objects are fault-tolerant and long-lived, the system supports repositioning of the replicas dynamically during execution. Consequently, the connection between the client proxy and the replicas is established not by a static configuration file but by a DNS lookup. DNS servers that participate in the agreement protocol track the membership of nodes in the replica set, and can thus respond to DNS queries with an up-to-date list of replicas. The names of the DNS servers in the parent-level DNS service are also updated whenever DNS servers come online, thus ensuring that the replica set can be located through standard DNS resolvers. OpenReplica uses a fault-tolerant DNS service coordination object to keep track of the DNS servers for the many coordination instances that are created for client coordination objects.
The end result of this organization is that the clients can treat the set of replicas as if they implement a single object. OpenReplica extends traditional Paxos RSM implementations with a novel multi-return mechanism to support two kinds of objects: synchronous and rendezvous objects.
**Synchronous Objects**
A synchronous object is a coordination object that encapsulates replicated state and provides associated methods to update this state, which do not change the execution state of their callers. Synchronous object methods execute to completion and return a result without suspending the caller.
Since synchronous objects are by far the most common type of object in distributed systems, OpenReplica makes it particularly easy to define and invoke them. Figure 2 shows a sample coordination object implementation for an online payment service. The Account class defines synchronous objects that hold a user’s current account balance. The account has an identifier (an account number) and a balance, modified through debit and deposit methods. In effect, the object encloses the critical state that needs to be made fault-tolerant, and defines a state machine whose legal transitions are determined by the amount of money in the account. OpenReplica ensures that these operations are invoked in a consistent, totally-ordered manner.
What is noteworthy about this implementation is that it includes no replication-specific code. Neither the server-side object specification nor the user of the client proxy needs to be aware that the object is replicated and fault-tolerant. Single-object semantics ensure that sound clients can be written in a straightforward way, with only some additional exception handling for the cases where a partition or network failure results in a network timeout. In contrast, performing the same task with a file-based API in ZooKeeper or Chubby would require handling connections, serializing/deserializing persistent state, or perhaps using these systems to determine the membership of a set of live nodes which in turn manually implement an RSM.
```python
class Account():
    def __init__(self, acctnumber, initbalance):
        self.number = acctnumber
        self.balance = initbalance

    def debit(self, amount):
        # Debit succeeds only if the balance covers the requested amount.
        if self.balance >= amount:
            self.balance = self.balance - amount
            return True
        else:
            return False

    def deposit(self, amount):
        self.balance = self.balance + amount
        return True
```

Figure 2: Coordination objects do not include OpenReplica-specific code; they are implemented as if they were local objects.
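To illustrate the single-object calling pattern and the extra exception handling mentioned above, here is a hypothetical client-side usage; AccountProxy and OpenReplicaError are stand-ins we define locally so the example runs on its own, since the real proxy and exception types are generated by OpenReplica.

```python
# Hypothetical stand-ins: the real proxy and exception are generated by
# OpenReplica; here we fake them locally so the calling pattern can be shown.
class OpenReplicaError(Exception):
    pass

class AccountProxy:
    def __init__(self, domain):
        self.domain, self.balance = domain, 0   # local fake, no replication
    def deposit(self, amount):
        self.balance += amount
        return True
    def debit(self, amount):
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

try:
    account = AccountProxy("bank.openr.org")    # replicas found via DNS in reality
    account.deposit(100)
    if not account.debit(40):
        print("insufficient funds")
except OpenReplicaError:
    # a partition or network failure surfaces as an exception at the proxy
    print("coordination object unreachable, retry later")
```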
The decision to maintain live instances and keep them in synchrony through agreement on the command history represents critical design tradeoffs. The advantage of agreeing on command history instead of object state is that it can support any object, even those that may contain active components, such as threads, and performs well even for large objects that may be too costly to serialize, such as large files. The downsides of this approach are two-fold: the command history can grow over time, a topic we address in the next section with garbage collection, and non-deterministic operations in methods may cause replica divergence if left unchecked, a topic we address in the next section through language mechanisms.
**Rendezvous Objects**
Making a distributed system fault tolerant typically requires synchronizing the activities of distributed components. OpenReplica accomplishes this with a novel multi-return primitive, supported for a class of objects dubbed rendezvous objects. Specifically, whereas synchronous objects support methods that simply execute to completion and return, rendezvous objects may block the calling client until further notice and resume it at a later point. Normally, clients of a replicated state machine perform synchronous method invocations, where every method invocation gets assigned to a slot in the replicated state machine history through consensus, and a result is returned to the client when the execution completes. In cases where the replicated state machine is used to synchronize clients, the method invocation may need to block the client and return as the result of a method invocation by another client. Note that this is not the same as the RSM itself blocking, though on casual observation, the two effects may seem the same. When the client is explicitly blocked, the RSM itself is free to take additional state transitions, prompted by operations issued by other clients. In contrast, when the RSM is blocked, it ceases to make progress and cannot uphold liveness requirements. OpenReplica avoids such blocking by enabling rendezvous objects to suspend and resume their calling clients.
The multi-return primitive greatly simplifies the implementation of objects used for synchronization. Figure 3 shows the implementation of a semaphore object in OpenReplica. The implementation follows a conventional semaphore implementation line by line. It keeps a count, a wait queue and an atomic lock, and blocks or unblocks clients depending on the count value.
OpenReplica uses an extension over the underlying consensus protocol, where return values may be deferred. When a rendezvous object blocks its caller, a second bit in the command history is used to indicate that the calling client has been deferred. Later, any other command can cause previously deferred method calls to be resumed. Upon completion, these calls may yield actual returned values, which are returned to the client at a later time. The command history always records the time at which a call was deferred, as well as the later call that resumed the deferred method invocation. As a result, each method invocation in OpenReplica has associated with it not only its own results, but also the results of computations it resumed as a side-effect during its execution.
This enables a replica, replaying the object history, to make the same set of synchronization-related decisions as other replicas. On the client side, the intention of the RSM to block the client is communicated by an exception carried in the first response packet, which instructs the client proxy to block the calling thread on a local condition variable. A future, asynchronous response message for the same client request unblocks the thread and yields the result carried in the second response. Consequently, users can implement synchronization objects following conventional blocking constructs.
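Figure 3 itself is not reproduced here; as a sketch of how such a rendezvous object might look, the following reconstructs a semaphore under the assumption of hypothetical BlockingReturn and UnblockingReturn exceptions that stand in for OpenReplica's multi-return signalling.

```python
class BlockingReturn(Exception):
    """Hypothetical: tells the RSM to defer (block) the calling client."""

class UnblockingReturn(Exception):
    """Hypothetical: carries a return value plus the clients to resume."""
    def __init__(self, value, unblocked_clients):
        self.value = value
        self.unblocked = unblocked_clients

class Semaphore:
    def __init__(self, count=1):
        self.count = count
        self.queue = []            # deferred callers, in arrival order

    def acquire(self, caller):
        if self.count > 0:
            self.count -= 1
            return True
        self.queue.append(caller)
        raise BlockingReturn()     # caller's proxy suspends the thread

    def release(self, caller):
        if self.queue:
            woken = self.queue.pop(0)
            raise UnblockingReturn(True, [woken])  # resume one waiter
        self.count += 1
        return True
```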
In contrast, systems with file-based APIs require esoteric, upcall-based implementations for synchronization control. For instance, a comparable barrier implementation is three times as long in ZooKeeper as its OpenReplica counterpart, requires intimate understanding of znodes and watchers, and has almost no code in common with textbook barrier implementations [17].
### 3 Implementation
Implementing a public, open web service for general purpose replication and coordination necessitates numerous design decisions on how to layer RSMs on top of the core Paxos protocol and how to maintain multiple instances of this distributed system. We present these implementation details below, focusing on the design rationale.
3.1 Paxos
Paxos is used to achieve consensus among the replicas on the order in which client requests will be executed. By providing ordering guarantees, Paxos ensures that the replicated state machine behaves like a single remote state machine. Following the concise and lightweight multi-decree Paxos implementation described in [45], OpenReplica assigns each client-initiated request to a unique slot and communicates this assignment to the various nodes in the system.
OpenReplica employs two sets of nodes, replicas and acceptors. In OpenReplica, replicas keep a live copy of the replicated object, receive requests from clients, start a consensus round for each request and execute operations on the replicated state in the agreed upon order. Acceptors constitute the quorum keeping the consensus history, in effect providing memory for past decisions. Acceptors communicate solely with replicas and record the proposed client command and the highest Paxos ballot number they have seen for each slot. Consequently, replicas can use the acceptors to determine past history of proposals for each slot number, and to recover and resume past proposals in cases where they were only partially completed. At any time, a replica can retrieve the history of operations from acceptors to synchronize with other replicas. In the presence of special conditions like dynamic view changes, garbage collection, and non-deterministic inputs, the behaviors of replicas and acceptors are managed through additional mechanisms implemented upon the underlying RSM.
3.2 Meta commands
OpenReplica implements an internal control mechanism based on meta commands for managing replicas. Meta commands are special commands recognized by OpenReplica replicas that pertain to the configuration state of the replicated state machine as opposed to the state of the user-defined object. Meta commands are generated within OpenReplica and guaranteed to be executed at the same logical time and under the same configuration in every replica. This timing guarantee is required as the underlying protocol typically has many outstanding requests being handled simultaneously, and a change in the configuration would affect later operations that are being decided. For instance, a change in the set of Paxos acceptors would impact all ongoing consensus instances for all outstanding slots, and therefore needs to be performed in synchrony on all replicas.
To guarantee consistency through configuration changes, OpenReplica employs a window to define the number of non-executed operations a replica can have at any given time. To guarantee that meta commands are executed on the same configuration in every replica, the execution of a meta command is delayed by a window after it is assigned to a slot. For example, assume an OpenReplica setting where the window size is \( \omega \) and the last operation executed by a replica is at slot \( \alpha \). Here, no other replica can initiate a consensus round for slots beyond \( \alpha + \omega \). To initiate a consensus round for the next slot, a replica has to wait until after the execution of slot \( \alpha + 1 \). Therefore, when the command at \( \alpha + \omega \) is executed, all replicas are guaranteed to have executed all meta commands through \( \alpha \). Hence, by delaying the execution of meta commands by \( \omega \), consistency of the Paxos related state can be maintained through a dynamic configuration change [31].
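A small numeric sketch of the window rule, with an assumed window size: a meta command decided at slot $\alpha$ only takes effect once slot $\alpha + \omega$ is executed, so every replica applies the configuration change at the same logical point.

```python
WINDOW = 10  # omega: maximum number of non-executed operations outstanding

def effective_slot(assigned_slot: int, window: int = WINDOW) -> int:
    """Slot at which a meta command assigned to `assigned_slot` takes effect."""
    return assigned_slot + window

# Example: a meta command decided for slot 42 changes the acceptor set only
# when slot 52 is executed; any replica that reaches slot 52 has already
# executed every command through slot 42, including all earlier meta commands,
# so all replicas apply the change under the same configuration.
assert effective_slot(42) == 52
```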
3.3 Dynamic Views
Long-lived servers are expected to survive countless network and node failures. To do so effectively, the system has to provide sufficient flexibility to move every component at runtime. OpenReplica achieves this by using meta commands to change the replica, acceptor and name server sets. Over time, an OpenReplica object may completely change the set of servers in its configuration, though adjacent configurations can modify at most \( f \) nodes because the state transfer mechanism used during view changes may temporarily keep new nodes from fully participating in the protocol. To maintain consensus history, new acceptors need to acquire past ballot number and command tuples from a majority of old acceptors for each past round. New acceptors transfer these ballots in the background until they have reached the current ballot. A meta command can then be issued to add the acceptor to the configuration, though there exists a window during which the acceptor may fall behind. Newly added acceptors ensure that they do not participate in the protocol until they have caught up by having heard from a majority of old acceptors for each past ballot. The acceptor addition mechanism suffers from a window of vulnerability during a configuration change where a newly added node consumes one of the \( f \) failure slots; past work has developed techniques for masking this window [32], though we have not yet implemented this technique due to its complexity. Replicas are easier to bring up, as any fresh replica will iterate through slot numbers, learn previously assigned commands by proposing NOOPs for each slot (whereupon the acceptors will notify the replica of previous assignments), and transition through states until it catches up. To speed this process up, the OpenReplica implementation allows a replica to fetch the command history from another replica en masse. The previous mechanism is then used to fill any gaps that could arise when the source replica is out of date.
Dynamic view changes in our system can be initiated externally, by a system administrator manually issuing commands, or internally, by a replica or the OpenReplica coordinator that detects a failure. In either case, the initiator typically brings up a nascent node, instructs it to acquire its state, and then submits a meta command to replace the suspected-dead node with the nascent one. To have the view change take effect quickly, independent of the rate of operations organically sent to the coordination object from clients, the initiator invokes $\omega$ NOOP operations.

Figure 4: Example of failure groups in a data center. Failure groups define sets of nodes whose failure depends strictly on the failure of one component. A data center outage defines $f_0$, failures of two top-of-rack switches define $f_{1,2}$, failures of two cooling units define $f_{3,4}$, and each machine failure defines $f_{10-21}$. OpenReplica uses the list of failure groups that a host belongs to as an input to the greedy rack-aware replica placement algorithm.
### 3.4 Rack-Aware Replica Placement
The fault-tolerance of a distributed system is affected immensely when multiple servers fail simultaneously. These kinds of failures can happen if servers share crucial components such as power supplies, cooling units, switches and racks. Common points of failures define failure groups wherein a single failure would affect multiple servers. For instance Figure 4 illustrates possible failure groups in a data center by highlighting node sets that will be affected by the failure of a power distribution unit, a top-of-rack switch, a cooling unit and a machine. To prevent concurrent, non-independent failures, replica placement should be performed judiciously, minimizing the number of servers in the same failure group, and thus, subject to simultaneous failures due to the same root cause.
OpenReplica supports replica placement that takes failure scenarios into account, a feature that is sometimes called rack-awareness. During object instantiation, a user specifies the candidate set of hosts on which she can deploy replicas, along with a specification of their failure groups. Shown in Figure 4, a failure group specification is a free-form tuple that associates, with each server, the set of events that could lead to its failure. For instance, host $h_7$ shares common failure points $f_0$ and $f_3$ with $h_1$. OpenReplica places no limit on the number of failure groups, and is agnostic about the semantic meaning of each $f_i$. In this example, $f_0$ corresponds to a failure of a PDU that affects both racks shown in the figure, while $f_3$ corresponds to a cooling unit failure.
OpenReplica picks replicas using a greedy approach that achieves high fault tolerance. In particular, when picking a new host for a replica, acceptor or name server, it picks the host that maximizes the number of differences from the piecewise union of all existing hosts’ failure groups. This greedy approach will not necessarily yield the minimally-sized replica group for tolerating a given level of failure, an open problem that has been tackled, in part, by other work [21]. Since OpenReplica deployments are not extensive, and since there exists a fundamental trade-off between optimality and query time which argues for avoiding exhaustive search [39], our greedy approach performs an assignment within 3 s for a data center with 80,000 hosts, and we later show that it achieves fault tolerance that exceeds that of random placement.
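The following is a minimal sketch of the greedy rule described above: repeatedly choose the host whose failure groups contribute the most elements not yet covered by the union of the already chosen hosts' groups. The host-to-group mapping is a made-up example in the spirit of Figure 4, and the function is our own illustration rather than OpenReplica's placement code.

```python
def greedy_placement(candidates: dict[str, set[str]], k: int) -> list[str]:
    """Pick k hosts, maximizing newly covered failure groups at each step.

    `candidates` maps a host name to the set of failure groups it belongs to,
    e.g. {"h1": {"f0", "f1", "f3", "f10"}, ...}.
    """
    chosen: list[str] = []
    covered: set[str] = set()            # union of chosen hosts' groups
    remaining = dict(candidates)
    while remaining and len(chosen) < k:
        # host contributing the largest number of not-yet-covered groups
        host = max(remaining, key=lambda h: len(remaining[h] - covered))
        chosen.append(host)
        covered |= remaining.pop(host)
    return chosen

# Made-up hosts and failure groups for illustration only.
hosts = {
    "h1": {"f0", "f1", "f3", "f10"},
    "h4": {"f0", "f1", "f4", "f13"},
    "h7": {"f0", "f2", "f3", "f16"},
    "h10": {"f0", "f2", "f4", "f19"},
}
print(greedy_placement(hosts, 3))  # spreads picks across switches and cooling units
```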
### 3.5 DNS Integration
In an environment where the set of nodes implementing a fault-tolerant object can change at any time, locating the replica set can be a challenge. To help direct clients to the most up-to-date set of replicas, OpenReplica implements special nodes called name server nodes. Name server nodes are involved in the underlying Paxos protocol just like replicas, but they maintain no live object and perform no object operations. They solely track meta commands to update the set of live nodes and receive and handle DNS queries.
To support integration with DNS, coordination instances can be assigned a DNS domain, such as bank.openr.org, at initialization. On boot, the name server nodes register their IP address and assigned domain with the DNS name servers for their parent domain. Thereafter, the parent domain designates them as authoritative name servers for their subdomain and directs queries accordingly.
Name server nodes also support integration with Amazon Route 53 [3] to enable users to run stand-alone coordination instances without requiring the assistance of a parent domain. To run OpenReplica integrated with Amazon Route 53, users set up a Route 53 account that is ready to receive requests and supply the related credentials to OpenReplica. From this point on, the name server nodes track meta commands that affect the view of the system and update the Route 53 account automatically.

DNS integration enables clients to initialize their connection to an RSM through a DNS lookup. After the connection is initialized, subsequent method invocations are submitted over the same connection as long as it does not fail. When the connection fails, the client proxy performs a new DNS lookup and initializes a new connection transparently. In this way, view changes that require new connections to be established are masked by the client proxy. Short timeouts on DNS responses ensure that clients do not cache stale DNS results.
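A sketch of the client-side behavior implied above, under assumed names and an assumed port number: resolve the instance's domain, then try the returned replica addresses in turn until one connection succeeds; on a later connection failure the same routine would simply be called again.

```python
import random
import socket

def connect_to_replica(domain: str, port: int = 14000) -> socket.socket:
    """Resolve the replica set via DNS and connect to one member.

    The port number is an assumption for illustration; short DNS TTLs keep
    the resolved address list close to the current view of the replica set.
    """
    _, _, addresses = socket.gethostbyname_ex(domain)
    random.shuffle(addresses)
    last_error = None
    for addr in addresses:
        try:
            return socket.create_connection((addr, port), timeout=2.0)
        except OSError as err:
            last_error = err          # try the next replica in the view
    raise ConnectionError(f"no replica reachable for {domain}") from last_error
```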
3.6 Proxy Generation
OpenReplica clients interact with a coordination object through a client proxy or the provided web interface. Both of these methods use the client proxy, which is automatically generated by OpenReplica through Python reflection and binary rewriting. OpenReplica parses the coordination object, creates the corresponding abstract syntax tree and, following the original structure of the tree, generates a specialized proxy that performs the appropriate argument marshalling and unmarshalling as well as execution blocking and unblocking where needed. OpenReplica also attaches a security token to every proxy to prevent unauthorized method invocations on the replicated object, which is generated with the same token.

Clients can use a client proxy with very little modification compared to the invocation of a local object. Due to the replicated nature of the coordination object, the client proxy might throw additional OpenReplica exceptions. The client has to surround such method invocations with an exception handler to catch and address these exceptions, which relate to network errors such as a partitioned network.
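OpenReplica generates specialized proxies from the object's abstract syntax tree; as a rough approximation of the result, the sketch below uses a generic dynamic proxy that marshals each method call into a request carrying a unique request id, the security token, the method name and its arguments. It only illustrates the idea and is not the generated code.

```python
import uuid

class ClientProxy:
    """Dynamic stand-in for a generated OpenReplica client proxy."""
    def __init__(self, transport, security_token):
        self._transport = transport        # any object with a send(dict) method
        self._token = security_token

    def __getattr__(self, method_name):
        def invoke(*args, **kwargs):
            request = {
                "request_id": str(uuid.uuid4()),   # unique per invocation
                "token": self._token,
                "method": method_name,
                "args": args,
                "kwargs": kwargs,
            }
            reply = self._transport.send(request)  # marshalled to a replica
            return reply["result"]
        return invoke
```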
3.7 Non-deterministic Operations and Side-Effects
During remote method invocations, non-deterministic operations might result in different states on each replica. OpenReplica deals with such operations by performing a Paxos agreement on function calls that might yield different results on different replicas. To enable this behavior, operations with non-deterministic results are detected with a blacklist and, if one of these operations is performed during a method invocation, the replica starts a new meta command that includes the state resulting from the non-deterministic operation. When this meta command is executed, the instructions following the non-deterministic operation are executed using the state retrieved from the meta command. This ensures that all replicas observe the same non-deterministic choices.

In our current prototype, we identified method invocations in the time, random and socket modules as sources of non-deterministic results, along with dictionary and set operations. In the Python runtime, dictionaries are implemented as hash tables and sets as open-addressing hash tables; consequently, inserting and removing items can change their iteration order. OpenReplica detects method invocations that make use of these components and simply sorts them to establish a canonical order. Applications wishing to avoid the sort overhead can use their own deterministic data structures.
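A small illustration of the canonical-ordering fix: sorting before iterating makes the observed order identical on every replica, regardless of the sequence of insertions and removals that produced the container. The helper names are ours.

```python
def canonical_items(mapping: dict) -> list[tuple]:
    """Deterministic iteration order for a dict, independent of build order."""
    return sorted(mapping.items())

def canonical_members(s: set) -> list:
    """Deterministic iteration order for a set."""
    return sorted(s)

# Two replicas that built the same set through different insert/remove
# sequences still iterate over identical lists:
assert canonical_members({3, 1, 2}) == canonical_members({2, 3, 1}) == [1, 2, 3]
```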
3.8 Inconsistent Invocations
By default, every method invocation in OpenReplica provides strong consistency. Its slot location in the execution history is the result of an agreement protocol, and its execution is determined by the globally-agreed slot assignment. Because no replica executes a command unless it has seen the entire prefix of commands, the results are guaranteed to be consistent.

But there are certain application-specific instances where this level of consistency is not necessary. When an application needs high performance and can handle inconsistent results, it is possible to provide a best-effort response with drastically lower overhead. For example, a bank account normally requires fully consistent updates, but a user profiler that wants to determine the user's approximate net worth need not go through the full expense of an RSM transaction. To support these cases, OpenReplica provides a low-overhead call for inconsistent method invocation. Such inconsistent calls are performed on any one of the replicas, and it is up to the application programmer to ensure that they execute with no side-effects, as they may be executed at different times on different replicas. OpenReplica does not invoke agreement for such calls, does not record them in object history, and load-balances them uniformly across the set of replicas for performance. As with the consistency relaxation in ZooKeeper, inconsistent invocations in OpenReplica have the potential to provide a significant boost in performance.
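A sketch of the inconsistent ("fast-read") path, with illustrative names: the call bypasses agreement, is not recorded in the command history, and is load-balanced across the replicas, so it must be free of side effects.

```python
import random

def inconsistent_invoke(replicas, method, *args):
    """Best-effort invocation on any single replica, bypassing Paxos agreement.

    `replicas` is a list of local replica objects (or client stubs). The call
    is not ordered with respect to consistent invocations, so the invoked
    method must be side-effect free.
    """
    replica = random.choice(replicas)          # uniform load balancing
    return getattr(replica, method)(*args)
```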
3.9 Garbage Collection
Any long-running system based on agreement on a shared history will need to occasionally prune its history in order to avoid running out of memory. In particular, acceptor nodes in OpenReplica keep a record of completely- and partially-decided commands that needs to be compacted periodically. The key to this compaction is the observation that a prefix of history that has been seen by all acceptors and executed by all replicas can be elided safely and replaced with a snapshot of the object. OpenReplica accomplishes this in two main steps. First, a replica takes a snapshot of the coordination object every $\tau$ commands, and issues a meta command to garbage collect the state up to this snapshot. This consensus round on a meta command serves three purposes: the garbage collection command is stored in the acceptor nodes; the acceptors detect the meta command and acquiesce only if they themselves have all the ballots for all preceding slot numbers; and finally, the meta command ensures that at the time of its execution all the replicas will have the same state. Later, when the meta command is executed, a garbage collection command is sent to acceptor nodes along with the snapshot of the object at that point in time. Upon receiving this message, the acceptors can safely replace a slot with the snapshot of the object and delete old ballot information. This way, during a failover, the new leader will be able to simply resurrect the object state after $n\tau$ operations, instead of having to apply as many state transitions.
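A condensed sketch of the garbage-collection cycle, with assumed names and an assumed snapshot interval: after every $\tau$ executed commands the replica snapshots the live object and issues a meta command asking acceptors to replace the elided prefix with that snapshot.

```python
import copy

TAU = 1000  # assumed snapshot interval, in executed commands

class Replica:
    def __init__(self, obj, issue_meta_command):
        self.obj = obj                                # live coordination object
        self.executed = 0
        self.issue_meta_command = issue_meta_command  # assumed hook into the RSM

    def after_execute(self, slot_number):
        """Called after each command is executed in slot order."""
        self.executed += 1
        if self.executed % TAU == 0:
            snapshot = copy.deepcopy(self.obj)
            # Acceptors that hold all ballots up to slot_number replace that
            # prefix with the snapshot and drop the old ballot information.
            self.issue_meta_command("garbage_collect", slot_number, snapshot)
```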
4 Evaluation
We have performed a detailed evaluation of OpenReplica’s performance and compared it to ZooKeeper, a widely used, state-of-the-art coordination service. In this section we present the sizes of the coordination objects used to implement reliable data structures, synchronization primitives and custom coordination objects; these implementations follow exactly from their local, centralized versions, except for the Blocking and Unblocking Returns required for synchronization primitives.

Our experiments reflect end-to-end measurements from clients and include the full overhead of going over the network. As a result, the latency numbers we present are not comparable to numbers presented in most past work, which has tended to report performance metrics collected on the same host. The inputs to clients are generated beforehand and the same inputs are used for the OpenReplica and ZooKeeper tests.

Our evaluation is performed on a cluster of eleven servers. Each server has two Intel Xeon E5420 processors with 4 cores and a clock speed of 2.5 GHz, 16 GB of RAM, and a 500 GB SATA 3.0 Gbit/s hard disk operating at 7200 RPM. All servers run 64-bit Fedora 10 with the Linux 2.6.27 kernel. We spread clients, replicas and acceptors over these 11 servers.
4.1 Implementation Size
The OpenReplica approach results in a great simplification in the implementation of reliable and fault-tolerant coordination objects, including reliable data structures and synchronization primitives. To illustrate, Table 1 presents the sizes of some reliable data structures, synchronization primitives and generalized coordination objects implemented to work with OpenReplica. While implementing these coordination objects, no OpenReplica-specific code has been used except for the Blocking and Unblocking Return exceptions required to implement the multi-return mechanism of synchronization primitives. Consequently, implementing a reliable and fault-tolerant data structure, synchronization primitive or coordination object suitable to be used in a distributed system is reduced to implementing a centralized, local version of it.
4.2 Latency
Next, we examine the latency of consistent requests in OpenReplica and in ZooKeeper. For this experiment, we used a synchronous client that invokes methods from the Account object of Section 2, and collected end-to-end latency measurements from the clients. To be able to examine the latency related to the underlying protocol, we used only 128 bytes of replicated state, keeping the serialization and deserialization cost to a minimum for ZooKeeper.
Figure 5 plots the latency of requests against the number of replicas and acceptors in OpenReplica and the number of replicas in ZooKeeper. OpenReplica and ZooKeeper present comparable latency results. OpenReplica shows lower latency for smaller numbers of replicas and differs from ZooKeeper by 0.5 ms on average for larger numbers of replicas. Another behavior we examine in this graph is the high standard deviation present in the ZooKeeper measurements. While OpenReplica requests are handled with a consistent average latency, there is a large variance in the latency measurements for ZooKeeper.
The CDFs for OpenReplica and ZooKeeper latency show this variance in more detail (Figure 6). For clarity, the plots are cut off at the maximum latency value they present. The CDF of OpenReplica shows an even distribution of latency values, ranging from less than 1 ms to 8 ms. ZooKeeper, on the other hand, has a number of requests whose latency exceeds 10 ms, even though these measurements were taken while the services were in a stable state with no failures.
4.3 Scalability
We examine the scalability of OpenReplica in relation to the size of the replicated state and the number of replicas and acceptors. Figure 7 shows how ZooKeeper and OpenReplica scale as the size of the replicated state grows. Because ZooKeeper has to serialize and deserialize data and perform read and write operations on every update to the replicated state, the overhead of these operations increases as the replicated state grows larger. OpenReplica, on the other hand, does not require serialization, deserialization or re-instantiation, as every object is kept as a live instance. Consequently, the size of the replicated state does not affect the latency experienced by clients in OpenReplica, which provides the same performance for any replicated state size.
A critical parameter in any fault-tolerant system is the amount of fault-tolerance the system offers. Figure 8 shows how OpenReplica scales as the number of replicas and acceptors increases, that is, as the fault-tolerance of the system is improved considerably. The graph shows that OpenReplica performance scales well, even with very large numbers of replicas and acceptors.
4.4 Throughput
OpenReplica achieves a sustained throughput of 327 ops/s with 5 replicas and 5 acceptors; this measurement includes all network overhead. For comparison, ZooKeeper achieves 1872 ops/s in the same setting. The difference is due to an unoptimized OpenReplica implementation in Python, and a highly optimized ZooKeeper implementation that employs batching to improve throughput. This is consistent with the latency measurements for the two systems, where OpenReplica outperforms ZooKeeper because the batching optimizations are not effective for the latency experiment.
OpenReplica implements a fast-read operation that provides high throughput, but inconsistent, method invocations. Fast-read operations do not require agreement, are handled by any replica in the system and are not saved in the command history. ZooKeeper provides a similar read relaxation primitive. Figure 9 examines the throughput of OpenReplica with inconsistent method invocations with 5 replicas. The experiment shows that the throughput scales with increasing numbers of clients. This is not surprising, as inconsistent reads enable OpenReplica to avoid agreement overhead entirely and distribute the request stream among all the replicas.
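The sketch below illustrates the idea behind fast reads with a hypothetical client-side proxy; the class and method names are invented for this illustration and do not correspond to OpenReplica’s actual client API.

```python
import random

class FastReadProxy:
    """Hypothetical illustration only, not OpenReplica's real client API.
    Consistent invocations go through the agreement path; fast reads are
    answered by a single, arbitrary replica and may return stale data."""

    def __init__(self, replicas, leader):
        self.replicas = replicas
        self.leader = leader

    def invoke(self, method_name, *args):
        # Consistent path: the command is ordered and recorded by the RSM.
        return self.leader.submit_command(method_name, args)

    def fast_read(self, method_name, *args):
        # Inconsistent path: no agreement, no entry in the command history.
        replica = random.choice(self.replicas)
        return replica.call_local(method_name, args)
```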
4.5 Fault Tolerance
Another important performance measure for a fault-tolerant system is how fast the system can recover from the failure of a server, specifically from the failure of the leader. Figure 10 shows how OpenReplica and ZooKeeper handle failures of leaders. In this benchmark, two leaders fail, at the 250th and 500th requests, respectively. OpenReplica takes on average 2.75 seconds to recover, whereas ZooKeeper takes on average 2 seconds. The recovery performance of OpenReplica depends heavily on the state that has to be transferred from the acceptors, as a new leader needs to collect all past state from the acceptors; this constitutes the dominant cost of a failover. This overhead, in turn, is determined by the frequency of garbage collection performed in the system; it does not grow with the length of time the system has been running.
4.6 Rack-Aware Placement
OpenReplica can recover from replica failures as long as no more than $f$ replicas, out of $2f+1$, fail concurrently. To prevent this type of catastrophic failure, OpenReplica supports placing replica, acceptor and name server nodes in a rack-aware manner. Figure 11 compares the performance of OpenReplica’s greedy replica placement strategy to that of random placement. It plots the number of catastrophic failures expected within a year for a system with 5 replicas instantiated on groups of 20 hosts suballocated within a data center with the failure groups shown in Figure 4. This suballocation strategy captures a realistic scenario that a developer at a large company might face when reserving dedicated nodes within a data center, where groups of nodes within a rack are allocated to a project from the larger data center. The probabilities for component failures were extracted from empirical studies [13, 14]. The figure shows that, in this setting, greedy placement achieves significant advantages compared to random placement.
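The following sketch illustrates the general idea behind greedy, rack-aware placement; it is not OpenReplica’s code, and the rack and host names are made up for the example.

```python
from collections import Counter

def greedy_rack_aware_placement(hosts_by_rack, num_replicas):
    """Illustrative sketch (not OpenReplica's actual algorithm): place each
    replica on the rack that currently hosts the fewest replicas, so that a
    single rack failure takes down as few replicas as possible."""
    placement = []          # chosen (rack, host) pairs
    per_rack = Counter()    # replicas already assigned per rack
    free = {rack: list(hosts) for rack, hosts in hosts_by_rack.items()}
    for _ in range(num_replicas):
        # pick the least-loaded rack that still has a free host
        rack = min((r for r in free if free[r]), key=lambda r: per_rack[r])
        host = free[rack].pop()
        per_rack[rack] += 1
        placement.append((rack, host))
    return placement

# Example: 5 replicas spread over 4 racks of 5 hosts each.
racks = {f"rack{i}": [f"rack{i}-host{j}" for j in range(5)] for i in range(4)}
print(greedy_rack_aware_placement(racks, 5))
```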
5 Related Work
OpenReplica is implemented to provide infrastructure services for distributed systems using the replicated state machine approach [26]. Originally described for fault-free environments, this approach was extended to handle fail-stop failures [42], a class of failures between fail-stop and Byzantine [25], and full Byzantine failures [27]. The seminal tutorial on the state machine approach outlined various implementation strategies for achieving agreement and order requirements [43].
There has been much work examining strategies for achieving the agreement and order requirements in a replicated state machine. In particular, the Paxos Synod
protocol achieves consensus among replicas in an environment with crash failures [28, 29]. Subsequent work has concentrated on making the basic Paxos algorithm more efficient and dynamic [31, 30], two techniques employed by OpenReplica. Other work has concentrated on the practical aspects of implementing the basic algorithm [11, 7, 23, 2, 33]. There has also been some work on designing high performance protocols derived from Paxos [36].
Paxos replicated state machines have been used previously to provide the underlying infrastructure for systems such as consistent, replicated, migratable file systems. SMART [32] achieves high performance through parallelization, and supports dynamic membership changes and migration without a window of vulnerability. SMARTER [10] constructs a reliable storage system using Paxos RSMs that are carefully crafted to mask the latencies related to the RSM infrastructure, and allows restarts of the system by logging requests. OpenReplica is built to provide a service that enables users to implement such systems easily. While many systems use Paxos in a monolithic fashion to support a fixed API, OpenReplica is the first system to provide an open, general-purpose object replication service.
Another approach to achieving consistency in a distributed system relies on an atomic broadcast primitive [12]. ZooKeeper follows this approach and implements universal wait-free synchronization primitives [16], which can be used for leader-based atomic broadcast [37, 22]. There has also been similar work on protocols [47, 40] presenting optimistic and collision-fast atomic broadcast protocols, respectively.
Coordination of distributed applications is a long-standing problem and there has been a lot of work focusing on how to provide coordination services for distributed systems and data center environments. Early work examined how to use locks as the basis of coordination among distributed components [24, 18, 34]. More recently, Boxwood [34], designed especially for storage applications, provides reliable data structure abstractions supported directly by the storage infrastructure. Although Boxwood offers a rich set of data structures, its API is not extensible, making it a closed system.
Automatic data center management services have recently emerged to ease the task of managing large-scale distributed systems [15, 20, 1]. Autopilot [20] is a Paxos RSM that handles tasks, such as provisioning, deployment, monitoring and repair, automatically without operator intervention. Similarly, Centrifuge [1] is a lease manager, built on top of a Paxos RSM, that can be used to configure and partition requests among servers. Much like these infrastructure services, OpenReplica is designed to offer a manageable coordination infrastructure that allows programmers to offload complicated configuration and coordination services to an automatically maintained, fault-tolerant and available service.
Past work on toolkits for replication services examined how to build infrastructure services. The PRACTI [5] approach offers partial replication of state on different nodes, arbitrary consistency guarantees on the replicated data, and arbitrary messaging between replicas. Ursa [6] offers safety and liveness policies that provide different consistency levels for a replicated system, and provides mechanisms that define abstractions for storage, communication, and consistency. This enables Ursa to be used as infrastructure for higher-level replication systems. In contrast to such low-level services for the construction of replication systems, OpenReplica provides a higher abstraction that directly replicates user objects.
Past work has examined how to employ an object-oriented distributed programming (OODP) paradigm. Common Object Request Broker Architecture (CORBA) [44] provides an open standard for OODP, providing a mechanism to normalize method invocations among different platforms. There has been much work on building mechanisms for distributed systems using CORBA [19, 41] and extending CORBA to provide additional guarantees such as fault-tolerance [35]. A similar approach was used to build distributed objects that remain available and offer guarantees on the completion of operations in the presence of up to \( k \) failures [9]. This work was later used to provide a platform independent framework for fault-tolerance [46]. While OpenReplica shares the same object-oriented spirit as these early efforts, it differs fundamentally in every aspect of its implementation.
6 Conclusions
This paper presented OpenReplica, an object-oriented coordination service for large-scale distributed systems. OpenReplica proposes a novel approach to providing replication and synchronization in large-scale distributed systems. This approach is based around the abstraction of a coordination object; namely, an object that defines a replicated state machine that can block and resume the execution of its clients. Coordination objects not only support ordinary replication, but also enable complex distributed synchronization constructs and reliable data structures. Critically, OpenReplica renders the specification of such constructs straightforward and similar to their non-distributed counterparts.
In contrast with the file-based APIs of extant coordination services, OpenReplica’s object-based API represents a novel approach to replica management. Whereas previous systems provide low-level mechanisms that could be used in a large number of ways to implement replicated state machines, OpenReplica provides a high-level approach. These state machines are specified using regular Python objects. OpenReplica maintains a live instance of these coordination objects on every replica node and uses Paxos to guarantee strong consistency in the presence of crash failures. Moreover, OpenReplica implements additional mechanisms to guarantee the soundness of the replicated state in the presence of non-deterministic invocations and side effects. Evaluations show that OpenReplica provides performance, in terms of latency, scalability and failover, that is comparable to ZooKeeper, while providing additional features as well as a higher level of abstraction.
References
Contents

1 Getting started
2 Getting help
3 Contents
   3.1 Citing
   3.2 Known issues
   3.3 Changelog
   3.4 Installing
   3.5 Releases
   3.6 Theoretical background
   3.7 Using Tesseroids
   3.8 Cookbook
   3.9 License
Tesseroids, Release v1.2.1
A collection of command-line programs for modeling the gravitational potential, acceleration, and gradient tensor. Tesseroids supports models and computation grids in Cartesian and spherical coordinates.
Developed by Leonardo Uieda in cooperation with Carla Braitenberg.
Official site: http://tesseroids.leouieda.com
License: BSD 3-clause
Source code: https://github.com/leouieda/tesseroids
Note: Tesseroids is research software. Please consider citing it in your publications if you use it for your research.
Warning: See the list of known issues for things you should be aware of.
The geometric element used in the modeling processes is a spherical prism, also called a tesseroid. Tesseroids also contains programs for modeling using right rectangular prisms, both in Cartesian and spherical coordinates.

Fig. 1: View of a tesseroid (spherical prism) in a geocentric coordinate system. Original image (licensed CC-BY) at doi:10.6084/m9.figshare.1495521.
Getting started
Take a look at the examples in the *Cookbook*. They contain scripts that run *Tesseroids* and some Python code to plot the results.
If you’re the kind of person who likes to see the equations (who doesn’t?), see the *Theoretical background* and the references cited there.
For a more detailed description of the software, options, and conventions used, see the *usage instructions*.
Also, all programs accept the `-h` flag to print the instructions for using that particular program. For example:
```
$ tessgrd -h
Usage: tessgrd [PARAMS] [OPTIONS]
Make a regular grid of points.
All units either SI or degrees!
Output:
Printed to standard output (stdout) in the format:
lon1 lat1 height
lon2 lat1 height
... ... ...
lonNLON lat1 height
lon1 lat2 height
... ... ...
... ... ...
lonNLON latNLAT height
* Comments about the provenance of the data are inserted into
the top of the output
Parameters:
-b NLON/NLAT: Number of grid points in the
longitudinal and latitudinal directions.
-z HEIGHT: Height of the grid with respect to the
         mean Earth radius.
-h Print instructions.
--version Print version and license information.
Options:
-v Enable verbose printing to stderr.
-lFILENAME Print log messages to file FILENAME.

Part of the Tesseroids package.
Project site: <http://fatiando.org/software/tesseroids>
Report bugs at: <http://code.google.com/p/tesseroids/issues/list>
```
Getting help
Write an e-mail to Leonardo Uieda, or tweet, or Google Hangout. Even better, submit a bug report/feature request/question to the Github issue tracker.
Citing
Geophysics paper
To cite Tesseroids in publications, please use our paper published in *Geophysics*:
You can download a copy of the paper PDF and see all source code used in the paper at the Github repository. Please note that citing the paper is preferred over citing the previous conference proceedings.
If you’re a BibTeX user:
```latex
@article{uieda2016,
title = {Tesseroids: {{Forward}}-modeling gravitational fields in spherical coordinates},
author = {Uieda, L. and Barbosa, V. and Braitenberg, C.},
issn = {0016-8033},
doi = {10.1190/geo2015-0204.1},
url = {http://library.seg.org/doi/abs/10.1190/geo2015-0204.1},
journal = {GEOPHYSICS},
month = jul,
year = {2016},
pages = {F41--F48},
}
```
Source code
You can refer to individual versions of Tesseroids through their DOIs. However, please also cite the Geophysics paper.
For example, if you want to mention that you used the 1.1.1 version, you can go to the Releases page of the documentation and get the DOI link for that version. This link will not be broken, even if I move the site somewhere else.
You can also cite the specific version instead of just providing the link. If you click on the DOI link for 1.1.1, the Zenodo page will recommend that you cite it as:
Conference proceeding
The previous way to cite Tesseroids was a conference proceeding from the 2011 GOCE User Workshop:
Download a PDF version of the proceedings. You can also see the poster and source code at the Github repository.
Known issues
• Prism and tesseroid calculations are only valid outside of the mass elements. If you calculate on top or inside of the prism/tesseroid, there is no guarantee that the result will be correct.
• The gravity gradient components of tesseroids suffer from increased numerical error as the computation point gets closer to the tesseroid. It is not recommended to compute the effects at distances smaller than 1 km above the tesseroid.
Changelog
Changes in version 1.2.1
• Binaries for Windows 64bit are now available for download as well. (PR 28)
• Validate the order of boundaries for input tesseroids. Errors if boundaries are switched (e.g., W > E). (PR 27)
• Ignore tesseroids with zero volume from the input file (i.e., W == E, S == N, or top == bottom). These elements can cause crashes because of infinite loops during adaptive discretization. (PR 27)
Changes in version 1.2.0
• General improvements to the adaptive discretization (described in the upcoming method paper). (PR 21)
• Better error messages when there is a stack overflow (computation point too close to the tesseroid). (PR 21)
• Replace the recursive algorithm with a stack-based algorithm for adaptive discretization of tesseroids. This makes the computations faster, specially for gravity acceleration and gradient tensor components. (PR 21)
• Divide the tesseroids only along the necessary dimensions. This provides speedups when dealing with flattened or elongated tesseroids. (PR 21)
• Speedup tesseroid computations by moving some trigonometric functions out of loops. (PR 22)
• **BUG fix:** Singularities when calculating around a prism, due to the wrong quadrant returned by atan2 and log(0) evaluations. Fixed by wrapping atan2 in a safe_atan2 that corrects the result. The log(0) error happened only in cross components of the gravity gradient when the computation point is aligned with the vertices of a certain face (varies for each component). Fixed by displacing the point a small amount when that happens. (PR 12)
### Changes in version 1.1.1
• **BUG fix:** Wrong results when calculating fields below a prism in Cartesian coordinates (PR 1)
### Changes in version 1.1
• the Tesseroids license was changed from the GNU GPL to the more permissive BSD license (see the license text).
• tess2prism has a new flag –flatten to make the prism model by flattening the tesseroids (i.e., 1 degree = 111 km) into Cartesian coordinates (so that they can be used with the prismg* programs).
• tessg* programs have a new flag -t used to control the distance-size ratio for the automatic recursive division of tesseroids.
• **NEW PROGRAMS** prismpots, prismgs, and prismggts, to calculate the prism effects in spherical coordinates. These programs are compatible with the output of tess2prism (see this recipe for an example).
• **NEW PROGRAM** tesslayers to generate a tesseroid model of a stack of layers from grids of the thickness and density of each layer. tesslayers complements the functionality of tessmodgen and can be used to generate crustal models, sedimentary basin models, etc. (see this recipe for an example).
• Tesseroids now strictly follows the ANSI C standard.
• **Bug fix:** prismpot, prismgx, prismgy, prismgz, and prismgxy had problems with a log(z + r) when the computation point was below the top of the prism (zp > prism.z1). Fixed by calculating on top of the prism when this happens, then changing the sign of the result when needed (only for gz).
• **Bug fix:** the tessg and prismg families of programs were crashing when the model file is empty. Now they fail with an error message.
### Changes in version 1.0
Tesseroids 1.0 was completely re-coded in the C programming language and is much faster and more stable than the 0.3 release. Here is a list of new features:
• tesspot and tessg* programs now take the computation points as input, allowing for custom grids.
• tesspot and tessg* programs now automatically subdivide a tesseroid if needed to maintain GLQ precision (this makes computations up to 5x faster and safer).
• Automated model generation using program tessmodgen.
• Regular grid generation with program tessgrd.
• Total mass calculation with program tessmass.
• Programs to calculate the gravitational fields of right rectangular prisms in Cartesian coordinates.
• HTML User Manual and API Reference generated with Doxygen.
• Easy source code compilation with SCons.
Installing
We offer binaries for Windows (32 and 64 bit) and GNU/Linux (32 and 64 bit). You can download the latest version for your operating system from Github:
https://github.com/leouieda/tesseroids/releases/latest
Once downloaded, simply unpack the archive in the desired directory. The executables will be in the `bin` folder. For easier access to the programs, consider adding the `bin` folder to your `PATH` environment variable.
Tesseroids is permanently archived in Zenodo. Each release is stored (source code and binaries) and given a DOI. The DOIs, source code, and compiled binaries for previous versions can be found on the Releases page.
If we don’t provide the binaries for your operating system, you can compile the source code (download a source distribution from Github) by following the instructions below.
Compiling from source
If you want to build Tesseroids from source, you’ll need:
- A C compiler (preferably GCC)
- The build tool SCons
Setting up SCons
Tesseroids uses the build tool SCons. A `SConstruct` file (Makefile equivalent) is used to define the compilation rules. The advantage of SCons over Make is that it automatically detects your system settings. You will have to download and install SCons in order to easily compile Tesseroids. SCons is available for both GNU/Linux and Windows so compiling should work the same on both platforms.
SCons requires that you have Python installed. Follow the instructions in the SCons website to install it. Python is usually installed by default on most GNU/Linux systems.
Under Windows you will have to put SCons on your `PATH` environment variable in order to use it from the command line. It is usually located in the Scripts directory of your Python installation.
On GNU/Linux, SCons will generally use the GCC compiler to compile sources. On Windows it will search for an existing compiler. We recommend that you install GCC on Windows using MinGW.
Compiling
Download a source distribution and unpack the archive anywhere you want (e.g., `~/tesseroids` or `C:\tesseroids` or whatever). To compile, open a terminal (or `cmd.exe` on Windows) and go to the directory where you unpacked (use the `cd` command). Then, type the following and hit Enter:
```bash
scons
```
If everything goes well, the compiled executables will be placed on a `bin` folder.
To clean up the build (delete all generated files), run:
```bash
scons -c
```
If you get any strange errors or the code doesn’t compile for some reason, please submit a bug report. Don’t forget to copy the output of running `scons`.
Testing the build
After the compilation, a program called `tesstest` will be placed in the directory where you unpacked the source. This program runs the unit tests for Tesseroids (sources in the test directory).
To run the test suite, simply execute `tesstest` with no arguments:
tesstest
or on GNU/Linux:
./tesstest
A summary of all tests (pass or fail) will be printed on the screen. If all tests pass, the compilation probably went well. If any test fails, please submit a bug report with the output of running `tesstest`.
Releases
Development
The latest development version can be found on github.com/leouieda/tesseroids. The master branch is kept stable and can be used. See the install guide for instructions on compiling the source code.
Stable releases
- **v1.2.1:**
- Source code
- Download
- Documentation
- doi:10.5281/zenodo.16033
- **v1.2.0:**
- Source code
- Download
- Documentation
- doi:10.5281/zenodo.16033
- **v1.1.1:**
- Source code
- Download
- Documentation
- **v1.1:**
- Source code
- Download
- Documentation
Theoretical background
What is a tesseroid anyway?
A tesseroid, or spherical prism, is a segment of a sphere (a small code sketch of this definition follows the list below). It is delimited by:
1. 2 meridians, $\lambda_1$ and $\lambda_2$
2. 2 parallels, $\phi_1$ and $\phi_2$
3. 2 spheres of radii $r_1$ and $r_2$
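A minimal sketch of how such an element might be represented in code, using the same seven quantities that appear in the tesseroid model files described later (western, eastern, southern, and northern boundaries, top and bottom heights, and density); the field names are assumptions made for this illustration.

```python
from collections import namedtuple

# A tesseroid as used in the model files: W/E and S/N boundaries (degrees),
# top and bottom as heights relative to the mean Earth radius (meters),
# and density (kg/m^3).
Tesseroid = namedtuple("Tesseroid", "w e s n top bottom density")

# Same values as the simple model.txt example shown in the Cookbook.
tess = Tesseroid(w=-1.0, e=1.0, s=-1.0, n=1.0, top=0.0, bottom=-10e3, density=-500.0)
```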
**About coordinate systems**
The figure below shows a tesseroid, the global coordinate system (X, Y, Z), and the local coordinate system ($x$, $y$, $z$) of a point P.
The global system has its origin at the center of the Earth and its Z axis aligned with the Earth’s mean rotation axis. The X and Y axes are contained in the equatorial plane, with X intercepting the mean Greenwich meridian and Y completing a right-handed system.
The local system has its origin at the computation point P. Its z axis is oriented along the radial direction and points away from the center of the Earth. The $x$ and $y$ axes are contained in a plane normal to the $z$ axis; $x$ points North and $y$ East.
The gravitational attraction and gravity gradient tensor of a tesseroid are calculated with respect to the local coordinate system of the computation point P.
Original images (licensed CC-BY) at doi:10.6084/m9.figshare.1495537.
Fig. 3.1: View of a tesseroid, the integration point Q, the global coordinate system (X, Y, Z), the computation point P and its local coordinate system (x, y, z). r, \(\phi\), \(\lambda\) are the radius, latitude, and longitude, respectively, of point P. Original image (licensed CC-BY) at doi:10.6084/m9.figshare.1495525.
**Warning:** The \(g_z\) component is an exception to this. In order to conform with the regular convention of z-axis pointing toward the center of the Earth, this component **ONLY** is calculated with an inverted z axis. This way, gravity anomalies of tesseroids with positive density are positive, not negative.
### Gravitational fields of a tesseroid
The gravitational potential of a tesseroid can be calculated using the formula
\[
V(r, \phi, \lambda) = G \rho \int_{\lambda_1}^{\lambda_2} \int_{\phi_1}^{\phi_2} \int_{r_1}^{r_2} \frac{1}{\ell} \kappa \, dr' \, d\phi' \, d\lambda'
\]
The gravitational attraction can be calculated using the formula (Grombein et al., 2013):
\[
g_\alpha(r, \phi, \lambda) = G \rho \int_{\lambda_1}^{\lambda_2} \int_{\phi_1}^{\phi_2} \int_{r_1}^{r_2} \frac{\Delta_\alpha}{\ell^3} \kappa \, dr' \, d\phi' \, d\lambda' \quad \alpha \in \{x, y, z\}
\]
The gravity gradients can be calculated using the general formula (Grombein et al., 2013):
\[
g_{\alpha\beta}(r, \phi, \lambda) = G \rho \int_{\lambda_1}^{\lambda_2} \int_{\phi_1}^{\phi_2} \int_{r_1}^{r_2} I_{\alpha\beta}(r', \phi', \lambda') \, dr' \, d\phi' \, d\lambda' \quad \alpha, \beta \in \{x, y, z\}
\]
\[
I_{\alpha\beta}(r', \phi', \lambda') = \left( \frac{3\Delta_\alpha \Delta_\beta}{\ell^5} - \frac{\delta_{\alpha\beta}}{\ell^3} \right) \kappa \quad \alpha, \beta \in \{x, y, z\}
\]
where \( \rho \) is density, \( \{x, y, z\} \) correspond to the local coordinate system of the computation point \( P \) (see the tesseroid figure), \( \delta_{\alpha\beta} \) is the Kronecker delta, and
\[
\begin{align*}
\Delta_x &= r'K_\phi \\
\Delta_y &= r' \cos \phi' \sin(\lambda' - \lambda) \\
\Delta_z &= r' \cos \psi - r \\
\ell &= \sqrt{r'^2 + r^2 - 2r'r \cos \psi} \\
\cos \psi &= \sin \phi \sin \phi' + \cos \phi \cos \phi' \cos(\lambda' - \lambda) \\
K_\phi &= \cos \phi \sin \phi' - \sin \phi \cos \phi' \cos(\lambda' - \lambda) \\
\kappa &= r'^2 \cos \phi'
\end{align*}
\]
\( \phi \) is latitude, \( \lambda \) is longitude, and \( r \) is radius.
**Note:** The gravitational attraction and gravity gradient tensor are calculated with respect to \( \{x, y, z\} \), the local coordinate system of the computation point \( P \).
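As an illustration, the small helper below evaluates the quantities defined above with numpy; it is a sketch following those definitions, not code from the Tesseroids package.

```python
import numpy as np

def tesseroid_kernel_terms(r, lat, lon, r_, lat_, lon_):
    """Evaluate cos(psi), K_phi, the distance ell, and the Jacobian kappa
    for a computation point (r, lat, lon) and an integration point
    (r_, lat_, lon_); all angles in radians, radii in meters."""
    cospsi = np.sin(lat) * np.sin(lat_) + np.cos(lat) * np.cos(lat_) * np.cos(lon_ - lon)
    kphi = np.cos(lat) * np.sin(lat_) - np.sin(lat) * np.cos(lat_) * np.cos(lon_ - lon)
    ell = np.sqrt(r_**2 + r**2 - 2 * r_ * r * cospsi)
    kappa = r_**2 * np.cos(lat_)
    return cospsi, kphi, ell, kappa
```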
**Numerical integration**
The above integrals are solved using the Gauss-Legendre Quadrature rule (Asgharzadeh et al., 2007):
\[
g_{\alpha\beta}(r, \phi, \lambda) \approx G\rho \frac{(\lambda_2 - \lambda_1)(\phi_2 - \phi_1)(r_2 - r_1)}{8} \sum_{k=1}^{N_r} \sum_{j=1}^{N_\phi} \sum_{i=1}^{N_\lambda} W_r^k W_\phi^j W_\lambda^i I_{\alpha\beta}(r', \phi', \lambda') \alpha, \beta \in \{1, 2, 3\}
\]
where \( W_r^k, W_\phi^j, \) and \( W_\lambda^i \) are weighting coefficients and \( N_r, N_\phi, \) and \( N_\lambda \) are the number of quadrature nodes (i.e., the order of the quadrature), for the radius, latitude, and longitude, respectively.
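The sketch below shows what this quadrature looks like in Python for the potential V, using numpy’s Gauss-Legendre nodes and weights. It only illustrates the formula above (no adaptive discretization), and the value used for G is an assumption of the example.

```python
import numpy as np

G = 6.674e-11  # gravitational constant in SI units (value assumed for the sketch)

def glq_potential(w, e, s, n, r1, r2, density, r, lat, lon, order=2):
    """GLQ approximation of the potential V of one tesseroid with boundaries
    (w, e) in longitude, (s, n) in latitude (radians) and radii (r1, r2) in
    meters, evaluated at the point (r, lat, lon)."""
    nodes, weights = np.polynomial.legendre.leggauss(order)

    def scale(a, b):
        # map the nodes from [-1, 1] to the interval [a, b]
        return 0.5 * (b - a) * nodes + 0.5 * (b + a)

    lons, lats, rs = scale(w, e), scale(s, n), scale(r1, r2)
    total = 0.0
    for wr, r_ in zip(weights, rs):
        for wp, lat_ in zip(weights, lats):
            for wl, lon_ in zip(weights, lons):
                cospsi = (np.sin(lat) * np.sin(lat_)
                          + np.cos(lat) * np.cos(lat_) * np.cos(lon_ - lon))
                ell = np.sqrt(r_**2 + r**2 - 2 * r_ * r * cospsi)
                kappa = r_**2 * np.cos(lat_)
                total += wr * wp * wl * kappa / ell
    prefactor = (e - w) * (n - s) * (r2 - r1) / 8.0
    return G * density * prefactor * total
```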
*Tesseroids* implements a modified version of the adaptive discretization algorithm of Li et al. (2011). This helps guarantee that the numerical integration will achieve a maximum error of 0.1%.
**Warning:** The integration error may be larger than this if the computation points are closer than 1km of the tesseroids. This effect is more significant in the gravity gradient components.
**Gravitational fields of a prism in spherical coordinates**
The gravitational potential and its first and second derivatives for the right rectangular prism can be calculated in Cartesian coordinates using the formula of Nagy et al. (2000).
However, several transformations have to be made in order to calculate the fields of a prism in a global coordinate system using spherical coordinates (see *this figure*).
The formulas of Nagy et al. (2000) require that the computation point be given in the Cartesian coordinates of the prism \( \{x^*, y^*, z^*\} \) (see *this figure*). Therefore, we must first transform the spherical coordinates \( \{r, \phi, \lambda\} \) of the computation point \( P \) into \( \{x^*, y^*, z^*\} \). This means that we must convert vector \( \vec{e} \) (from *this other figure*) to the coordinate system of the prism. We must first obtain vector \( \vec{e} \) in the global Cartesian coordinates \( \{X, Y, Z\} \):
\[
\vec{e}^g = \vec{E} - \vec{E}^*
\]
where \( \vec{e}^g \) is the vector \( \vec{e} \) in the global Cartesian coordinates and
\[
\vec{E} = \begin{bmatrix}
r \cos \phi \cos \lambda \\
r \cos \phi \sin \lambda \\
r \sin \phi
\end{bmatrix}
\]
Fig. 3.2: View of a right rectangular prism with its corresponding local coordinate system \((x^*, y^*, z^*)\), the global coordinate system \((X, Y, Z)\), the computation point \(P\) and its local coordinate system \((x, y, z)\). \(r, \phi, \lambda\) are the radius, latitude, and longitude, respectively.
\[
\vec{E}^* = \begin{bmatrix}
r^* \cos \phi^* \cos \lambda^*
\\
r^* \cos \phi^* \sin \lambda^*
\\
r^* \sin \phi^*
\end{bmatrix}
\]
Next, we transform \(\vec{e}^g\) to the local Cartesian system of the prism by
\[
\tilde{\vec{e}} = \vec{P}_y \vec{R}_y (90^\circ - \phi^*) \vec{R}_z (180^\circ - \lambda^*) \tilde{\vec{e}}^g
\]
where \(\vec{P}_y\) is a deflection matrix of the y axis, \(\vec{R}_y\) and \(\vec{R}_z\) are counterclockwise rotation matrices around the y and z axis, respectively (see Wolfram MathWorld).
\[
\vec{P}_y = \begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\]
\[
\vec{R}_y(\alpha) = \begin{bmatrix}
\cos \alpha & 0 & \sin \alpha \\
0 & 1 & 0 \\
-\sin \alpha & 0 & \cos \alpha
\end{bmatrix}
\]
\[
\vec{R}_z(\alpha) = \begin{bmatrix}
\cos \alpha & -\sin \alpha & 0 \\
\sin \alpha & \cos \alpha & 0 \\
0 & 0 & 1
\end{bmatrix}
\]
\[
\tilde{\vec{W}} = \begin{bmatrix}
\cos(90^\circ - \phi^*) \cos(180^\circ - \lambda^*) & -\cos(90^\circ - \phi^*) \sin(180^\circ - \lambda^*) & \sin(90^\circ - \phi^*) \\
-\sin(180^\circ - \lambda^*) & -\cos(180^\circ - \lambda^*) & 0 \\
-\sin(90^\circ - \phi^*) \cos(180^\circ - \lambda^*) & \sin(90^\circ - \phi^*) \sin(180^\circ - \lambda^*) & \cos(90^\circ - \phi^*)
\end{bmatrix}
\]
Which gives us
\[
\tilde{\vec{e}} = \begin{bmatrix}
x \\
y \\
z
\end{bmatrix}
\]
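A numpy sketch of this chain of transformations is shown below; it follows the matrices defined above and is not code from the Tesseroids package. The extra sign flip of z required by the Nagy et al. (2000) convention, noted below, is not applied here.

```python
import numpy as np

def spherical_to_cartesian(r, lat, lon):
    """Position vector in the global Cartesian system (angles in radians)."""
    return np.array([r * np.cos(lat) * np.cos(lon),
                     r * np.cos(lat) * np.sin(lon),
                     r * np.sin(lat)])

def rot_y(a):
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

P_y = np.diag([1.0, -1.0, 1.0])

def point_in_prism_system(r, lat, lon, r_star, lat_star, lon_star):
    """Express the computation point P in the local Cartesian system of a
    prism whose reference point Q has spherical coordinates (r*, lat*, lon*)."""
    e_global = (spherical_to_cartesian(r, lat, lon)
                - spherical_to_cartesian(r_star, lat_star, lon_star))
    transform = P_y @ rot_y(np.pi / 2 - lat_star) @ rot_z(np.pi - lon_star)
    return transform @ e_global
```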
Fig. 3.3: The position vectors involved in the coordinate transformations. $\vec{E}^*$ is the position vector of point Q in the global coordinate system, $\vec{E}$ is the position vector of point P in the global coordinate system, and $\vec{e}$ is the position vector of point P in the local coordinate system of the prism ($x^*$, $y^*$, $z^*$).
Note: Nagy et al. (2000) use the z axis pointing down, so we still need to invert the sign of $z$.
Vector $\vec{e}$ can then be used with the Nagy et al. (2000) formulas. These formulas give us the gravitational attraction and the gravity gradient tensor calculated with respect to the coordinate system of the prism (i.e., $x^*$, $y^*$, $z^*$). However, we need them in the coordinate system of the observation point P, where they are measured by GOCE and calculated for the tesseroids. We perform these transformations via the global Cartesian system (tip: the rotation matrices are orthogonal). $\vec{g}^*$ is the gravity vector in the coordinate system of the prism, $\vec{g}^g$ is the gravity vector in the global coordinate system, and $\vec{g}$ is the gravity vector in the coordinate system of the computation point P.
$$\vec{g} = \vec{R}\,\vec{g}^*$$
$$\vec{R} = \vec{R}_y(90^\circ - \phi)\vec{R}_z(\lambda^* - \lambda)\vec{R}_y(\phi^* - 90^\circ)\vec{P}_y$$
where
\[
\alpha = 90^\circ - \phi \\
\beta = \lambda^* - \lambda \\
\gamma = \phi^* - 90^\circ \\
\cos \alpha = \sin \phi \\
\sin \alpha = \cos \phi \\
\cos \gamma = \sin \phi^* \\
\sin \gamma = -\cos \phi^*
\]
Likewise, the transformation for the gravity gradient tensor \( \mathbf{T} \) is
\[
\mathbf{T} = \mathbf{R}\,\mathbf{T}^*\,\mathbf{R}^T
\]
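In code, these two back-transformations amount to a couple of matrix products; the sketch below assumes the combined rotation matrix R has already been built as above.

```python
import numpy as np

def rotate_prism_fields(R, g_star, T_star):
    """Map the gravity vector and gradient tensor from the prism's
    coordinate system to the computation point's local system."""
    g = R @ g_star            # gravity vector:  g = R g*
    T = R @ T_star @ R.T      # gradient tensor: T = R T* R^T
    return g, T
```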
**Recommended reading**
- Smith et al. (2001)
- Wild-Pfeiffer (2008)
**References**
**Using Tesseroids**
This is a tutorial about how to use the Tesseroids package. It is a work-in-progress but I have tried to be as complete as possible. If you find that anything is missing, or would like something explained in more detail, please submit a bug report (it’s not that hard).
Any further questions and comments can be e-mailed directly to me (leouieda [at] gmail [dot] com).
If you don’t find what you’re looking for here, the cookbook contains several example recipes of using Tesseroids.
A note about heights and units
In order to have a single convention, the word “height” means “height above the Earth’s surface”, and heights are interpreted as positive up and negative down (i.e., oriented with the z axis of the local coordinate system). Also, all input units are SI and decimal degrees. The output of tesspot is in SI units, tessgx, tessgy, and tessgz are in mGal, and the tensor components are in Eotvos. All other output is also in SI and decimal degrees.
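If you need to convert the outputs back to SI, the usual factors are sketched below (the example values are made up):

```python
# Handy conversion factors when mixing Tesseroids output with SI quantities.
MGAL_TO_SI = 1e-5      # 1 mGal   = 1e-5 m/s^2
EOTVOS_TO_SI = 1e-9    # 1 Eotvos = 1e-9 1/s^2

gz_si = 25.3 * MGAL_TO_SI        # a gz value read from tessgz output
gzz_si = -3.1 * EOTVOS_TO_SI     # a tensor component from tessgzz output
```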
Getting help information
All programs accept the -h and --version flags. -h will print a help message describing the usage, input and output formats and options accepted. --version prints version and license information about the program.
Program tessdefaults prints the default values of constants used in the computations such as: mean Earth radius, pi, gravitational constant, etc.
Computing the gravitational effect of a tesseroid
The tesspot, tessgx, tessgy, tessgz, tessgxx, etc. programs calculate the combined effect of a list of tesseroids on given computation points. The computation points are passed via standard input and do NOT have to be in a regular grid. This allows, for example, computation on points where data was measured. The values calculated are put in the last column of the input points and printed to standard output.
For example, if calculating gz on these points:
```
lon1 lat1 height1 value1 othervalue1
lon2 lat2 height2 value2 othervalue2
...  ...  ...     ...    ...
lonN latN heightN valueN othervalueN
```
the output would look something like:
```
lon1 lat1 height1 value1 othervalue1 gz1
lon2 lat2 height2 value2 othervalue2 gz2
...  ...  ...     ...    ...         ...
lonN latN heightN valueN othervalueN gzN
```
The input model file should contain one tesseroid per line and have columns formatted as:
| W | E | S | N | HEIGHT_OF_TOP | HEIGHT_OF_BOTTOM | DENSITY |
HEIGHT_OF_TOP and HEIGHT_OF_BOTTOM are positive if above the Earth’s surface and negative if below.
Note: Remember that HEIGHT_OF_TOP > HEIGHT_OF_BOTTOM!
Use the command line option -h to view a list of all commands available.
Example:
Calculate the field of a tesseroid model with verbose messages printed and logged to file gz.log, and with GLQ order 3/3/3. The computation points are in points.txt and the output will be placed in gz_data.txt:
tessgz modelfile.txt -v -lgz.log -o3/3/3 < points.txt > gz_data.txt
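Assuming the points file contained only lon, lat, and height columns, the output of the example above can be loaded with numpy as sketched below (comment lines starting with # are skipped by loadtxt by default):

```python
import numpy

# The gz value is appended as the last column after the original point columns.
data = numpy.loadtxt("gz_data.txt")
lon, lat, height = data[:, 0], data[:, 1], data[:, 2]
gz = data[:, -1]
```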
The -a flag
The -a flag on tesspot, tessgx, tessgxx, etc., programs disables the automatic recursive dividing of tesseroids to maintain the GLQ accuracy. As a general rule, the tesseroid should be no bigger than a ratio times the distance from the computation point (program tessdefaults prints the value of the size ratios used). The programs automatically break the tesseroids when this criterion is breached. This means that the computations can be performed with the default GLQ order 2/2/2, which is much faster, and still maintain correctness.
**Warning:** It is strongly recommended that you don’t use this flag unless you know what you are doing! It is also recommended that you keep 2/2/2 order always.
Verbose and logging to files
The -v flag enables printing of information messages to the default error stream (stderr). If omitted, only error messages will appear. The -l flag enables logging of information and error messages to a file.
Comments and provenance information
Comments can be inserted into input files by placing a “#” character at the start of a line. All comment lines are ignored. All programs pass on (print) the comment lines of the input to the output. All programs insert comments about the provenance of their results (where they came from) to their output. These include names of input files, version of program used, date, etc.
Generating regular grids
Included in the package is program tessgrd, which creates a regular grid of points and prints them to standard output.
*Example*
To generate a regular grid of 100 x 100 points, in the area -10/10/-10/10 degrees, at a height of 250 km:
```
tessgrd -r-10/10/-10/10 -b100/100 -z250e03 -v > points.txt
```
Automatic model generation
As of version 1.0, Tesseroids includes program tessmodgen for automatically generating a tesseroid model from a map of an interface. The interface can be any surface deviating from a reference level. For example, topography (a DEM) deviates from 0, a Moho map deviates from a mean crustal thickness, etc. This program takes as input a REGULAR grid with longitude, latitude and height values of the interface. Each tesseroid is generated with a grid point at the center of its top face. The top and bottom faces of the tesseroid are defined as:
- Top = Interface and Bottom = Reference if the interface is above the reference
- Top = Reference and Bottom = Interface if the interface is below the reference
The density RHO of the tesseroids can be passed using the -d option. This will assign a density value of RHO when the interface is above the reference, and a value of -RHO if the interface is below the reference. Alternatively, the density of each tesseroid can be passed as a fourth column on the input grid. As with the -d option, if the interface is below the reference, the density value will be multiplied by -1! Also, an error will occur if both a fourth column and the -d option are passed!
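A small sketch of this rule, useful for checking what tessmodgen will do with a given grid point (the function name and example values are made up):

```python
def top_bottom_density(interface, reference, density):
    """Decide the top, bottom, and signed density of the tesseroid
    generated for one grid point, following the rule described above."""
    if interface >= reference:
        return interface, reference, density      # interface above the reference
    return reference, interface, -density         # interface below: sign inverted

# e.g. 1500 m of topography vs. a 0 m reference, and a 3000 m deep ocean point
print(top_bottom_density(1500.0, 0.0, 2670.0))    # (1500.0, 0.0, 2670.0)
print(top_bottom_density(-3000.0, 0.0, 1670.0))   # (0.0, -3000.0, -1670.0)
```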
*Example:*
To generate a tesseroid model from a Digital Elevation Model (DEM) with 1 x 1 degree resolution using a density of 2670 kg/m^3:
```
tessmodgen -s1/1 -d2670 -z0 -v < dem_file.txt > dem_tess_model.txt
```
### Calculating the total mass of a model
The tessmass program can be used to compute the total mass of a given tesseroid model. If desired, a density range can be given and only tesseroids that fall within the given range will be used in the calculation.
**Example:**
To calculate the total mass of all tesseroids in `model.txt` with density between 0 and 1 g/cm^3:
```
tessmass -r0/1000 < model.txt
```
### Computing the effect of rectangular prisms in Cartesian coordinates
Tesseroids 1.0 also introduced programs to calculate the gravitational effect of right rectangular prisms in Cartesian coordinates. This is done using the formula of Nagy et al. (2000). The programs are prismpot, prismgx, prismgy, prismgz, prismgxx, etc. Input and output for these programs is very similar to that of the tesspot, tessgx, etc., programs. Computation points are read from standard input and the prism model is read from a file. The model file should have the column format:
```
X1 X2 Y1 Y2 Z1 Z2 DENSITY
```
**Note:** As in Nagy et al. (2000), the coordinate system for the rectangular prism calculations has the X axis pointing North, the Y axis pointing East, and the Z axis pointing Down. This is important to note because it differs from the convention adopted for the tesseroids. In practice, this means that the \( g_{xz} \) and \( g_{yz} \) components of the prism and tesseroid will have different signs. This is not the case for the \( g_z \) component, though, because the convention for tesseroids is to have the Z axis Down for this component only. See the *Theoretical background* section for more details on this.
### Piping
Tesseroids was designed with the Unix philosophy in mind:
```
Write programs that do one thing and do it well.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
```
Therefore, all tessg* programs and tessgrd can be piped together to calculate many components on a regular grid.
**Example:**
Given a tesseroids file `model.txt` as follows:
```
-1 1 -1 1 0 -10e03 -500
```
Running the following would calculate \( g_z \) and the gradient tensor of the tesseroids in `model.txt` on a regular grid from 5W to 5E and 5S to 5N, on 100x100 points at 250 km height. And the best of all is that it is done in parallel! If your system has multiple cores, this means a considerable reduction in computation time. All information regarding the
computations will be logged to files gz.log, gxx.log, etc. These should include the information about how many times the tesseroid had to be split into smaller ones to guarantee GLQ accuracy:
```bash
tessgrd -r-5/5/-5/5 -b100/100 -z250e03 | \
tessgz model.txt -lgz.log | \
tessgxx model.txt -lgxx.log | \
tessgxy model.txt -lgxy.log | \
tessgxz model.txt -lgxz.log | \
tessgyy model.txt -lgyy.log | \
tessgyz model.txt -lgyz.log | \
tessgzz model.txt -lgzz.log > output.txt
```
**Cookbook**
The following recipes can be found in the cookbook folder that comes with your Tesseroids download (along with shell and batch scripts and sample output):
**Calculate the gravity gradient tensor from a DEM**
This example demonstrates how to calculate the gravity gradient tensor (GGT) due to topographic masses using tesseroids.
To do that we need:
1. A DEM file with lon, lat, and height information;
2. Assign correct densities to continents and oceans (we’ll be using a little Python for this);
3. Convert the DEM information into a tesseroid model;
4. Calculate the 6 components of the GGT;
The file dem_brasil.sh is a small shell script that executes all the above (we’ll be looking at each step in more detail):
```bash
#!/bin/bash
# First, insert the density information into
# the DEM file using the Python script.
python dem_density.py dem.xyz > dem-dens.txt
# Next, use the modified DEM with tessmodgen
# to create a tesseroid model
tessmodgen -s0.166667/0.166667 -z0 -v < dem-dens.txt \
> dem-tess.txt
# Calculate the GGT on a regular grid at 250km
# use the -l option to log the processes to files
# (useful to diagnose when things go wrong)
# The output is dumped to dem-ggt.txt
tessgrd -r-60/-45/-30/-15 -b50/50 -z250e03 | \
tessgxx dem-tess.txt -lgxx.log | \
tessgxy dem-tess.txt -lgxy.log | \
tessgxz dem-tess.txt -lgxz.log | \
tessgyy dem-tess.txt -lgyy.log | \
tessgyz dem-tess.txt -lgyz.log | \
tessgzz dem-tess.txt -lgzz.log > dem-ggt.txt
```
Why Python
Python is a modern programming language that is very easy to learn and extremely productive. We’ll be using it to make our lives a bit easier during this example but it is by no means a necessity. The same thing could have been accomplished with Unix tools and the Generic Mapping Tools (GMT) or other plotting program.
If you have interest in learning Python we recommend the excellent video lectures in the Software Carpentry course. There you will also find lectures on various scientific programming topics. I strongly recommend taking this course to anyone who works with scientific computing.
The DEM file
For this example we’ll use ETOPO1 for our DEM. The file dem.xyz contains the DEM as a 10’ grid. Longitude and latitude are in decimal degrees and heights are in meters. This is what the DEM file looks like (first few lines):
```
# This is the DEM file from ETOPO1 with 10' resolution
# points in longitude: 151
# Columns:
# lon lat height(m)
-65.000000 -10.000000 157
-64.833333 -10.000000 168
-64.666667 -10.000000 177
-64.500000 -10.000000 197
-64.333333 -10.000000 144
-64.166667 -10.000000 178
```
Notice that Tesseroids allows you to include comments in the files by starting a line with #. This figure shows the DEM plotted in pseudocolor. The red rectangle is the area in which we’ll be calculating the GGT.
Assigning densities
Program tessmodgen allows us to provide the density value of each tesseroid through the DEM file. All we have to do is insert an extra column in the DEM file with the density values of the tesseroids that will be put on each point. This way we can have the continents with 2.67 g/cm3 and oceans with 1.67 g/cm3. Notice that the density assigned to the oceans is positive! This is because the DEM in the oceans will have heights below our reference (h = 0km) and tessmodgen will automatically invert the sign of the density values if a point is below the reference.
We will use the Python script dem_density.py to insert the density values into our DEM and save the result to dem-dens.txt:
```
# First, insert the density information into
# the DEM file using the Python script.
python dem_density.py dem.xyz > dem-dens.txt
```
If you don’t know Python, you can easily do this step in any other language or even in Excel. This is what the dem_density.py script looks like:
```python
"""
Assign density values for the DEM points.
"""
import sys
import numpy

lons, lats, heights = numpy.loadtxt(sys.argv[1], unpack=True)
for i in xrange(len(heights)):
    if heights[i] >= 0:
        print "%lf %lf %lf %lf" % (lons[i], lats[i], heights[i], 2670.0)
    else:
        print "%lf %lf %lf %lf" % (lons[i], lats[i], heights[i], 1670.0)
```

Fig. 3.4: The ETOPO1 10' DEM of the Parana Basin, southern Brasil.
The result is a DEM file with a fourth column containing the density values (see this figure):
```
-65.000000 -10.000000 157.000000 2670.000000
-64.833333 -10.000000 168.000000 2670.000000
-64.666667 -10.000000 177.000000 2670.000000
-64.500000 -10.000000 197.000000 2670.000000
-64.333333 -10.000000 144.000000 2670.000000
-64.166667 -10.000000 178.000000 2670.000000
-64.000000 -10.000000 166.000000 2670.000000
-63.833333 -10.000000 189.000000 2670.000000
-63.666667 -10.000000 210.000000 2670.000000
-63.500000 -10.000000 210.000000 2670.000000
```
Fig. 3.5: Density values. 2.67 g/cm³ in continents and 1.67 g/cm³ in the oceans.
Making the tesseroid model
Next, we’ll use our new file dem-dens.txt and program tessmodgen to create a tesseroid model of the DEM:
```
# Next, use the modified DEM with tessmodgen
# to create a tesseroid model
tessmodgen -s0.166667/0.166667 -z0 -v < dem-dens.txt \
> dem-tess.txt
```
tessmodgen places a tesseroid on each point of the DEM. The bottom of the tesseroid is placed on a reference level and the top on the DEM. If the height of the point is below the reference, the top and bottom will be inverted so that the tesseroid isn’t upside-down. In this case, the density value of the point will also have its sign changed so that you get the right density values if modeling things like the Moho. For topographic masses, the reference surface is h = 0 km (argument -z). The argument -s is used to specify the grid spacing (10') which will be used to set the horizontal dimensions of the tesseroid. Since we didn’t pass the -d argument with the density of the tesseroids, tessmodgen will expect a fourth column in the input with the density values.
The result is a tesseroid model file that should look something like this:
```
# Tesseroid model generated by tessmodgen 1.1dev:
# local time: Wed May 9 19:08:12 2012
# grid spacing: 0.166667 deg lon / 0.166667 deg lat
# reference level (depth): 0
# density: read from input
-65.0833335 -64.9166665 -10.0833335 -9.9166665 157 0 2670
-64.9166665 -64.7499995 -10.0833335 -9.9166665 168 0 2670
-64.7500005 -64.5833335 -10.0833335 -9.9166665 177 0 2670
-64.5833335 -64.4166665 -10.0833335 -9.9166665 197 0 2670
-64.4166665 -64.2499995 -10.0833335 -9.9166665 144 0 2670
```
and for the points in the ocean (negative height):
Calculating the GGT
Tesseroids allows use of custom computation grids by reading the computation points from standard input. This way, if you have a file with lon, lat, and height coordinates and wish to calculate any gravitational field in those points, all you have to do is redirect standard input to that file (using <). All tess* programs will calculate their respective field, append a column with the result to the input and print it to stdout. So you can pass grid files with more than three columns, as long as the first three correspond to lon, lat and height. This means that you can pipe the results from one tessg to the other and have an output file with many columns, each corresponding to a gravitational field. The main advantage of this approach is that, in most shell environments, the computation of pipes is done in parallel. So, if your system has more than one core, you can get parallel computation of GGT components with no extra effort.
For convenience, we added the program tessgrd to the set of tools, which creates regular grids and print them to standard output. So if you don’t want to compute on a custom grid (like us), you can simply pipe the output of tessgrd to the tess* programs:
```
# Calculate the GGT on a regular grid at 250km
# use the -l option to log the processes to files
# (useful to diagnose when things go wrong)
# The output is dumped to dem-ggt.txt
tessgrd -r-60/-45/-30/-15 -b50/50 -z250e03 | \
tessgxx dem-tess.txt -lgxx.log | \
tessgxy dem-tess.txt -lgxy.log | \
tessgxz dem-tess.txt -lgxz.log | \
tessgyy dem-tess.txt -lgyy.log | \
tessgyz dem-tess.txt -lgyz.log | \
tessgzz dem-tess.txt -lgzz.log -v > dem-ggt.txt
```
The end result of this is file `dem-ggt.txt`, which will have 9 columns in total. The first three are the lon, lat and height coordinates generated by tessgrd. The next six will correspond to each component of the GGT calculated by tessgxx, tessgxy, etc., respectively. The resulting GGT is shown in this figure.

**Fig. 3.6:** GGT caused by the topographic masses.
### Making the plots
The plots were generated using the powerful Python library Matplotlib. The script `plots.py` is somewhat more complicated than `dem_density.py` and requires a bit of “Python Fu”. The examples in the Matplotlib website should give some insight into how it works. To handle the map projections, we used the Basemap toolkit of Matplotlib.
### Simple prism model in Cartesian coordinates
The `simple_prism.sh` script calculates the gravitational potential, gravitational attraction, and gravity gradient tensor due to a simple prism model in Cartesian coordinates:
```
#!/bin/bash
# Generate a regular grid, pipe it to all the computation programs,
# and write the result to output.txt
tessgrd -r0/20000/0/20000 -b50/50 -z1000 | \
prismpot model.txt | \
```
The model file looks like this:
```
# Test prism model file
2000 5000 2000 15000 0 5000 1000
10000 18000 10000 18000 0 5000 -1000
```
The result should look like the following (“column” means the column of the output file).
## Simple tesseroid model
The files in the folder cookbook/simple_tess show how to calculate the gravitational fields of a simple 2 tesseroid model at 260 km height.
For this simple setup, the model file looks like this:
```
# Test tesseroid model file
10 20 10 20 0 -50000 200
-20 -10 -20 -10 0 -30000 -500
```
The simple_tess.sh script performs the calculations and calls the plot.py script to plot the results:
```
#!/bin/bash
# Generate a regular grid, pipe it to all the computation programs,
# and write the result to output.txt
tessgrd -r-45/45/-45/45 -b101/101 -z260e03 | \
tesspot model.txt | \
tessgx model.txt | tessgy model.txt | tessgz model.txt | \
tessgxx model.txt | tessgxy model.txt | \
tessgxz model.txt | tessgyy model.txt | \
tessgyz model.txt | tessgzz model.txt -v -llog.txt > output.txt
# Make a plot with the columns of output.txt
python plot.py output.txt 101 101
```
tessgrd generates a regular grid and prints it to standard output (stdout). The script pipes the grid points to tesspot etc. to calculate the corresponding fields. Option -v tells tessgzz to print information messages (to stderr). Option -llog.txt tells tessgzz to log the information plus debug messages to a file called log.txt.
The columns of the output file will be, respectively: longitude, latitude, height, potential, gx, gy, gz, gxx, gxy, gxz, gyy, gyz, and gzz. The result should look like this (“column” means the column of the output file):
Fig. 3.8: Plot of the columns of output.txt generated by simple_tess.sh. Orthographic projection (thanks to the Basemap toolkit of matplotlib).
### Convert a tesseroid model to prisms and calculate in spherical coordinates
The tess2prism.sh script converts a tesseroid model to prisms (using tess2prism) and calculates the gravitational potential, gravitational attraction, and gravity gradient tensor in spherical coordinates:
```
#!/bin/bash

# Generate a prism model from a tesseroid model.
# Prisms will have the same mass as the tesseroids and
# associated spherical coordinates of the center of
# the top of the tesseroid.
tess2prism < tess-model.txt > prism-model.txt

# Generate a regular grid in spherical coordinates,
# pipe the grid to the computation programs,
# and dump the result on output.txt
# prismpots calculates the potential in spherical
# coordinates, prismgs calculates the full
# gravity vector, and prismggts calculates the full
# gravity gradient tensor.
tessgrd -r-160/0/-80/0 -b100/100 -z250e03 | \
prismpots prism-model.txt | \
prismgs prism-model.txt | \
prismggts prism-model.txt -v > output.txt
```
The tesseroid model file looks like this:
```
# Test tesseroid model file
-77 -75 -41 -39 0 -50000 500
-79 -77 -41 -39 0 -50000 500
-81 -79 -41 -39 0 -50000 500
-83 -81 -41 -39 0 -50000 500
-85 -83 -41 -39 0 -50000 500
```
and the converted prism model looks like this:
```
# Prisms converted from tesseroid model with tess2prism 1.1dev
# local time: Wed May 16 14:34:47 2012
# tesseroids file: stdin
# conversion type: equal mass|spherical coordinates
# format: dx dy dz density lon lat r
# Test tesseroid model file
221766.31696055 169882.854778591 50000 499.977196258595 -76 -40 6378137
221766.31696055 169882.854778591 50000 499.977196258595 -78 -40 6378137
221766.31696055 169882.854778591 50000 499.977196258595 -80 -40 6378137
221766.31696055 169882.854778591 50000 499.977196258595 -82 -40 6378137
```
Note that the density of prisms is altered. This is so that the tesseroid and corresponding prism have the same mass.
The result should look like the following ("column" means the column of the output file).

Fig. 3.9: Plot of the columns of output.txt generated by tess2prism.sh. Orthographic projection (thanks to the Basemap toolkit of matplotlib).
### Convert a tesseroid model to prisms and calculate in Cartesian coordinates
The `tess2prism_flatten.sh` script converts a tesseroid model to prisms (using the `--flatten` flag of `tess2prism`) and calculates the gravitational potential, gravitational attraction, and gravity gradient tensor in Cartesian coordinates:
```bash
#!/bin/bash

# Generate a prism model from a tesseroid model by
# flattening the tesseroids (1 degree = 111.11 km).
# This way the converted prisms can be used
# with the prism* programs in Cartesian coordinates.
tess2prism --flatten < tess-model.txt > prism-model.txt

# Generate a regular grid in Cartesian coordinates,
# pipe the grid to the computation programs,
# and dump the result on output.txt
tessgrd -r-3e06/3e06/-3e06/3e06 -b50/50 -z250e03 | \
prismpot prism-model.txt | \
prismgx prism-model.txt | prismgy prism-model.txt | prismgz prism-model.txt | \
prismgxx prism-model.txt | prismgxy prism-model.txt | \
prismgxz prism-model.txt | prismgyy prism-model.txt | \
prismgyz prism-model.txt | prismgzz prism-model.txt > output.txt
```
The tesseroid model file looks like this:
```
# Test tesseroid model file
10 15 10 15 0 -30000 500
-15 -10 -10 10 0 -50000 200
-15 5 -16 -10 0 -30000 -300
```
and the converted prism model looks like this:
```
# Prisms converted from tesseroid model with tess2prism 1.1dev
# local time: Tue May 8 14:55:02 2012
# tesseroids file: stdin
# conversion type: flatten
# format: x1 x2 y1 y2 z1 z2 density
# Test tesseroid model file
1111100 1666650 1111100 1666650 0 30000 487.534658568521
-1111100 1111100 -1666650 -1111100 0 50000 198.175508383774
-1777760 -1111100 -1666650 555550 0 30000 -291.9029748328
```
Note that the density of prisms is altered. This is so that the tesseroid and corresponding prism have the same mass.
The result should look like the following (“column” means the column of the output file).

Fig. 3.10: Plot of the columns of output.txt generated by tess2prism_flatten.sh. The x and y axis are West-East and South-North, respectively, in kilometers.
### Using tesslayers to make a tesseroid model of a stack of layers
The `tesslayers.sh` script converts grids that define a stack of layers into a tesseroid model. It then calculates the gravitational attraction and gravity gradient tensor due to the tesseroid model:
```
#!/bin/bash

# Convert the layer grids in layers.txt to tesseroids.
# The grid spacing passed to -s is used as the size of the tesseroids,
# so be careful!
tesslayers -s0.5/0.5 -v < layers.txt > tessmodel.txt

# Now calculate the gz and tensor effect of this model at 100km height
tessgrd -r-8/8/32/48 -b50/50 -z100000 | \
tessgz tessmodel.txt | \
tessgxx tessmodel.txt | tessgxy tessmodel.txt | \
tessgxz tessmodel.txt | tessgyy tessmodel.txt | \
tessgyz tessmodel.txt | tessgzz tessmodel.txt > output.txt
```
The input file `layers.txt` contains the information about the stack of layers. It is basically a set of regular grids in xyz format (i.e., in columns). The first 2 columns in the file are the longitude and latitude of the grid points. Then comes a column with the height of the first layer, which is the height (with respect to the mean Earth radius) of the top of the stack of layers. Then come the thickness and density of each layer. Our layer file looks like this:
```
1 # Synthetic layer model of sediments and topography
2 # lon lat height thickness density
3 -10 30 800 800.002 1900
4 -9.5 30 800 800.006 1900
5 -9 30 800 800.016 1900
6 -8.5 30 800 800.042 1900
7 -8 30 800 800.105 1900
8 -7.5 30 800 800.248 1900
9 -7 30 800 800.554 1900
10 -6.5 30 800 801.173 1900
...
500 -7 36 798.411 814.357 1900
501 -6.5 36 796.635 830.394 1900
502 -6 36 793.262 860.866 1900
503 -5.5 36 787.236 915.303 1900
504 -5 36 777.127 1006.62 1900
505 -4.5 36 761.226 1150.26 1900
506 -4 36 737.823 1361.66 1900
507 -3.5 36 705.685 1651.98 1900
508 -3 36 664.665 2022.53 1900
509 -2.5 36 616.299 2459.43 1900
```
This is a synthetic layer model generated from two gaussian functions. This is what the topography (height column) and the thickness of the sediments look like:

The model file generated looks like this:
```
1 # Tesseroid model generated by tesslayers 1.1dev:
2 # local time: Fri Jul 20 18:02:45 2012
3 # grid spacing (size of tesseroids): 0.5 deg lon / 0.5 deg lat
4 -10.25 -9.75 29.75 30.25 800 -0.002000000005215406 1900
5 -9.75 -9.25 29.75 30.25 800 -0.006000000005215406 1900
6 -9.25 -8.75 29.75 30.25 800 -0.0159999998286366 1900
7 -8.75 -8.25 29.75 30.25 800 -0.0420000003650784 1900
8 -8.25 -7.75 29.75 30.25 800 -0.105000000447035 1900
9 -7.75 -7.25 29.75 30.25 800 -0.247999999672174 1900
10 -7.25 -6.75 29.75 30.25 800 -0.553999999538064 1900
...
500 -7.75 -7.25 35.75 36.25 799.290000000037 -7.125 1900
501 -7.25 -6.75 35.75 36.25 798.4110000000313 -15.943999999965304 1900
502 -6.75 -6.25 35.75 36.25 796.634999999776 -33.759000000005439 1900
503 -6.25 -5.75 35.75 36.25 793.262000000104 -67.60400000002831 1900
504 -5.75 -5.25 35.75 36.25 787.235999999568 -128.06700000000738 1900
505 -5.25 -4.75 35.75 36.25 777.1270000000328 -229.492999999784 1900
506 -4.75 -4.25 35.75 36.25 761.225999999791 -389.033999999985 1900
507 -4.25 -3.75 35.75 36.25 737.822999999858 -623.8370000000291 1900
508 -3.75 -3.25 35.75 36.25 705.68499999959 -946.2950000000857 1900
509 -3.25 -2.75 35.75 36.25 664.665000000037 -1357.86500000022 1900
```
The result should look like the following (“column” means the column of the output file).
Fig. 3.12: Plot of the columns of output.txt generated by tesslayers.sh. The x and y axis are longitude and latitude, respectively.
## License
Copyright (c) 2012-2017, Leonardo Uieda
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of Leonardo Uieda nor the names of any contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Application Note 234
Migrating from PIC Microcontrollers to Cortex™-M3
Document number: ARM DAI 0234
Issued: February 2010
Copyright ARM Limited 2010
Application Note 234
Migrating from PIC Microcontrollers to Cortex-M3
Copyright © 2010 ARM Limited. All rights reserved.
Release information
The following changes have been made to this Application Note.
<table>
<thead>
<tr>
<th>Date</th>
<th>Issue</th>
<th>Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>February 2010</td>
<td>A</td>
<td>First release</td>
</tr>
</tbody>
</table>
Proprietary notice
Words and logos marked with ® or ™ are registered trademarks or trademarks owned by ARM Limited, except as otherwise stated below in this proprietary notice. Other brands and names mentioned herein may be the trademarks of their respective owners.
Neither the whole nor any part of the information contained in, or the product described in, this document may be adapted or reproduced in any material form except with the prior written permission of the copyright holder.
The product described in this document is subject to continuous developments and improvements. All particulars of the product and its use contained in this document are given by ARM in good faith. However, all warranties implied or expressed, including but not limited to implied warranties of merchantability, or fitness for purpose, are excluded.
This document is intended only to assist the reader in the use of the product. ARM Limited shall not be liable for any loss or damage arising from the use of any information in this document, or any error or omission in such information, or any incorrect use of the product.
Confidentiality status
This document is Open Access. This document has no restriction on distribution.
Feedback on this Application Note
If you have any comments on this Application Note, please send email to errata@arm.com giving:
- the document title
- the document number
- the page number(s) to which your comments refer
- an explanation of your comments.
General suggestions for additions and improvements are also welcome.
ARM web address
http://www.arm.com
# Table of Contents
1 Introduction
1.1 Why change to Cortex-M3
1.2 Cortex-M3 products
1.3 References and Further Reading
2 Cortex-M3 Features
2.1 Nested Vectored Interrupt Controller (NVIC)
2.2 Memory Protection Unit (MPU)
2.3 Debug Access Port (DAP)
2.4 Memory Map
3 PIC and Cortex-M3 Compared
3.1 Programmer's model
3.2 System control and configuration registers
3.3 Exceptions and interrupts
3.4 Memory
3.5 Debug
3.6 Power management
4 Migrating a software application
4.1 General considerations
4.2 Tools configuration
4.3 Startup
4.4 Interrupt handling
4.5 Timing and delays
4.6 Peripherals
4.7 Power Management
4.8 C Programming
5 Examples
5.1 Vector tables and exception handlers
5.2 Bit banding
5.3 Access to peripherals
1 Introduction
The ARM Cortex™-M3 is a high performance, low cost and low power 32-bit RISC processor. The Cortex-M3 processor supports the Thumb-2 instruction set – a mixed 16/32-bit architecture giving 32-bit performance with 16-bit code density. The Cortex-M3 processor is based on the ARM v7-M architecture and has an efficient Harvard 3-stage pipeline core. It also features hardware divide and low-latency ISR (Interrupt Service Routine) entry and exit.
As well as the CPU core, the Cortex-M3 processor includes a number of other components. These include a Nested Vectored Interrupt Controller (NVIC), an optional Memory Protection Unit (MPU), a timer, and debug and trace ports. The Cortex-M3 has an architectural memory map.
In this document, we will refer to standard features of the PIC18 or PIC24 architecture. There are several extended versions of the architecture (e.g. dsPIC, PIC32 etc.) which support additional features (e.g. extra status flags, ability to address more memory etc.).
This document should be read in conjunction with Application Note 179 “Cortex-M3 Embedded Software Development”. That document describes many standard features and coding techniques for Cortex-M3 developers.
1.1 Why change to Cortex-M3
There are many reasons to take the decision to base a new design on a device incorporating a Cortex-M3 processor. Most, if not all, of these reasons also apply to the decision to migrate an existing product to Cortex-M3.
- **Higher performance**
While exact performance is dependent on the individual device and implementation, the Cortex-M3 processor is capable of providing 1.25DMIPS/MHz at clock speeds up to 135MHz.
- **More memory**
Since the Cortex-M3 is a full 32-bit processor, including 32-bit address and data buses, it has a 4GB address space. Within the fixed address space, up to 2GB of this is available for code execution (either in flash or RAM) and up to 2GB for RAM (either on or off chip). Significant space is also allocated for peripherals, system control registers and debug support.
- **Modern tools**
The Cortex-M3 is well supported by a wide range of tools from many suppliers. In particular, the RealView Developer Suite (RVDS) and Keil Microcontroller Developer Kit (MDK) from ARM provide full support for Cortex-M3. Models are also available to accelerate software development.
- **Can program in C**
Unlike many microcontrollers, the Cortex-M3 can be programmed entirely in C. This includes exception handling, reset and initialization as well as application software. Doing away with assembly code improves portability, maintainability and debugging and also encourages code reuse. Easier programming, improved reusability and greater availability of device libraries also reduces time-to-market.
- **More efficient interrupt handling**
The interrupt architecture of the Cortex-M3 is designed for efficient interrupt entry and exit and also to minimize interrupt latency. The integrated Nested Vectored Interrupt Controller supports hardware prioritization, pre-emption and dispatch of external and internal interrupts. The core also supports late arrival, tail-chaining and nesting with minimal software intervention.
- **Future proof**
The Cortex-M3 will meet the needs of the majority of today’s microcontroller applications but, crucially, it provides an upwards migration path to the rest of the ARM architecture family of products. Since programming is entirely in C, achieving extra performance by migrating to a higher class of ARM processor is realistic and achievable with minimal engineering effort. A single toolset also supports multiple Cortex-M architecture devices from multiple MCU vendors.
- **Use more capable OS/scheduler**
The architecture of the Cortex-M3 provides excellent support for many standard RTOS’s and schedulers. OS’s can make use of the privileged “Handler” mode to provide inter-process isolation and protection. The built-in SysTick timer is ideal for system synchronization and can also function as a watchdog.
- **Better consistency between suppliers**
Using a microcontroller based on an industry-standard architecture reduces risk by ensuring that products available from different suppliers are highly consistent and standardized. The engineering effort involved in moving from one supplier to another is minimized.
- **Better debug facilities**
The Cortex-M3 supports full in-circuit debug using standard debug adapters. There is full support for breakpointing, single-stepping and program trace as well as standard instrumentation features.
- **More choices**
The Cortex-M3 architecture is implemented by many device manufacturers and supported by many tools vendors. This gives the developer significantly improved choice. The high degree of standardization across Cortex-M3 microcontrollers means that there is a large range of standard software components available.
1.2 **Cortex-M3 products**
See [www.onarm.com](http://www.onarm.com) for the most comprehensive list of available Cortex-M3 devices, supporting technology and development tools.
1.3 **References and Further Reading**
Application Note 179 – Cortex-M3 Embedded Software Development, ARM DAI0179B, ARM Ltd.
Cortex Microcontroller Software Interface Standard (see www.onarm.com).
PIC18F44J11 Datasheet, DS39932C, Microchip Technology Inc.
STM32F101T4 Datasheet, Doc ID 15058, STMicroelectronics
MPASM, MPLINK, MPLIB User’s Guide, DS33014K, Microchip Technology Inc.
2 Cortex-M3 Features
2.1 Nested Vectored Interrupt Controller (NVIC)
Depending on the silicon manufacturer’s implementation, the NVIC can support up to 240 external interrupts with up to 256 different priority levels, which can be dynamically configured. It supports both level and pulse interrupt sources. The processor state is automatically saved by hardware on interrupt entry and is restored on interrupt exit. The NVIC also supports tail-chaining of interrupts.
The use of an NVIC in the Cortex-M3 means that the vector table for a Cortex-M3 is very different to previous ARM cores. The Cortex-M3 vector table contains the address of the exception handlers and ISR, not instructions as most other ARM cores do. The initial stack pointer and the address of the reset handler must be located at 0x0 and 0x4 respectively. These values are then loaded into the appropriate CPU registers at reset.
The NVIC also incorporates a standard SysTick timer which can be used as a one-shot timer, repeating timer or system wake-up/watchdog timer.
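As an illustration, a minimal sketch of driving a periodic tick from C with a CMSIS-based toolchain is shown below; the device header name and the 1 ms tick rate are assumptions, not requirements:
```c
#include "stm32f10x.h"   /* vendor device header; provides SysTick_Config() and SystemCoreClock */

volatile uint32_t tick_count = 0;

/* The CMSIS startup code routes the SysTick exception to this handler name. */
void SysTick_Handler(void)
{
    tick_count++;
}

int main(void)
{
    /* Request a 1 ms tick; SysTick_Config() returns non-zero if the reload value is out of range. */
    if (SysTick_Config(SystemCoreClock / 1000u)) {
        while (1) { }                      /* configuration failed */
    }

    while (1) {
        /* application code, paced by tick_count */
    }
}
```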
A separate (optional) Wake-up Interrupt Controller (WIC) is also available. In low power modes, the rest of the chip can be powered down leaving only the WIC powered.
2.2 Memory Protection Unit (MPU)
The MPU is an optional component of the Cortex-M3. If included, it provides support for protecting regions of memory through enforcing privilege and access rules. It supports up to 8 different regions which can be split into a further 8 sub-regions, each sub-region being one eighth the size of a region.
2.3 Debug Access Port (DAP)
The debug access port uses an AHB-AP interface to communicate with the processor and other peripherals. There are two different supported implementations of the Debug Port, the Serial Wire JTAG Debug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP). Your Cortex-M3 implementation might contain either of these depending on the silicon manufacturer’s implementation.
2.4 Memory Map
Unlike most previous ARM cores, the overall layout of the memory map of a device based around the Cortex-M3 is fixed. This allows easy porting of software between different systems based on the Cortex-M3. The address space is split into a number of different sections and is discussed further in section 3.4.1 below.
3 PIC and Cortex-M3 Compared
Direct comparisons are necessarily difficult between these two architectures. Both are available in any number of different configurations. While the Cortex-M3 is arguably more standardized than the PIC implementations, there are still several implementation options available to individual silicon fabricators (e.g. number of interrupts and depth of priority scheme, memory protection, debug configuration etc.).
In both cases, there is a large set of devices with very different peripheral sets.
For the purposes of meaningful comparison, we have selected the PIC18 architecture and will be looking at devices like the PIC18F44J11.
On the Cortex-M3 side, we have selected the STM32F101T4 from STMicroelectronics. The features of these two devices are summarized in the table below.
<table>
<thead>
<tr>
<th></th>
<th>PIC18F44J11</th>
<th>STM32F101T4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Program memory (flash)</td>
<td>16 Kbytes</td>
<td>16 Kbytes</td>
</tr>
<tr>
<td>Data memory (RAM)</td>
<td>3.8 Kbytes</td>
<td>4 Kbytes</td>
</tr>
<tr>
<td>Max clock frequency</td>
<td>48 MHz</td>
<td>36 MHz</td>
</tr>
<tr>
<td>GPIO pins</td>
<td>34</td>
<td>26</td>
</tr>
<tr>
<td>ADC</td>
<td>13-channel x 10-bit</td>
<td>10-channel x 12-bit</td>
</tr>
<tr>
<td>Timers</td>
<td>2 x 8-bit, 3 x 16-bit</td>
<td>2 x 16-bit + SysTick</td>
</tr>
<tr>
<td>Watchdog timer</td>
<td>Y</td>
<td>Y (Two)</td>
</tr>
<tr>
<td>SPI</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>I2C</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>USART</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>PWM</td>
<td>2</td>
<td>N/A</td>
</tr>
<tr>
<td>Comparators</td>
<td>2</td>
<td>N/A</td>
</tr>
<tr>
<td>RTC</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td>External interrupt sources</td>
<td>4 (+30 internal)</td>
<td>43 (+ 16 internal)</td>
</tr>
<tr>
<td>Interrupt prioritization</td>
<td>2 levels</td>
<td>16 levels</td>
</tr>
<tr>
<td>Vectored Interrupt Controller</td>
<td>N</td>
<td>Y</td>
</tr>
<tr>
<td>Power-saving modes</td>
<td>Idle/Sleep/DeepSleep</td>
<td>Sleep/Stop/Standby</td>
</tr>
<tr>
<td>DMA</td>
<td>N/A</td>
<td>7-channel</td>
</tr>
<tr>
<td>Debug port</td>
<td>ICD (In-Circuit Debug)</td>
<td>SWJ-DP JTAG port</td>
</tr>
<tr>
<td>Voltage Detection</td>
<td>Y</td>
<td>Y</td>
</tr>
</tbody>
</table>
While these two devices have been selected as they are similar in size, it is worth noting that the PIC is “large” within its family, whereas the Cortex-M3 device is relatively “small” compared to other available Cortex-M3 options. Both devices have reasonably small areas of ROM and RAM together with manageable peripheral sets.
Note that the constraints of pin count and packaging dictate that not all combinations of peripherals may be available simultaneously. This is especially true of the PIC device. The values in the table indicate the maximum available set on the device.
3.1 Programmer’s model
3.1.1 Register set
The two processors are quite significantly different with regards to the register set.
The PIC essentially treats internal data RAM as an extended general purpose register set and all locations support single-cycle access when using direct addressing (accessing registers outside the current bank requires an extra instruction to set the Bank Select Register first). Some of this is reserved for Special Function Registers which control system features. Most ALU instructions operate on the Working Register (W or WREG). Special registers are provided for holding pointers to access memory, stack pointer, ALU status etc.
There are also many special-purpose registers which are used for specific functions, e.g. PRODH and PRODL, which hold the multiply result; FSRnH and FSRnL, used for indirect memory access; STKPTR, which holds the stack pointer; TOS, which holds the current top-of-stack value; and TBLPTR, which is used for accessing program memory.
The fact that there is a large number of registers with special functions and the need for banked addressing means that the majority of code for PIC18 devices is written in C. Handling bank switching in assembler can be difficult.
The Cortex-M3 has 16 general purpose registers, R0-R15, all 32-bit. R0-R12 are generally available for essentially all instructions, R13 is used as the Stack Pointer, R14 as the Link Register (for subroutine and exception return) and R15 as the Program Counter. There is also a single Program Status Register (PSR) which holds current status (operating mode, ALU status etc.) – see below.
Peripheral and system control registers are memory-mapped within the System Control Space.
3.1.2 Status registers
The PIC STATUS register is an 8-bit register containing 5 ALU flags. Bits associated with interrupt status and masking are contained in separate registers.
The Cortex-M3 Program Status Register (PSR) is a single 32-bit register with several aliases, each providing a view of a different subset of the contents. From the user point of view, the Application Program Status Register (APSR) contains the ALU status flags. For operating system and exception handling use, the Interrupt Program Status Register (IPSR) contains the number of the currently executing interrupt (or zero if none is currently active). The Execution Program Status Register (EPSR) contains bits which reflect execution status and is not directly accessible.
3.1.3 Instruction set
There are several variations of the PIC instruction set. Older devices (not considered here) support 12-bit and 14-bit instruction sets. The PIC18 series support 16-bit instructions and is backwards compatible with PIC16. Although instructions are 16-bits, the ALU and memory interfaces are still 8-bit so these are regarded as 8-bit devices.
The basic PIC18F instruction set contains 77 instructions.
Later generations of PIC devices support full 16-bit operation (PIC24, dsPIC) and 32-bit operation (PIC32). The PIC18 series was one of the first for which a C compiler was available.
The Cortex-M3 supports the ARM v7-M architecture. The instruction set is a subset of the Thumb-2 instruction set, in which instructions are either 16-bits or 32-bits in size. The set contains 159 instructions (though some are functionally similar, differing only in the size and encoding).
3.1.4 Operating modes
Different operating modes, sometimes allied with the concept of “privilege”, are used by many embedded operating systems and schedulers to enforce task separation and to protect the system from rogue software.
The PIC has no concept of operating mode, nor of privilege.
The Cortex-M3 supports two modes, Thread mode (used for user processes) and Handler mode (used for handling exceptions and automatically entered when an exception is entered). Optionally, Thread mode can be configured to be “unprivileged” (sometimes called “user privilege”) and can then be prevented from carrying out certain operations. This configuration can be used to provide a degree of system protection from errant or malicious programs. At startup, Thread mode is configured to operate in privileged mode.
3.1.5 Stack
PIC devices support three stacks, two of which are not stored in “normal” memory space. These are implemented as hardware registers and this results in some limitations and can increase power consumption.
For return addresses, a 32-entry Full Ascending stack supports a maximum of 31 nested subroutines and/or interrupts. Applications cannot access this stack space directly but special registers provide access to the top word of the stack and to the stack pointer.
The instruction set allows the PC to be pushed onto the stack via a PUSH instruction (the top word on the stack can then be modified via the special-purpose registers, providing a method for “push”-ing arbitrary values). The POP instruction discards the top entry. Stack underflow and overflow are automatically detected.
For interrupts, a single-entry stack is used for saving status registers on interrupt entry. Both high and low priority interrupts use the same space, so special care must be taken when pre-emption is enabled. There is an option for using this space for storing context across a function call but care must be taken that no interrupts can occur in the meantime.
For application software a separate stack can be defined, used for function parameters and automatic variables. This can be located in internal or external RAM. Locating it in external RAM requires larger pointers and can be less efficient.
In contrast, the Cortex-M3 uses only “normal” memory for the stack. The Cortex-M3 supports a Full Descending stack addressed by the current stack pointer (see below). This stack can be located anywhere in RAM. Typically, for best performance, it will be located in internal SRAM. Stack size is limited only by the available RAM space.
The Cortex-M3 stack pointer is typically initialized to the word above the top of the allocated stack area. Since the stack model is Full Descending, the stack pointer is decremented before the first store, thus placing the first word on the stack at the top of the allocated region.
All stack accesses on the Cortex-M3 are word-sized.
3.1.6 Code execution
During normal sequential code execution, the PIC PC increments by 2 bytes per instruction. The Cortex-M3 PC may increment by 2 or 4 bytes depending on the size of the current instruction.
The Cortex-M3 PC is 32-bits wide, held in a single register, and is able to address an instruction anywhere in the 4GB address space. The PIC PC is 21 bits wide and is held in 3 separate registers: PCU (5 bits), PCH (8 bits) and PCL (8 bits). Only PCL is directly writeable. Updates to the higher bytes (PCU and PCH) are via the shadow registers PCLATU and PCLATH. Any operation which writes to PCL (e.g. a computed GOTO operation) will simultaneously copy the values in these shadow registers to the corresponding portions of the PC (this does not apply to CALL, RCALL and GOTO instructions).
In both processors, the Program Counter (PC) is generally accessed only indirectly i.e. via call or jump instructions. However, both support limited indirect access to the PC to support, for instance, jump tables or computed calls.
The Cortex-M3 supports loading the PC from memory and also branching to value held in a general-purpose register (using the BX or BLX instructions).
Both processors enforce alignment requirements on instructions and this places attendant restrictions on the values which can be taken by the program counter. In both cases, instructions must appear on an even byte boundary so jumps and calls must be to an even address.
The least significant bit of the PIC PC is fixed to zero to enforce instruction alignment.
In most ARM architecture processors (including the Cortex-M3), the least significant bit of a value loaded or transferred into the PC is used to control a change in operating state. Since the Cortex-M3 supports only the Thumb-2 instruction set it must operate in Thumb state at all times (for details of ARM and Thumb states, refer to the ARM Architecture Reference Manual). The programmer’s model and the tools (assembler, compiler, linker) will generally ensure that this is the case at all times during normal code execution. However, if the PC is loaded from memory, bit 0 of the loaded value is used to control the state of the processor. In the Cortex-M3, this bit must always be set to 1 (to remain in Thumb state). Loading a value into the PC which has the least significant bit clear will result in a Usage Fault. Again, the tools (compiler, assembler and linker) will usually ensure that the least significant bit is always set on any function pointer.
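As a small illustration (the names are illustrative, and the behaviour shown is a property of the toolchain rather than of any particular library), taking the address of a Thumb function yields a value with bit 0 set:
```c
/* Illustrative sketch: on the Cortex-M3 the toolchain sets bit 0 of any code
 * address taken as data (function pointers, vector table entries) so that a
 * load into the PC keeps the processor in Thumb state. */
#include <assert.h>
#include <stdint.h>

static void some_function(void)
{
    /* empty example function */
}

void check_thumb_bit(void)
{
    uintptr_t addr = (uintptr_t)&some_function;
    assert((addr & 0x1u) == 0x1u);   /* least significant bit set by the toolchain */
}
```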
3.2 System control and configuration registers
In both devices, the system control and configuration registers are memory-mapped. In PIC devices, it is more useful though to regard this space as part of the register set since all can be accessed in single-cycle instructions as directly-addressable memory locations with 8-bit addresses which can be embedded in a single instruction. These are different from the instructions used to access external memory if any is implemented. Many of these registers are bit-addressable.
The Cortex-M3 system control space is fixed at 0xE0000000 and above. The placement of registers in this space is fixed. All can be directly addressed using 32-bit pointers and standard memory access instructions. Since these registers do not lie in the bit-band region, they are not bit-addressable.
3.3 Exceptions and interrupts
PIC supports a number of interrupt sources, the actual number being dependent on the peripheral set in the selected device. In general, these map to 3 external interrupt sources (INT0, INT1, INT2), timer interrupts (TMR0-3), and then further interrupts associated with peripheral devices (A/D converter, UART, SSP, PSP etc.). By default, there is no priority scheme and all interrupts have the same priority with no pre-emption. Optionally, each interrupt can be assigned one of two priority levels. High- and Low-Priority interrupts are then routed to two separate vectors and high-priority interrupts can pre-empt low-priority interrupts.
PIC Low-Priority Interrupts may nest if the interrupt handler explicitly re-enables interrupts (by setting the GIEL bit). Special care must be taken to preserve registers, variables and
other system resources to ensure that the handler is re-entrant. High-Priority interrupts may not nest since they use a single set of shadow registers to store context.
The Cortex-M3 has an integrated Nested Vectored Interrupt Controller which supports between 1 and 240 separate external interrupt sources. There are up to 256 priority levels, which may be configured as a hierarchy of priority group and sub-priority. Individual implementations may configure the number of interrupts and the number of priority levels which are supported so it is important to check the manual for the target device to determine the exact configuration. The Cortex-M3 also supports an external Non-Maskable Interrupt (NMI) and several internal interrupts (e.g. HardFault, SVC etc.).
Cortex-M3 interrupts of any priority may nest. There is no need to take any special steps to write a re-entrant handler (beyond the obvious requirements not to use global variables in a non-reentrant fashion) as the hardware automatically saves sufficient system context to make this unnecessary.
3.3.1 Interrupt prioritization and pre-emption
The PIC18F44J11 supports the interrupt priority feature, though this feature is disabled on reset for compatibility with earlier and smaller devices. Each interrupt may be assigned to either High or Low Priority. There are separate vectors for each priority, with all interrupts of each priority being routed to a single vector. INT0 has fixed high priority. High and low priority interrupts can be globally enabled/disabled via a single bit.
The STM32F101T4 NVIC supports 68 interrupts and 16 priority levels. There is no implementation of priority grouping.
3.3.2 External interrupts
PIC18F44J11 supports three external interrupt sources (INT0-2). All external interrupts must be routed to one of these three sources. Software must then de-multiplex the interrupt sources in the handler.
STM32F101T4 supports up to 43 external interrupts, all of which may have separately configured priority, and a Non-Maskable Interrupt (NMI).
3.3.3 Internal interrupts
PIC supports a number of internal interrupts from peripherals included in the device. Since the peripheral set varies greatly from device to device, the exact set of supported interrupts also varies. A number are associated with internal error events such as Oscillator Fail (OSCF), Bus Collision (BCL), Low-Voltage Detection (LVD).
STM32F101T4 supports the standard set of Cortex-M3 internal interrupts, with the exception of MemManageException (since there is no MPU).
3.3.4 Vector table
The PIC vector table consists of three entries: Reset, High-Priority Interrupt and Low-Priority Interrupt. The vector table is located at the start of program memory, immediately following the reset vector. Each location contains executable instructions, typically branch instructions to the start of the interrupt handler. The vector table cannot be relocated.
The Cortex-M3 vector table is located, by default, at address 0x00000000. It can be relocated during initialization to a location in either Code or RAM regions. Within the vector table, each entry contains the starting address of the corresponding handler routine. This is automatically loaded by the NVIC during interrupt execution and passed directly to the core.
3.3.5 Interrupt handlers
Interrupt handlers in the PIC architecture are responsible for preserving any registers which they corrupt. Only the STATUS, WREG and BSR registers are automatically saved. Handlers must return using RETFIE rather than a standard return instruction so must be flagged to the C compiler using #pragma keywords. High and low priority interrupt handlers must be separately flagged at compile-time so cannot be dynamically reassigned at runtime.
The Cortex-M3 supports all exception entry and exit sequences in hardware and thus allows interrupt routines to be standard C functions, compliant with the ARM Architecture Procedure Call Standard (AAPCS). Any compliant function can be installed in the vector table as a handler simply by referencing its address.
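A minimal sketch of what this looks like in practice is given below. It assumes a GCC-style toolchain in which the linker script defines `_estack` and places the `.isr_vector` section at the base of the code region; the section, symbol and handler names are illustrative rather than mandated by the architecture.
```c
#include <stdint.h>

extern uint32_t _estack;                 /* top of stack, defined by the linker script */

void Reset_Handler(void);                /* startup code, provided elsewhere */
void Timer_IRQHandler(void);             /* an ordinary C function used as an ISR */

/* Entry 0 is the initial Main Stack Pointer, entry 1 the address of the reset
 * handler; the remaining system exceptions and external interrupts follow. */
__attribute__((section(".isr_vector")))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,
    Reset_Handler,
    /* ... */
};

/* Because the NVIC saves and restores the caller-saved context in hardware,
 * the handler is a plain AAPCS-compliant C function. */
void Timer_IRQHandler(void)
{
    /* clear the peripheral interrupt flag and do the work here */
}
```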
### 3.3.6 Interrupt optimizations
The PIC has fixed interrupt latency and does not support tail-chaining or late arrival of interrupts. Pre-emption is supported between the two available priority levels.
The Cortex-M3 supports tail-chaining and late arrival to reduce interrupt latency and minimize unnecessary context save and restore operations.
### 3.4 Memory
Both PIC and Cortex-M3 devices have a fixed memory map. Within these maps, areas are allocated for ROM, RAM, peripherals etc. Both also support a scheme of memory-mapped registers for system configuration and control.
Both devices support a Harvard memory architecture, with separate data and program memory interfaces. This provides for greater throughput by allowing simultaneous accesses at each of the two interfaces. Cortex-M3 supports a single, unified external address space, covering both program and data (including peripheral) regions. The PIC maintains a completely separate address space for program and data.
#### 3.4.1 Memory map
PIC program and data memory are in separate address spaces and are addressed in different ways.
The table below shows the internal program memory map (the PIC18F44J11 does not support external program memory). Note that only 16k of the address space is populated with usable memory. Accesses to other regions will read as ‘0’ (equivalent to a NOP instruction).
<table>
<thead>
<tr>
<th>Address</th>
<th>Contents</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x0000</td>
<td>Reset Vector</td>
</tr>
<tr>
<td>0x0008</td>
<td>High-Priority Interrupt Vector</td>
</tr>
<tr>
<td>0x0018</td>
<td>Low-Priority Interrupt Vector</td>
</tr>
<tr>
<td>to 0x3FF7</td>
<td>On-Chip Program Memory</td>
</tr>
<tr>
<td>0x3FF8</td>
<td>Configuration Words (CONFIG1 – CONFIG4)</td>
</tr>
<tr>
<td>to 0x3FFF</td>
<td></td>
</tr>
<tr>
<td>0x4000</td>
<td>Unimplemented memory (read as ‘0’ - NOP)</td>
</tr>
<tr>
<td>to 0x1FFFFF</td>
<td></td>
</tr>
</tbody>
</table>
Note the configuration words which are in a fixed location at the top of usable program memory. These words are read automatically by the device on reset and are used to set initial configuration. They are mainly concerned with initial configuration of clock and oscillator modes. Compiler (#pragma config directive) and assembly (config directive) tools provide mechanisms for setting these words.
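For example, with Microchip's C18 toolchain the configuration words are typically set with #pragma config directives at the top of a source file; the particular setting names below (watchdog and extended instruction set) are only illustrative and should be checked against the device data sheet:
```c
/* Illustrative C18 configuration settings for a PIC18F J-series device;
 * the available setting names vary between devices. */
#pragma config WDTEN = OFF      /* disable the watchdog timer           */
#pragma config XINST = OFF      /* disable the extended instruction set */

void main(void)
{
    /* application code */
}
```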
PIC internal data memory is organized into up to 16 banks of 256 bytes, giving a total of 4k bytes. Banks are accessed either via a 12-bit address (using the MOVFF instruction) or via a 4-bit Bank Select Register (which specifies the bank number) coupled with an 8-bit address.
Additionally, the lower part of Bank 0 and the upper part of Bank 15 can be accessed directly (termed the “Access Bank”) bypassing the Bank Select Register. This allows for quicker access to some general purpose RAM locations and a number of the Special Purpose Registers in Bank 15. Access to the remaining Special Purpose Registers (in Bank 14 and the lower part of Bank 15) requires either a full 12-bit address or setting the Bank Select Register.
The File Select Registers can be used to access memory indirectly using a full 12-bit address. These registers support auto-increment/decrement addressing modes.
The PIC18F44J11 data memory map is summarized in the table below. Of the 4k address space, approximately 3.8k is usable RAM, the remainder being allocated to Special Function Registers.
<table>
<thead>
<tr>
<th>Bank</th>
<th>Bank Address</th>
<th>Full address</th>
<th>Contents</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0x00–0x5F</td>
<td>0x000–0x05F</td>
<td>Access Bank General Purpose RAM</td>
</tr>
<tr>
<td>0</td>
<td>0x60–0xFF</td>
<td>0x060–0x0FF</td>
<td>Bank 0 General Purpose RAM</td>
</tr>
<tr>
<td>1</td>
<td>0x00–0xFF</td>
<td>0x100–0x1FF</td>
<td>Bank 1 General Purpose RAM</td>
</tr>
<tr>
<td>2–13</td>
<td>0x00–0xFF</td>
<td>0x200–0xDFF</td>
<td>Bank n General Purpose RAM</td>
</tr>
<tr>
<td>14</td>
<td>0x00–0xBF</td>
<td>0xE00–0xEBF</td>
<td>Bank 14 General Purpose RAM</td>
</tr>
<tr>
<td>14</td>
<td>0xC0–0xFF</td>
<td>0xEC0–0xEFF</td>
<td>Bank 14 Special Function Registers</td>
</tr>
<tr>
<td>15</td>
<td>0x00–0x5F</td>
<td>0xF00–0xF5F</td>
<td>Bank 15 Special Function Registers</td>
</tr>
<tr>
<td>15</td>
<td>0x60–0xFF</td>
<td>0xF60–0xFFF</td>
<td>Access Bank Special Function Registers</td>
</tr>
</tbody>
</table>
Copying between program and data memory on PIC devices requires special operations since program memory cannot be directly accessed as data. The TBLPTR registers are provided for this purpose.
The banked memory structure limits the maximum memory size for applications and increases software overhead for data accesses. This makes PIC microcontrollers more suitable for small applications.
The Cortex-M3 memory map is summarized in the table below. This is a unified address space covering both program and data regions as well as peripherals and system control registers.
<table>
<thead>
<tr>
<th>Address</th>
<th>Region</th>
<th>Address</th>
<th>Detail</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x0000 0000</td>
<td>Code</td>
<td>Code memory (e.g. flash, ROM etc.)</td>
<td></td>
</tr>
<tr>
<td>0x2000 0000</td>
<td>SRAM</td>
<td>0x2000 0000 – 0x200F FFFF</td>
<td>Bit band region</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x2200 0000 – 0x23FF FFFF</td>
<td>Bit band alias</td>
</tr>
<tr>
<td>0x4000 0000</td>
<td>Peripheral</td>
<td>0x4000 0000 – 0x400F FFFF</td>
<td>Bit band region</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x4200 0000 – 0x43FF FFFF</td>
<td>Bit band alias</td>
</tr>
<tr>
<td>0x6000 0000 – 0x9FFF FFFF</td>
<td>External RAM</td>
<td>For external RAM</td>
<td></td>
</tr>
<tr>
<td>0xA000 0000 – 0xDFFF FFFF</td>
<td>External device</td>
<td>External peripherals or shared memory</td>
<td></td>
</tr>
<tr>
<td>0xE000 0000 – 0xE003 FFFF</td>
<td>Private Peripheral Bus – Internal</td>
<td>0xE000 0000 – 0xE000 2FFF</td>
<td>ITM, DWT, FPB</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0xE000 E000 – 0xE000 EFFF</td>
<td>System Control Space</td>
</tr>
<tr>
<td>0xE004 0000 – 0xE00F FFFF</td>
<td>Private Peripheral Bus – External</td>
<td>0xE004 0000 – 0xE004 1FFF</td>
<td>MPU, ETM</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0xE004 2000 – 0xE00F EFFF</td>
<td>MPU, NVIC etc</td>
</tr>
<tr>
<td>0xE010 0000 to 0xFFFF FFFF</td>
<td>Vendor-specific</td>
<td>For vendor-specific use</td>
<td></td>
</tr>
</tbody>
</table>
The memory map implemented in the STM32F101T4 is as follows. Regions which are not indicated are unimplemented.
<table>
<thead>
<tr>
<th>Address</th>
<th>Region</th>
<th>Address</th>
<th>Detail</th>
</tr>
</thead>
<tbody>
<tr>
<td>0x0000 0000 – 0x1FFF FFFF</td>
<td>Code</td>
<td>0x0000 0000 – 0x007F FFFF</td>
<td>Flash or system memory alias</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x0800 0000 – 0x0801 EFFF</td>
<td>Flash memory</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x1FFF F000 – 0x1FFF F7FF</td>
<td>System memory</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x1FFF F800 – 0x1FFF F80F</td>
<td>Option bytes</td>
</tr>
<tr>
<td>0x2000 0000 – 0x201F FFFF</td>
<td>SRAM</td>
<td>0x2000 0000 – 0x2000 FFFF</td>
<td>SRAM (bit banded)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x2200 0000 – 0x221F FFFF</td>
<td>Bit band alias region for SRAM</td>
</tr>
<tr>
<td>0x4000 0000</td>
<td>Peripheral</td>
<td>0x4000 0000 – 0x4002 33FF</td>
<td>Peripherals (bit banded)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0x4200 0000 – 0x43FF FFFF</td>
<td>Bit band alias for peripherals</td>
</tr>
<tr>
<td>0xE000 0000 – 0xE00F FFFF</td>
<td>Internal peripherals</td>
<td>0xE000 0000 – 0xE000 2FFF</td>
<td>ITM, DWT, FPB</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0xE000 E000 – 0xE000 EFFF</td>
<td>System Control Space</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0xE004 0000 – 0xE004 1FFF</td>
<td>TPIU, ETM</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0xE004 2000 – 0xE00F EFFF</td>
<td>SysTick, NVIC etc</td>
</tr>
</tbody>
</table>
The system may be configured (via hardware signals sensed at reset) to boot from the “System memory” region. This is typically used to hold a simple boot loader which is capable of downloading a program and programming the flash.
Since the amount of SRAM implemented on the STM32F101T4 is wholly contained within the bit band region, all SRAM on this device is bit-addressable. Likewise, all peripherals in the Peripheral region are bit-addressable.
### 3.4.2 Memory protection
PIC supports protection of program memory at block resolution. Block size varies from device to device but is otherwise fixed. This prevents reprogramming of the on-chip flash memory. There is no protection scheme for data memory.
The Cortex-M3 supports an optional Memory Protection Unit (MPU). When implemented, this allows access to memory to be partitioned into regions. Access to each region may then be restricted based on the current operating mode. This allows software developers to implement memory access schemes aimed at providing a degree of protection to the system from errant or malicious software applications.
When porting from PIC to Cortex-M3 there is no need to use any of the memory protection features offered by the Cortex-M3. At reset, the MPU is disabled and the default memory map as described above applies.
However, if the Cortex-M3 device incorporates caches and/or write buffers (these are not part of the architecture and, if present, are implemented externally) the cache and buffer policies are normally generated from the MPU configuration. In this case, it may be desirable for performance reasons to configure and enable the MPU.
The STM32F101T4 does not implement an MPU.
### 3.4.3 Access types and endianness
PIC instructions are little-endian. Since PIC data memory is 8-bits wide and is only accessed in bytes, endianness is not relevant.
The Cortex-M3 is a 32-bit processor and all internal registers are 32-bit. Memory transfers of 8-bit bytes, 16-bit halfwords and 32-bit words are supported. In the case of bytes and halfwords, the programmer needs to specify whether the loaded value is to be treated as signed or unsigned. In the case of signed values, the loaded value is sign-extended to create a 32-bit signed value in the destination register; in the case of unsigned values, the upper part of the register is cleared to zero.
The Cortex-M3 also has instructions which transfer doublewords and also Load and Store Multiple instructions which transfer multiple words in a single instruction to and from a contiguous block of memory.
Cortex-M3 instructions are always little-endian. Data memory accesses default to little-endian but the processor can be configured to access data in a big-endian format via a configuration pin which is sampled on reset. It is not possible to change endianness following reset. Note that registers in the System Control Space and accesses to any system peripherals are always little-endian regardless of the configuration.
### 3.4.4 Bit banding
All internal RAM contents on PIC devices can be bit-addressed. Bit Set, Bit Clear, Bit Test and Bit Toggle instructions support this addressing mode.
The Cortex-M3 provides bit access to two 1MB regions of memory, one within the internal SRAM region and the other in the peripheral region. A further 32MB of address space is reserved for this purpose and each word within these regions aliases to a specific bit within the corresponding bit-band region. Reading from the alias region returns a word containing the value of the corresponding bit; writing to bit 0 of a word in the alias region results in an atomic read-modify-write of the corresponding bit within the bit-band region.
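A minimal sketch of the SRAM bit-band mapping in C is shown below; the macro name is illustrative (vendor headers often provide an equivalent) and the address arithmetic follows the mapping described above:
```c
#include <stdint.h>

/* Each bit of the 1MB SRAM region at 0x20000000 is aliased to a word at:
 *   alias = 0x22000000 + (byte offset x 32) + (bit number x 4)           */
#define SRAM_BITBAND(addr, bit) \
    (*(volatile uint32_t *)(0x22000000u + \
        (((uint32_t)(addr) - 0x20000000u) * 32u) + ((uint32_t)(bit) * 4u)))

volatile uint32_t flags;                /* an ordinary variable in on-chip SRAM */

void bitband_example(void)
{
    SRAM_BITBAND(&flags, 3) = 1;        /* atomically set bit 3 of flags   */
    SRAM_BITBAND(&flags, 3) = 0;        /* atomically clear bit 3 of flags */
}
```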
### 3.5 Debug
On the PIC, limited debug functionality for on-board devices is available via the MPLAB ICD 3 unit. This communicates with the target device via a standard connector and allows simple breakpoint, single-step, data watch etc.
More complex debug facilities are available via standard header boards which connect to the target hardware as a replacement for the PIC device. Debug on the PIC device requires a small monitor program which executes on the target. This does not consume resources while the device is running but does occupy some program memory space. When debugging, it will also require some stack space.
Cortex-M3 devices are debugged via a standard JTAG or Serial-Wire Debug (SWD) connector. A simple, standardized external connector is required to interface to the host system.
In addition, the uVision simulator from Keil and the MPLAB IDE provide software simulation of target devices. In the case of uVision, this can include simulating external components at board level.
Both devices provide support for program debug and trace, though different external hardware may be required to support trace when using the PIC device. In the case of uVision, full instruction trace can be captured using the Keil ULINK-Pro trace analyzer (or a third-party equivalent).
3.6 Power management
The PIC devices support RUN, IDLE and SLEEP modes.
In RUN mode, the processor is fully clocked, though it is possible to select a number of different clock sources and speeds (Primary, Secondary, Internal).
In IDLE mode, the main processor clock is turned off while peripherals remain clocked (with the same options for clock source and speed).
In SLEEP mode both peripherals and processor are powered down and unclocked.
Sleep modes are entered on execution of the SLEEP instruction (with the exact mode selected via configuration bits). Exit from sleep modes is via a watchdog timeout, interrupt or reset.
Architecturally, the Cortex-M3 supports Sleep and Deep Sleep modes. The manner in which these modes are supported on an actual device and the power-savings which are possible in each is dependent on the device. Sleep modes and power-saving features can be further extended by individual microcontroller vendors by using additional control registers.
The STM device supports three power-saving configurations: Sleep, Stop and Standby.
In Sleep mode (corresponding to the architectural Sleep mode), the processor is stopped (though state is retained completely) while peripherals continue to operate. Interrupts or other external events cause the processor to restart.
Stop mode (corresponding to the architectural Deep Sleep mode) achieves lowest possible power consumption while retaining the state of SRAM and registers. All external clocks are stopped. The device is woken by an external interrupt, low voltage detector or RTC alarm.
In Standby mode, everything except the internal watchdog, RTC and Wakeup Interrupt Controller is powered down. Register state and RAM contents will be lost. Exit from Standby mode is via external reset, watchdog reset, external WKUP pin or RTC alarm.
Sleep mode can be entered in several ways:
- **Sleep-now**
The Wait-For-Interrupt (WFI) or Wait-For-Event (WFE) instructions cause the processor to enter Sleep mode immediately. Exit is on detection of an interrupt or debug event.
- **Sleep-on-exit**
Setting the SLEEPONEXIT bit within the System Control Register (SCR) causes the processor to enter Sleep mode when the last pending ISR has exited. In this case, the exception context is left on the stack so that the exception which wakes the processor can be processed immediately.
In addition, Deep Sleep mode can be entered by setting the SLEEPDEEP bit in the SCR. On entry to Sleep mode, if this bit is set, the processor indicates to the external system that deeper sleep is possible.
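As an illustration only (not taken from the original text), the following sketch shows how the Deep Sleep request and sleep-on-exit behaviour might be set up from C using the standard CMSIS register definitions; on an STM32 the vendor's own power-control registers additionally select between Stop and Standby behaviour. The device header name is an assumption.

```c
/* Minimal sketch, assuming a CMSIS device header (e.g. "stm32f10x.h") is
 * available on the include path. */
#include "stm32f10x.h"

void enter_deep_sleep(void)
{
    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;   /* request Deep Sleep rather than Sleep  */
    __WFI();                             /* sleep-now: wait here for an interrupt */
}

void enable_sleep_on_exit(void)
{
    SCB->SCR |= SCB_SCR_SLEEPONEXIT_Msk; /* sleep again when the last ISR returns */
}
```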
4 Migrating a software application
The majority of software on both PIC18 and STM32 devices will be written in high-level languages, almost all of it in C. We will therefore ignore any migration relating to changes in instruction set, as this can be handled by recompilation.
We must also recognize that the “size” of PIC devices often requires a distinctive style of software which does not lend itself easily to porting to another architecture.
This section is written with reference to the C18 C compiler from Microchip and the Keil Microcontroller Developer Kit from ARM. Other tools may differ from these in significant ways.
4.1 General considerations
4.1.1 Operating mode
The Cortex-M3 will reset into Thread mode, executing as privileged. Handler mode (always privileged) is automatically entered when handling any exceptions which occur.
Since the PIC does not support multiple operating modes and has no concept of privilege, leaving the Cortex-M3 in this configuration is the simplest option and is often sufficient.
In order to take advantage of the protection offered by privileged execution, Thread mode can be configured to be unprivileged by setting CONTROL[0]. Unprivileged execution is prohibited from carrying out some system operations, e.g. masking interrupts.
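The fragment below is a minimal sketch (not from the original document) of how Thread mode could be switched to unprivileged execution using the CMSIS core intrinsics; once unprivileged, privilege can only be regained from within an exception handler.

```c
/* Assumes a CMSIS device header has already been included. */
void drop_to_unprivileged(void)
{
    __set_CONTROL(__get_CONTROL() | 0x1u);  /* CONTROL[0] = 1: unprivileged Thread mode   */
    __ISB();                                /* ensure the new setting takes effect now    */
}
```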
4.1.2 Stack configuration
On PIC devices, the return stack is automatically initialized and is not accessible to the application software. If an application stack is required (and it almost always will be when programming in C) it is initialized via a STACK directive in the linker control script.
Allocating a stack region of up to 256 bytes is simple since it can be located within a single bank of memory. C18 supports stacks larger than this by combining more than one contiguous memory bank into a single region. However, this can mean that space available for data variables is limited.
The Cortex-M3 takes the initial value for the Main Stack Pointer (SP_main) from the first word in the vector table. This must be initialized to an area of RAM; ideally this should be internal SRAM for best performance. Unless configured otherwise (see below) the Cortex-M3 will use this single stack pointer in both Thread and Handler modes. This is the simplest configuration to use when migrating from the PIC, which only supports a single software stack.
For applications which require an OS, the Cortex-M3 can be configured to use the separate Process Stack Pointer (SP_process) when in Thread Mode. This is done by writing to the CONTROL[1] bit. Setting this configuration allows separate stacks to be used for normal execution and exception handling. This would normally be handled by an OS kernel – in most simple applications, there is no need to use the PSP.
Since the Cortex-M3 is programmed entirely in C, stack usage is likely to be much higher than for the same program running on PIC. Sufficient stack space must therefore be provided. When allocating stack space, ensure that you take account of any usage required by exceptions.
4.1.3 Memory map
Unless an MPU is present and enabled, the default memory map described above is used. When migrating an application from PIC it is not usually necessary to implement any memory map configuration. In this case the MPU can be safely left disabled.
Microcontrollers using a Cortex-M3 processor can be built with many different memory devices. Usually, there will be some internal Flash or ROM (mapped to the CODE region) and internal SRAM (in the SRAM region). Any peripherals will normally be mapped to the Peripheral region. There may also be some external RAM.
Consult the manual for your chosen device to determine exactly what memories have been implemented and how they are mapped.
In any event, the system control registers and standard core peripherals (such as the SysTick timer etc) will be located in the standard location.
4.1.4 Code and data placement
1. Code
When coding for PIC devices, it is common to write non-relocatable code. Indeed, the assembler produces absolute executable files by default (ARM compilers and assemblers do not do this – a link stage is always required). Directives in C and assembly language source files fix the placement in memory at compile-time. This is extremely rare when coding for Cortex-M3 devices. Essentially, all code and object files are relocatable and the placement of code and data is decided at link time.
Both linkers take a list of object files and a script which controls the code and data placement.
Since PIC devices differ considerably in the amount and type of memory which they support and also the location of special registers, the PIC linker (MPLINK) control scripts are different for every PIC device. The generic script then needs to be modified by the developer to add application-specific placement rules.
When writing code containing GOTO or CALL instructions, unless page selection instructions are also included in the source, it will be necessary to split these up so that the maximum range of the instructions is not exceeded. This is not necessary when coding for ARM since the linker will automatically add long-branch support code where necessary.
The ARM linker takes its control input from a “scatter control file” – this can be generated automatically from project settings if using the Keil MDK development tools. Specifying explicit sections is not usually necessary when linking for Cortex-M3 devices; providing information about the available ROM sections is often sufficient to allow the linker to place all objects. Code is normally placed in the Code region, which is normally populated with non-volatile memory of some kind, e.g. flash. It is possible to place code in the SRAM region at run-time, but performance will be slightly degraded as the core is optimized to fetch instructions using the ICODE bus from the Code region.
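For illustration, a minimal scatter control file for a hypothetical Cortex-M3 device might look like the sketch below; the region names, addresses and sizes are examples only and must be taken from the actual device's memory map.

```
LR_IROM1 0x08000000 0x00020000 {      ; load region: on-chip flash (example address/size)
  ER_IROM1 0x08000000 0x00020000 {    ; execution region for code and read-only data
    *.o (RESET, +First)               ; place the vector table first
    * (+RO)
  }
  RW_IRAM1 0x20000000 0x00005000 {    ; execution region for RW and ZI data in SRAM
    * (+RW +ZI)
  }
}
```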
2. Data
Data memory in PIC devices is divided into banks. It is difficult, therefore, to create data segments which are larger than a single bank.
The PIC linker tries to place all variables in a single 256-byte section. Arrays larger than 256 bytes need special sections to be created in the linker script.
There are no such restrictions when coding for Cortex-M3 devices. The only restriction is the absolute size of the RAM devices available.
3. Peripherals
All PIC peripherals included in the device are accessed and controlled via SFRs in fixed locations (though the location and layout may vary from device to device). Additional memory-mapped peripherals can only be used with larger devices which support external memory interfaces and are not generally used, so they do not need locations defining at link time. The locations of the SFRs are generally supplied via included header files when coding in assembly or C and are mirrored in the standard linker configuration script for the target device.
In contrast, Cortex-M3 devices have a small set of architectural peripherals (e.g. the SysTick timer, the NVIC etc) which are accessed via registers in the System Control Space. The location of this is fixed. Other memory-mapped peripherals can be located absolutely at compile-time (via fixed addresses in source code) but this is not recommended. Instead, they are usually defined in relocatable data segments which are then placed by the linker at absolute addresses – these addresses are supplied to the linker in the scatter control file.
### 4.1.5 Data types and alignment
When programming in a high-level language, the natural data type of the underlying machine is not necessarily important. The C compiler will take care of mapping high-level data types to low-level arithmetic operations and storage units.
Both compilers support a variety of types, as listed in the table.
<table>
<thead>
<tr>
<th>Type</th>
<th>Cortex-M3</th>
<th>PIC</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>char</td>
<td>8-bit signed</td>
<td>8-bit signed</td>
<td>Char is signed by default in both architectures</td>
</tr>
<tr>
<td>short</td>
<td>16-bit</td>
<td>16-bit</td>
<td></td>
</tr>
<tr>
<td>int</td>
<td>32-bit</td>
<td>16-bit</td>
<td></td>
</tr>
<tr>
<td>short long</td>
<td>N/A</td>
<td>24-bit</td>
<td></td>
</tr>
<tr>
<td>long</td>
<td>32-bit</td>
<td>32-bit</td>
<td></td>
</tr>
<tr>
<td>long long</td>
<td>64-bit</td>
<td>N/A</td>
<td>No 64-bit type for PIC</td>
</tr>
<tr>
<td>float</td>
<td>32-bit</td>
<td>32-bit</td>
<td></td>
</tr>
<tr>
<td>double</td>
<td>64-bit</td>
<td>32-bit</td>
<td></td>
</tr>
<tr>
<td>long double</td>
<td>64-bit</td>
<td>N/A</td>
<td></td>
</tr>
</tbody>
</table>
The Cortex-M3 is a 32-bit architecture and, as such, handles 32-bit types very efficiently. By contrast, 8-bit and 16-bit types are manipulated less efficiently, although they will save on data memory if this is an issue.
Cortex-M3 devices, in common with other ARM architecture devices, normally require that data be aligned on natural boundaries. For instance, 32-bit objects should be aligned on 32-bit (word) boundaries and so on. The core is capable of supporting accesses to unaligned types (and this is useful when porting from architectures which do not have such strict alignment requirements) but there is a small speed penalty associated with this (since the memory interface is required to make multiple word accesses).
Since data memory is always accessed as bytes, PIC devices have no such restrictions.
### 4.1.6 Storage classes
#### 1. Static
PIC compilers support the ability to share storage for automatic variables between functions. These are placed in static RAM locations and initialized on each entry to the function. The compiler will attempt to overlay storage for these functions, provided that they can never be active at the same time. Such functions cannot be re-entrant.
Function parameters can also be declared as static and will be allocated fixed storage in data memory. Again, such functions cannot be re-entrant.
The advantage in each case is the potential for smaller code (due to the ease of access to the variables) and smaller data footprint (due to the possibility of overlaying data segments).
While ARM compilers support the static keyword, it is not applicable to parameters and the concept of overlaying data objects is not supported.
2. Banked
Due to the banked nature of the data memory in the PIC architecture, data items can be declared as near or far. Near items are located within the Access RAM area and can therefore be accessed directly with an 8-bit address. Items declared as far are in banked memory and will therefore require the Bank Selection Register to be set in order to access them.
Near/far can also be applied to program memory objects. Far objects are located at addresses above 64k. In the context of the PIC18F46 device, this is not relevant as the maximum size of program memory is less than this threshold.
In Cortex-M3 devices, there is no concept of near/far for either program or data objects since all pointers can address all of available memory.
3. RAM/ROM location
Since PIC devices enforce strict isolation between program and data address spaces, data which is to be placed in ROM must be explicitly marked using the rom qualifier. Similarly, pointers to data objects in program memory must be marked using the same keyword.
This introduces complications when using standard library functions to access, for instance, string constants held in program memory (the default). There are several versions of functions like strcpy, depending on whether source and data strings are located in program or data memory. Care must be taken to use the correct variant.
Since the Cortex-M3 address space is unified, this technique is not necessary.
4.2 Tools configuration
When using larger PIC devices, addressing extended program and data memory becomes problematic since pointers are 16 bits in size. Therefore, in addition to the “near” and “far” qualifiers for objects and pointers, PIC compilers support various memory configurations when building applications. “Small” models limit pointers to 16 bits, while “large” models allow 24-bit pointers. This comes with a code size and speed penalty.
When building for Cortex-M3, there is no need for this since all pointers are 32-bits and can address the entire memory map.
4.3 Startup
PIC18 devices begin execution at the reset vector, which is located at address 0x0000. The return stack pointer is automatically initialized to the bottom of the internal stack region. The software stack pointer, if one is used, is initialized via the linker control file (see 4.1.2).
Before starting execution, the PIC will load a set of system configuration values from a fixed location in program memory (the exact location varies from device to device – the default location will be specified in the template linker script for the device). These values are set via #pragma statements in C or CONFIG directives in assembler language source files.
The PIC startup code automatically initializes the C runtime environment before calling main().
Cortex-M3 devices take the initial value for the Main Stack Pointer from the first word of the vector table (at address 0x00000000) and then begin execution by jumping to the address contained in the reset vector (at address 0x00000004).
Note that Cortex-M3 devices do not support any mechanism for auto-configuration (as described above for PIC) so all components will be in their reset state as described in the manual.
The C startup code will initialize the runtime environment and then call main().
4.4 Interrupt handling
One significant difference between the two architectures is the balance between hardware and software in interrupt dispatch.
PIC18 devices (with the priority scheme enabled) have two interrupt vectors. All high-priority interrupts are assigned to one, all low-priority interrupts to the other. Each interrupt handler is responsible for determining which of the possible sources has raised an interrupt (the Interrupt Flag bits must be individually checked to work this out). Once the source has been identified, a specific handler can be invoked to handle the event.
The NVIC on Cortex-M3 devices handles all this automatically. All interrupt sources have separate vectors defined in the vector table. In parallel with saving context on the stack, the NVIC reads the relevant vector address from the table and directs program execution directly to the start of the correct handler.
1. Writing interrupt handlers
Interrupt handler routines for PIC devices must be identified in C source code using a #pragma directive.
When writing interrupt handlers in PIC assembler, there are constraints (e.g. cannot access arrays using calculated index, call other functions, perform complex math, access ROM variables etc). These must be followed for correct operation.
If the interrupt priority scheme is not used only one handler is required; two are required if the priority scheme is used.
Cortex-M3 interrupt handlers are standard C functions and there are no special C coding rules for them.
2. Vector table generation
The interrupt handlers must be installed by putting a GOTO instruction at the vector location. Typically this is done with a short section of inline assembler within the C application. If the priority scheme is used, two vectors are required.
On Cortex-M3 devices, the vector table is typically defined by an array of C function pointers. This is then located at the correct address by the linker.
3. Interrupt configuration
In order to enable a PIC interrupt, the following must be set correctly:
- Specific Interrupt Enable bit
- Global Interrupt Enable bit
- Global High/Low Priority Interrupt Enable bit (if priority is used)
- Peripheral Interrupt Enable bit (for peripheral interrupts)
On Cortex-M3 devices, the NVIC must be configured appropriately:
- Interrupt Set Enable register
- PRIMASK must be clear to enable interrupts
This is achieved using CMSIS intrinsic functions __enable_irq() and NVIC_EnableIRQ(). See the CMSIS documentation for more detail. CMSIS functions are also available for setting and clearing pending interrupts, checking the current active interrupt and configuring priority.
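A typical configuration sequence, sketched below purely for illustration, might look like this; TIM2_IRQn is an STM32-specific interrupt number used only as an example and should be replaced by the IRQn enumerator from your device header.

```c
void configure_timer_interrupt(void)
{
    NVIC_SetPriority(TIM2_IRQn, 1);   /* choose a priority for the interrupt            */
    NVIC_EnableIRQ(TIM2_IRQn);        /* set the corresponding Interrupt Set Enable bit */
    __enable_irq();                   /* clear PRIMASK so interrupts can be taken       */
}
```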
4.5 Timing and delays
It is common, when programming for PIC devices, to use NOP instructions as a way of consuming time. The execution time of NOP instructions is easily determined from the system clock speed. When developing for Cortex-M3 devices this cannot be relied on as the pipeline is free to “fold out” NOP instructions from the instruction stream. When this happens, they do not consume time at all.
Deterministic delays in Cortex-M3 systems must therefore make use of a timer peripheral.
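As a hedged sketch of this approach (the 1 ms tick rate and the handler name follow the usual CMSIS conventions but are assumptions, not part of the original document), a millisecond delay based on the SysTick timer might look like this:

```c
#include <stdint.h>

static volatile uint32_t ms_ticks;

/* Wired into the vector table under the CMSIS-style name. */
void SysTick_Handler(void)
{
    ms_ticks++;
}

void delay_ms(uint32_t ms)
{
    uint32_t start = ms_ticks;
    while ((ms_ticks - start) < ms)
    {
        /* busy-wait; __WFI() could be used here to save power */
    }
}

/* During initialization, e.g.: SysTick_Config(SystemCoreClock / 1000); for a 1 ms tick. */
```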
4.6 Peripherals
4.6.1 Standard peripherals
On the Cortex-M3 it is usual to define a structure covering the System Control Space. All of the system control registers are located in this region. The standard peripherals (NVIC, SysTick etc.) are all controlled via registers in this area.
4.6.2 Custom peripherals
Custom peripherals on Cortex-M3 microcontrollers are generally memory-mapped within the defined peripheral region in the memory map. The usual method of access to these is to define C structures which describe the relevant registers. These can then be located at absolute addresses at link time via the scatter control file.
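The fragment below is a hypothetical example of this technique; the register layout, peripheral name and base address are invented for illustration and would in practice come from the device's reference manual or vendor-supplied header. Vendor headers commonly use the cast-pointer form shown here; alternatively, an instance of the structure can be declared in its own section and placed by the scatter control file.

```c
#include <stdint.h>

typedef struct
{
    volatile uint32_t CR;    /* control register */
    volatile uint32_t SR;    /* status register  */
    volatile uint32_t DR;    /* data register    */
} MyPeriph_TypeDef;

#define MYPERIPH ((MyPeriph_TypeDef *) 0x40001000)  /* base address: example only */

/* Example access:  MYPERIPH->CR |= 0x1;  */
```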
4.7 Power Management
ANSI C cannot generate the WFI and WFE instructions directly. Instead, you should use the CMSIS intrinsic functions __WFE() and __WFI() in your source code. If suitable device-driver functions are provided by the vendor, then these should be used in preference.
4.8 C Programming
- Don’t bother with static parameters or overlays. Cortex-M3 does not suffer from data RAM size constraints like PIC.
- Unless memory size is a constraint, don’t bother with small data types. They are less efficient on Cortex-M3.
- There is no need to declare objects as “near” or “far”.
- There is no need to specify which “bank” variables are located in.
- There is no need to designate interrupt handlers using any special keywords, though it is regarded as good programming practice to declare interrupt handlers using the __irq keyword when using the ARM/Keil tools to develop for Cortex-M3.
- It is unlikely that you will encounter any data alignment problems, but the __packed keyword should be used to resolve any which do arise (see the sketch after this list).
- There is no need to specify any particular memory model.
- Any inline assembler in your PIC source will need to be rewritten. Typically it should be rewritten in C rather than translated to inline ARM assembler – this will be easier and more portable.
- Any #pragma directives will need to be either removed or replaced. Those associated with placement of code or data in C source code must be removed. Those marking interrupt handlers should also be removed and, optionally, the functions marked with __irq instead.
- There is no facility in Cortex-M3 for specifying initial conditions in the way that PIC configuration words are automatically loaded on reset. Any code associated with this should be removed; all setup of the Cortex-M3 core and peripherals needs to be performed in software following reset.
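As a sketch of the __packed usage mentioned above (illustrative only, using the ARM/Keil armcc keyword; other toolchains use attributes instead):

```c
/* Without __packed, 'value' would be padded to a word boundary; with it, the
 * compiler removes the padding and generates unaligned-capable accesses. */
typedef __packed struct
{
    char tag;
    int  value;
} PackedRecord;
```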
5 Examples
5.1 Vector tables and exception handlers
5.1.1 In assembler
The examples below show definition of vector tables and placeholders for exception handlers when writing in assembly code. Note that it is possible to avoid C startup for Cortex-M3 systems completely.
```
; include standard device header file
#include <p18f452.inc>
; the following code will be located at
; 0x0000 - reset vector
org 0x0000
goto Main ; jump to main entry point
; the following code will be located at
; 0x0008 - high priority interrupt vector
org 0x0008
goto int_hi ; jump to handler
; the following code will be located at
; 0x0018 - low priority interrupt vector
org 0x0018
goto int_lo ; jump to handler
Main
; write your main program here.
;
int_hi
;
; write your high priority
; interrupt service routine here
retfie ; use retfie to return
int_lo
;
; write your low priority
; interrupt service routine here.
retfie ; use retfie to return
end
```
5.1.2 In C
These two examples show how the same is achieved when coding in C.
**PIC**
```c
#include <p18cxxx.h>
void low_isr(void);
void high_isr(void);
/* This will be located at the low-priority interrupt vector at 0x0018. */
#pragma code low_vector=0x18
void interrupt_at_low_vector(void)
{
_asm
GOTO low_isr
_endasm
}
/* return to the default code section */
#pragma code
#pragma interrupt low_isr
void low_isr (void)
{ /* write handler here */ }
/* This will be located at the high-priority interrupt vector at 0x0008. */
#pragma code high_vector=0x08
void interrupt_at_high_vector(void)
{
_asm
GOTO high_isr
_endasm
}
/* return to the default code section */
#pragma code
#pragma interrupt high_isr
void high_isr (void)
{ /* write handler here */ }
```
**Cortex-M3**
```c
/* Filename: exceptions.c */
#include <stdio.h>   /* for the printf() used in the example handler */

/* The individual exception handlers and __main are assumed to be
   declared elsewhere (e.g. in a header file) before this table is built. */
typedef void(* const ExecFuncPtr)(void);
/* Place table in separate section */
#pragma arm section rodata="vectortable"
ExecFuncPtr exception_table[] =
{
(ExecFuncPtr)&Image$$ARM_LIB_STACKHEAPS$$ZI$$Limit,
/* Initial SP */
(ExecFuncPtr)__main, /* Initial PC */
NMIException,
HardFaultException,
MemManageException,
BusFaultException,
UsageFaultException,
0, 0, 0, 0, /* Reserved */
SVCHandler,
DebugMonitor,
0, /* Reserved */
PendSVC,
SysTickHandler,
/* Configurable interrupts from here */
InterruptHandler0,
InterruptHandler1,
InterruptHandler2 /*
* etc.
*/
};
/* One example exception handler */
#pragma arm section
void SysTickHandler(void)
{
printf("---- SysTick Interrupt ----");
}
```
In Scatter Control File:
```
LOAD_REGION 0x00000000 0x00200000
{ exceptions.o (vectortable, +FIRST)
}
```
5.2 Bit banding
As described above, both devices support bit access to certain areas of memory. In both cases, bit accesses are atomic.
The PIC supports this through a direct bit addressing mode in many instructions. Individual bits within many of the Special Function Registers and within the internal RAM memory can be addressed like this. Instructions are implemented to set, clear, test and toggle individual bits.
Cortex-M3 devices support bit access via a different method entirely. Within, for example, the SRAM region of the memory map, 1MB is designated as the “bit band region”. A second 32MB region, called the bit band alias region, is used to access the bits within the bit band region. Bit 0 of each word in the alias region is mapped within the memory system to a single bit within the bit band region. Bits can be read and written. Reading or writing any bit other than bit 0 in a word in the alias region has no effect.
A simple formula converts from bit address to aliased word address.
\[
\text{word}\_\text{addr} = \text{bit}\_\text{band}\_\text{base} + (\text{byte}\_\text{offset} \times 32) + (\text{bit}\_\text{number} \times 4)
\]
C macros can then be easily defined to automate this process. For example
```c
#define BITBAND_SRAM(a,b) ((BITBAND_SRAM_BASE           \
                            + (a - BITBAND_SRAM_REF) * 32 \
                            + (b * 4)))
```
A similar macro can be defined for the peripheral region.
Individual bits can then be accessed using sequences like this.
```c
#define MAILBOX 0x20004000
#define MBX_B7 (*((volatile unsigned int *) \
                   (BITBAND_SRAM(MAILBOX,7))))
a = MBX_B7;
```
## 5.3 Access to peripherals
There are device-specific header files for all available PIC devices. These are included with most, if not all, development tools for PIC. These header files define all registers and system constants, e.g. available memory size.
Similarly, header files are usually provided for Cortex-M3 devices. You can obtain these either from the device supplier or use those included with many development tools. Keil MDK-ARM includes header files for most common devices.
Developers for Cortex-M3 platforms should be aware of the Cortex Microcontroller Software Interface Standard (CMSIS). This defines a standard software application interface for many standard peripherals (e.g. SysTick, NVIC) and system functions (e.g. enable/disable interrupts) on Cortex-M3 platforms. This covers the set of standard peripherals and core functions. Most Cortex-M3 device manufacturers supply additional CMSIS-compliant header files which provide definitions for all device-specific functions and peripherals.
COURSE MANAGEMENT WEB SYSTEM
by
Li Tan
A REPORT SUBMITTED IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE SFU-ZU DUAL DEGREE OF
BACHELOR OF SCIENCE
in the School of Computing Science
Simon Fraser University
and
the College of Computer Science and Technology
Zhejiang University
© Li Tan 2010
SIMON FRASER UNIVERSITY AND ZHEJIANG UNIVERSITY
Spring 2010
All rights reserved. This work may not be
reproduced in whole or in part, by photocopy
or other means, without the permission of the author.
APPROVAL
Name: Li Tan
Degree: Bachelor of Science
Title of Report: Course Management Web System
Examining Committee:
________________________________________
Dr. Qianping Gu, Supervisor
________________________________________
Greg Baker, Senior Lecturer, Supervisor
________________________________________
Dr. Ramesh Krishnamurti, Examiner
Date Approved: _________________________________
Abstract
In this report, I present a web-based course management system that we developed for the School of Computing Science at SFU. This project aims at integrating grade-book, marking and submission functions together. It also introduces a grouping function that the current system does not have. This project can provide convenience to students and faculty members in managing courses, assignments, exams and grades. It is based on a modern and powerful web development framework called Django. I first give a complete picture of the system to explain what the four modules (grades, marking, submissions and grouping) do and how they relate to each other. Then I focus on the design and implementation of the marking module, for which I am mainly responsible. Topics of the discussion include use case analysis, data model description, web requests, web responses as well as other specific development issues.
This is a very valuable project because not only do I have an opportunity to learn some modern technologies for developing web systems, but I can also make contributions to the School of Computing Science. I hope this project is a good start to renewing the current course management system.
Contents
Approval
Abstract
1 Introduction
1.1 Background
1.2 Grades module
1.2.1 Main functionalities
1.2.2 Module interaction
1.3 Marking module
1.3.1 Main functionalities
1.3.2 Module interaction
1.4 Submissions module
1.4.1 Main functionalities
1.4.2 Module interaction
1.5 Grouping module
1.5.1 Main functionalities
1.5.2 Module interaction
1.6 Overall goal of modules design
2 Preliminaries
2.1 Database layer
2.2 Functional layer or view layer
2.3 Presentation layer
2.4 Division of responsibilities
3 Use Cases Analysis
3.1 Activity configuration
3.1.1 Add, edit or delete marking components
3.1.2 Add, edit or delete common problems
3.1.3 Copy course setup
3.2 Marking
3.2.1 Mark for one student or one group
3.2.2 Give marks to all students
3.2.3 Import/export all students' marks
3.2.4 View marking summary
3.2.5 Marking based on previous marks
3.2.6 View marking history of one student or one group
3.2.7 Change grade status
4 Data Model Description
4.1 What the module needs
4.2 What we have from other modules
4.3 Data model definitions and relationships
5 Making Requests
5.1 URL design
5.1.1 Use identifiers in URL's
5.1.2 View function parameters
5.1.3 URL parameter list
5.1.4 URL referencing
5.2 HTTP GET and POST requests
5.2.1 HTTP GET
5.2.2 HTTP POST
6 Generating and Rendering Responses
6.1 Generating responses using model form and formset
6.2 Validation
6.2.1 Formset validation
6.2.2 Typical work flow
6.3 Rendering response with templates
6.3.1 Tags and variable references
6.3.2 Filters for displaying forms
7 Other Specific Topics
7.1 Security
7.1.1 Decorators for authorization
7.1.2 URL integrity checking
7.1.3 Claim ownership for submissions
7.2 Client-side presentation and interaction
7.2.1 Layout styles
7.2.2 A tricky issue
8 A Concrete Example
8.1 View function
8.2 Template
9 Conclusion
Chapter 1
Introduction
In this project, we will develop a web based course management system for the School of Computing Science at SFU. The system mainly includes the following four modules: grades, marking, assignment submissions and student grouping.
The main purpose of this project is to use a modern website development tool to come up with an integrated system that has all its parts functioning together. The current system has the first three parts mentioned above, but they were not designed together in the first place, so in the design phase we consider the system as a whole and the interactions between the four modules. Another reason is that many instructors are calling for a mechanism that lets them assign grades to groups for a course project rather than to every individual student every time; they want a fast way to get work done. To support the grouping functionality, we feel it is necessary to design our system from scratch.
We choose Django [2] as the website development platform and Python as the programming language. That means the system will be far easier to maintain than the current one. The good thing about Django is that it supports many types of databases such as MySQL and PostgreSQL [4]. The server-side scripting is also provided by the powerful query functionalities of Django [8]. As it is stated in [18], “Server-side scripting is a web server technology in which a user’s request is fulfilled by running a script directly on the web server to generate responses. It is usually used to provide interactive web sites that interface to databases or other data stores.” While we can choose one type of database during the development period, we can choose another one during production with minimal change to the configuration.
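For illustration, the database choice amounts to a few lines of configuration in settings.py; the values below are placeholders, and the exact setting names depend on the Django version in use.

```python
# Hypothetical settings.py fragment: switching from SQLite in development to
# PostgreSQL in production only requires changing this block.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'coursys',            # database name (placeholder)
        'USER': 'coursys_user',       # credentials are placeholders
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    }
}
```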
Now I will give a brief overview of the background and the most important features of each module.
1.1 Background
SFU’s websites have been an important part of daily campus life in each semester. As a student, you have concerns as to which course you will take and who the instructor and TA’s are for the course. You may also want to get informed on what grades you get for assignments and exams. As a faculty member, you are concerned with the students in your courses, and how to compute and assign grades to a student quickly. You may also have concerns on who are in the same group for a course project and what comment you want to give each student or group to justify the grades they deserve. Among all this information, the following are the fundamental pieces of information in the system: the information about students and faculty members; the information about the courses offered in each semester; the memberships of each course offering and each member’s relationship to this course offering (i.e., as an instructor, TA or student). This core data of the system serves as a foundation for the grades, marking, submissions and grouping modules to work together.
1.2 Grades module
1.2.1 Main functionalities
For a student, the grades module is like the current grade-book used by the School of Computing Science at SFU [13]. A student can view the courses he/she is enrolled in and the activities of each course, the grades he/she gets on them and the performance distribution of the class (e.g., by histograms).
For an instructor, he/she can manage the activities by adding and editing them. These changes are exposed to students as well.
An activity is a general term here. It could mean:
- A numeric activity, which is typically an assignment or exam and is given a numeric grade.
- A letter activity, which is mainly used for final performance evaluation and is given a letter grade.
- A calculation activity that contains a formula used in calculation. For example, by giving assignments, midterms and final exams different weightings, an instructor can specify a formula to calculate the final numeric grade for every student. Students do not need to submit any deliverables for a calculation activity because it is used by the instructor to quickly calculate grades for students. According to the distribution of the final numeric grades, the instructor can decide cut-off values to give the final letter grade to every student. An instructor can also set an activity to be a group activity (this does not apply to calculation activities). For a group activity, students submit deliverables by groups and can be marked by groups.
1.2.2 Module interaction
Activities information is available to the marking, submissions and grouping module. The numeric marks will be assigned through the marking module.
1.3 Marking module
1.3.1 Main functionalities
The marking module deals with how to give numeric marks to students in a course on each numeric activity. A mark can be assigned directly to an individual student, or to a group if the activity is a group activity. If one activity has several logically separated components, an instructor can give a mark for each component with feedback. To save time, we introduce common problems, which are the mistakes commonly seen in students’ submissions. An instructor or TA can configure common problems and use the descriptions of these common problems as comments when marking. Also, for each mark given, an instructor or TA can provide additional information such as a mark adjustment (due to various reasons), a late penalty and a file attachment.
Marking will be initiated by clicking a link beside the submission entry of a student’s work or a group’s work. There will also be an indicator beside it saying whether it has been marked or not. We also keep records about the history of who gives whom what mark for which activity and when. These records can be investigated when there is an issue or confusion on a particular mark.
Another handy function is that an instructor will be able to copy the course setup
(i.e., the basic information of all activities together with their marking components and submission components) from one course in some semester to another in the subsequent semester.
1.3.2 Module interaction
Information about numeric activities comes from the grades module while the marks given in the marking module are stored in the grades module and can be viewed through grade-book. For marking groups, grouping membership information comes from the group module.
1.4 Submissions module
1.4.1 Main functionalities
Students can submit their assignments or deliverables in multiple configurable components. Each component has a type configured by the instructor or TA’s. Like the current submission server, only zip/rar/tgz file formats are allowed and students are able to override their previous submissions with newer ones.
1.4.2 Module interaction
A submission can be tagged as not graded or graded. This requires information from the grades module. Submissions can be made by students for a group where they belong to, which needs information from the grouping module. This module also deals with the marking ownership of every submission to avoid instructors and TA’s marking the same thing at the same time without being aware.
1.5 Grouping module
1.5.1 Main functionalities
There are two ways to form groups. One way is that the instructor assigns students to groups. Another way is that students create groups spontaneously and one group member can invite other students to join. While the membership assigned by instructor takes effect right away, a student who is invited by other students can decide whether to accept the
invitation. When the group is created, the group creator can specify which group activities to associate with this group (although groups in most courses do not change during the whole semester and apply to all group activities). A student can submit assignments on behalf of his/her group and can be marked through the group.
1.5.2 Module interaction
As mentioned in the above sections, it provides grouping information to all the other modules.
1.6 Overall goal of modules design
In summary, the goal is to decouple these modules so that they can be developed almost in parallel, with integration ongoing along the way as well. These practices accord with the idea of the Scrum development model. As a brief introduction (more on this can be found in [19]), “Scrum is an agile process for software development. With Scrum, projects progress via a series of iterations called sprints.” We normally add relatively small features to different modules steadily with minor worry about the integration.
The rest of the report is organized as follows. In Chapter 2, the important technical terms and concepts in our project are introduced. From Chapter 3 on, the discussion is basically around the design and implementation of the marking module. Chapter 3 analyzes all the use cases in the marking module. The design of the data models in the marking module is described in Chapter 4. Handling web requests and generating web responses are the topics of Chapters 5 and 6, respectively. Chapter 7 presents a couple of specific issues including security and interactive user interface design. In Chapter 8, a concrete example of a view is given for your reference. Finally, Chapter 9 concludes this report.
Chapter 2
Preliminaries
In this chapter, I am going to introduce some technical terms in our project. Basically, we group them into three layers [2]: database layer, functional layer and presentation layer. It is natural to do it in this way. First, data models (or table schemas) in the database can be separately defined and it should be well defined before the details of system logic can be worked out. The functional layer is about how to implement the use cases or the system logic. It includes how to process user web requests and how to interact with the database. The presentation layer is loosely connected with the other two. It focuses on how to display the information passed from the functional layer to users. It can change the way it displays things with little effects on the other two layers. However, the presentation layer is very important because it is the layer users directly interact with. The user interfaces should be clear and easy to use.
2.1 Database layer
In Django, each module in our system is called an application. In each application, we define all the relational table schemas in this module, which are called models [7]. We can retrieve from the database a single object representing one row in a particular table. We can also retrieve a series of objects satisfying certain conditions we specify. These queries are done by using queryset that initially contains the information about the query. It will hit the database and be evaluated only when we actually need the results of it [8]. While we could still use raw SQL to communicate with the database, queryset already provides a strong API that we can use in almost all situations. It offers a quick approach to make a query
that even involves complex relationships between tables (e.g. foreign key or many-to-many relationships).
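The sketch below, with invented model and field names, shows the flavour of a model definition and a queryset; the query is only sent to the database when the queryset is actually iterated.

```python
from django.db import models

class Activity(models.Model):
    course = models.ForeignKey('Course')                  # relationship to a course offering
    title = models.CharField(max_length=30)
    description = models.TextField()
    max_grade = models.DecimalField(max_digits=8, decimal_places=2)

# Building the queryset does not hit the database yet.
assignments = Activity.objects.filter(title__startswith='Assignment')

for a in assignments:                                     # the queryset is evaluated here
    print a.title, a.max_grade
```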
2.2 Functional layer or view layer
The view layer is where the actual system logic takes place. View here may sound a little misleading. It not only handles what data to present to a user after the user makes an HTTP GET request, but also handles what data to save to the database after a user submits a form with an HTTP POST request (an HTML form on a web page allows a user to input data that is sent to the server for processing). After handling a GET request, the view passes to the presentation layer the information to show to the user. The information is put into a context object before being passed to the presentation layer. After handling a POST request, if the data can be validated, it saves updates to the database and then redirects to another page which shows a success message. If the data has errors, the view will pass the form with invalid fields and error messages to the presentation layer so that the user can try to correct the errors and submit again. Therefore, we can see the view layer is a data processing layer between the database and the presentation layer.
There is a mapping between URL’s and views. Each view function is responsible for handling the requests targeted at one particular type of URL. For example, a view function can respond to a user request to http://server_base/course_id/new/activity.
Apparently, not every user has the right to access all the URL’s, so we also need an authorization mechanism. We use the SFU authorization server as our authorization middleware, meaning login and logout actions happen there before a user could go into our system. On the other hand, within our system, some view functions would be run only when the user is authorized to access the corresponding URL. We put special functions called decorators [12] to do the job before running that view function.
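A minimal, hypothetical sketch of the URL-to-view mapping and a protecting decorator is shown below; the real system uses SFU's authorization middleware and its own course-staff decorators, so the names here are illustrative only and the URL-configuration style follows older Django versions.

```python
from django.conf.urls.defaults import patterns, url      # Django 1.x style URL configuration
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required                        # decorator runs before the view function
def new_activity(request, course_id):
    # ... validate POST data, update the database, redirect on success ...
    return HttpResponse("new activity page for course %s" % course_id)

urlpatterns = patterns('',
    url(r'^(?P<course_id>\w+)/new/activity$', new_activity),
)
```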
To test our system, Django provides a unit testing environment based on the Python unit testing library [16] in which we can write unit tests for each module. We can test our design of each relational table in the database to verify if it works correctly. We can also emulate a user request and verify if the Html response is valid and its contents are as expected.
2.3 Presentation layer
The presentation layer is based on the customized Html format files called templates. As mentioned earlier, that the view function passes the context object to the presentation layer. Usually the context object contains a bunch of variables including single objects and aggregation objects like lists, sets and dictionaries which are basic data types in Python [17]. The fields and methods of these variables can be referenced in the template.
The Django template API also provides built-in tags to make it more convenient to write HTML code [10]. For instance, it provides an if-else clause and a for loop. These seem like dynamic features, but the flow behind the scenes is that the template will be processed by Django (e.g., the for loop is expanded and all variable references are resolved) before the final HTML file is generated and passed to the browser.
For presenting a model object, there is one very useful facility called a model form. Suppose we have defined an activity model class with title and description fields in it. Then we can bind to an activity object a model form that will be rendered in HTML with a textbox and text-area in it. The HTML code for the form is generated automatically by Django. Based on that, we can also define our own filters to display our form with different pieces of HTML code.
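Continuing the hypothetical Activity model sketched earlier, a model form might be defined and used roughly as follows (names are illustrative only):

```python
from django import forms

class ActivityForm(forms.ModelForm):
    class Meta:
        model = Activity                     # the illustrative model defined earlier
        fields = ('title', 'description')    # rendered as a textbox and a text-area

form = ActivityForm(instance=some_activity)  # bind the form to an existing object
html_fragment = form.as_p()                  # Django generates the HTML for the fields
```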
To design the layout, we use CSS (Cascading Style Sheets) files to change our web pages’ layout by assigning differentiating attributes (i.e., id and class) to HTML elements. We use external jQuery [15] libraries such as DataTables to render table data. DataTables has comprehensive built-in features like sorting, searching and paginating\(^1\), which makes the web pages more interactive.
2.4 Division of responsibilities
As our project leader, SFU Computing Science faculty member Greg Baker is basically responsible for coordinating the four modules and the overall quality assurance. He also lays the database foundation, implements notification and news feed functionalities and deploys the system onto real servers. The four modules of this project are realized by different SFU Computing Science students. SFU Computing Science student Vincent Zhao is responsible for the grades module. SFU Computing Science students Youyou Yang and Jiangfeng Hu take charge of the submissions module. SFU Computing Science students Yiran Zhou and Xiong Yi are responsible for the grouping module. Finally, I am responsible for designing and implementing the marking module.
---
\(^1\)Pagination here means consecutive numbering to indicate the proper order of the entries in a table to show
Before I go into the details of the marking module starting from the next chapter, here are some clarifications worth mentioning. In the following chapters, we will use course *staffs* to mean instructors or TA’s that can do administrative work on marking. The *activities* in the following discussion will only refer to *numeric activities* because only numeric activities can be marked. I will use the words *grade* and *mark* interchangeably to mean the same thing. The word *marking record* will also appear very often. A *marking record* contains not only information about the mark itself (e.g., the mark value, late penalty, file attachment), but also the context information including by whom and when the mark was given.
Chapter 3
Use Cases Analysis
Use cases should be analyzed before defining data models and designing user interfaces, so in this chapter, we are going to see the details of each use case in the marking module. I also selectively include some figures of my implementation results. Note that the data shown in these figures is not real. The main use cases in the marking module fall into two general categories: activity configuration and marking [1].
3.1 Activity configuration
As I have mentioned in the introduction, instructors usually want to divide an activity into components, giving each of them some points. I will call them marking components or simply components if there is no ambiguity. For each component, the instructor may want to define a bunch of common problems found in student’s submissions so that the descriptions of common problems can be reused in the comments or feedbacks given to a student about how he/she does on that component.
3.1.1 Add, edit or delete marking components
By following a link from the activity information page, the staff can go to a page containing all the components of that activity, edit or delete the ones that already exist and add new ones by inputting its title, description and the max mark out of the total mark of the activity.
The staff can also change the relative order of the components (i.e., put the more
important ones in front). In other words, the positions of the components can be modified. The components will be presented in a table with each row representing one component. By clicking the arrows on the left of each row, a staff can swap any two adjacent components. (A similar feature is provided for reordering activities on a course information page in the grade-book as well.)
3.1.2 Add, edit or delete common problems
By following a link from the activity information page, the staff can also go to a page showing all the common problems of the activity. He can edit or delete the ones that already exist. He can also add a new common problem by selecting a component of this activity to associate it with and specifying its title, description and the associated penalty. The student who has this problem may or may not be penalized to the extent suggested; these penalty values are not enforced but merely suggestive.

Figure 3.1: Common problems configuration
As shown in Figure 3.1, for each common problem, whenever you select the associated component, it tells you the max mark of the component (at the corner below the drop-down box) and you should specify a penalty value no higher than this value. The user interface for configuring the marking components is similar to this one.
These configurations are preparations for marking to be conducted, because we will see
that typically staff can mark an assignment component by component and specify which common problems a student (or a group) has in the submission.
3.1.3 Copy course setup
An instructor can copy the setup of one course to another course. Normally, this is done for the same courses in different semesters. Setup here is a general term, which includes all the activities (numeric or letter) in a course. For each numeric activity, all its marking components, common problems and submission components will also be copied. This functionality saves a lot of time for instructors. We can call the course that the instructor is going to setup the target, and the one whose setup will be copied the source.
There is an uncommon but important case here: if some activity in the target already has the same name (or short name) as that of any activity in the source, we will warn the user that he/she should rename that activity in the target in order to avoid it being overwritten by the one in the source.
In Figure 3.2, we see that the instructor ggbaker wants to copy the setup from CMPT 165 D100 (Fall 2009) to CMPT 165 D1 (Spring 2011). There are five activities in the source setup. However, the target setup already contains Assignment 1, which conflicts with Assignment 1 in the source setup. The warning text recommends that the instructor rename this activity in the target setup to resolve the naming conflict.
3.2 Marking
3.2.1 Mark for one student or one group
On the activity information page in the grade-book, the staff is presented with a table showing all the students, with their marks if they have been graded or a no grade sign otherwise. The staff can pick a student who has not been graded yet and click an icon on that row to enter the marking page. On this page, all the components are shown side by side with their title and an array of common problems if any. The input fields are the comment and the mark value. There are also fields where the staff can input additional information, including a late penalty (in percentage), a score adjustment with its reason, an overall comment and a file attachment. Except for the mark values, all other fields are optional (i.e., can be left blank).
Figure 3.3: Marking component by component
Figure 3.3 shows the user interface for marking Assignment 1. There are three components called Part 1, Part 2 and Integration. Each component will be given a mark and comment. At the bottom, the instructor can input additional information into those fields (they are not completely shown here).
If the activity is a group activity, all these can be done for a group, too. The members
in a group share the mark information. Each of them sees the marks given to their group when logging on to grade-book. The interface for showing group grades and for picking a group to mark has a two level list style. The first level contains the list of groups and below each group there is a second level containing its group members. The staff can click an icon to show or hide the second level.
3.2.2 Give marks to all students
The above is the usual way a staff gives a mark to a particular student or group, where marks are given to the individual components of the activity. Sometimes, however, a staff wants to bypass the components and give a total mark to a student or group directly to save time. He can do this by clicking a Mark All button, which leads to a table containing a row for every student or group with a text box on each row. He can enter a mark into a text box or leave it blank (if he decides not to mark the student or group on that row), then submit the whole form.
3.2.3 Import/export all students’ marks
In the previous use case, if a staff has at hand a CSV (comma-separated values) file containing the students’ grades, the data can be imported. The contents of the file must follow a simple format: each row starts with a student’s user-id or student number, followed by a comma and a decimal grade. If the content of the file is error free, the result is presented to the staff for review before submitting. Conversely, a staff can also export all students’ grades for a particular activity into a CSV file.
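As an illustration, here is a minimal sketch of how such a file could be parsed with Python's `csv` module. This is not the system's actual import code: the function name, return format and error messages are invented for this example.

```python
import csv

def parse_grade_csv(uploaded_file):
    """Each row: <user-id or student number>,<decimal grade>."""
    rows, errors = [], []
    for lineno, row in enumerate(csv.reader(uploaded_file), 1):
        if not row:
            continue                                   # skip blank lines
        if len(row) != 2:
            errors.append('Line %d: expected two columns' % lineno)
            continue
        ident, grade = row[0].strip(), row[1].strip()
        try:
            rows.append((ident, float(grade)))
        except ValueError:
            errors.append('Line %d: "%s" is not a number' % (lineno, grade))
    return rows, errors
```

The parsed rows can then be shown back to the staff for review, as described above, before anything is written to the database.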
Figure 3.4 is a screenshot taken while the instructor *ggbaker* is marking all students on Assignment 1, after he has successfully imported a file containing eight students’ marks. He is now reviewing these marks to make sure they are correct. Note that even though the first student *0aaa0* has already been marked, the mark can still be overwritten if needed.
3.2.4 View marking summary
By clicking a Show Details icon beside the student’s grade in the grade-book, a staff can go to a page showing the detailed summary of that marking record. It contains the
following three pieces of information:
- Basic information, including who created the mark and when, and how the mark was given to the student, either individually or via the group the student is in (if this is a group activity).
- Additional information as described in Section 3.2.1. A staff can download the attachment to review.
- The marking details on components, namely the comment and mark given to each component of the activity (this information may or may not be available depending on whether the mark was assigned component by component or only a total mark was given).
Figure 3.5 shows how students’ grades are presented on the activity information page in the grade-book. For students who have not been graded yet, a no grade sign is displayed in the Grade status column, and you can mark them by clicking the paper-and-pencil icon. For those who have been graded, their marks and links to their submissions are shown in the last two columns.
Figure 3.5: Student grades displayed in grade-book
Figure 3.6: Marking summary
By clicking the magnifying glass icon, you can go to the summary page shown in Figure 3.6 below (in this case, it shows the marking summary of student 0aaa0 on Assignment 1). We can see the basic information section and the additional information section. The attachment has been downloaded and the user is going to either open it or save it. Lower on the page are the details of the marking information for each component of Assignment 1. For example, the student got 3 out of 5 for Part 1.
Provided that the grades on the activity have been released, a student can also see the same summary by clicking a Show Details icon beside the grade entry on his/her corresponding grade-book page.
3.2.5 Marking based on previous marks
From the marking summary page, a staff can mark the same student or group again based on this marking record. The fields on the marking page will be pre-filled with the same information as the original marking record, and the staff can then modify them. This functionality is convenient. It may happen that a staff marks the same student or group multiple times, or that different staff members give a student different marks at different times. All this information may sometimes be of interest, so another use case follows.
3.2.6 View marking history of one student or one group
The system keeps a record for every mark given in the past. All the marking records will be shown, saying who gave the mark and when. All the marking records associated with a particular student on one activity are displayed in chronological order. For a group activity, a staff can view all the marking records for one group as well. A student may be graded individually even for a group activity (e.g., the instructor gives him a bonus mark for outstanding performance). In this case, for each marking record we also show by which method the mark was given, namely directly to the individual or via his/her group. Similarly to Section 3.2.4, a staff can view the details of each record by following a link beside it, and can even assign newer marks based on it.
In Figure 3.7, we see that student 0aaa0 has been marked on Assignment 1 three times individually, twice by ggbaker and once by 0grad. The group of this student has also been marked on this activity once by ggbaker. Only the mark shown at the top (given by 0grad) is the valid one because it is the latest. Although this example seems to be uncommon in
real life, it illustrates that the database keeps all the marking records for each student and each group. And for a group activity, a student can still be marked separately from other members in his/her group.
3.2.7 Change grade status
In rare situations, a staff may set the status of a grade to **Academic Dishonest** due to plagiarism, or to **Excused** for reasons such as a student’s sick leave. The staff can change this status after the mark has been given, and the student will see a prominent notice when he/she logs on to the grade-book. Chapter 8 will take this use case as an example to demonstrate some implementation concepts.
These use cases provide a basis for designing our data models in the marking module. Now we move on to the design of these data models to see how they support all these use cases.
Chapter 4
Data Model Description
In Django, we can define a model class to represent a type of object. One model essentially represents a relational table schema in the database with each row being an object of that class.
4.1 What the module needs
To summarize the functionality required by the use cases we have seen, the marking module should know:
(1) How to associate components to an activity.
(2) How to associate common problems to an activity component.
(3) How to give a mark for a student on one activity (and also its components).
(4) How to record the additional information of a marking record (e.g., the late penalty, overall comment, etc.).
(5) How to give a mark to a group on one activity (and also its components).
Note that the marking module uses some data models defined in other modules. Let us now have a look at how the marking module uses these models to realize the functionalities listed above.
4.2 What we have from other modules
First, two data models, CourseOffering and Member, are used in all modules. The membership refers to the role of the person enrolled in the course offering (it could be instructor, TA or student). In the grades module, there are two important data models connected with the marking module. One is NumericActivity (a subclass of Activity), which contains information about a numeric activity such as its name, due date, percentage toward the final grade and its maximum grade. The other is NumericGrade, which stores the grade one student gets on one activity. There can be only one such object for each student and activity in a given course. It contains the student's membership in the course (a foreign key to Member), the activity (a foreign key to NumericActivity), a mark value and a grade status flag such as Not Graded, Graded, Excused, etc., as mentioned in Section 3.2.7.

The group module has the classes Group and GroupMember. GroupMember contains a foreign key to Activity, telling us for which activity the student joined the group. We need to handle two tasks with regard to these two data models. First, given an activity, we want to find all the groups to mark (use case in Section 3.2.2). Second, given a student and an activity, we need to find which group he/she joined, so that we can know whether any mark has been given to him/her via the group on this activity (use case in Section 3.2.6).
4.3 Data model definitions and relationships
To deal with (1) and (2) listed in Section 4.1, the marking module needs to define two classes. The class ActivityComponent has a foreign key to NumericActivity, and the class CommonProblem in turn has a foreign key to ActivityComponent. To record the marks on individual components, we define a class ActivityComponentMark with a mark value, a comment and a foreign key to ActivityComponent.
Although in (3) the marking records are for an individual student while in (5) they are created for a group, both kinds carry the additional information mentioned in (4). Based on this rationale, we define three classes using an inheritance structure: the superclass ActivityMark, which contains only the additional information, and two subclasses, StudentActivityMark and GroupActivityMark, for individual student marking and group marking respectively. Therefore, an ActivityMark object represents a marking record and its specific type is either StudentActivityMark or GroupActivityMark. In StudentActivityMark, the only field is a foreign key to *NumericGrade*: to give a mark to an individual student, it actually saves the mark to the associated *NumericGrade* object. In *GroupActivityMark*, there is a foreign key to *Group* and a foreign key to the activity, basically telling us which group the mark is for and on which activity. To give a mark to a group, it finds all the members of the group and sets the mark in the *NumericGrade* corresponding to each group member. So for each individual student, the valid mark is stored in the *NumericGrade* of that student. Since this object only contains the latest version of the grade, if we want to view the mark history we have to keep a copy of the mark in every *ActivityMark* object, because a new instance of this class (either as *StudentActivityMark* or *GroupActivityMark*) is created every time a mark is assigned.
To view the details of a marking record, we have the fields *created_at* and *created_by* in an *ActivityMark* object; their meanings are obvious from their names. The *created_at* field helps to sort the marking records for one student in time order, so it is used in selecting the valid marking record (i.e., the latest one) and in showing the marking history. When viewing a marking summary, the user wants to know what mark was given to each activity component. Since this information is stored in *ActivityComponentMark* objects, we put in them a foreign key to *ActivityMark*, so that given an *ActivityMark* object we can easily find the marking information for all the components by following the foreign key relationship in reverse.
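The following is a hedged sketch of how these classes might be declared in Django. The field names and options are inferred from the description above and may differ from the actual definitions in the system; NumericActivity, NumericGrade and Group come from the grades and group modules described in Section 4.2.

```python
from django.db import models

class ActivityComponent(models.Model):
    numeric_activity = models.ForeignKey(NumericActivity)
    title = models.CharField(max_length=30)
    description = models.TextField(blank=True)
    max_mark = models.DecimalField(max_digits=5, decimal_places=2)
    position = models.PositiveIntegerField(default=0)          # for reordering (Section 3.1.1)

class CommonProblem(models.Model):
    activity_component = models.ForeignKey(ActivityComponent)
    title = models.CharField(max_length=30)
    description = models.TextField(blank=True)
    penalty = models.DecimalField(max_digits=5, decimal_places=2, null=True)

class ActivityMark(models.Model):
    # additional information shared by individual and group marking,
    # plus a local copy of the mark kept as a history record
    mark = models.DecimalField(max_digits=5, decimal_places=2, null=True)
    late_penalty = models.IntegerField(default=0)               # percentage
    overall_comment = models.TextField(blank=True)
    file_attachment = models.FileField(upload_to='marking', blank=True)
    created_by = models.CharField(max_length=8)
    created_at = models.DateTimeField(auto_now_add=True)

class StudentActivityMark(ActivityMark):
    numeric_grade = models.ForeignKey(NumericGrade)

class GroupActivityMark(ActivityMark):
    group = models.ForeignKey(Group)
    numeric_activity = models.ForeignKey(NumericActivity)

class ActivityComponentMark(models.Model):
    activity_mark = models.ForeignKey(ActivityMark)
    activity_component = models.ForeignKey(ActivityComponent)
    value = models.DecimalField(max_digits=5, decimal_places=2)
    comment = models.TextField(blank=True)
```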

All these relationships are shown in Figure 4.1. In particular, as shown by the bold arrows, we can see that for both individual marking and group marking the mark values are always
finally saved to the **NumericGrade** objects. However, because a **NumericGrade** object only holds the mark value assigned most recently, the **ActivityMark** object also keeps a local copy of the mark value as a history record.
Data models are crucial; they are the backbone of the whole marking module. With all these model classes defined, we now know what the tables in our database will look like. When user web requests arrive, we must consider how to interpret them, which queries to make on the database, and which data to process and save back to the database. These tasks are performed in the view layer.
Chapter 5
Making Requests
In this chapter, I am going to talk about the characteristics of HTTP requests. As we know, one request is always for one URL and it either gets data from the server or sends data to the server.
5.1 URL design
Recall that Django maintains a one-to-one mapping from URLs to views. To design neat and meaningful URLs, we usually insert meaningful words into them. For example, /students or /groups is inserted as a portion of the URL to distinguish marking for individual students from marking for groups. These generic words do not refer to any particular resource or object, so we also need object identifiers in the URL so that a view can retrieve the objects of interest from the database.
5.1.1 Use identifiers in URLs
Most of the time, identifiers should be as readable as possible. We use a Python package called autoslug that produces a unique identifier (a slug) from a data model object (essentially a database row), and we specify which fields are used as its source. For instance, in the CourseOffering data model, we use all its fields as the source to populate a unique string, which gives us 1101-cmpt-165-d100 for CMPT 165 D100 in Spring 2011. To identify an activity belonging to some course offering, we use the slugs of both the course offering and the activity in the URL, like http://server_base/1101-cmpt-165-d100/a1/marking/.
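As a hedged illustration (the actual CourseOffering fields differ), a slug field populated by django-autoslug might look like this:

```python
from django.db import models
from autoslug import AutoSlugField

class CourseOffering(models.Model):
    subject = models.CharField(max_length=8)      # e.g. "CMPT"
    number = models.CharField(max_length=8)       # e.g. "165"
    section = models.CharField(max_length=8)      # e.g. "D100"
    semester = models.CharField(max_length=4)     # e.g. "1101" for Spring 2011

    def autoslug(self):
        # combine the identifying fields into one readable string
        return '-'.join((self.semester, self.subject.lower(),
                         self.number, self.section.lower()))

    slug = AutoSlugField(populate_from=autoslug, unique=True)
```

With such a field, a slug like 1101-cmpt-165-d100 is generated automatically when the object is first saved.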
5.1.2 View function parameters
Every type of identifier (e.g., slug, id, user-id, etc.) usually has one fixed regular expression pattern. This is an important property because the mapping from URLs to views specifies which URLs match a certain pattern. We can also specify which portion of the URL should be a parameter, meaning it is extracted and passed as a parameter to the view that the URL maps to. In almost all cases in the marking module, we need at least two portions of the URL as parameters: the course slug and the activity slug. Typically, when these two are passed to a view function, it first uses them to query the database for the CourseOffering and NumericActivity objects.
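For illustration, a hypothetical URL pattern (written in the Django 1.1-era URL configuration style used by this project) that extracts the two slugs as named view parameters might look like the following; the view name is invented for this example.

```python
from django.conf.urls.defaults import patterns, url

urlpatterns = patterns('marking.views',
    url(r'^(?P<course_slug>[\w-]+)/(?P<activity_slug>[\w-]+)/marking/$',
        'manage_activity_components', name='manage_activity_components'),
)
```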
5.1.3 URL parameter list
A view function may be invoked in different scenarios. For example, a user may want to see the marking summary either of a marking record shown in the marking history, or of the marking record associated with the current valid mark (i.e., the most recently created one). In my implementation, the requests go to the same view, called marking_summary, in both cases. For the first case only, we could simply put the id of the ActivityMark object into the URL, but this obviously does not work for the latter case because we do not know the id beforehand. So instead, we append a URL parameter list (here with only one parameter), like http://base/1101-cmpt-165-d100/a1/markings/students/0aaa0/?activity_mark=5, to get the marking record whose id is 5. The URL-to-view mapping ignores this portion; once the correct view is invoked, it examines the URL parameter list in the request object (which is always the first parameter passed to view functions): if an id is present it fetches that ActivityMark object, otherwise it finds the most recently created ActivityMark object according to the created_at field.
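A minimal sketch of this logic could look like the helper below; the helper name and query details are assumptions, not the system's actual code.

```python
from django.shortcuts import get_object_or_404

def select_activity_mark(request, num_grade):
    """Return the ActivityMark identified by ?activity_mark=<id>, or the latest one."""
    act_mark_id = request.GET.get('activity_mark')
    if act_mark_id is not None:
        return get_object_or_404(StudentActivityMark, id=act_mark_id,
                                 numeric_grade=num_grade)
    marks = StudentActivityMark.objects.filter(numeric_grade=num_grade)
    return marks.latest('created_at')   # raises DoesNotExist if never marked
```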
5.1.4 URL referencing
A good practice for obtaining the URL that maps to some view is to use the function reverse and pass the name of the view function as the parameter. The rationale is that we may change URLs much more often than view names, so we should avoid hard-coding URLs as far as possible.
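For example (the exact view path is an assumption), instead of hard-coding a URL we can write:

```python
from django.core.urlresolvers import reverse   # django.urls.reverse in modern Django

url = reverse('marking.views.marking_summary',
              kwargs={'course_slug': course.slug,
                      'activity_slug': activity.slug,
                      'userid': userid})
```

If the URL pattern later changes, this call keeps returning the correct URL without any code change at the call site.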
5.2 Http GET and POST requests
There are two basic types of Http requests. One type asks for data from the server and the other sends data to the server. We call them Http GET and POST requests respectively.
5.2.1 Http GET
In the marking module, a GET request only carries information in its URL and its parameter list. Generally, we first need to do database queries to retrieve the objects we need. One complicated query in the marking module is to get all the marking records for one student on one group activity. Since the marks may have been assigned directly to that individual student or via the group he/she is in, we need to query the records of both data models, StudentActivityMark and GroupActivityMark. Assuming we have the NumericActivity object activity and the Member object student_membership, the queries might be:
```python
num_grade = NumericGrade.objects.get(activity=activity, member=student_membership)
marks_to_individual = StudentActivityMark.objects.filter(numeric_grade=num_grade)

try:
    group_mem = GroupMember.objects.get(student=student_membership,
                                        activity=activity, confirmed=True)
    marks_via_group = GroupActivityMark.objects.filter(group=group_mem.group)
except GroupMember.DoesNotExist:
    marks_via_group = GroupActivityMark.objects.none()   # no group for this activity
```
Here, we use the method **get** when we expect exactly one object to be returned from the table, and the method **filter** to get any number of objects. The keyword arguments specify the conditions that each qualifying row in the table must satisfy simultaneously. The variable **marks_to_individual** will contain all the marking records created for this student individually on this activity. If the student has joined a group for this activity, the variable **marks_via_group** will contain all the marking records for that group on this activity.
After these objects have been retrieved, the view will organize and encapsulate them into an Http response that will be rendered with a Django template. We will talk more about this process in the next chapter.
A GET request can ask for file data, too. As mentioned in Section 3.2.4, this happens when a user downloads the file attachment of a marking record. The view acts differently for this type of request: it writes the data of the file to the Http response object and sets special headers: `Content-Disposition='attachment;filename=[name of the file]'` and `Content-Length='[the size of the file in bytes]'`. For exporting students’ grades into CSV files, we set the Http response object with `mimetype='text/csv'` and use the Python `csv` package to write data to the Http response object [3]. Rather than rendering a new Html web page, the browser pops up a dialog prompting the user to save or open the downloaded file.
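A hedged sketch of the CSV export is shown below; the grade field names (`value`, `member.person.userid`) and the view name are assumptions for illustration only.

```python
import csv
from django.http import HttpResponse

def export_grades_csv(request, course_slug, activity_slug):
    response = HttpResponse(mimetype='text/csv')          # content_type= in newer Django
    response['Content-Disposition'] = 'attachment; filename=%s_grades.csv' % activity_slug
    writer = csv.writer(response)
    for grade in NumericGrade.objects.filter(activity__slug=activity_slug,
                                             activity__offering__slug=course_slug):
        writer.writerow([grade.member.person.userid, grade.value])
    return response
```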
5.2.2 Http POST
Other than the information in the URL and its parameter list, a POST request also carries the input contents submitted with an Html form by the user. For instance, you can submit one text input, one drop down selection and an uploaded file (e.g., by clicking a `Browse...` and selecting a file) all together in an Html form. To allow file uploading, we need to set `enctype='multipart/form-data'` for the Html form element [5]. A POST request object has a dictionary-like dataset of the non-file input contents called `POST` and another dataset called `FILES` for file data [9]. We use these two datasets to construct a Django form and then get the value of a field in the form with its name as the key. We will talk more about the Django form in the next chapter.
Usually, it is good practice to redirect to an appropriate page after a form has been submitted successfully [11]. One problem arises: we may reach the same page from different pages, so how can we return to the correct one? To solve this, we append a parameter like `?from_page=pageA` to the URL, so the view can check what the previously active page was and return to it. This technique is commonly applied in the grades module as well.
A POST request usually corresponds to a GET request, and they are addressed to the same URL. A GET request is sent first to get a web page asking the user to input data; when the Submit button is clicked, a POST request is sent to the same URL. Hence, the view that the URL maps to needs to handle both types of requests. It determines the request type from the `method` attribute of the request object. For both types of requests, the view first retrieves the objects identified by the URL from the database. Then, for a GET request, the view creates a Django form (or forms) that will be presented to the user for input. For a POST request, the view constructs a Django form (or forms) from the data contained in the request object, analyzes them and saves updates to the database if needed. Chapter 8 will give a detailed example of this process.
Chapter 6
Generating and Rendering Responses
We have seen how requests are sent to a view. Now let us see how responses are generated by a view and how the information in the response is passed to and rendered with Django templates. In particular, we will see how the Django model form and formset provide a fast solution for presenting objects from the database with Html forms.
6.1 Generating responses using model form and formset
As discussed in Section 2.3, we define a form with various types of fields, which Django automatically renders as a group of Html input elements (or widgets). Most conveniently, we can associate a model form with a data model: the fields of such a form correspond to the fields of the model, and we can choose which model fields are actually used in the form. Although the input widget for each field can be overridden, in the marking module I just use Django's defaults because they suffice.
In the marking module, I define ActivityMarkForm, a model form for ActivityMark, and ActivityComponentMarkForm, a model form for ActivityComponentMark. You may wonder why I do not define model forms for ActivityComponent and CommonProblem; the reason is that there is an even more powerful tool: the model formset. A formset, as its name suggests, is a collection of forms. For a model, we can use a model formset factory to define a formset class for that model. Basically it is a collection of model forms, each one corresponding to one object of the model. We can specify which fields of the model to include, the base class of the formset if desired (we will see this usage shortly in the next section), and a *queryset* (a list of the model objects fetched from the database satisfying certain conditions) with which to initially fill the formset. This is very useful: if we want to display just the list of components belonging to the activity *Assignment 1*, we can pass in a queryset restricted to the components of *Assignment 1*. We can also pass in the parameter *extra* to say how many empty rows to display. In the marking module, I use this technique for adding new activity components and common problems.
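A hedged sketch of how such a formset might be built for activity components is shown below; the field names and the `numeric_activity` lookup are assumptions based on the data-model chapter.

```python
from django.forms.models import modelformset_factory

ComponentFormSet = modelformset_factory(
    ActivityComponent,
    fields=('title', 'description', 'max_mark'),
    extra=3,                                  # three blank rows for adding new components
)

formset = ComponentFormSet(
    queryset=ActivityComponent.objects.filter(numeric_activity=activity))
```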
6.2 Validation
Validation is performed when a user sends an Http POST request containing the data of a form or a group of forms (e.g., a formset). We have to validate that the data is legal and can be saved to the database safely. Model forms and formsets do their default validation automatically according to the restrictions on their associated model (e.g., you cannot enter letters into a text field representing a decimal number field of the model). We can define extra checking logic for a single model form by overriding its method called clean (e.g., we can rule out negative numbers for the maximum mark of a component, which would otherwise be a legal decimal value). This is validation at the form level [6].
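For example, a hedged sketch of form-level validation that rejects a negative maximum mark could look like this (the form and field names are assumptions):

```python
from django import forms

class ActivityComponentForm(forms.ModelForm):
    class Meta:
        model = ActivityComponent
        fields = ('title', 'description', 'max_mark')

    def clean(self):
        cleaned_data = super(ActivityComponentForm, self).clean()
        max_mark = cleaned_data.get('max_mark')
        if max_mark is not None and max_mark < 0:
            raise forms.ValidationError('The maximum mark cannot be negative.')
        return cleaned_data
```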
6.2.1 Formset validation
There is another level of validation: formset-level validation. We need to validate properties that the forms in the formset, taken as a whole, must not violate. For example, each of the components of one activity should have a unique title. With each model form in the formset associated with a component, even if every single form is valid, the titles of two or more components may still conflict. This validation is defined in the formset class, and that is the very purpose of the formset base class: we define the validation logic in the base class and, by passing it to the model formset factory, we generate a model formset that inherits this extra validation ability.
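A hedged sketch of such a base class, enforcing unique component titles, might be:

```python
from django import forms
from django.forms.models import BaseModelFormSet, modelformset_factory

class BaseComponentFormSet(BaseModelFormSet):
    def clean(self):
        super(BaseComponentFormSet, self).clean()
        titles = []
        for form in self.forms:
            # forms that failed their own validation have no cleaned_data
            title = form.cleaned_data.get('title') if hasattr(form, 'cleaned_data') else None
            if not title:
                continue
            if title in titles:
                raise forms.ValidationError('Component titles must be unique.')
            titles.append(title)

ComponentFormSet = modelformset_factory(ActivityComponent,
                                        formset=BaseComponentFormSet,
                                        fields=('title', 'description', 'max_mark'))
```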
6.2.2 Typical work flow
The typical work flow of a view handling an HTTP POST request is as follows. If the validation passes, we process and save data to the database if needed, and finally redirect to an appropriate page where a success message is displayed. If there are any warning messages, they are displayed too (e.g., a mark value higher than the maximum mark was entered, which may or may not be the user's intention). If the validation fails, the erroneous forms are passed back in the response and the user sees an error message indicating that the submission failed. There will also be a specific error message beside each invalid form field so that the user knows how to correct it.
6.3 Rendering response with templates
Let us now look at how a view passes a response for rendering and how a template uses the contents of the response. Templates are HTML files, but because they contain Django-specific tags they need to be processed before the browser can understand them.
6.3.1 Tags and variable references
A view produces a dictionary-like context object encapsulating all the information the template needs. The key-value pairs in the dictionary are the variable names and the actual variables referenced in the template. In the template, these variables as well as their fields and methods (only those with no parameters) can be referenced. Tags in a template, like for loops and if-else clauses, are surrounded by `{%` and `%}`. References to a variable or its methods or fields are surrounded by `{{` and `}}`. Django interprets and executes the tags, replaces each variable by its string representation (i.e., the output of its str method), replaces variable field references with their values and replaces variable method references by their output. The resulting HTML is then handed to the browser. For example, when displaying the list of components of an activity, the template loops over the list variable and, for each element, accesses its title, description and maximum mark. Django expands the loop and replaces the references to the fields with their actual string values; the browser then receives a regular HTML file.
6.3.2 Filters for displaying forms
There is another concept called a *filter* that we can apply to a variable in templates. A filter is just a Python function we define that takes a variable and returns the Html code for how it should be displayed. We have defined several filters for displaying a form. A form can be displayed as a list with each field being an item (within `<ul>`/`</ul>`) or as a row in a table (within `<tr>`/`</tr>`) with each field being a table cell. Furthermore, the filter also displays a *cross* icon beside any field in a form that has an error such as "this field is required". We use filters to avoid repeating the same Html code in multiple templates, which would be time consuming to write and difficult to maintain.
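A hedged sketch of such a filter is shown below; the generated markup and the icon path are invented for illustration and are not the system's actual output.

```python
from django import template
from django.utils.safestring import mark_safe

register = template.Library()

@register.filter
def display_form(form):
    """Render a Django form as table rows, flagging fields that have errors."""
    rows = []
    for field in form:
        error_icon = '<img src="/media/icons/cross.png" alt="error" />' if field.errors else ''
        rows.append('<tr><th>%s</th><td>%s %s</td></tr>'
                    % (field.label_tag(), field, error_icon))
    return mark_safe('\n'.join(rows))
```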
Chapter 7
Other Specific Topics
7.1 Security
7.1.1 Decorators for authorization
We have seen that a course staff can give marks to students and view their marking history. Whenever a user makes a request to some URL, the user's role in the course must be checked: he/she must be the instructor or one of the TAs of that course. The course can be queried using the course slug portion of the URL. This is the basic authorization mechanism and it is implemented by Python decorators which we put before the views. The mechanism of decorators is as follows: "the original function is compiled and the resulting function object is passed to the decorator, which does something to produce a function-like object that is then substituted for the original function" [12]. That essentially means the decorator wraps the view function so that the authorization-checking logic runs first. Another way to think of it is that the decorator has its own body containing the authorization check, and that body is executed before the view is entered. In our system, all modules share the same set of authorization decorators.
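A hedged sketch of such a decorator follows; the role names and the lookup from the logged-in user to a Member row are assumptions, not the system's actual implementation.

```python
from functools import wraps
from django.http import HttpResponseForbidden

def requires_course_staff_by_slug(view_func):
    @wraps(view_func)
    def wrapper(request, course_slug, *args, **kwargs):
        # the logged-in user must be an instructor or TA of the course named in the URL
        staff = Member.objects.filter(offering__slug=course_slug,
                                      person__userid=request.user.username,
                                      role__in=('INST', 'TA'))
        if not staff:
            return HttpResponseForbidden('You are not a staff member of this course.')
        return view_func(request, course_slug, *args, **kwargs)
    return wrapper
```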
7.1.2 URL integrity checking
Another type of security checking is URL integrity checking. It ensures that the information represented by the URL as a whole is consistent. This checking is implemented in every view function and needs to be done only when the authorization checking has passed. Integrity checking is usually the first step the view performs, and the view continues only after it has passed. For example, when an instructor marks a student's assignment, instead of following links on the web page (which guarantees the correctness of the URL), he may manually type the student's user-id abc12 in a URL like http://server_base/1101-cmpt-165-d100/a1/markings/new/students/abc12. We need to check that the activity does belong to that course and that the student does participate in the course. This usually involves retrieving the CourseOffering, NumericActivity and Member objects with the proper query conditions; if they can all be retrieved successfully, the check passes, otherwise a 'Page Not Found' error with Http code 404 is returned.
7.1.3 Claim ownership for submissions
When a staff wants to assign a mark to a student's electronic submission, he has to first claim ownership of the submission, so that if someone else (i.e., another TA or the instructor) wants to mark this student on the same activity, they will get a warning message saying this student's submission is being, or has been, marked by another staff. This avoids the situation where one staff is marking (or has marked) a submission and another comes along and starts marking without being aware of it; otherwise, whoever finished later would silently override the earlier marking results. The steps of checking, claiming ownership and issuing warning messages are actually implemented in the submission module, and the request is first sent to a proxy view there. If a conflict is found, the user is notified. If there is no conflict, or the user wants to take over the ownership and continue to mark, the request is then redirected to the view for the actual marking in the marking module.
This is an important place where the marking system interacts directly with the submission system. While marking, a staff can click a link to view the submission details, which is convenient for users.
7.2 Client-side presentation and interaction
In designing user interfaces, one principle is that we should present data neatly and interactively to users.
7.2.1 Layout styles
As introduced in Section 2.3, the Datatables library offers a clean style for presenting tabular data with rich interactive features, including searching, sorting and pagination. These features can be disabled individually. For example, when displaying activity components in a table, since there are normally no more than a couple of components for a single activity, I disable the pagination feature. We also use the jQuery UI package, which provides animations and images for styling. In many places, we use a fieldset element to enclose an Html form for a nicer appearance.
The base styles of our system reference the CSS files used by SFU websites, and we add customized layout styles on top of them. We also unify the styles across the whole system by defining classes that are used on different web pages. For instance, we design styles associated with the button class of `<a>` (i.e., the Html link element) for states such as hover and active; this type of link is used across all four modules.
7.2.2 A tricky issue
As mentioned in Section 3.1.1, to reorder activities for a course or marking components for an activity, there is an issue as to how to make it interactive. We display an up arrow and a down arrow in the first cell of each row in the table (each row represents an activity or marking component). When the up arrow is clicked by the user, the current row will be swapped with the previous row; when the down arrow is clicked, the current row will be swapped with the next row.
My first approach was to make these arrows act as links: whenever an arrow is clicked, an Http request with the position information of the two rows is sent to the server. The view corresponding to this URL updates the positions of the two components and returns an Http response showing the new ordering, so the new result is presented when the page is reloaded. While this approach works, it is not satisfactory in terms of user interaction: reloading the web page means the user has to wait for the response, and the page suddenly jumps back to the top once reloaded, which feels very unnatural.
So instead of reloading the page every time, I decided to use an Ajax approach. It sends an asynchronous POST request containing the identifiers of the two rows to the server, and the server does almost the same work as in the first approach. Specifically, this is realized by the Ajax post method of the jQuery library. In a call-back function, I specify what action to take when the server has finished the update (i.e., the server returns a successful HTTP response) [14]. Intuitively, the action is to swap the two rows using the Datatables API (i.e., update one row's content to that of the other and vice versa). I use this method for reordering activities in the grades module. For reordering the marking components, however, I use a slightly different method: once an arrow is clicked, the change happens on the client side only. When the user clicks an Update button, the new position information of all the components is sent to the server in an Ajax POST request and the server updates the ordering. In this case, when the server sends back a successful response, the call-back function pops up an alert window telling the user that the new ordering has taken effect.
With the Ajax approach, both methods avoid reloading the page, but the latter is more responsive because the user does not need to wait for the server to finish processing the request in order to see the rows swap. The trade-off is that the user has to click the Update button in order to save the new ordering on the server.
Chapter 8
A Concrete Example
This chapter walks through a detailed example that is worth studying because it embodies many of the important concepts described in Chapters 5, 6 and 7.
8.1 View function
Let us first have a look at the view function:
```python
1. @requires_course_staff_by_slug
2. def change_grade_status(request, course_slug, activity_slug, userid):
3.     course = get_object_or_404(CourseOffering, slug=course_slug)
4.     activity = get_object_or_404(NumericActivity, offering=course, slug=activity_slug)
5.     member = get_object_or_404(Member, offering=course, person__userid=userid, role='STUD')
6.     numeric_grade = get_object_or_404(NumericGrade, activity=activity, member=member)
7.     error = None
8.     if request.method == 'POST':
9.         status_form = GradeStatusForm(data=request.POST, activity=activity)
10.        if not status_form.is_valid():
11.            error = 'Error found'
12.        else:
13.            new_status = status_form.cleaned_data['status']
14.            comment = status_form.cleaned_data['comment']
15.            if new_status != numeric_grade.flag:
16.                numeric_grade.save_status_flag(new_status, comment)
17.            messages.add_message(request, messages.SUCCESS, 'Grade status for student %s on %s changed!' % (userid, activity.name,))
18.            return _redirect_response(request, course_slug, activity_slug)
19.    else:
20.        status_form = GradeStatusForm(
21.            initial={'status': numeric_grade.flag})
22.    # lines 22-26 are reconstructed from the description below; exact names may differ
23.    student = member.person
24.    context = {'course': course, 'activity': activity, 'student': student,
25.               'status_form': status_form, 'error': error}
26.    return render_to_response('grade_status.html', context, context_instance=RequestContext(request))
```
This piece of Python code is a view function handling user requests for changing a
student’s grade status on some activity as mentioned in Section 3.2.7.
It has four parameters (line 2): the first one is an HttpRequest object and the following
three are identifiers of the course, activity and the student. These identifiers are extracted
from the URL of the request (recall Section 5.1.1). An example URL that maps to this view is http://server_base/1101-cmpt-165-d100/a1/gradestatus/abc12/, with the course's slug (1101-cmpt-165-d100), the activity's slug (a1) and the student's user-id (abc12) in it. We use slugs to identify a course and an activity, while we use the user-id to identify a student.
Line 1 is a decorator for authorization that checks that the user is a staff member of this course. The code of the view runs only after this check has passed.
In the view function, the first step is to fetch the CourseOffering, NumericActivity and
Member objects from the database (lines 3 - 6) using the identifiers. At the same time, by
specifying proper query conditions, it does URL integrity checking to ensure the activity
does belong to this course (line 4) and the student is a member in this course (line 5). Since
we assume the grade status can be changed only after a grade has been given, there should already be a NumericGrade object in the database for this student on this activity, so we obtain it, too (line 6). If any one of these four queries fails, indicating that the URL is inconsistent with the database, we return a 'Page Not Found' error with Http code 404.
As described in Section 5.2, the request is either GET or POST. Recall that for a view handling both GET and POST requests, typically the GET request is sent first, asking for a form where the user can input data, and the POST request is sent once the input is submitted by the user. If it is GET, we switch to line 20. We create a status form object (this is a Django form class we have defined, which contains a drop-down box for selecting the status and a text area
for comment). In line 21, the current status (the flag field of the NumericGrade object)
is passed to initialize the selection. We then construct a dictionary object called context
which contains the course, activity and student objects together with the form object (lines
24, 25). We encapsulate it into the Http response, which will be rendered with the template
called grade_status.html (line 26). Django will resolve the references to the variables in the context object and return the resulting HTML file to the browser.

Figure 8.1 shows the web page that will be presented to the user as the response to the GET request. We can see that it has information about the activity, the student and the current grade status. That means the template makes use of the variables activity and student in the context object. All these are referenced as fields in the templates. The drop-down box and the text area can be submitted as HTML form input data.
If it is POST, the work flow is a little more complicated. First we construct a form object from the dataset contained in the POST request. If the form object passes validation (line 12), we get the two fields (lines 13, 14): the status the user selected and the comment the user input. We then compare this status with the current one; if they are the same we do nothing, because saving to the database is unnecessary. Otherwise, we save the new status (lines 15, 16). The method save_status_flag of NumericGrade saves the update to the database and uses the comment to create an object notifying the student about the status change. We then finish by adding a success message and redirecting to another page (lines 17, 18), where the success message will be shown. If the form validation fails (e.g., the user selected an empty status entry), we set an error message (line 11), which will be shown near the top of the page. Then, as we do for a GET request, we construct the context object and return the same web page grade_status.html, but this time the form object contains errors because we want the user to correct the invalid fields before submitting again.
8.2 Template
Now let us also look at the relevant portion of the template grade_status.html (line numbers are included because the discussion below refers to them):
```html
1.  {% block content %}
2.  <div id="form_container">
3.    <form action="" method="post">
4.      <fieldset>
5.        <legend>Change Grade Status</legend>
6.        {{ status_form|display_form }}
7.      </fieldset>
8.    </form>
9.  </div>
10. {% endblock %}
```
The template inherits from base.html and overrides the content block (lines 1, 10). This part basically shows how we display the status_form passed in via the context object. We use a filter called display_form. It is a Python function we have defined which takes a Django form object as its parameter; the syntax is to put the filter after the form object with a | (pipe) in between. What this filter does is loop over every input field of the form and output its default Html code, plus some styling and its error message (e.g., it appends a red icon) if there is an error with the field. It also adds a Submit button at the end. The filter saves a lot of work because we display many different form objects on many web pages in this manner; with it, we need just one line of code.
To improve the layout, we enclose the contents of the form object in a fieldset element (lines 4, 7) because it gives a nicer style, as mentioned in Section 7.2.1. We in turn enclose this fieldset element in an Html form element (lines 3, 8). By setting its method to post and leaving the action URL empty (line 3), we tell the browser to send a POST request to the URL of the current page whenever the Submit button is clicked. Finally, we enclose the whole Html form element in a div element (lines 2, 9). This div element has a particular id so that it is subject to the styles we specify in our CSS for this id.
Chapter 9
Conclusion
In summary, this report presents the course management web system that our team has been working on during this semester. In particular, it focuses on the marking module of the system, which is the part I mainly contributed to. I am glad that I have implemented all of its functionality.
This project is of great practical value. On one hand, it aims at creating an application that brings convenience to instructors and students in the School of Computing Science in their daily campus life. On the other hand, I hope our experience with Django can be valuable to others. I think that, as an open-source project, Django is a very powerful, fast and clearly structured web development framework.
However, the system still needs to be improved. First, testing is not solid enough: we have done unit tests for each module, but we lack systematic testing plans for integration between modules and for various security issues. Second, in my implementation I have not used the Django cache system, which lets us save dynamic objects and pages to speed up the server's response time. In my opinion, this should be one of the priorities for future extension.
http://docs.djangoproject.com/en/1.1/ref/forms/validation/#ref-forms-validation.
http://docs.djangoproject.com/en/1.1/topics/db/models/#topics-db-models.
http://docs.djangoproject.com/en/1.1/topics/db/queries/#topics-db-queries.
https://gradebook.cs.sfu.ca/.
Eiffel: Efficient and Flexible Software Packet Scheduling
Ahmed Saeed†, Yimeng Zhao‡, Nandita Dukkipati*, Mostafa Ammar†, Ellen Zegura†, Khaled Harras‡, Amin Vahdat*
†Georgia Institute of Technology, *Google, ‡Carnegie Mellon University
Abstract
Packet scheduling determines the ordering of packets in a queuing data structure with respect to some ranking function that is mandated by a scheduling policy. It is the core component in many recent innovations to optimize network performance and utilization. Our focus in this paper is on the design and deployment of packet scheduling in software. Software schedulers have several advantages over hardware including shorter development cycle and flexibility in functionality and deployment location. We substantially improve current software packet scheduling performance, while maintaining flexibility, by exploiting underlying features of packet ranking; namely, packet ranks are integers and, at any point in time, fall within a limited range of values. We introduce Eiffel, a novel programmable packet scheduling system. At the core of Eiffel is an integer priority queue based on the Find First Set (FFS) instruction and designed to support a wide range of policies and ranking functions efficiently. As an even more efficient alternative, we also propose a new approximate priority queue that can outperform FFS-based queues for some scenarios. To support flexibility, Eiffel introduces novel programming abstractions to express scheduling policies that cannot be captured by current, state-of-the-art scheduler programming models. We evaluate Eiffel in a variety of settings and in both kernel and userspace deployments. We show that it outperforms state of the art systems by 3-40x in terms of either number of cores utilized for network processing or number of flows given fixed processing capacity.
1 Introduction
Packet scheduling is the core component in many recent innovations to optimize network performance and utilization. Typically, packet scheduling targets network-wide objectives (e.g., meeting strict deadlines of flows [34], reducing flow completion time [14]), or provides isolation and differentiation of service (e.g., through bandwidth allocation [40, 35] or Type of Service levels [44, 15, 32]). It is also used for resource allocation within the packet processing system (e.g., fair CPU utilization in middleboxes [56, 30] and software switches [33]).
Packet scheduling determines the ordering of packets in a queuing data structure with respect to some ranking function that is mandated by a scheduling policy. In particular, as packets arrive at the scheduler they are enqueued, a process that involves ranking based on the scheduling policy and ordering the packets according to the rank. Then, periodically, packets are dequeued according to the packet ordering. In general, the dequeuing of a packet might, for some scheduling policies, prompt recalculation of ranks and a reordering of the remaining packets in the queue. A packet scheduler should be efficient by performing a minimal number of operations on packet enqueue and dequeue thus enabling the handling of packets at high rates. It should also be flexible by providing the necessary abstractions to implement as many scheduling policies as possible.
In modern networks, hardware and software both play an important role [23]. While hardware implementation of network functionality will always be faster than its corresponding software implementation, software schedulers have several advantages. First, the short development cycle and flexibility of software makes it an attractive replacement or precursor for hardware schedulers. Second, the number of rate limiters and queues deployed in hardware implementations typically lags behind network needs. For instance, three years ago, network needs were estimated to be in the tens of thousands of rate limiters [46] while hardware network cards offered 10-128 queues [4]. Third, software packet schedulers can be deployed in multiple platforms and locations, including middleboxes as Virtual Network Functions and end hosts (e.g., implementation based on BESS [33], or OpenVSwitch [45]). Hence, we assert that software solutions will always be needed to replace or augment hardware schedulers [19, 36, 47, 22, 39]. However, as will be discussed in Section 2, current software schedulers do not meet our efficiency and flexibility objectives.
Our focus in this paper is on the design and implementation of efficient and flexible packet scheduling in software. The need for programmable schedulers is rising as more sophisticated policies are required of networks [27, 50] with schedulers deployed at multiple points on a packet’s path. It has proven difficult to achieve scheduler efficiency in software schedulers, especially handling packets at high line rates, without limiting the supported scheduling policies [47, 50, 36, 47, 19, 22]. Furthermore, CPU-efficient implementation of even the simplest scheduling policies is still an open problem for most platforms. For instance, kernel packet pacing can cost CPU utilization of up to 10% [47] and up to 12% for hierarchical weighted fair queuing scheduling in...
NetIOC of VMware’s hypervisor [37]. This overhead will only grow as more programmability is added to the scheduler, assuming basic building blocks remain the same (e.g., OpenQueue [39]). The inefficiency of these systems stems from relying on $O(\log n)$ comparison-based priority queues.
At a fundamental level, a scheduling policy that has $m$ ranking functions associated with a packet (e.g., pacing rate, policy-based rate limit, weight-based share, and deadline-based ordering) typically requires $m$ priority queues in which this packet needs to be enqueued and dequeued [49], which translates roughly to $O(m \log n)$ operations per packet for a scheduler with $n$ packets enqueued. We show how to reduce this overhead to $O(m)$ for any scheduling policy (i.e., constant overhead per ranking function).
Our approach to providing both flexibility and efficiency in software packet schedulers is twofold. First, we observe (§2) that packet ranks can be represented as integers that at any point in time fall within a limited window of values. We exploit this property (§3.1.1) to employ integer priority queues that have $O(1)$ overhead for packet insertion and extraction. We achieve this by proposing a modification to priority queues based on the Find First Set (FFS) instruction, found in most CPUs, to support a wide range of policies and ranking functions efficiently. We also propose a new approximate priority queue that can outperform FFS-based queues for some scenarios (§3.1.2). Second, we observe (§3.2) that packet scheduling programming models (i.e., PIFO [50] and OpenQueue [39]) do not support per-flow packet scheduling nor do they support reordering of packets on a dequeue operation. We augment the PIFO scheduler programming model to capture these two abstractions.
We introduce Eiffel, an efficient and flexible software scheduler that instantiates our proposed approach. Eiffel is a software packet scheduler that can be deployed on end-hosts and software switches to implement any scheduling algorithm. To demonstrate this we implement Eiffel (§4) in: 1) the kernel as a Queuing Discipline (qdisc) and compare it to Carousel [47] and FQ/Pacing [26] and 2) the Berkeley Extensible Software Switch (BESS) [8, 33] using Eiffel-based implementations of pFabric [14] and hClock [19]. We evaluate Eiffel in both settings (§5). Eiffel outperforms Carousel by 3x and FQ/Pacing by 14x in terms of CPU overhead when deployed on Amazon EC2 machines with line rate of 20 Gbps. We also find that an Eiffel-based implementation of pFabric and hClock outperforms an implementation using comparison-based priority queues by 5x and 40x respectively in terms of maximum number of flows given fixed processing capacity and target rate.
## 2 Background and Objectives
In modern networks, packet scheduling can easily become the system bottleneck. This is because schedulers are burdened with the overhead of maintaining a large number of buffered packets sorted according to scheduling policies. Despite the growing capacity of modern CPUs, packet processing overhead remains a concern. Dedicating CPU power to networking takes from CPU capacity that can be dedicated to VM customers especially in cloud settings [28]. One approach to address this overhead is to optimize the scheduler for a specific scheduling policy [26, 25, 19, 47, 22]. However, with specialization two problems linger. First, in most cases inefficiencies remain because of the typical reliance on generic default priority queues in modern libraries (e.g., RB-trees in kernel and Binary Heaps in C++). Second, even if efficiency is achieved, through the use of highly efficient specialized data structures (e.g., Carousel [47] and QFQ [22]) or hybrid hardware/software systems (e.g., SENIC [46]), this efficiency is achieved at the expense of programmability.
The Eiffel system we develop in this paper is designed to be both efficient and programmable. In this section we examine these two objectives, show how existing solutions fall short of achieving them, and highlight our approach to successfully combining efficiency with flexibility.
**Efficient Priority Queuing:** Priority queueing is fundamental to computer science with a long history of theoretical results. Packet priority queues are typically developed as comparison-based priority queues [26, 19]. A well known result for such queues is that they require $O(\log n)$ steps for either insertion or extraction for a priority queue holding $n$ elements [52]. This applies to data structures that are widely used in software packet schedulers such as RB-trees, used in kernel Queuing Disciplines, and Binary Heaps, the standard priority queue implementation in C++.
Packet queues, however, have the following characteristics that can be exploited to significantly lower the overhead of packet insertion and extraction:
- **Integer packet ranks:** Whether it is deadlines, transmission time, slack time, or priority, the calculated rank of a packet can always be represented as an integer.
- **Packet ranks have specific ranges:** At any point in time, the ranks of packets in a queue will typically fall within a limited range of values (i.e., with well known maximum and minimum values). This range is policy and load dependent and can be determined in advance by operators (e.g., transmission time where packets can be scheduled a maximum of a few seconds ahead, flow size, or known ranges of strict priority values). Ranges of priority values are diverse ranging from just eight levels [1], to 50k for a queue implementing per flow weighted fairness which requires a number of priorities corresponding to the number of flows (i.e., 50k flows on a video server [47]), and up to 1 million priorities for a time indexed priority queue [47].
- **Large numbers of packets share the same rank:** Modern line rates are in the range of 10s to 100s of Gbps. Hence, multiple packets are bound to be transmitted with nanosecond time gaps. This means that packets with small differences in their ranks can be grouped and said to have the same rank with minimal or no effect on the accurate implementation of the scheduling policy. For instance, consider a busy-polling-based packet pacer that can dequeue packets at fixed intervals (e.g., order of 10s of nanoseconds). In that scenario, packets with gaps smaller than 10 nanoseconds can be considered to have the same rank.
These characteristics make the design of a packet priority queue effectively the design of bucketed integer priority queues over a finite range of rank values $[0, C]$ with number of buckets $N$, each covering $C/N$ interval of the range. The number of buckets, and consequently the range covered by each bucket, depend on the required ranking granularity which is a characteristic of the scheduling policy. The number of buckets is typically in the range of a few thousands to hundreds of thousands. Elements falling within a range of a bucket are ordered in FIFO fashion. Theoretical complexity results for such bucketed integer priority queues are reported in [53, 29, 52].
Integer priority queues do not come for free. Efficient implementation of integer priority queues requires pre-allocation of buckets and metadata to access those buckets. In a packet scheduling setting the number of buckets is fixed, making the per-packet overhead a constant (logarithmic in the fixed number of buckets), because searching is performed over the bucket list rather than over the list of enqueued elements. Hence, bucketed integer priority queues achieve CPU efficiency at the expense of keeping elements unsorted within a single bucket and pre-allocating memory for all buckets. Note that maintaining elements unsorted within a bucket is inconsequential because packets within a single bucket effectively have equivalent rank. Moreover, the memory required for buckets, in most cases, is minimal (e.g., tens to hundreds of kilobytes), which is consistent with earlier work on bucketed queues [47]. Another advantage of bucketed integer priority queues is that elements can be (re)moved with $O(1)$ overhead. This operation is used heavily in several scheduling algorithms (e.g., hClock [19] and pFabric [14]).
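As a concrete illustration of the structure described above, the following C++ sketch (ours, not code from the paper) maps ranks in $[0, C]$ onto $N$ buckets, keeps FIFO order inside each bucket, and shows why enqueue is a single index computation; the linear scan in its dequeue path is exactly what the FFS-based and gradient queues of Section 3.1 replace.

```cpp
#include <cstdint>
#include <list>
#include <vector>

// Minimal bucketed integer priority queue: ranks in [0, max_rank] are mapped
// to a fixed number of buckets, each holding its packets in FIFO order.
struct Packet { uint64_t rank; /* payload omitted */ };

class BucketQueue {
 public:
  BucketQueue(uint64_t max_rank, size_t num_buckets)
      : granularity_(max_rank / num_buckets + 1), buckets_(num_buckets) {}

  // Enqueue is a direct index computation: O(1).
  void Enqueue(const Packet& p) {
    buckets_[p.rank / granularity_].push_back(p);
  }

  // Dequeue the lowest-ranked packet. Without an occupancy index this scan is
  // linear in the number of buckets; Sections 3.1.1 and 3.1.2 replace it with
  // FFS bitmaps or the gradient approximation.
  bool DequeueMin(Packet* out) {
    for (auto& bucket : buckets_) {
      if (!bucket.empty()) {
        *out = bucket.front();
        bucket.pop_front();
        return true;
      }
    }
    return false;
  }

 private:
  uint64_t granularity_;                    // rank range covered by one bucket
  std::vector<std::list<Packet>> buckets_;  // FIFO order inside each bucket
};
```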
Recently, there have been some attempts to employ data structures specifically developed or re-purposed for efficiently implementing specific packet scheduling algorithms. For instance, Carousel [47], a system developed for rate limiting at scale, relies on the Timing Wheel [54], a data structure that can support time-based operations in $O(1)$ and requires comparable memory to our proposed approach. However, the Timing Wheel supports only non-work-conserving, time-based schedules: it is efficient because buckets are indexed based on time and elements are accessed when their deadline arrives, but it does not support the operations needed for work-conserving schedules (i.e., ExtractMin or ExtractMax). Another example is efficient approximation of popular scheduling policies (e.g., Start-Time Fair Queueing [31] as an approximation of Weighted Fair Queuing [24], or the more recent Quick Fair Queue (QFQ) [22]). This approach of developing a new system or a new data structure per scheduling policy does not provide a path to the efficient implementation of more complex policies. Furthermore, it does not allow for a truly programmable network. These limitations lead us to our first objective for Eiffel:
**Objective 1:** Develop data structures that can be employed for any scheduling algorithm providing $O(1)$ processing overhead per packet leveraging integer priority queues (§3.1).
**Flexibility of Programmable Packet Schedulers:** There has been recent interest in developing flexible, programmable, packet schedulers [50, 39]. This line of work is motivated by the support for programmability in all aspects of modern networks. Work on programmable schedulers focuses on providing the infrastructure for network operators to define their own scheduling policies. This approach improves on the current standard approach of providing a small fixed set of scheduling policies as currently provided in modern switches. A programmable scheduler provides building blocks for customizing packet ranking and transmission timing. Proposed programmable schedulers differ based on the flexibility of their building blocks. A flexible scheduler allows a network operator to specify policies according to the following specifications:
- **Unit of Scheduling:** Scheduling policies operate either on per packet basis (e.g., pacing) or on per flow basis (e.g., fair queuing). This requires a model that provides abstractions for both.
- **Work Conservation:** Scheduling policies can be work-conserving or non-work-conserving.
- **Ranking Trigger:** Efficient implementation of policies can require ranking packets on their enqueue, dequeue, or both.
Table 1: Proposed work in the context of the state of the art in scheduling
<table>
<thead>
<tr>
<th>System</th>
<th>Efficiency</th>
<th>HW/SW</th>
<th>Unit of Scheduling</th>
<th>Work Conserving</th>
<th>Supports Shaping</th>
<th>Programmable</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>FQ/Pacing qdisc [26]</td>
<td>$O(\log n)$</td>
<td>SW</td>
<td>Flows</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
<td>Only non-work conserving FQ</td>
</tr>
<tr>
<td>hClock [19]</td>
<td>$O(\log n)$</td>
<td>SW</td>
<td>Flows</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>Only HWPQ Sched.</td>
</tr>
<tr>
<td>Carousel [47]</td>
<td>$O(1)$</td>
<td>SW</td>
<td>Packets</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
<td>Only non-work conserving sched.</td>
</tr>
<tr>
<td>OpenQueue [39]</td>
<td>$O(\log n)$</td>
<td>SW</td>
<td>Packets & Flows</td>
<td>Yes</td>
<td>No</td>
<td>On enq/deq</td>
<td>Inefficient building blocks</td>
</tr>
<tr>
<td>PIFO [50]</td>
<td>$O(1)$</td>
<td>HW</td>
<td>Packets</td>
<td>Yes</td>
<td>Yes</td>
<td>On enq</td>
<td>Max. # flows 2048</td>
</tr>
<tr>
<td>Eiffel</td>
<td>$O(1)$</td>
<td>SW</td>
<td>Packets & Flows</td>
<td>Yes</td>
<td>Yes</td>
<td>On enq/deq</td>
<td>-</td>
</tr>
</tbody>
</table>
Notes:
- $C$: maximum rank value; $N$: number of buckets; $n$: number of enqueued packets.
Existing programmable schedulers expose some of these capabilities, often within limits. The PIFO scheduler programming model is the most prominent example [50]. It is implemented in hardware relying on Push-In-First-Out (PIFO) building blocks where packets are ranked only on enqueue. The scheduler is programmed by arranging the blocks to implement different scheduling policies. Due to its hardware implementation, the PIFO model employs compact constructs with considerable flexibility. However, PIFO remains very limited in its capacity (i.e., PIFO can handle a maximum of 2048 flows at line rate) and expressiveness (i.e., PIFO cannot express per-flow scheduling). OpenQueue is an example of a flexible programmable packet scheduler in software [39]. However, the flexibility of OpenQueue comes at the expense of having three of its building blocks be priority queues, namely queues, buffers, and ports. Even with efficient priority queues, this imposes memory and processing overhead. Furthermore, OpenQueue does not support non-work-conserving schedules.
The design of a flexible and efficient packet scheduler remains an open research challenge. It is important to note here that the efficiency of programmable schedulers is different from the efficiency of policies that they implement. An efficient programmable platform aims to reduce the overhead of its building blocks (i.e., Objective 1) which makes the overhead primarily a function of the complexity of the policy itself. Thus, the efficiency of a scheduling policy becomes a function of only the number of building blocks required to implement it. Furthermore, an efficient programmable platform should allow the operator to choose policies based on their requirements and available resources by allowing the platform to capture a wide variety of policies. To address this challenge, we choose to extend the PIFO model due to its existing efficient building blocks. In particular, we introduce flows as a unit of scheduling in the PIFO model. We also allow modifications to packet ranking and relative ordering both on enqueue and dequeue.
**Objective 2:** Provide a fully expressive scheduler programming abstraction by extending the PIFO model (§3.2).
**Eiffel’s place in the Scheduling Research Landscape:** This section reviewed scheduling support in software$^1$. Table 1 summarizes the discussed related work. Eiffel fills the gap in earlier work by being the first efficient ($O(1)$) and programmable software scheduler. It can support both per-flow policies (e.g., hClock and pFabric) and per-packet scheduling policies (e.g., Carousel). It can also support both work-conserving and non-work-conserving schedules.
---
$^1$Scheduling is widely supported in hardware switches using a short list of scheduling policies, including shaping, strict priority, and Weighted Round Robin [7, 6, 9, 50]. Another approach to efficient hardware packet scheduling, mentioned here to help position Eiffel, relies on pipelined-heaps [18, 38, 55], which are composed of pipelined stages for enqueuing and dequeuing elements in a priority queue. However, such approaches are not immediately applicable to software.
Note that for all integer priority queues discussed in this section, the enqueue operation is trivial as buckets are identified by the priority value of their elements. This makes the enqueue operation a simple bucket lookup based on the priority value of the enqueued element.
#### 3.1.1 Circular FFS-based Queue (cFFS)
FFS-based queues are bucketed priority queues with a bitmap representation of queue occupancy. Zero represents an empty bucket, and one represents a non-empty bucket. FFS produces the index of the least significant set bit in a machine word in constant time. All modern CPUs support a version of Find First Set at a very low overhead (e.g., Bit-Scan-Forward (BSF) takes three cycles to complete [3]). Hence, a priority queue with a number of buckets equal to or smaller than the width of the word supported by the FFS operation can obtain the smallest set bit, and hence the element with the smallest priority, in $O(1)$ (e.g., Figure 2). In the case that a queue has more buckets than the width of the word supported by a single FFS operation, a set of words can be processed sequentially to represent the queue, with every bit representing a bucket. This results in an $O(M)$ algorithm that is very efficient for very small $M$, where $M$ is the number of words. For instance, real-time process scheduling in the Linux kernel has a hundred priority levels. An FFS-based priority queue is used where FFS is applied sequentially to two words, in the case of 64-bit words, or four words in the case of 32-bit words [11]. This algorithm is not efficient for large values of $M$ as it requires scanning all words, in the worst case, to find the index of the highest priority element. The FFS instruction is also used in QFQ to sort groups of flows based on their eligibility for transmission, where the number of groups is limited to a number smaller than 64 [22]. QFQ is an efficient implementation of fair queuing which uses FFS efficiently over a small number of elements. However, QFQ does not provide any clear direction towards implementing other policies efficiently.
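To make the single-word case concrete, the sketch below (our own illustration, not the paper's code) keeps a 64-bucket min-queue whose occupancy bitmap fits in one machine word; `__builtin_ffsll` is the GCC/Clang find-first-set intrinsic, returning one plus the index of the least significant set bit, so the lowest-ranked non-empty bucket is located with a single instruction.

```cpp
#include <array>
#include <cstdint>
#include <deque>

// 64-bucket min-priority queue with a one-word occupancy bitmap.
// Bit i is set iff bucket i (rank i) is non-empty.
struct FfsQueue64 {
  uint64_t bitmap = 0;
  std::array<std::deque<int>, 64> buckets;  // payload: opaque packet ids

  void Enqueue(unsigned rank, int pkt) {
    buckets[rank].push_back(pkt);
    bitmap |= (1ULL << rank);
  }

  // Returns false if empty; otherwise pops the packet with the smallest rank.
  bool DequeueMin(int* pkt) {
    if (bitmap == 0) return false;
    unsigned rank = __builtin_ffsll(bitmap) - 1;  // lowest set bit = min bucket
    *pkt = buckets[rank].front();
    buckets[rank].pop_front();
    if (buckets[rank].empty()) bitmap &= ~(1ULL << rank);
    return true;
  }
};
```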
To handle even larger numbers of priority levels, hierarchical bitmaps may be used. One example is the Priority Index Queue (PIQ) [55], a hardware implementation of FFS-based queues, which introduces a hierarchical structure where each node represents the occupancy of its children, and the children of leaf nodes are buckets. The minimum element can be found by recursively navigating the tree using the FFS operation (e.g., Figure 3 for a word width of two). Hierarchical FFS-based queues have an overhead of $O(\log_w N)$ where $w$ is the width of the word that FFS can process in $O(1)$ and $N$ is the number of buckets. It is important to realize that, for a given scheduling policy, the value of $N$ is a fixed value that does not change once the scheduling policy is configured. Hence, a specific instance of a hierarchical FFS-based queue has a constant overhead independent of the number of enqueued elements. In other words, once an implementation is created, $N$ does not change.
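A two-level version of the same idea (again our own sketch, assuming 64-bit words) covers $64 \times 64 = 4096$ buckets: a top-level word records which 64-bucket group is non-empty, so every lookup costs exactly two FFS operations regardless of how many packets are enqueued.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Two-level FFS bitmap over 64 * 64 = 4096 buckets.
// Bit g of top_ is set iff any bucket in group g is non-empty.
class HierFfsQueue {
 public:
  HierFfsQueue() : leaf_(64, 0), buckets_(64 * 64) {}

  void Enqueue(unsigned rank, int pkt) {
    unsigned g = rank >> 6, b = rank & 63;
    buckets_[rank].push_back(pkt);
    leaf_[g] |= (1ULL << b);
    top_ |= (1ULL << g);
  }

  bool DequeueMin(int* pkt) {
    if (top_ == 0) return false;
    unsigned g = __builtin_ffsll(top_) - 1;      // first non-empty group
    unsigned b = __builtin_ffsll(leaf_[g]) - 1;  // first non-empty bucket in it
    unsigned rank = (g << 6) | b;
    *pkt = buckets_[rank].front();
    buckets_[rank].pop_front();
    if (buckets_[rank].empty()) {
      leaf_[g] &= ~(1ULL << b);
      if (leaf_[g] == 0) top_ &= ~(1ULL << g);
    }
    return true;
  }

 private:
  uint64_t top_ = 0;
  std::vector<uint64_t> leaf_;
  std::vector<std::deque<int>> buckets_;
};
```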
Hierarchical FFS-based queues only work for a fixed range of priority values. However, as discussed earlier, typical priority values for packets span a moving range. PIQ avoids this problem by assuming support for the universe of possible values of priorities. This is an inefficient approach because it requires generating and maintaining a large number of buckets, with relatively few of them in use at any given time.
Typical approaches to operating over a large moving range while maintaining a small memory footprint rely on circular queues. Such queues rely on the mod operation to map the moving range to a smaller range. However, the typical approach to circular queuing does not work in this case as it results in an incorrect bitmap. For example, if we add a packet with priority value six to the queue in Figure 2, selecting the bucket with a mod operation, the packet will be added in slot zero and consequently mark the bitmap at slot zero. Hence, once the range of an FFS-based queue is set, all elements enqueued in that range have to be dequeued before the queue can be assigned a new range, so as to avoid unnecessary resetting of queue metadata. In that scenario, enqueued elements that are out of range are enqueued at the last bucket, thus losing their proper ordering. Otherwise, the bitmap metadata would have to be reset whenever the range of the queue changes.
A natural solution to this problem is to introduce an overflow queue where packets with priority values outside the current range are stored. Once all packets in the current range are dequeued, packets from that “secondary” queue are inserted using the new range. However, this introduces a significant overhead as we have to go through all packets in the buffer every time the range advances. We solve this problem by making the secondary queue an FFS-based queue, covering the range that is immediately after the range of the primary queue (Figure 4). Elements outside the range of the secondary queue are enqueued at its last bucket and are not sorted properly there. However, we find that this is not a problem in practice, as ranges for the queues are typically easy to determine given a specific scheduling policy.

Figure 5: A sketch of a curvature function for three states of a maximum priority queue. As the maximum index of nonempty buckets increases, the critical point shifts closer to that index.
A Circular Hierarchical FFS-based queue, referred to hereafter simply as a cFFS, maintains the minimum priority value supported by the primary queue (h_index), the number of buckets (q_size) per queue, two pointers to the two sets of buckets, and two pointers to the two sets of bitmaps. The queue then circulates by switching the pointers of the two queues, together with their corresponding bitmaps, from the buffer range to the primary range and back, based on the location of the minimum element.
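The following sketch illustrates the circular behavior just described. It builds on the `HierFfsQueue` sketch above (assumed in scope) and uses our own identifiers; `h_index` and `q_size` follow the text, and `q_size` must not exceed the 4096-bucket capacity of that sketch.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>

// Two FFS-based queues cover [h_index, h_index + q_size) and
// [h_index + q_size, h_index + 2 * q_size). When the primary range drains,
// the roles are swapped and h_index advances.
class CircularFfsQueue {
 public:
  explicit CircularFfsQueue(uint64_t q_size) : q_size_(q_size) {}

  void Enqueue(uint64_t rank, int pkt) {
    ++size_;
    if (rank < h_index_ + q_size_) {
      uint64_t r = std::max(rank, h_index_);   // late packets go to the head bucket
      primary_.Enqueue(static_cast<unsigned>(r - h_index_), pkt);
    } else {
      // Ranks beyond the secondary range are clamped to its last bucket,
      // losing exact order there, as discussed in the text.
      uint64_t r = std::min(rank, h_index_ + 2 * q_size_ - 1);
      secondary_.Enqueue(static_cast<unsigned>(r - (h_index_ + q_size_)), pkt);
    }
  }

  bool DequeueMin(int* pkt) {
    if (size_ == 0) return false;
    if (!primary_.DequeueMin(pkt)) {
      std::swap(primary_, secondary_);   // rotate: the buffer becomes primary
      h_index_ += q_size_;
      primary_.DequeueMin(pkt);          // must succeed since size_ > 0
    }
    --size_;
    return true;
  }

 private:
  uint64_t q_size_;
  uint64_t h_index_ = 0;   // smallest rank covered by the primary queue
  uint64_t size_ = 0;
  HierFfsQueue primary_, secondary_;
};
```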
Note that work on efficient priority queues has a very long history in computer science with examples including van Emde Boas tree [53] and Fusion trees [29]. However, such theoretical data structures are complicated to implement and require complex operations. cFFS is highly efficient both in terms of complexity and the required bit operations. Moreover, it is relatively easy to implement.
#### 3.1.2 Approximate Priority Queuing
cFFS queues still require more than one step to find the minimum element. We explore a tradeoff between accuracy and efficiency by developing a gradient queue, a data structure that can find a near minimum element in one step.
**Basic Idea:** The Gradient Queue (GQ) relies on an algebraic approach to calculating FFS. In other words, it attempts to find the index of the most significant bit using algebraic calculations. This makes it amenable to approximation. The intuition behind GQ is that the contribution of the most significant set bit to the value of a word is larger than the sum of the contributions of the rest of the set bits. We consider the weight of a non-empty bucket to be proportional to its index. Hence, Gradient Queue occupancy is represented by its curvature function. The curvature function of the queue is the sum of the weight functions of all nonempty buckets in the queue. More specifically, a specific curvature shape corresponds to a specific occupancy pattern. A proper weight function ensures the uniqueness of the curvature function per occupancy pattern. It also makes finding the non-empty bucket with the maximum index equivalent to finding the critical point of the queue’s curvature (i.e., the point where the derivative of the curvature function of the queue is zero). A sample sketch of a curvature function is illustrated in Figure 5.
**Exact Gradient Queue:** On a bucket becoming nonempty, we add its weight function to the queue’s curvature function, and we subtract its function when it becomes empty. We define a desirable weight function as one that is: 1) easy to differentiate to find the critical point, and 2) easy to maintain when a bucket’s state changes between empty and non-empty. We use the weight function $2^i(x - i)^2$, where $i$ is the index of the bucket and $x$ is the variable in the space of the curvature function.
This weight function results in a queue curvature of the form $ax^2 - 2bx + c$, whose critical point is located at $x = b/a$. Hence, we only care about $a$ and $b$, where $a = \sum 2^i$ and $b = \sum i2^i$ over all non-empty buckets $i$. The maintenance of the curvature function of the queue becomes as simple as incrementing and decrementing $a$ and $b$ when a bucket becomes non-empty or empty, respectively. Theorem 1, in Appendix A, shows that the index of the highest-priority non-empty bucket is given by $\text{ceil}(b/a)$.
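A minimal implementation of the exact gradient index is just bookkeeping of $a$ and $b$ plus one integer division. The sketch below is our own illustration; it uses the 128-bit integer extension available in GCC/Clang so that $b$ does not overflow for up to 64 buckets.

```cpp
#include <cassert>
#include <cstdint>

// Exact gradient-queue index: a = sum of 2^i, b = sum of i * 2^i over the
// non-empty buckets i. The highest non-empty index is ceil(b / a).
class GradientIndex {
 public:
  void MarkNonEmpty(unsigned i) {       // call when bucket i becomes non-empty
    a_ += (unsigned __int128)1 << i;
    b_ += (unsigned __int128)i << i;
  }
  void MarkEmpty(unsigned i) {          // call when bucket i becomes empty
    a_ -= (unsigned __int128)1 << i;
    b_ -= (unsigned __int128)i << i;
  }
  unsigned MaxIndex() const {           // index of the highest non-empty bucket
    assert(a_ != 0);
    return (unsigned)((b_ + a_ - 1) / a_);   // ceil(b / a)
  }
 private:
  unsigned __int128 a_ = 0, b_ = 0;     // GCC/Clang 128-bit integers
};
```

For example, with buckets 0 and 2 non-empty, $a = 5$, $b = 8$, and $\text{ceil}(8/5) = 2$, the highest occupied bucket.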
A Gradient Queue with a single curvature function is limited by the range of values $a$ and $b$ can take, which is analogous to the limitation of FFS-based queues by the size of words for which FFS can be calculated in $O(1)$. A natural solution is to develop a hierarchical Gradient Queue. This makes the Gradient Queue an equivalent of an FFS-based queue with more expensive operations (i.e., division is more expensive than bit operations). However, due to its algebraic nature, the Gradient Queue allows for approximation that is not feasible using bit operations.
**Approximate Gradient Queue:** Like FFS-based queues, the gradient queue has a complexity of $O(\log_w N)$ where $w$ is the width of the representation of $a$ and $b$ and $N$ is the number of buckets. Our goal is to reduce the number of steps even further for each lookup. We are particularly interested in having lookups that can be made in one operation, which can be achieved through approximation. The advantage of the curvature representation of the Gradient Queue compared to FFS-based approaches is that it lends itself naturally to approximation.
A simple approximation is to make the values of $a$ and $b$ corresponding to a certain queue curvature smaller, which allows them to represent a larger number of priority values. In particular, we change the weight function to $2^{f(i)}(x - i)^2$, which results in $a = \sum 2^{f(i)}$ and $b = \sum i2^{f(i)}$ where $f(i) = i/\alpha$ and $\alpha$ is a positive integer. This approach leads to two natural results: 1) the biggest gain of the approximation is that $a$ and $b$ can now represent a much larger range of values for $i$, which eliminates the need for a hierarchical Gradient Queue and allows for finding the minimum element in one step, and 2) the employed weight function is no longer proper. Note that while the BSR instruction is 8-32x faster than DIV [3], performing the whole lookup in one step reduces the number of memory accesses per operation, which is where the approximate queue’s gain over a hierarchical FFS-based queue comes from.
This approximation stems from using an “improper” weight function. This leads to breaking the two guarantees of a proper weight function, namely: 1) the curvature shape is no longer unique per queue occupancy pattern, and 2) the index of the maximum non-empty bucket no longer corresponds to the critical point of the curvature in all cases. In other words, the index of the maximum non-empty bucket, \( M \), is no longer \( \text{ceil}(b/a) \) due to the fact that the weight of the maximum element no longer dominates the curvature function, as the growth is sub-exponential. However, this ambiguity does not exist for all curvatures (i.e., queue occupancy patterns).
We characterize the conditions under which ambiguity occurs, causing error in identifying the highest priority non-empty bucket. Hence, we identify scenarios where using the approximate queue is acceptable. The effect of \( f(i) = i/\alpha \) can be described as introducing ambiguity to the value of \( \text{ceil}(b/a) \). This is because exponential growth in \( a \) and \( b \) occurs not between consecutive indices but every \( \alpha \) indices. In particular, by solving the geometric and arithmetic-geometric sums of \( a \) and \( b \), we find that \( \frac{b}{a} = \frac{M}{1 - g(\alpha, M)} - u(\alpha) \), where \( g(\alpha, M) = (2^{1/\alpha})^{-M} \) is a decaying function of \( M \) and \( \alpha \), and \( u(\alpha) = 1/(2^{1/\alpha} - 1) \) is a non-linear but slowly growing function of \( \alpha \). Hence, an approximate GQ can operate as a bucketed queue whose indices start at \( I_0 \), chosen such that \( g(\alpha, I_0) \approx 0 \), and end at \( I_{\text{max}} \), chosen such that \( 2^{I_{\text{max}}/\alpha} \) can still be precisely represented in the CPU word used to hold \( a \) and \( b \). In this range, \( \text{ceil}(b/a) \) underestimates the index of the maximum non-empty bucket by a constant shift given by \( u(\alpha) \). For instance, consider an approximate queue with an \( \alpha \) of 16. The function \( g(\alpha, M) \) decays to near zero at \( M = 124 \), and the shift is \( u(\alpha) \approx 22 \). Hence, \( I_0 = 124 \) and \( I_{\text{max}} = 647 \), which allows for the creation of an approximate queue that can handle 523 buckets. Note that this configuration results in an exact queue only when all buckets between \( I_0 \) and \( I_{\text{max}} \) are nonempty; error is introduced when some elements are missing. In Section 5.2, we show the effect of this error through extensive experiments; more examples are shown in Appendix B.
Typical scheduling policies (e.g., timestamp-based shaping, Least Slack Time First, and Earliest Deadline First) will generate priority values for packets that are uniformly distributed over priority levels. For such scenarios, the approximate gradient queue will have zero error and extract the minimum element in one step. This is clearly not true for all scheduling policies (e.g., strict priority will probably have more traffic for medium and low priorities compared to high priority). For cases where the index suggested by the function is that of an empty bucket, we perform linear search until we find a nonempty bucket. Moreover, for the case of a moving range, a circular approximate queue can be implemented as with cFFS.
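The sketch below illustrates the approximate variant under the assumptions above ($\alpha = 16$ by default, floating-point $a$ and $b$, and a linear-search fallback when the guessed bucket is empty). It is our own illustration, not the paper's code, and by design it returns a near-maximal rather than exact maximum bucket when occupancy is sparse.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Approximate gradient index with f(i) = i / alpha. One division produces a
// guess of the highest non-empty bucket; a short linear search corrects it
// when the guessed bucket is empty.
class ApproxGradientIndex {
 public:
  explicit ApproxGradientIndex(unsigned num_buckets, double alpha = 16.0)
      : alpha_(alpha),
        shift_(1.0 / (std::pow(2.0, 1.0 / alpha) - 1.0)),  // u(alpha) > 0
        count_(num_buckets, 0) {}

  void OnInsert(unsigned i) {
    if (count_[i]++ == 0) { a_ += Weight(i); b_ += i * Weight(i); }
  }
  void OnRemove(unsigned i) {
    if (--count_[i] == 0) { a_ -= Weight(i); b_ -= i * Weight(i); }
  }

  // Near-maximal non-empty bucket; exact when occupancy is dense (see text).
  int MaxIndexApprox() const {
    if (a_ <= 0.0) return -1;
    long guess = std::lround(b_ / a_ + shift_);        // ceil(b/a) + u(alpha)
    guess = std::min<long>(std::max<long>(guess, 0), (long)count_.size() - 1);
    for (long i = guess; i >= 0; --i)                  // correct downward first
      if (count_[i] > 0) return (int)i;
    for (long i = guess + 1; i < (long)count_.size(); ++i)
      if (count_[i] > 0) return (int)i;
    return -1;
  }

 private:
  double Weight(unsigned i) const { return std::pow(2.0, i / alpha_); }
  double alpha_, shift_, a_ = 0.0, b_ = 0.0;
  std::vector<uint32_t> count_;
};
```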
Approximate queues have been used before for different use cases. For instance, Soft-heap [21] is an approximate priority queue with a bounded error that is inversely proportional to the overhead of insertion. In particular, after \( n \) insertions in a soft-heap with an error bound \( 0 < \varepsilon < 1/2 \), the overhead of insertion is \( O(\log(1/\varepsilon)) \). Hence, keeping the insertion overhead low comes at the cost of a large error in the \( \text{ExtractMin} \) operation under Soft-heap. Another example is RIPQ, which was developed for caching [51]. RIPQ relies on a bucket-sort-like approach. However, the RIPQ implementation is suited for static caching, where elements are not moved once inserted, which makes it ill-suited to the dynamic nature of packet scheduling.
### 3.2 Flexibility in Eiffel
Our second objective is to deploy flexible schedulers that have the full expressive power to implement a wide range of scheduling policies. Our goal is to provide the network operator with a compiler that takes a policy description as input and produces an initial implementation of the scheduler using the building blocks provided in the previous section. Our starting point is the work in PIFO, which develops a model for programmable packet scheduling [50]. PIFO, however, suffers from several drawbacks, namely: 1) it does not support reordering packets already enqueued based on changes in their flow ranking, 2) it does not support ranking of elements on packet dequeue, and 3) it supports shaping of the scheduler’s output only in limited ways. In this section, we show our augmentation of the PIFO model to enable a completely flexible programming model in Eiffel. We address the first two issues by adding programming abstractions to the PIFO model, and we address the third problem by enabling arbitrary shaping with Eiffel by changing how shaping is handled within the PIFO model. We discuss the implementation of an initial version of the compiler in Section 4.
#### 3.2.1 PIFO Model Extensions
Before we present our new abstractions, we review briefly the PIFO programming model [50]. The model relies on the Push-In-First-Out (PIFO) conceptual queue as its main building block. In programming the scheduler, the PIFO blocks are arranged to implement different scheduling algorithms.
The PIFO programming model has three abstractions: 1) scheduling transactions, 2) scheduling trees, and 3) shaping transactions. A scheduling transaction represents a single ranking function with a single priority queue. Scheduling trees are formed by connecting scheduling transactions, where each node’s priority queue contains an ordering of its children. The tree structure allows incoming packets to change the relative ordering of packets belonging to different policies. Finally, a shaping transaction can be attached to any non-root node in the tree to enforce a rate limit on it.
There are several examples of the PIFO programming model in action presented in the original paper [50]. The primitives presented in the original PIFO model capture scheduling policies that have one of the following features: 1) distinct packet rank enumerations, over a small range of values (e.g., strict priority), 2) per-packet ranking over a large range of priority values (e.g., Earliest Deadline First [41]), and 3) hierarchical policy-based scheduling (e.g., Hierarchical Packet Fair Queuing [17]).
Eiffel augments the PIFO model by adding two additional scheduler primitives. The first primitive is per-flow ranking and scheduling, where the rank of all packets of a flow depends on a ranking that is a function of the ranks of all packets enqueued for that specific flow. We assume that a sequence of packets that belong to a single flow should not be reordered by the scheduler. Existing PIFO primitives keep per-flow state but use it to rank each packet individually, where an incoming packet for a certain flow does not change the ranking of packets already enqueued that belong to the same flow. The per-flow ranking extension keeps track of that information along with a queue per flow for all packets belonging to that flow. A single PIFO block orders flows, rather than packets, based on their rank. The second primitive is on-dequeue scheduling, where incoming and outgoing packets belonging to a certain flow can change the rank of all packets belonging to that flow on enqueue and dequeue.
The two primitives can be integrated in the PIFO model. All flows belonging to a per-flow transaction are treated as a single flow by scheduling transactions higher in the hierarchical policy. Also note that every individual flow in the flow-rank policy can be composed of multiple flows that are scheduled according to per packet scheduling transactions. We realize that this specification requires tedious work to describe a complex policy that handles thousands of different flows or priorities. However, this specification provides a direct mapping to the underlying priority queues. We believe that defining higher level programming languages describing packet schedulers as well as formal description of the expressiveness of the language to be topics for future research.
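To make these two primitives concrete, the following C++ sketch (ours; identifiers and structure are illustrative, not Eiffel's actual code) keeps a FIFO per flow, recomputes a flow's rank from a user-supplied ranking function on every enqueue and dequeue, and moves the flow between buckets of an integer priority queue accordingly.

```cpp
#include <deque>
#include <functional>
#include <map>
#include <vector>

// Per-flow ranking with on-enqueue and on-dequeue re-ranking: packets of a
// flow stay in the flow's FIFO, and the flow itself is what moves between
// buckets of a bucketed integer priority queue.
struct Flow {
  std::deque<int> pkts;   // opaque packet ids, FIFO within the flow
  int rank = -1;          // current bucket, -1 if not queued
};

class PerFlowScheduler {
 public:
  // rank_fn maps a flow's state to an integer rank in [0, num_buckets).
  PerFlowScheduler(unsigned num_buckets, std::function<int(const Flow&)> rank_fn)
      : buckets_(num_buckets), rank_fn_(std::move(rank_fn)) {}

  void Enqueue(int flow_id, int pkt) {
    Flow& f = flows_[flow_id];
    f.pkts.push_back(pkt);
    Reposition(flow_id, f);                              // on-enqueue re-ranking
  }

  bool Dequeue(int* pkt) {
    for (unsigned r = 0; r < buckets_.size(); ++r) {     // FFS/gradient in Eiffel
      if (buckets_[r].empty()) continue;
      int flow_id = buckets_[r].front();
      Flow& f = flows_[flow_id];
      *pkt = f.pkts.front();
      f.pkts.pop_front();
      buckets_[r].pop_front();
      f.rank = -1;
      if (!f.pkts.empty()) Reposition(flow_id, f);       // on-dequeue re-ranking
      return true;
    }
    return false;
  }

 private:
  void Reposition(int flow_id, Flow& f) {
    int new_rank = rank_fn_(f);
    if (new_rank == f.rank) return;
    if (f.rank >= 0) RemoveFromBucket(flow_id, f.rank);  // O(1) with intrusive lists
    buckets_[new_rank].push_back(flow_id);
    f.rank = new_rank;
  }
  void RemoveFromBucket(int flow_id, int rank) {
    auto& b = buckets_[rank];
    for (auto it = b.begin(); it != b.end(); ++it)
      if (*it == flow_id) { b.erase(it); return; }
  }

  std::vector<std::deque<int>> buckets_;   // bucket r holds flow ids with rank r
  std::map<int, Flow> flows_;
  std::function<int(const Flow&)> rank_fn_;
};
```

Passing a `rank_fn` that returns, say, the flow's remaining packet count gives a coarse shortest-remaining-first behavior of the kind pFabric needs; Eiffel replaces the linear bucket scan and the list removal shown here with the O(1) structures of Section 3.1.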
#### 3.2.2 Arbitrary Shaping
A flexible packet scheduler should support any scheme of bandwidth division between incoming streams. Earlier work on flexible schedulers either did not support shaping at all (e.g., OpenQueue) or supported it with severe limitations (e.g., PIFO). We allow for arbitrary shaping by decoupling work-conserving scheduling from shaping. A natural approach to this decoupling is to allow any flow or group of flows to have a shaper associated with it. This can be achieved by assigning a separate queue to the shaped aggregate whose output is then enqueued into its proper location in the scheduling hierarchy. However, this approach is extremely inefficient as it requires a queue per rate limit, which can lead to increased CPU and memory overhead. We improve the efficiency of this approach by leveraging recent results that show that any rate limit can be translated to a timestamp per packet, which yields even better adherence to the set rate than token buckets [47]. Hence, we use only one shaper for the whole hierarchy, which is implemented using a single priority queue.
As an example, consider the hierarchical policy in Figure 6. Each node represents a policy-defined flow with the root representing the aggregate traffic. Each node has a share of its parent’s bandwidth, defined by the fraction in the figure. Each node can also have a policy-defined rate limit. In this example, we have a rate limit at a non-leaf node and a leaf node. Furthermore, we require the aggregate traffic to be paced. We map the hierarchical policy in Figure 6 to its priority-queue-based realization in Figure 7. Per the PIFO model, each non-leaf node is represented by a priority queue. Per our proposal, a single shaper is added to rate limit all packets according to all policy-defined rate limits.
To illustrate how this single shaper works, consider packets belonging to the rightmost leaf policy. We explore the journey of packets belonging to that leaf policy through the different queues, shown in Figure 7. These packets will be enqueued to the shaper with timestamps set based on a 7 Mbps rate to enforce the rate on their node (step 1). Once dequeued from the shaper, each packet will be enqueued to PQ2 (step 2.1) and the shaper according to the 10 Mbps rate limit (step 2.2). After the transmission time of a packet belonging to PQ2 is reached, which is defined by the shaper, the packet is inserted in both the root’s (PQ1) priority queue
(step 3.1) and the shaper according to the pacing rate (step 3.2). When the transmission time, calculated based on the pacing rate, is reached, the packet is transmitted. To achieve this functionality, each packet holds a pointer to the priority queue it should be enqueued to next; this pointer avoids searching for the queue a packet should enter. Note that having the separate shaper allows for specifying rate limits on any node in the hierarchical policy (e.g., the root and leaves), which was not possible in the PIFO model, where shaping transactions are tightly coupled with scheduling transactions.
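A minimal sketch of this decoupled shaper is shown below (our own illustration): a single time-indexed queue holds every rate-limited packet together with a pointer to the scheduling queue it should enter next, mirroring steps 1-3 above. A `std::map` stands in for the bucketed, time-indexed queue Eiffel would actually use.

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <vector>

struct SchedQueue {                // stand-in for a PIFO / bucketed queue node
  std::deque<int> pkts;
  void Push(int pkt) { pkts.push_back(pkt); }
};

struct ShapedEntry {
  int pkt;                         // opaque packet id
  SchedQueue* next;                // queue to enter after shaping; null = transmit
};

class SingleShaper {
 public:
  // Timestamps are, e.g., nanoseconds computed from the node's rate limit.
  void Enqueue(uint64_t send_time_ns, ShapedEntry e) {
    wheel_[send_time_ns].push_back(e);                 // time-indexed bucket
  }

  // Release everything whose timestamp has passed; packets move either into
  // their next scheduling queue or out of the scheduler entirely.
  void Advance(uint64_t now_ns, std::vector<int>* transmitted) {
    while (!wheel_.empty() && wheel_.begin()->first <= now_ns) {
      for (const ShapedEntry& e : wheel_.begin()->second) {
        if (e.next != nullptr) e.next->Push(e.pkt);
        else transmitted->push_back(e.pkt);
      }
      wheel_.erase(wheel_.begin());
    }
  }

 private:
  std::map<uint64_t, std::vector<ShapedEntry>> wheel_;
};
```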
## 4 Eiffel Implementation
Packet scheduling is implemented in two places in the network: 1) hardware or software switches, and 2) the end-host kernel. We focus on the software placements (kernel and userspace switches) and show that Eiffel can outperform the state of the art in both settings. We find that userspace and kernel implementations of packet scheduling face significantly different challenges, as the kernel operates in an event-based setting while userspace operates in a busy-polling setting. We explain here the differences between both implementations and our approach to each. We start with our approach to policy creation.
**Policy Creation:** We extend the existing PIFO open source model to configure the scheduling algorithm [50, 2]. The existing implementation represents the policy as a graph using the DOT description language and translates the graph into C++ code. We rely on the cFFS for our implementation, unless otherwise stated. This provides an initial implementation which we tune according to whether the code is going to be used in the kernel or in userspace. We believe automating this process can be further refined, but the goal of this work is to evaluate the performance of Eiffel's algorithms and data structures.
**Kernel Implementation:** We implement Eiffel as a qdisc [36] kernel module that implements enqueue and dequeue functions and keeps track of the number of enqueued packets. The module can also set a timer to trigger dequeue. Access to qdiscs is serialized through a global qdisc lock. In our design, we focus on two sources of overhead in a qdisc: 1) the overhead of the queuing data structure, and 2) the overhead of properly setting the timer. Eiffel reduces the first overhead by utilizing one of the proposed data structures to reduce the cost of both enqueue and dequeue operations. The second overhead can be mitigated by improving the efficiency of finding the smallest deadline of an enqueued packet. This $\text{SoonestDeadline}()$ operation is required to efficiently set the timer to wake up at the deadline of the next packet. Either of our supported data structures can support this operation efficiently as well.
**Userspace Implementation:** We implement Eiffel in the Berkeley Extensible Software Switch (BESS, formerly SoftNIC [33]). BESS represents network processing elements as a pipeline of modules. BESS is busy-polling based, where a set of connected modules forms a unit of execution called a task. A scheduler tracks all tasks and runs them according to assigned policies. Tasks are scheduled based on the amount of resources (CPU cycles or bits) they consume. Our implementation of Eiffel in BESS is done in self-contained modules.
We find that two main parameters determine the efficiency of Eiffel in BESS: 1) batch size and 2) queue size. Batching is already well supported in BESS as each module receives packets in batches and passes packets to its subsequent module in a batch. However, we find that batching per flow has an intricate impact on the performance of Eiffel. For instance, with small packet sizes, if no batching is performed per flow, then every incoming batch of packets will activate a large number of queues without any of the packets being actually queued (due to the small packet size), which increases the overhead per packet (i.e., queue lookup of multiple queues rather than one). This is not the case for large packet sizes, where the lookup cost is amortized over the larger size of the packet, improving performance compared to batching of large packets. Batching large packets results in large queues for flows (i.e., a large number of flows with a large number of enqueued packets). We find that batching should be applied based on the expected traffic pattern. For that purpose, we set up $\text{Buffer}$ modules per traffic class before Eiffel's module in the pipeline when needed. We also perform output batching per flow in units of 10KB worth of payload, which was suggested as a good threshold that does not affect fairness at a macroscale between flows [19]. We also find that limiting the number of packets enqueued in Eiffel can significantly affect the performance of Eiffel in BESS. We limit the number of packets per flow to 32 packets, which we find, empirically, to maintain performance.
## 5 Evaluation
### 5.1 Eiffel Use Cases
**Methodology:** We evaluate our kernel and userspace implementations through a set of use cases, each with its corresponding baseline. We implement two common use cases, one in the kernel and another in userspace$^2$. In each use case, we evaluated Eiffel's scheduling behavior as well as its CPU performance as compared to the baseline. The comparison of scheduling behavior was done by comparing the aggregate rates achieved as well as the order of released packets. However, we only report CPU efficiency results as we find that Eiffel matches the scheduling behavior of the baselines.

A key aspect of our evaluation is determining the metrics of comparison in kernel and userspace settings. The main difference is that a kernel module can support line rate by using more CPU. This requires us to fix the packet rate we are evaluating at and look at the CPU utilization of different scheduler implementations. On the other hand, a userspace implementation relies on busy polling on one or more CPU cores to support different packet rates. Hence, in the case of userspace, we fix the number of cores used, to one core unless otherwise stated, and compare the different scheduler implementations based on the maximum achievable rate.

$^2$A third use case that implements hClock in userspace can be found in the extended version of this paper [48].
#### 5.1.1 Use Case 1: Shaping in Kernel
Traffic shaping (i.e., rate limiting and pacing) is an essential operation for efficient utilization [16] and correct operation of modern protocols (e.g., both TIMELY [43] and BBR [20] require per-flow pacing). Recently, it has been shown that the canonical kernel shapers (i.e., the FQ/pacing [26] and HTB [25] qdiscs) are inefficient due to their reliance on inefficient data structures; they are outperformed by the userspace-based implementation in Carousel [47]. To offer a fair comparison, we implement all systems in the kernel. We implement a rate limiting qdisc whose functionality matches the rate limiting features of the existing FQ/pacing qdisc [26].
We implemented Eiffel as a qdisc. The queue is configured with 20k buckets with a maximum horizon of 2 seconds, and only the shaper is used. We implemented the qdisc in kernel v4.10. We modified only sock.h to keep the state of each socket, allowing us to avoid having to keep track of each flow in the qdisc. We conduct experiments for egress traffic shaping between two servers within the same cluster in Amazon EC2. We use two m4.16xlarge instances equipped with 64 cores and capable of sustaining 25 Gbps. We use neper [5] to generate traffic with a large number of TCP flows. In particular, we generate traffic from 20k flows and use $\text{SO\_MAX\_PACING\_RATE}$ to rate limit individual flows to achieve a maximum aggregate rate of 24 Gbps. This configuration constitutes a worst case in terms of load for all evaluated qdiscs as it requires the maximum amount of calculations. We measure overhead in terms of the number of cores used for network processing, which we calculate based on the observed fraction of CPU utilization. Without neper operating, CPU utilization is zero; hence, we attribute any CPU utilization during our experiments to the networking stack, except for the CPU portion attributed to userspace processes. We track CPU utilization using dstat. We run our experiments for 100 seconds and record the CPU utilization every second. This continuous behavior emulates the behavior of content servers, which were used to evaluate Carousel [47].
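For reference, the per-flow limits in this experiment are applied with the standard Linux $\text{SO\_MAX\_PACING\_RATE}$ socket option; the snippet below shows how a sender would set it. The 150 kB/s value is our own illustrative choice, picked only because 20k such flows aggregate to roughly 24 Gbps as in the setup above.

```cpp
#include <cstdint>
#include <cstdio>
#include <sys/socket.h>

// Set a per-flow pacing limit (bytes per second) on a connected TCP socket
// using the standard Linux SO_MAX_PACING_RATE socket option.
static int set_flow_pacing_rate(int fd, uint32_t bytes_per_sec) {
  if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                 &bytes_per_sec, sizeof(bytes_per_sec)) < 0) {
    perror("setsockopt(SO_MAX_PACING_RATE)");
    return -1;
  }
  return 0;
}

// Example: set_flow_pacing_rate(sock_fd, 150 * 1000);  // ~1.2 Mbps per flow
```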
Figure 8 shows the overhead of all three systems. It is clear that Eiffel is superior, outperforming FQ by a median 14x and Carousel by 3x. We find the overhead of FQ to be consistent with earlier results [47]. This is due to its complicated data structure which keeps track internally of active and inactive flows and requires continuous garbage collection to remove old inactive flows. Furthermore, it relies on RB-trees which increases the overhead of reordering flows on every enqueue and dequeue. To better understand the comparison with Carousel, we look at the breakdown of the main components of CPU overhead, namely overhead spent on system processes and servicing software interrupts. Figure 9 details the comparison. We find that the main difference is in the overhead introduced by Carousel in firing timers at constant intervals while Eiffel can trigger timers exactly when needed (Figure 9 right). The overhead of the data structures in both cases introduces minimal overhead in system processes (Figure 9 left).
#### 5.1.2 Use Case 2: Least/Largest X First in Userspace
One of the most widely used patterns for packet scheduling is ordering packets such that the flow or packet with the least or most of some feature exits the queue first. Many examples of such policies have been promoted including Least Slack Time First (LSTF) [42], Largest Queue First (LQF), and Shortest/Least Remaining Time First (SRTF). We refer to this class of algorithms as L(X)F. This class of algorithms
is interesting as some of them were shown to provide theoretically proven desirable behavior. For instance, LSTF was shown to be a universal packet scheduler that can emulate the behavior of any scheduling algorithm [42]. Furthermore, SRTF was shown to schedule flows close to optimally within the pFabric architecture [14]. We show that Eiffel can improve the performance of this class of scheduling algorithms.
We implement pFabric as an instance of this class of algorithms, where flows are ranked based on their remaining number of packets. Every incoming and outgoing packet changes the rank of all other packets belonging to the same flow, requiring on-dequeue ranking. Figure 10 shows the representation of pFabric using the PIFO model with the per-flow ranking and on-dequeue ranking provided by Eiffel. We also implemented pFabric using an $O(\log n)$ priority queue based on a Binary Heap to provide a baseline. Both designs were implemented as queue modules in BESS. We used packets of size 1500B. Operations are all done on a single core with a simple flow generator. All results are the average of ten experiments, each lasting for 20 seconds. Figure 11 shows the impact of increasing the number of flows on the performance of both designs. It is clear that Eiffel has better performance. The overhead of pFabric stems from the need to continuously move flows between buckets, which costs $O(1)$ with bucketed queues but $O(n)$ with the heap, as it requires re-heapifying the heap every time. The figure also shows that as the number of flows increases, the benefit of Eiffel starts to decrease as Eiffel reaches its capacity.
### 5.2 Eiffel Microbenchmark
Our goal in this section is to evaluate the impact of different parameters on the performance of the different data structures. We also evaluate the effect of approximation in switches on network-wide objectives. Finally, we provide guidance on how one should choose among the different queuing data structures within Eiffel, given specific scheduler use-case characteristics. To inform this decision we run a number of microbenchmark experiments. We start by evaluating the performance of the proposed data structures compared to a basic bucketed priority queue implementation. Then, we explore the impact of approximation using the gradient queue, both on a single queue and at a large network scale through ns2 simulation. Finally, we present our guide for choosing a priority queue implementation.
**Experiment setup:** We perform benchmarks using Google’s benchmark tool [12]. We develop a baseline for bucketed priority queues by keeping track of non-empty buckets in a binary heap; we refer to this as BH. We ignore comparison-based priority queues (e.g., Binary Heaps and RB-trees) as we find that bucketed priority queues perform 6x better in most cases. We compare cFFS, the approximate gradient queue (Approx), and BH. In all our experiments, the queue is initially filled with elements according to the queue occupancy rate or average number of packets per bucket parameters.

As expected, as the ratio of non-empty buckets increases, the overhead decreases, which improves the throughput of the approximate queue. Figure 14 shows the error in the approximate queue’s fetching of elements. As the number of empty buckets increases, the error in the approximate queue is larger and the overhead of linear search grows. We suggest that cases where the queue is more than 30% empty should trigger changes in the queue’s granularity, based on the queue’s CPU performance and to avoid allocating memory to buckets that are not used.
The granularity of the queue determines the representation capacity of the queue. It is clear from our results that picking low granularity (i.e., a high number of packets per bucket) yields better performance in terms of packets per second. On the other hand, from a networking perspective, high granularity yields exact ordering of packets. For instance, a queue with a granularity of 100 microseconds cannot insert gaps between packets that are smaller than 100 microseconds. Hence, we recommend configuring the queue’s granularity such that each bucket has at least one packet. This can be determined by observing the long term behavior of the queue. We also note that this problem can be solved by having non-uniform bucket granularity which is dynamically set to achieve the result of at least one packet per bucket. We leave this problem for future work.
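The granularity trade-off above reduces to how a rank (here, a transmission timestamp) is mapped to a bucket; the helper below is our own illustration of that mapping.

```cpp
#include <cstdint>

// Rank-to-bucket mapping for a time-indexed queue: with a granularity of
// 100 microseconds, packets less than 100 us apart share a bucket and are
// released in FIFO order. granularity_ns is a configuration choice, traded
// off against the number of buckets (horizon / granularity).
inline uint64_t BucketFor(uint64_t send_time_ns, uint64_t horizon_start_ns,
                          uint64_t granularity_ns) {
  return (send_time_ns - horizon_start_ns) / granularity_ns;
}
```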
**Impact of Approximation on Network-wide Objectives:** A natural question is: how does approximate prioritization at every switch in a network affect network-wide objectives? To answer this question, we perform simulations of pFabric, which requires prioritization at every switch. Our simulations are based on the ns2 simulations provided by the authors of pFabric [14] and the plotting tools provided by the authors of QJump [32]. We change only the priority queuing implementation from a linear search-based priority queue to our approximate priority queue and increase the queue size to handle 1000k elements. We use DCTCP [13] as a baseline to put the results in context. Figure 15 shows a snapshot of results of the simulations of a 144-node leaf-spine topology. Due to space limitations, we show results only for the web-search workload simulations, which are based on clusters in Microsoft datacenters [13]. The load is varied between 10% and 80% of the observed load. We note that the setting of the simulations is not relevant for the scope of this paper; what is relevant is comparing the performance of pFabric using its original implementation to pFabric using our approximate queue. We find that approximation has minimal effect on overall network behavior, which makes performance on a microscale the only concern in selecting a queue for a specific scheduler.
**A Guide for Choosing a Priority Queue for Packet Scheduling:** Figure 16 summarizes our takeaways from working with the proposed queues. For a small number of priority levels, we find that the choice of priority queue has little impact, and for most scenarios a bucket-based queue might be overkill due to its memory overhead. However, when the number of priority levels or buckets is larger than a threshold, the choice of queue makes a significant difference. We found in our experiments that this threshold is 1k and that the difference in performance is not significant around the threshold. We find that if the priority levels are
over a fixed range (e.g., job remaining time [14]), then an FFS-based priority queue is sufficient. When the priority levels are over a moving range, where the levels are not all equally likely to be occupied (e.g., rate limiting with a wide range of limits [47]), it is better to use the cFFS priority queue. However, for priority levels over a moving range with highly occupied priority levels (e.g., Least Slack Time-based [42] or hierarchical schedules [19]), the approximate queue can be beneficial.
Another important aspect is choosing the number of buckets to assign to a queue. This parameter should be chosen based on both the desired granularity and efficiency, which form a clear trade-off. The proposed queues have minimal CPU overhead (e.g., a queue with a billion buckets requires six bit operations to find the minimum non-empty bucket using a cFFS). Hence, the main source of efficiency overhead is memory, which has two components: 1) memory footprint, and 2) cache freshness. However, we find that most scheduling policies require thousands to tens of thousands of elements, which requires only a small memory allocation for our proposed queues.
## 6 Conclusion
Efficient packet scheduling is a crucial mechanism for the correct operation of networks. Flexible packet scheduling is a necessary component of the current ecosystem of programmable networks. In this paper, we showed how Eiffel can introduce both efficiency and flexibility for packet scheduling in software, relying on integer priority queuing concepts and novel packet scheduling programming abstractions. We showed that Eiffel can achieve orders of magnitude improvements in performance compared to the state of the art while enabling packet scheduling at scale in terms of both the number of flows or rules and line rate. We believe that our work should give network operators more freedom in implementing complex policies that correspond to current network needs, where isolation and strict sharing policies are needed.
We believe that the biggest impact Eiffel will have is making the case for a reconsideration of the basic building blocks of packet schedulers in hardware. Current proposals for packet scheduling in hardware (e.g., the PIFO model [50] and SmartNICs [28]) rely on parallel comparisons of elements in a single queue. This approach limits the size of the queue. Earlier proposals that rely on pipelined-heaps [18, 38, 55] required a priority queue that can capture the whole universe of possible packet rank values, which requires significant hardware overhead. We see Eiffel as a step on the road to improving hardware packet schedulers by reducing the number of parallel comparisons through FFS-based queue metadata or approximate queue metadata. For instance, Eiffel can be employed in a hierarchical structure with parallel comparisons to increase the capacity of individual queues in a PIFO-like setting. Future programmable schedulers can implement a hardware version of cFFS or the approximate queue and provide an interface that allows for connecting them according to programmable policies. While the implementation is definitely not straightforward, we believe this to be the natural next step in the development of scalable packet schedulers.
## Acknowledgments
The authors would like to thank the NSDI shepherd, K. K. Ramakrishnan, and the anonymous reviewers for providing excellent feedback. This work is funded in part by NSF grant NETS 1816331.
## References
## A Gradient Queue Correctness

**Theorem 1.** The index of the maximum non-empty bucket, $N$, is $\text{ceil}(b/a)$.

*Proof.* We encode the occupancy of buckets by a bit string of length $N$ where zeros represent empty buckets and ones represent nonempty buckets. The value of the bit string is the value of the critical point $x = \frac{b}{a}$ for the queue represented by that bit string. We prove the theorem by showing an ordering over all bit strings whose $N$th bit is set, where the maximum value of $x$ is $N$ and the minimum value is larger than $N - 1$. The minimum value occurs when all buckets are nonempty (i.e., all ones). In that case, $a = \sum_{i=1}^{N} 2^i$ and $b = \sum_{i=1}^{N} i2^i$. Note that $b$ is an arithmetic-geometric progression that can be simplified to $(N-1)2^{N+1} + 2$, and $a$ is a geometric progression that can be simplified to $2^{N+1} - 2$. Hence, the critical point is $x = \frac{(N-1)2^{N+1} + 2}{2^{N+1} - 2} = \frac{N}{1 - 2^{-N}} - 1$, where $N - 1 < x \le N$ and thus $\text{ceil}(x) = N$. The maximum value occurs when only bucket $N$ is nonempty (i.e., all zeros except the $N$th bit); it is straightforward to show that the critical point is then exactly $x = N$. Now, consider any $N$-bit string where the $N$th bit is 1: if we flip one other bit from 1 to zero, the value of the critical point increases. It is straightforward to show that $\frac{b - j2^j}{a - 2^j} - \frac{b}{a} > 0$, where $j < N$ is the index of the flipped bit. $\square$
B Examples of Errors in Approximate Gradient Queue
To better understand the effect of missing elements on the accuracy of the approximate queue, consider the following cases of element distribution for a maximum priority queue with $N$ buckets (a toy numeric sketch of the second case follows the list):
- Elements are evenly distributed over the queue with frequency $1/\alpha$, which is equivalent to an Exact Gradient Queue with $N/\alpha$ elements,
- $N/2$ elements are present in buckets 0 through $N/2$ and a single element is present in the bucket indexed $3N/4$. The concentration of elements at the beginning of the queue creates an error in the estimation of the index of the maximum element, $\epsilon = \text{ceil}(b/a) + u(\alpha) - 3N/4$. We note that in this case $\epsilon < 0$, because the estimate $\text{ceil}(b/a)$ is pulled toward the concentration of elements and away from $3N/4$. The error in such cases grows proportionally to the size of the concentration and inversely proportionally to the distance between the low concentration and the high concentration.
- All elements are present, in which case the estimate $\text{ceil}(b/a) + u(\alpha)$ falls exactly on the index of the maximum element (i.e., $\epsilon = 0$).
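The second case above can be illustrated numerically. The snippet below is a toy sketch rather than the paper's exact construction: it assumes the approximate queue tracks per-bucket element counts, takes $a$ as the total count and $b$ as the count-weighted sum of indices, and ignores the correction term $u(\alpha)$.

```python
import math

def estimate_max_index(counts):
    # counts maps bucket index -> number of elements in that bucket
    a = sum(counts.values())
    b = sum(i * c for i, c in counts.items())
    return math.ceil(b / a)

N = 16
counts = {i: 1 for i in range(1, N // 2 + 1)}  # concentration at the low end
counts[3 * N // 4] = 1                          # a single element high up
error = estimate_max_index(counts) - 3 * N // 4
print(error)  # negative: the low concentration pulls the estimate below 3N/4
```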
## Parallel Continuous Outlier Mining in Streaming Data
### Abstract
Outlier (or anomaly) detection is a key mechanism in modern data analytics. In this work, we focus on distance-based outliers in a metric space, where whether an entity is an outlier is determined by the number of other entities in its neighborhood. In recent years, several solutions have tackled the problem of distance-based outliers in data streams, where outliers must be mined continuously as new elements become available. In this work, we use the sliding window streaming model, in which older elements expire as new ones become available. An interesting research problem is to combine the streaming nature of the problem with massively parallel systems to provide scalable stream-based algorithms. However, none of the previously proposed techniques refer to a massively parallel setting. Our proposal fills this gap and studies the transfer of state-of-the-art techniques to a modern platform for intensive streaming analytics, namely Apache Flink, which is not a trivial task. We thoroughly present the technical challenges encountered and the alternatives that may be applied. We show speed-ups of up to 117 (resp. 2076) times over a naive parallel (resp. non-parallel) solution in Flink, using just an ordinary 4-core machine and a real-world dataset. Our results demonstrate that outlier mining can be achieved in an efficient and scalable manner. The resulting techniques have been made publicly available as open-source software.
### Index Terms
streaming, anomaly detection, massively parallel algorithms
## I. Introduction
Outlier analysis forms a key mechanism in modern data science and analytics [1], aiming to detect objects that, as defined in [12], appear to be inconsistent with the remainder of the objects in the same dataset. Outlier detection is used in a variety of applications, such as fraud detection, spam detection and medical diagnosis, to name just a few. One of the most commonly used definitions for an outlier is the distance-based one [13], where an object is considered an outlier if it does not have more than $k$ neighbors in a distance up to $R$. Continuous outlier detection in data streams deals with the problem of keeping an updated list of all outliers after each new object arrives and/or expires. Clearly, when data grows large, this becomes a challenging task, since applying even a one-pass algorithm to all active data is prohibitively expensive. To improve efficiency and scalability, the main target of this work is to propose massively parallel solutions for continuous outlier detection in data streams.
Briefly, the relevant state of the art falls into two categories. The first category contains efficient non-parallel solutions for streaming outlier detection, e.g., [4], [8], [16], [21]. The second category contains parallel solutions for outlier detection, where, to date, there is a single proposal that assumes modern distributed computing platforms such as MapReduce [7]; nevertheless, this solution does not deal with the streaming case. The novelty of our work is that it is the first solution to combine massive parallelism and continuous outlier detection in a streaming setting. Orthogonally, techniques are classified as either exact or approximate. We target exact solutions in this work.
Devising efficient parallel solutions for this problem involves addressing a series of important issues. First, outlier detection algorithms in data streams involve windows that cannot be partitioned into non-overlapping partitions among which no communication is required. Second, low latency is of high significance in order to deliver results in a timely manner. Third, state information needs to be kept between window slides in order to avoid unnecessary recomputations. We provide solutions to the above issues by transferring key ideas from non-parallel techniques, such as [4], [16], to the Flink<sup>1</sup> platform.

<sup>1</sup>https://flink.apache.org/

This work aspires to become a reference point for all future work on streaming outlier detection in massively parallel settings. The contributions of this work are summarized as follows:
(i) We explore a series of implementation alternatives, differing in the algorithmic features they employ and in the way data is partitioned.
(ii) We provide thorough experimental evaluation results along with a comparison against a recent study in [17].
(iii) We offer the source code as an open-source library.²
In summary, we show that our best performing alternative, when tested on a real-world dataset, can yield speed-ups up to 117 (resp. 2076) times over a naive parallel (resp. non-parallel) solution in Flink using just a commodity 4-core machine. Similar performance is observed for synthetically generated datasets as well.
The remainder of this work is structured as follows. Section II contains background material on parallel streaming platforms and outlier detection algorithms. Section III introduces our first parallel solution, which is extended in Sections IV and V. Performance evaluation results are offered in Section VI. We close this work with a discussion of related work and of issues pertaining to extensions of our techniques in Sections VII and VIII, respectively.
## II. Fundamental Concepts and a Baseline Solution
The purpose of this section is to make the paper as self-contained as possible by providing background material. We split this background material into two parts, referring to the main massively parallel platform alternatives for streaming data and to the distance-based outlier detection algorithms that inspired the solutions proposed here, respectively.
A. Parallel Frameworks and Streaming Semantics
1) Parallel Frameworks for Streaming Applications: The main choices examined include the three main parallel streaming platforms from Apache: (i) Storm³, (ii) Spark⁴ and (iii) Flink.
Apache Storm is the first widely used large-scale stream processing framework. Storm is able to connect with a number of queuing systems, such as Kestrel, Kafka and Amazon Kinesis. It is also able to write the resulting data to any database system. Storm provides low latency by using a record acknowledgment architecture, where each operator sends back an acknowledgment to the previous operator once a record has been processed. Its architecture is fault-tolerant, providing at-least-once semantics, which means that in case of failure the data is re-processed. This, however, may result in duplicate records. To offer exactly-once semantics, there is a high-level Storm API called Trident that employs micro-batches.
Spark Streaming enables scalable, high-throughput and fault-tolerant processing of data streams. It supports many data sources, such as Kafka, Flume, Twitter and TCP sockets. The processed data can be written to filesystems and databases, such as HDFS (Hadoop Distributed File System) and Cassandra. Spark receives data from any live stream and divides it into micro-batches. Each of these batches goes through a processing step generating another micro-batch (result stream) until it has passed through all of the steps and the final result is written into a data sink. Spark Streaming is fault-tolerant, supporting exactly-once semantics with high throughput by using the micro-batch architecture. This architecture, however, incurs higher latency than the continuous streaming approach employed by Storm and Flink, because of the delay caused by the micro-batches.
Apache Flink is a massively parallel platform for continuous stream processing providing low-latency, high-throughput and fault tolerance. Flink supports a number of data connectors including Kafka, Amazon Kinesis and Twitter along with data sinks such as HDFS, Cassandra and ElasticSearch. Flink is a framework that provides exactly-once semantics without resorting to micro-batches. Using a snapshot algorithm, it periodically generates state snapshots of a running stream topology, storing them in persistent storage, such as HDFS. In case of failure, Flink restores the latest snapshot from the storage and re-winds the stream source to the point where the last snapshot was taken. This algorithm combines the exactly-once semantics with low latency stream processing. Flink also provides a high level API, facilitating the partitioning of a stream into windows and the development of processing operators.
In summary, Storm provides low latency, similar to Flink, but it incorporates the at-least-once semantics, which allows duplicates to pass through the process in case of failures. Spark, like Storm Trident, provides exactly-once semantics with the use of micro-batches. The main drawback of micro-batches is the higher latency compared to the continuous processing model of Storm and Flink. Finally, Flink combines the advantages of Storm and Spark, namely low latency regarding the continuous record processing, coupled with the exactly-once semantics. In addition, Flink naturally supports both time- and count-based windows, which is not the case for Spark, since micro-batches essentially correspond to time-based sliding windows (see discussion below).
2) Streaming Semantics: A continuous stream is an infinite sequence of data points. Each data point o is annotated with its arrival time, o.t. The analysis of such infinite streaming data requires different techniques than those for finite datasets. A common approach is to adopt the notion of window, which refers to the most recent data items. Windows are typically small enough so that they can be stored in main memory, either of a single machine or of a parallel cluster. Windowing essentially splits the data stream into either overlapping (sliding windows) or non-overlapping (tumbling windows) finite sets of data points. Orthogonally, the splitting can be based either on the time of arrival of the data points (time-based windows) or on the number of data points (count-based windows). In the former case, the window size W is measured in time units,
while in the latter case the size corresponds to the number of the most recent data items held. In time-based windows, $W$ is defined by the minimum and maximum timestamps for data items in order to be included in the window, denoted as $W_{\text{start}}$ and $W_{\text{end}}$, respectively. More specifically, $W = W_{\text{end}} - W_{\text{start}}$.
Figure 1 shows a stream discretized in 3 windows based on time. The windows are non-overlapping with $W = 2$ time units. In a time-based window, the amount of data in the window varies through time and the contents of consecutive windows are disjoint sets. Tumbling windows conceptually divide a stream into non-overlapping partitions.
In this work, we focus on sliding windows, which generalize the tumbling ones, and our techniques support both time- and count-based windows. Therefore, without any loss of generality, whenever we use the term window, we will be referring to a sliding time-based window. Figure 2 shows examples of such windows. In sliding windows, the magnitude of each slide is denoted as $S$. Every time the window moves by $S$, $W_{\text{start}}$ and $W_{\text{end}}$ are increased by $S$ as well. For example, in Figure 1, $S=2$ time units, and in Figure 2, $S=1$ time unit. In each slide, some points may expire, i.e., they are dropped, while new points are included in the current window. Table I summarizes the notation used throughout the paper.
**B. Problem Definition and Non-Parallel Solutions**
The problem of continuous distance-based outlier detection is defined formally as follows.
**Definition 1:** Given a set of objects $O$ and the threshold parameters $R$ and $k$, in each window slide $S$ report all objects $o_i$ whose number of neighbors $o_{\text{nn}}$ is less than $k$, where a neighbor of $o_i$ is any object $o_j$, $j \neq i$, with $\text{dist}(o_i, o_j) \leq R$.
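A direct, quadratic-time reading of Definition 1 (the kind of per-slide recomputation that the algorithms below are designed to avoid) can be sketched as follows; the Euclidean distance is only one possible choice of metric.

```python
import math

def dist(p, q):
    # Euclidean distance; any metric distance can be substituted.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def outliers(window, R, k):
    """Return all objects in the window with fewer than k neighbors within R."""
    result = []
    for i, o in enumerate(window):
        nn = sum(1 for j, other in enumerate(window)
                 if j != i and dist(o, other) <= R)
        if nn < k:
            result.append(o)
    return result
```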
The main challenges stem from the fact that all active objects need to be continuously assessed during their lifetime, since an object may change its status as many times as the number of slides during which it remains within the window. We exclusively focus on exact solutions. There are several exact algorithms for continuous outlier detection in data streams, such as exact-Storm [4], Abstract-C [21], LUE [14], [16], DUE [14], [16], COD [14], [16], MCOD [14], [16] and Thresh_LEAP [8]. All these algorithms assume a centralized environment. In a recent impartial comparison presented in [17], these algorithms were compared in terms of their memory consumption and CPU time. The comparisons were made using four datasets with varying dimensionality and settings of window length $W$, window slide $S$, number of neighbors $k$ and neighbor range $R$. This study showed that MCOD is superior across multiple datasets in most stream settings. In addition, Thresh_LEAP and MCOD displayed the lowest memory consumption and CPU times, while exact-Storm, Abstract-C and DUE are the slowest and most memory-consuming algorithms. Based on these findings, MCOD has served as the main inspiration for our solution to the problem of parallelizing continuous distance-based outlier detection algorithms. However, we start with a simpler and easier-to-parallelize algorithm, namely exact-Storm, which has several key elements in common with MCOD, such as index structures for range queries and safe inliers, and so represents a preliminary step towards our proposed solution. The key details of these two algorithms are discussed in the following.
1) **Exact-Storm:** A key operation in distance-based outlier detection is the distance computation between objects and the need to continuously examine the neighborhood of each object. To avoid a quadratic number of comparisons in each slide, appropriate indices are required. To this end, exact-Storm uses a data structure called ISB to store the data points in nodes. A node is a record containing the data point $o$, the arrival time of the point $o_t$, the number of succeeding neighbors $o_{\text{count after}}$ and a list $o_{\text{nn before}}$ of size $k$ containing the arrival time of the preceding neighbors of $o$ (i.e., each node contains a different data stream object along with some metadata). This data structure is a pivot-based index that provides support for fast range query search in any metric space. The range query, given a data point $o$ and the range $R$, returns the nodes in the ISB whose distance to $o$ is less than or equal to $R$.
---
**TABLE I**

| Symbol | Short description |
| --- | --- |
| $W$ | The size of the stream window |
| $S$ | The slide of the stream |
| $W_{\text{start}}$ | The starting timestamp of the window |
| $W_{\text{end}}$ | The ending timestamp of the window |
| $o_i$ | The $i^{th}$ data object in the stream |
| $o_{\text{id}}$ | The identifier of object $o$ (either $i$ for $o_i$ or any other identifier) |
| $o_{\text{value}}$ | The value of $o$ |
| $o_t$ | The arrival time of $o$ |
| $o_{\text{count after}}$ | The number of succeeding neighbors of $o$ |
| $o_{\text{nn before}}$ | A list with the arrival times of the preceding neighbors of $o$ |
| $o_{\text{nn}}$ | The count of neighbors of $o$ |
| $\mathcal{PO}$ | A list containing data points that are potentially outliers |
| $R$ | The distance threshold in the outlier definition |
| $k$ | The neighbor count threshold in the outlier definition |
| $\text{dist}(o_i, o_j)$ | The distance function between objects $o_i$ and $o_j$ |
| $P$ | The set of Flink partitions, each handled by a separate Flink node |
A sketch of the algorithm's per-slide steps is as follows. For each new data point $o$, a node is created as described above. Then, a range query is issued on the ISB structure to find the neighbors $o'$ of the new node. The result of the range query is used to initialize the values of the new node's $o.count\_after$ and $o.nn\_before$. If the size of $o.nn\_before$ exceeds $k$, the oldest timestamps are removed. For each $o'$, the value of $o'.count\_after$ is increased by 1. Finally, the new node is inserted into the ISB. When a data point $o$ expires, i.e., when $o.t$ is lower than the window's starting timestamp $W.start$, it is removed from the ISB. Its timestamp, however, is not removed from other nodes' lists of preceding neighbors, to mitigate overheads.
The above steps are applied in each slide. After they have been completed, the ISB is scanned for outliers. If the sum of a node's $count\_after$ and the size of the portion of its $nn\_before$ list whose timestamps fall within the window borders is lower than $k$, then the node is an outlier. An optimization that avoids checking all objects in each slide relies on the notion of safe inliers: if a node's $count\_after$ is at least $k$, then this node is a safe inlier and does not need to be checked again in future scans, as it is guaranteed to have at least $k$ neighbors for the remainder of its lifetime.
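The per-slide logic of exact-Storm sketched above can be rendered roughly as follows. This is a simplification: the ISB range query is replaced by a linear scan, eviction of expired points is omitted, and the dictionary field names are illustrative.

```python
def process_new_arrivals(window, new_points, R, k, dist):
    """Insert the slide's new points and update neighbor metadata."""
    for o in new_points:
        o.update(count_after=0, nn_before=[])
        for other in window:            # stands in for the ISB range query
            if dist(o["value"], other["value"]) <= R:
                other["count_after"] += 1          # o succeeds `other`
                o["nn_before"].append(other["t"])  # `other` precedes o
        o["nn_before"] = sorted(o["nn_before"])[-k:]  # keep the k most recent
        window.append(o)

def report_outliers(window, W_start, k):
    out = []
    for o in window:
        if o["count_after"] >= k:
            continue                    # safe inlier: never re-checked
        alive = sum(1 for t in o["nn_before"] if t >= W_start)
        if o["count_after"] + alive < k:
            out.append(o)
    return out
```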
In our parallel solution, we adopt both structures for fast range queries (using however an M-tree, not ISB) and the notion of safe inliers.
2) MCOD: The main motivation behind MCOD is that range queries are better than brute-force (all-pairs) distance computations but are still expensive. MCOD (standing for Micro-cluster-based Continuous Outlier Detection) aims to mitigate the need for range queries. The algorithm drastically reduces the number of data points that need to be addressed during a range query through creating micro-clusters and assigning data points to them. A micro-cluster has at least $k + 1$ data points all of which are neighbors to each other. Its center can be a data point or just a point in the metric space and has a radius of $R / 2$, implying that the maximum distance between any two objects in the micro-cluster is at most $R$. Each data point in any micro-cluster is an inlier and does not need to be checked in outlier queries. However, a data point that does not belong to a micro-cluster can be either an inlier or an outlier. Such objects are stored in a list $\mathcal{PO} \subseteq \mathcal{O}$.
On average, MCOD stores less metadata per object than exact-Storm. More specifically, for each $o$ in a micro-cluster, it stores the identifier of its cluster. For each $o$ in $\mathcal{PO}$, it stores $o.count\_after$ and the expiration times of the $k$ most recent preceding neighbors. MCOD also uses an event queue to store unsafe inliers that are not in any cluster. This event queue is a priority queue that keeps the time point at which a non-safe inlier should be re-checked.
A sketch of the algorithm steps is as follows. For each new data point $o$, if $o$ is within $R / 2$ of a micro-cluster center, it is added to that micro-cluster; if there are multiple such micro-clusters, the closest one is picked. Otherwise, if it has at least $k$ neighbors in $\mathcal{PO}$ within a distance of $R / 2$, it becomes the center of a new micro-cluster. If none of the above conditions are met, $o$ is added to $\mathcal{PO}$ and possibly to the event queue, if it is not an outlier. At each slide, all previous non-expired outliers are checked, along with the inliers whose check time has arrived (with the help of the event queue). When a data point $o$ expires, it is removed from its micro-cluster or from $\mathcal{PO}$, and the event queue updates the unsafe inliers. If $o$ is removed from a micro-cluster and fewer than $k + 1$ points remain in that micro-cluster, the cluster is destroyed and each of its data points is processed as a new data point, without, however, updating their neighbors.
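A rough sketch of the MCOD insertion logic described above (the event queue and expiry handling are omitted, and the data-structure choices are illustrative):

```python
def insert_point(o, clusters, PO, R, k, dist):
    """Assign a new point to a micro-cluster, seed a new one, or keep it in PO."""
    # 1. Join the closest micro-cluster whose center lies within R/2.
    near = [c for c in clusters if dist(o, c["center"]) <= R / 2]
    if near:
        closest = min(near, key=lambda c: dist(o, c["center"]))
        closest["members"].append(o)
        return
    # 2. Otherwise, try to seed a new micro-cluster around o from close PO points.
    close = [p for p in PO if dist(o, p) <= R / 2]
    if len(close) >= k:
        clusters.append({"center": o, "members": [o] + close})
        for p in close:
            PO.remove(p)
        return
    # 3. Otherwise o remains a potential outlier.
    PO.append(o)
```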
In our final parallel solution, we also adopt the notion of micro-clusters.
## III. Simple Solutions
The aim of this work is to build upon the state-of-the-art non-parallel techniques and to parallelize the window and the associated workload efficiently. First, to explain the main engineering approach and to be able to assess the efficiency of the parallel streaming solutions for distance-based outlier detection proposed in this paper, we introduce a baseline approach, which broadly corresponds to a single-partition implementation of exact-Storm in Flink. By single-partition, we mean that the logical window is physically allocated to a single Flink node as a whole; the norm in Flink is for windows to be physically partitioned across multiple Flink nodes. Then, we proceed to its parallelization, where we are forced to employ the notion of a meta-window.
A. A baseline approach in Flink
We use two modules: (i) a stream handler; and (ii) a window processor. The stream handler applies a map function on each stream object and sends it to the window. The window processor runs an outlier detection algorithm in each slide.
In the map function of the stream handler, each data point is initialized with a null $o.count\_after$ count and an empty list $o.nn\_before$. These records are sent to the single-partitioned window. The window has its own state, persistent across slides, in which it stores the records. In other words, changes made to a data point's metadata in a slide are kept throughout the data point's lifetime. Overall, the contents of a window at any point in time comprise all the active points along with their metadata, i.e., $o.id$, $o.value$, $o.t$, $o.count\_after$ and $o.nn\_before$.
The outlier detection algorithm contains two steps. The first step is the update of each data point's metadata. In particular, the algorithm checks only the new arrivals. For every such data point $o$, it finds the neighbors $o'$ in range $R$, checking all the points in the window; the range is computed according to the Euclidean distance by default, but any metric distance can be employed. If $o'$ is in the same slide as $o$, then $o.count\_after$ is increased; otherwise the timestamp $o'.t$ is added to the list $o.nn\_before$. For each $o'$, the value of $o'.count\_after$ is increased by 1. The rest of the metadata of the older data points, i.e., their $o'.nn\_before$ values, have already been computed when these objects were inserted in the window and do not need to be recalculated. The second step
of the algorithm is to detect the outliers. For each data point in the current window, the algorithm computes the total number of neighbors. This is done by summing $o.count\_after$ and the size of the list $o.nn\_before$, taking into account only those entries $t$ with $t \geq W.start$, as explained previously.
Both time-based and count-based windows are naturally supported in Flink. In this work, we mainly focus on time-based windows, but it is worth pointing out that, even without explicit support for count-based windows, it is straightforward to emulate them by artificially tweaking the initial timestamps, so that a fixed number of objects arrive and expire in each slide, and thus the number of alive objects remains stable during stream processing.
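For completeness, emulating a count-based window with time-based machinery amounts to assigning artificial timestamps, e.g. with a hypothetical helper along these lines (not part of the described system):

```python
def assign_artificial_timestamps(objects, per_slide):
    """Give every consecutive group of `per_slide` objects the same timestamp,
    so that exactly `per_slide` objects arrive (and later expire) per time unit."""
    for i, o in enumerate(objects):
        o["t"] = i // per_slide
    return objects
```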
To gain insights into the performance of the solution, we evaluate this baseline approach using the Stock dataset from [17]. It is a one-dimensional dataset with 1,048,575 data points. Each data point, $o$, is assigned a unique identifier, $o.id$, and has a numeric value of type $Double$, $o.value$. We employ a machine with an Intel i7-3770K CPU at 3.5GHz, which possesses 4 cores (8 threads) and 32GB of RAM. Figure 3(left) shows the average processing time for each slide step for four values of slide magnitude $S$ with $W = 10K$; $S$ is given as a percentage of $W$. Figure 3(right) shows the corresponding input consumption rate, which is equal to the maximum throughput of stream data the baseline approach can support; in these settings, this throughput can reach 50 objects/sec. We can see that in general, the average processing time per new arrival increases for either small slides, where a few new points arrive, or relatively large ones, where most of the window contents are replaced.
### B. A Naïve Solution
The parallelization of the baseline approach in Section III-A yields a naive parallel solution, where the window is split into a set of multiple partitions, $P$. It is termed naive in the sense that it does not benefit from data structures to speed up range queries. Notwithstanding its simplicity, the parallelization technique needs to efficiently address the challenges of (i) collaboration between physical window partitions, so as to establish whether a point at a specific time is an outlier or not through aggregating local statistics; and (ii) keeping the window state across slides, where the state includes the object metadata.
The engineering solutions devised need to respect the principle that, in each window slide, each new object is processed only once. This does not allow us to first compute the local aggregates for a given point and then compute the global aggregates within the same window. Therefore, the key idea is to split the window processor into two parts, the sliding window processor and the tumbling window processor, as shown in Figure 4. The former holds the active points in its partition, allowing some temporary replication, as discussed in the following, while the latter keeps the final metadata in each slide. These metadata also include the information about the outliers. Since they evolve in each slide for both the new and the old points, the window state is fully updated, and thus the tumbling window semantics apply. Essentially, the second window serves as a meta-window.
The implementation details are as follows. First, we extend the object record with two new fields, namely $o.flag$ and $o.partition$. The former is a binary variable, where 0 means that the object should be kept in the assigned window during its lifetime, and 1 means that the object is redundant and should be evicted in the next slide, regardless of the $W.start$ value. The stream handler applies a $flatMap$ that, apart from initializing an object's extended record, computes its $o.partition$ based on the $o.id$. Then it dispatches the point to all partitions, setting $o.flag$ to 1 for all partitions different from $o.partition$. In other words, new objects are replicated across the partitions.
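The stream handler's dispatch logic in the naive solution can be sketched as a plain function that emits (partition, record) pairs; this is an illustration of the described behavior, not Flink's flatMap API.

```python
def replicate(o, num_partitions):
    """Send the new object to every partition; only its home copy has flag = 0."""
    home = hash(o["id"]) % num_partitions
    o["partition"] = home
    out = []
    for p in range(num_partitions):
        record = dict(o, flag=0 if p == home else 1)
        out.append((p, record))
    return out
```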
The sliding window processor is responsible for computing the distances (a) among the new objects in the slide and (b) between the new objects and all previous window contents whose timestamp comes after $W.start$. This leads to updating the $o.count\_after$ metadata for all objects and the $o.nn\_before$ metadata for the new ones. However, the metadata for the new objects are local aggregates spread across all partitions.
---
*available from https://wrds-web.wharton.upenn.edu/wrds
To produce the global aggregates, the updated objects are partitioned again according to \( o.partition \) in a new tumbling window, but without replication this time. However, to save communication cost, not all objects are shuffled, but only those that are not safe inliers. The set of safe inliers is the complement of the potential outliers, \( \mathcal{O} \setminus \mathcal{PO} \). In each slide, the sliding window processor first creates the \( \mathcal{PO} \) set by checking whether \( o.count\_after \) is less than \( k \) or not. The tumbling window aggregates the local aggregates of the non-safe inliers and thus derives the exact metadata needed to establish outlierness. As in [4], the list \( o.nn\_before \) may contain preceding neighbors that have expired; thus a filter is required to establish the alive ones, termed \( o.nn\_prec \).
Algorithm 1 summarizes the naive approach.
## IV. Advanced Solution
The advanced solution extends the naive one in two complementary and orthogonal dimensions, namely: (i) employing data structures to support fast range queries; and (ii) performing value-based partitioning, which eliminates the need to employ a meta-window.
More specifically, the first extension involves a more advanced approach to holding state in the sliding window. Such state is stored in an M-tree [10], to which range queries are submitted. M-trees are part of each partition's state; therefore, each Flink partition has its own local tree. Compared to Algorithm 1, the key difference lies in the sliding window processor.
The second extension is more intrusive. A limitation of the solutions thus far is that they replicate each new data point to all partitions. This is inevitable, given that the stream handler assigns points to partitions randomly and thus a new object may have neighbors in all partitions. Value-based partitioning addresses this limitation without sacrificing the accuracy of the results. Also, as will be explained below, it eliminates the need to exchange information between window partitions during a slide.
We could also use the M-tree to perform value-based partitioning. However, in this work, we make the assumption that the space is Euclidean and can be partitioned into grid cells. Further, we assume that some sample data is available before execution. Based on these data, we can extract minimum, maximum and quantile information about the value distribution in each dimension, in order to construct the grid cells appropriately. The rationale of the approach for a two-dimensional space is illustrated in Figure 5.
Each cell is assigned to a single Flink node. However, an object in a cell may have neighbors in other cells as well. The key difference is that these neighbors belong to adjacent cells only; therefore, the number of Flink nodes that need to be aware of the arrival of each new object is limited. More specifically, the borders of each cell are extended by a buffer zone of width equal to \( R \). The stream handler sends a new data point (i) to the partition corresponding to its cell, with \( o.flag \) set to 0, and (ii) to all partitions whose buffer zone includes the new data point, with \( o.flag \) set to 1; these partitions form the set \( AP \) in Algorithm 2. Assuming that \( R \) is much smaller than the length of a grid cell side, each data object is replicated at most 4 times if the data is 2-dimensional; this is because, when it falls near a cell corner, it may fall into the buffer zones of three other adjacent cells. In the example in the figure, the buffer zone borders for the upper-left cell are depicted; points 1, 2 and 3 are sent to the Flink node responsible for the upper-left cell along with all the other three points.
According to the partitioning above, each partition has all the necessary information to establish object outlierness locally. Therefore, a single sliding window partition processor is required, which incorporates the responsibility of the tumbling window partition processor of the previous solutions.
Algorithm 2 Advanced solution with value-based partitioning
```
procedure STREAMHANDLER
    for each new object o do
        initialize record
        if there is no timestamp then
            add artificial timestamp
        o.partition ← findGridCell(o.value)
        o.flag ← 0
        send o to o.partition
        AP ← findRelevantAdjacentPartitions(o.value)
        o.flag ← 1
        for each partition p ∈ AP do
            send o to p

procedure SLIDINGWINDOWPARTITIONPROCESSOR
    for each slide do
        evict expired objects from M-tree
        insert new objects in M-tree
        compute distances involving new objects with flag 0
        update o.count_after and o.nn_before metadata
        for each object o ∈ PO do
            if (o.count_after ≥ k) then
                PO ← PO \ o
            else
                o.nn_prec ← prune o.nn_before
                if (o.count_after + |o.nn_prec| < k) then
                    report o as an outlier
```
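The helpers used by Algorithm 2 are not spelled out in the listing. A possible 2-D version, assuming a uniform grid over a known value range and \( R \) much smaller than the cell side, is sketched below; the real partitioner is built from quantile information, so this is only illustrative.

```python
def find_grid_cell(value, cell_size, cells_per_dim):
    """Map a 2-D point to its (uniform) grid cell."""
    return tuple(min(max(int(v // cell_size), 0), cells_per_dim - 1)
                 for v in value)

def find_relevant_adjacent_partitions(value, cell_size, cells_per_dim, R):
    """Cells, other than the home cell, whose R-wide buffer zone contains the point."""
    home = find_grid_cell(value, cell_size, cells_per_dim)
    cells = set()
    # Shifting the point by +/- R in each dimension reaches every cell whose
    # buffer zone can contain it, as long as R < cell_size.
    for dx in (-R, 0.0, R):
        for dy in (-R, 0.0, R):
            shifted = (value[0] + dx, value[1] + dy)
            cell = find_grid_cell(shifted, cell_size, cells_per_dim)
            if cell != home:
                cells.add(cell)
    return cells
```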
Each window slide starts with the eviction of the expired data points from the state and the dissolution of the micro-clusters with \( \leq k \) elements. Each data point that belonged to a dissolved micro-cluster is treated as a new data point. For each new data point, the algorithm computes its distance to the micro-clusters and if it belongs to any of them, it proceeds to updating the PO metadata only. If the data point does not belong to any micro-cluster, it is inserted into the PO set and a range query is executed to find its neighbors. Based on the number of neighbors of a data point in PO, a new micro-cluster may be created.
Broadly, the set of potential outliers includes only points that do not belong to a micro-cluster. Also, if a point belongs to a micro-cluster, only the metadata of points in PO need to be updated. After the update of each data point's metadata, the algorithm reports the outliers by checking the data points in PO. Each data point that has \( o.count\_after + |o.nn\_prec| < k \) is reported as an outlier for the corresponding slide.
## V. Employing Micro-clustering
The motivation behind using micro-clusters is to drastically reduce the number of range queries submitted to the M-tree, as explained in Section II-B2. pMCOD extends the previous algorithm with the notion of micro-clusters, as shown in Algorithm 3. In contrast to the work in [16], this version does not contain an event queue. The sliding window's state consists of the micro-clusters, the potential outliers \( \mathcal{PO} \) and the M-tree. The value-based partitioning from Section IV, along with the introduction of micro-clusters, means that each partition is able to fully report its outliers without the need to communicate with the other partitions, and at a faster rate.
## VI. Performance Evaluation
The experimental setting is as follows. We focus on presenting the performance as a function of the window size, the slide size, the number of outliers and the degree of parallelism. The accuracy is always 100%, since all techniques are exact. We have employed three real and one artificial dataset. These datasets are static and finite, but they are adequate to emulate a streaming setting. Unless stated otherwise, the times presented correspond to the average time per slide, aggregated over 200 slides overall. Note that, given that we are in a streaming environment, the actual full dataset size does not matter.
Initially, we focus on the Stock real-world dataset using the same machine as the one described in Section III-A. Each
experiment is repeated 5 times. \( R \) and \( k \) are set to 0.45 and 50, respectively, yielding 1.02% outliers (i.e., the setting is similar to [17]). The default degree of parallelism, i.e., the number of Flink partitions of the window, is 16, and each Flink node runs on a single core. Stock is a one-dimensional dataset. To allow a fair comparison, the timestamps are assigned in such a way that all windows are of the same size, and the slide is given as a percentage of \( W \); e.g., a slide of 5% means that 5% of the window contents are new arrivals.
A. Main Results
In the first experiment, we employ a window of 10K objects, while the slide varies from 5% to 50%. The results are shown in Figure 6, where both the average and the median times are reported. From the figure, we can draw the following observations:
1) \( pMCOD \) improves upon the naive solution by two orders of magnitude; for example, it is 117X faster for slide 10%.
2) \( pMCOD \) improves upon the advanced solutions with both random and value-based partitioning (labeled as advanced and advanced(VP), respectively) by an order of magnitude for slides up to 20%; e.g., for slide of 10%, it is 10X faster, and for 20%, it is 13X.
3) For slides of 50%, where half of the window points are new in each slide, \( pMCOD \) is faster than advanced(VP) by 2.74 times.
4) Advanced dominates advanced(VP) for small slides, while the latter is better for large ones, which is mostly attributed to the fact that advanced(VP)'s inherent load imbalance\(^7\) is outweighed by the benefits of less replication and communication for large slides.
5) Despite the non-negligible standard deviation, the trends in the average values are similar to those in the median ones.
Figure 7 compares \( pMCOD \) against the throughput results obtained by the baseline technique, described in Section III-A. The improvements are up to 2076 times, whereas \( pMCOD \)'s throughput exceeds 33240 new objects per second for \( S = 5\% \).
In the second experiment, we focus on \( pMCOD \) and we examine the impact of three parameters, namely the degree of parallelism \( P \), \( k \) and the window size \( W \). The results are summarized in Figure 8. Regarding the degree of parallelism, the left figure refers to a setting where \( W = 10K \) and the slide is 5%. We see that \( pMCOD \) scales well and the time per slide drops nearly two times when we go from two partitions to four (the machine is a 4-core one). We also see that our default configuration of \( P = 16 \) yields the highest performance. In the middle figure, we see that, as we increase the \( k \) value, the performance degrades. Increasing the \( k \) value implies more outliers and higher difficulty in forming micro-clusters; as such, this result is reasonable. Finally, the rightmost figure reveals that \( pMCOD \) scales well with the size of the window: a ten-fold increase in the window size results in similar increases in the processing time for slides of 5% and 10%, and smaller increases for larger slides. More specifically, for 20% slide magnitude, the increase in the processing time in the 100K window is less than 8 times, and for 50% slide, it is 6.62 times only.
B. Using More Datasets
First, we provide results using an artificial dataset, which is generated from a mixture of three Gaussian distributions and taken from [17]. We set \( W = 10K \), \( R = 0.28 \) and \( k = 50 \). Figure 9 presents the results, where it is shown that the main observations drawn for the Stock real-world dataset hold.
In the next experiment, we employ two additional real-world datasets, namely Forest Cover (FC)\(^8\) and TAO\(^9\), considering 2 and 3 dimensions, respectively. We configure their parameters so that always \( k = 50 \), while \( W \) is kept to 10K. More specifically, for FC, we set \( R = 34 \) on the 2nd and the 5th dimension (corresponding to 1.3% outliers), and for TAO, we
\(^7\)The grid cell used for partitioning is based on an initial sample and thus no guarantees can be provided as to how balanced the workload distribution is throughout stream processing.
\(^8\)Available from http://kdd.ics.uci.edu
\(^9\)Available from http://www.pmel.noaa.gov
The results are shown in Figure 10 (actually, for TAO and S=50%, advanced crashed). The key observations are as follows: (i) \textit{pMCOD} behavior is nearly the same for 2 or 3 dimensions but significantly worse than the behavior for the one-dimensional datasets; (ii) for the FC dataset, and due to the increased number of outliers compared to the other settings, \textit{advanced(VP)} slightly outperforms \textit{pMCOD} when S = 50%, which means that half of the points in each window slide are new arrivals; and (iii) \textit{advanced} and \textit{advanced(VP)} are significantly affected by the increase in the number of dimensions considered from 2 to 3.
C. Comparison against results in [17]
Finally, we compare our results regarding the Stock dataset against those of non-parallel MCOD [16], as evaluated by third parties in [17]. The evaluation in [17] also uses a processor with clock speed at 3.5GHz, but without giving details about the number of processors. Nevertheless, our results can directly compare against those in Figures 6 and 10 in [17]. Due to the log scale used, we can only report approximate values, as shown in Table VI-C. The speedup is between 2.66X and 6.65X, which provides strong insights into the parallelization efficiency of our solutions on a 4-core machine.
## VII. Related Work
Outlier detection has been a topic that has attracted a lot of interest, and there are several comprehensive surveys, e.g., [1], [2], [9], [11]. In Section II we have already discussed algorithms for outlier detection in streams. The next most related area to our work is parallel algorithms for outlier detection. A distributed outlier detection algorithm for massive datasets is proposed in [7]. The two key points of this research are the initial partitioning of the data and the fact that each partition may run a different outlier detection algorithm. The partitioning resembles our value-based one and focuses on the workload that each partition receives. Then, in each partition, one of two candidate exact outlier detection algorithms is chosen. However, none of these algorithms is suitable in a streaming setting. There are also some works that assume parallel infrastructures that cannot scale and do not follow the paradigm introduced by MapReduce and its modern extensions, e.g., [3], [5]. Overall, our work is the first one that combines streaming and massively parallel solutions to the problem of outlier detection.
A related yet different problem is examined in [20]. In a production distributed environment, a stream of data points may be split across multiple nodes, each holding part of the values of a data point. These parts will eventually need to be aggregated on a core node for outlier detection, but this incurs
significant communication cost. The solution proposed is based on compressing local data into a sketch. Another related problem is that of supporting multiple outlier detection queries, i.e., combinations of $R$ and $k$ values. Examples include [6] and [16]. The latter presents multi-query extensions to MCOD and its approach is compatible with our parallel pMCOD solution; here, we have examined single-query solutions only.
Finally, apart from the platforms discussed, there are additional alternatives. For example, ChronoStream [19] is a prototype system for big stream processing in a distributed environment, providing low latency. It incorporates horizontal elasticity for workload balancing and efficiency, as well as vertical scaling for resource management. However, we have decided to adopt Flink because it combines strong positive features, as discussed in Section II-A, with mature engineering and a broad user community, while we did not consider scaling issues in this work.
## VIII. Concluding Remarks and Directions for Future Work
This work targets streaming distance-based outlier detection and provides the first solutions to date to this problem, when examined in a massively parallel setting, such as Flink. We have proposed a series of alternative techniques, with the one termed as pMCOD being a clear winner in the experiments that we have conducted using three real-world and one synthetic dataset. The improvements upon other solutions are significant, if not impressive, reaching up to an order of magnitude compared to the second best solution and up to three orders of magnitude compared to baseline solutions. There are also good speedups, between 2.66 and 6.65 times, compared to the non-parallel solutions implemented by third parties in [17], when running on a 4-core machine. Also, our solutions have been made publicly available. The motivation behind our work is to fill a gap in the currently offered solutions in large-scale streaming big data analytics. Moreover, our solutions aspire to act as a reference point for future techniques that target both continuous reporting of distance-based outliers and a massively parallel setting.
We identify three avenues for such future extensions. First, there are several features of current non-parallel solutions whose parallelization might yield benefits. Two such features are the event queue and the full MCOD algorithm in [16]. The event queue is a priority queue, and its efficient distribution across several nodes depends heavily on the partitioning of the stream across different nodes. In addition, in the original MCOD proposal, the expensive M-tree is used less. Both features reduce the number of range queries considerably, and their efficient parallel implementation is left to an extension of this paper. Second, further research is required to make value-based partitioning more practical and adaptively balanced, possibly using an M-tree instead of a grid, and to address the issue of acquiring both initial and evolving metadata to reach efficient partitioning decisions; to this end, the early results in [7] need to be transferred into a streaming environment. Third, there are additional ways in which the logical window can be partitioned, e.g., through adopting the notion of time slicing [18], which may be more efficient when multiple distance-based outlier detection queries are active simultaneously; this notion is also employed in [8]. Finally, another line of future research is on approximate outlier mining and on additional definitions of outlierness; here, we have provided exact solutions and considered distance-based outliers only.
References
In proceedings of PPoPP’14
Concurrency Testing Using Schedule Bounding: an Empirical Study *
Paul Thomson, Alastair F. Donaldson, Adam Betts
Imperial College London
{paul.thomson11,afd,abetts}@imperial.ac.uk
Abstract
We present the first independent empirical study on schedule bounding techniques for systematic concurrency testing (SCT). We have gathered 52 buggy concurrent software benchmarks, drawn from public code bases, which we call SCTBench. We applied a modified version of an existing concurrency testing tool to SCTBench to attempt to answer several research questions, including: How effective are the two main schedule bounding techniques, preemption bounding and delay bounding, at bug finding? What challenges are associated with applying SCT to existing code? How effective is schedule bounding compared to a naive random scheduler at finding bugs? Our findings confirm that delay bounding is superior to preemption bounding and that schedule bounding is more effective at finding bugs than unbounded depth-first search. The majority of bugs in SCTBench can be exposed using a small bound (1-3), supporting previous claims, but there is at least one benchmark that requires 5 preemptions. Surprisingly, we found that a naive random scheduler is at least as effective as schedule bounding for finding bugs. We have made SCTBench and our tools publicly available for reproducibility and use in future work.
Categories and Subject Descriptors D.2.4 [Software Engineering]: Software/Program Verification; D.2.5 [Software Engineering]: Testing and Debugging
Keywords Concurrency; systematic concurrency testing; stateless model checking; context bounding
1. Introduction
In recent years, researchers have shown great interest in systematic techniques for testing concurrent programs [7, 12, 26, 32, 34, 36] to expose concurrency bugs—software defects (such as crashes, deadlocks, assertion failures, memory safety errors and errors in algorithm implementation) that arise directly or indirectly as a result of concurrent execution. This is motivated by the rise of multicore systems [31], the ineffectiveness of traditional testing for detecting and reproducing concurrency bugs due to nondeterminism [19], and the desire for automatic, precise analysis, which is hard to achieve using static techniques [1].
Systematic concurrency testing (SCT) [7,12,26,32,34], also known as stateless model checking [12], is used to find and reproduce bugs in multi-threaded software. It has been implemented in a variety of tools, including CHESS [26] and Verisoft [12]. The technique involves repeatedly executing a multi-threaded program, controlling the scheduler so that a different schedule is explored on each execution. This process continues until all schedules have been explored, or until a time or schedule limit is reached. The analysis is highly automatic, has no false-positives and bugs can be reproduced by forcing the bug-inducing schedule.
Assuming a nondeterministic scheduler, the number of possible thread interleavings for a concurrent program is exponential in the number of execution steps, so exploring all schedules for large programs using SCT is infeasible. To combat this schedule explosion, schedule bounding techniques have been proposed, which reduce the number of thread schedules that are considered with the aim of preserving schedules that are likely to induce bugs. Preemption bounding [23] bounds the number of preemptive context switches that are allowed in a schedule. Delay bounding [7] bounds the number of times a schedule can deviate from the scheduling decisions of a given deterministic scheduler. During concurrency testing, the bound on preemptions or delays can be increased iteratively, so that all schedules are explored in the limit; the intention is that interesting schedules are explored within a reasonable resource budget. Schedule bounding has two additional benefits, regardless of bug finding ability. First, it produces simple counterexample traces; a
trace with a small number of preemptions is likely to be easy to understand. This property has been used in trace simplification [15],[16]. Secondly, it gives bounded coverage guarantees; if the search manages to explore all schedules with at most $c$ preemptions, then any undiscovered bugs in the program require at least $c + 1$ preemptions. A guarantee of this kind provides some indication of the necessary complexity and probability of occurrence of any bugs that might remain, and recent works on concurrent software verification employ schedule bounding to improve tractability [6],[20].
The hypothesis that preemption and delay bounding are likely to be effective is based on empirical evidence suggesting that many interesting concurrency bugs require only a small number of preemptive context switches to manifest [7],[23],[26]. Prior work has also shown that delay bounding improves on preemption bounding, allowing additional bugs to be detected [7]. However, these works have focused on a particular set of C# and C++ programs that target the Microsoft Windows operating system, most of which are not publicly available. Additionally, these works do not explicitly show that schedule bounding provides benefit over a naive random scheduler for finding bugs [1].
We believe that these exciting and important claims about the effectiveness of scheduling would benefit from further scrutiny using a wider range of publicly available applications. To this end, we present the first independent, fully reproducible empirical study of schedule bounding techniques for SCT. We have put together SCTBench, a set of 52 publicly available benchmarks amenable to systematic concurrency testing, gathered from a combination of stand-alone multi-threaded test cases, and test cases drawn from 13 distinct applications and libraries. These are benchmarks that have been used in previous work to evaluate concurrency testing tools, with a few additions. Our study is based on an extended version of Maple [36], an open source concurrency testing tool. Our aim was to answer the following questions over a large and varied set of benchmarks:
1. Can we find the known bugs in the publicly available benchmark suites using SCT?
2. How do preemption and delay bounding compare in their effectiveness at finding concurrency bugs?
3. How effective is schedule bounding compared to a naive random scheduler at finding bugs?
4. How easy is it to apply SCT to various existing codebases in practice?
5. Can we find examples of concurrency bugs that require more than three preemptions (the largest number of preemptions required to expose a bug in previous work) [7]?
\footnote{We note that [23] plots the state (partial-order) coverage of preemption bounding against a technique called “random” on a single benchmark, but the details of this and the bug finding ability are not mentioned.}
1.1 Main findings and contribution
We now summarise the main findings of our study. The conclusions we draw of course only relate to the 52 benchmarks in SCTBench, but this does include publicly available benchmarks used in prior work to evaluate concurrency testing tools. We forward-reference the Venn diagrams of Figure 2, which are discussed in detail in §6. These diagrams provide an overview of our results in terms of the bug-finding ability of the various techniques we study: iterative preemption bounding (IPB), iterative delay bounding (IDB), depth-first search with no schedule bound (DFS) and naive random scheduling (Rand). For each method evaluated, a limit of 10,000 schedules per benchmark is used.
Schedule bounding is similar to naive random scheduling in terms of bug-finding ability. Our assumption prior to this study was that a naive random scheduler would not be effective at finding bugs. This claim is not made explicitly in prior work, but neither is it addressed; prior work (such as [7],[23],[26]) only includes depth-first search or preemption bounding as a baseline for finding bugs. Our findings, summarised in Figure 2, contradict this assumption: the bugs in 44 benchmarks were found by both schedule bounding and a naive random scheduler within 10,000 executions. Schedule bounding and random scheduling each found one additional, distinct, bug. The random scheduler almost always led to faster bug detection than with schedule bounding. This raises two important questions: Does schedule bounding actually aid in bug finding, compared to more naive approaches? Are the benchmarks used to evaluate concurrency testing tools (captured by SCTBench) representative of real-world concurrency bugs? Our findings indicate that the answer to at least one of these questions must be “no”. As noted above, schedule bounding provides several benefits regardless of bug finding ability which are not questioned by our findings.
Many bugs can be found via a small (1-3) schedule bound. Schedule bounding exposed each bug in 45 of the 52 benchmarks and the highest preemption bound required in these cases was three. Thus, a large majority of the bugs in SCTBench can be found with a small schedule bound. This supports previous claims [2],[23],[26]. It also adds weight to the argument that bounded guarantees provided by schedule bounding are useful. However, we note that one benchmark was reported to require a minimum of five preemptions for the bug to manifest. A straightforward depth-first search with no schedule bounding exposed bugs in 33 benchmarks, all of which were also found with schedule bounding.
Delay bounding beats preemption bounding. Delay bounding found all of the 38 bugs that were found by preemption bounding, plus seven that were not (see Figure 2a).
SCT can be difficult to apply. Many interesting benchmarks could not be included in our study, as they use nondeterministic features or additional synchronisation that is not modelled or controlled appropriately by most SCT tools. This includes network communication, multiple processes, signals (other than pthread condition variables) and event libraries.
Additionally, we found several program modules that could not easily be tested in isolation due to direct dependencies on system functions and other program modules. Thus, creating isolated tests suitable for SCT may require significant effort, especially for those who are not developers of the software under test.
Data races are common. Many benchmarks feature a large number of data races that are not regarded as bugs. Treating data races as errors would make bug-finding too easy for benchmarking purposes, as they are very common. For the study, we explore the interleavings arising from sequentially consistent outcomes of racy memory accesses in order to expose bugs such as assertion failures and incorrect output.
Bugs may not be detected without additional checks. Some concurrency bugs manifest as out-of-bound memory accesses, which do not always cause a crash. Tools need to check for these, otherwise bugs may be missed or manifest nondeterministically, even when the required thread schedule is executed. Performing such checks reliably and efficiently is non-trivial.
Trivial benchmarks. We argue that certain benchmarks used in prior work are “trivial” (based on certain properties – see Table 2) and cannot meaningfully be used to compare the performance of competing techniques. Instead, they provide a minimum baseline for any respectable concurrency testing technique. For example, the bugs in 19 benchmarks were exposed 50% of the time when using a random scheduler, with 10,000 runs. In nine of these cases, the bugs were exposed 100% of the time.
Non-trivial benchmarks. We believe most benchmarks from the CHESS, PARSEC and RADBench suites, as well as the misc.safestack benchmark, present a non-trivial challenge for concurrency testing tools. Furthermore, these represent real bugs, not synthetic tests. Future work can use these challenging benchmarks to show the improvement obtained over schedule bounding and other techniques.
1.2 SCTBench and reproducibility of our study
To make our study fully reproducible, we provide the 52 benchmarks (SCTBench), our scripts and the modified version of Maple used in our experiments, online:
http://sites.google.com/site/sctbenchmarks
We believe SCTBench will be valuable for future work on concurrency testing in general and SCT in particular. Each benchmark is directly amenable to SCT and exhibits a concurrency bug.
As discussed further in §5, our results are given in terms of the number of terminal schedules explored, not time, which allows them to be easily compared with other work and tools.
2. Systematic Concurrency Testing
Systematic concurrency testing (SCT) works by repeatedly executing a concurrent program using a custom scheduler, forcing a different thread schedule to be explored on each execution. Execution is serialised, so that concurrency is emulated by interleaving instructions from different threads. It is assumed that the only source of nondeterminism is from the scheduler so that repeated execution of the same schedule always leads to the same program state. Nondeterminism such as user input, network communication, etc. must be fixed or modelled. This continues until all schedules have been explored, or until a time or schedule limit is reached.
The search space is over schedules; unlike model checking, program states are not represented. This is appealing because the state of real software is large and difficult to capture.
A schedule \( \alpha = \langle \alpha(1), \ldots, \alpha(n) \rangle \) is a list of thread identifiers. We use the following shorthands for lists: \( \text{length}(\alpha) = n; \alpha \cdot t = \langle \alpha(1), \ldots, \alpha(n), t \rangle; \text{last}(\alpha) = \alpha(n) \). The element \( \alpha(i) \) refers to the thread that is executing at step \( i \) in the execution of the multi-threaded program, where step \( 1 \) is the first step. For example, the schedule \( \langle T0, T0, T1, T0 \rangle \) specifies that, from the initial state, two steps are executed in the context of \( T0 \), one step in \( T1 \) and then a step in \( T0 \).
A step corresponds to a particular thread executing a visible operation [12], such as a synchronisation operation or shared memory access, followed by a finite sequence of invisible operations until immediately before the next visible operation. Considering interleavings involving non-visible operations is unnecessary when checking safety property violations, such as deadlocks and assertion failures [12]. The point just before a visible operation, where the scheduler decides which thread to execute next, is called a scheduling point. Let \( \text{enabled}(\alpha) \) denote the set of enabled threads (those that are not blocked, and so can execute) in the state reached by executing \( \alpha \). We say that the state reached by \( \alpha \) is a terminal state when \( \text{enabled}(\alpha) = \emptyset \). A schedule that reaches a terminal state is referred to as a terminal schedule.
Context switches A context switch occurs in a schedule when execution switches from one thread to another. Formally, step \( i \) in \( \alpha \) is a context switch if and only if \( \alpha(i) \neq \alpha(i - 1) \). The context switch is preemptive if and only if \( \alpha(i - 1) \in \text{enabled}(\langle \alpha(1), \ldots, \alpha(i - 1) \rangle) \). In other words, the thread executing step \( i - 1 \) remained enabled after that step. Otherwise, the context switch is non-preemptive.
Preemption bounding Preemption bounding [23] bounds the number of preemptions in a schedule. Let the preemption count \( PC \) of a schedule be defined recursively; a schedule of length zero or one has no preemptions, otherwise:
\[
PC(\alpha \cdot t) = \begin{cases}
PC(\alpha) + 1 & \text{if } \text{last}(\alpha) \neq t \land \text{last}(\alpha) \in \text{enabled}(\alpha) \\
PC(\alpha) & \text{otherwise}
\end{cases}
\]
With a preemption bound of \( k \), any schedule \( \alpha \) with \( PC(\alpha) > k \) will not be explored.
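To make the recursion concrete, the following minimal C++ sketch (our own illustration, not code from any of the tools discussed in this paper) computes the preemption count of a schedule; the `EnabledFn` oracle, which reports the threads enabled in the state reached by a schedule prefix, is an assumption of the example.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

using Thread = int;
using Schedule = std::vector<Thread>;
// enabled(prefix) must return the threads that are enabled in the state
// reached by executing the schedule prefix (an assumed oracle).
using EnabledFn = std::function<std::vector<Thread>(const Schedule&)>;

static bool isEnabled(const EnabledFn& enabled, const Schedule& prefix, Thread t) {
    for (Thread e : enabled(prefix))
        if (e == t) return true;
    return false;
}

// Preemption count PC, following the recursive definition above: a step is a
// preemption if the scheduler switched away from a thread that was still enabled.
int preemptionCount(const Schedule& alpha, const EnabledFn& enabled) {
    int pc = 0;
    for (std::size_t i = 1; i < alpha.size(); ++i) {
        Schedule prefix(alpha.begin(), alpha.begin() + i);
        Thread prev = alpha[i - 1];
        if (alpha[i] != prev && isEnabled(enabled, prefix, prev))
            ++pc;
    }
    return pc;
}
```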
Example 1. Consider Figure 1 which shows a simple multi-threaded program. T0 launches three threads concurrently and is then disabled. All variables are initially zero and threads execute until there are no statements left. We refer to the visible actions of each thread via the statement labels \((a, b, c, \text{etc.})\) and we (temporarily) represent schedules as a list of labels. Note that ‘\(a\)’ cannot be preempted, as there are no other threads to switch to. A schedule with zero pre-emptions is \(\langle a, b, c, e, d \rangle\). Note that, for example, \(e\) is not a preemption because \(T1\) has no more statements and so is considered disabled after \(c\). A schedule that causes the assertion to be violated is \(\langle a, b, e \rangle\), which has one preemption at operation \(e\). The bug will not be found with a preemption bound of zero, but will be found with any greater bound.
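Figure 1 itself is not reproduced here. As a self-contained stand-in (our own construction, not the paper's figure), the following pthread program shows the kind of schedule-dependent assertion such examples rely on: the assertion fails exactly in those interleavings where the checking thread's read is scheduled before the writer's store.

```cpp
#include <cassert>
#include <pthread.h>

// Hypothetical stand-in for a Figure-1-style test case (not the actual figure):
// a shared variable, one writer, one checker, and an assertion whose outcome
// depends entirely on how the two threads are interleaved. The race on x is
// deliberate; the study explores sequentially consistent outcomes of such races.
int x = 0;  // shared, initially zero

void* writer(void*) {
    x = 1;               // the write the checker may or may not observe
    return nullptr;
}

void* checker(void*) {
    assert(x == 1);      // fails in schedules where this read precedes the write
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, writer, nullptr);
    pthread_create(&t2, nullptr, checker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    return 0;
}
```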
Delay bounding A delay conceptually corresponds to blocking the thread that would be chosen by the scheduler at a scheduling point, which forces the next thread to be chosen instead. The blocked thread is then immediately re-enabled. Delay bounding [7] bounds the number of delays in a schedule, given an otherwise deterministic scheduler. Executing a program under the deterministic scheduler (without delaying) results in a single terminal schedule – this is the only terminal schedule that has zero delays.
In the remainder of this paper we assume the deterministic scheduler that is non-preemptive and, when the current thread blocks, chooses the next enabled thread in thread creation order in a round-robin fashion. We assume this instantiation of delay bounding because it has been used in previous work [7] and is straightforward to explain and implement.
The following is a definition of delay bounding assuming the non-preemptive round robin scheduler. Assume that each thread id is a non-negative integer, numbered in order of creation; the initial thread has id 0, and the last thread created has id \(N-1\). For two thread ids \(x, y \in \{0, \ldots, N-1\}\), let \(distance(x, y)\) be the unique integer \(d \in \{0, \ldots, N-1\}\) such that \((x+d) \mod N = y\). Intuitively, this is the “round-robin distance” from \(x\) to \(y\). For example, given four threads \(\{0, 1, 2, 3\}\), \(distance(1,0) = 3\). For a schedule \(\alpha\) and a thread id \(t\), let \(delays(\alpha, t)\) yield the number of delays required to schedule thread \(t\) at the state reached by \(\alpha\):
\[
delays(\alpha, t) = \left|\{ x : 0 \leq x < distance(last(\alpha), t) \land (last(\alpha) + x) \bmod N \in enabled(\alpha) \}\right|
\]
This is the number of enabled threads that are skipped when moving from \(last(\alpha)\) to \(t\). For example, let \(last(\alpha) = 3\), \(enabled(\alpha) = \{0, 2, 3, 4\}\) and \(N = 5\). Then, \(delays(\alpha, 2) = 3\) because threads 3, 4 and 0 are skipped (but not thread 1, because it is not enabled).
Define the delay count \(DC\) of a schedule recursively; a schedule of length zero or one has no delays, otherwise:
\[
DC(\alpha \cdot t) = DC(\alpha) + delays(\alpha, t)
\]
With a delay bound of \(k\), any schedule \(\alpha\) with \(DC(\alpha) > k\) will not be explored.
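The following C++ sketch (again our own illustration, with the enabled-thread oracle as an assumption) computes `distance`, `delays` and the delay count DC for the non-preemptive round-robin scheduler described above; it reproduces the worked example in which delays(α, 2) = 3.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

using Thread = int;                 // thread ids 0..N-1, in creation order
using Schedule = std::vector<Thread>;
using EnabledFn = std::function<std::vector<Thread>(const Schedule&)>;

// Round-robin distance from x to y among N threads.
int distanceRR(Thread x, Thread y, int N) { return ((y - x) % N + N) % N; }

// Number of enabled threads skipped when the round-robin scheduler is forced
// from last(alpha) to t, i.e. the delays charged for scheduling t next.
int delays(const Schedule& alpha, Thread t, int N, const EnabledFn& enabled) {
    std::vector<Thread> en = enabled(alpha);
    auto isEnabled = [&](Thread u) {
        for (Thread e : en) if (e == u) return true;
        return false;
    };
    int count = 0;
    for (int x = 0; x < distanceRR(alpha.back(), t, N); ++x)
        if (isEnabled((alpha.back() + x) % N)) ++count;
    return count;
}

// Delay count DC of a schedule, following the recursive definition above.
int delayCount(const Schedule& alpha, int N, const EnabledFn& enabled) {
    int dc = 0;
    for (std::size_t i = 1; i < alpha.size(); ++i) {
        Schedule prefix(alpha.begin(), alpha.begin() + i);
        dc += delays(prefix, alpha[i], N, enabled);
    }
    return dc;
}
```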
The set of schedules with at most \(c\) delays is a subset of the set of schedules with at most \(c\) preemptions. Thus, delay bounding reduces the number of schedules by at least as much as preemption bounding.
Example 2. Consider Figure 1 once more. Assume thread creation order \(\langle T0, T1, T2, T3 \rangle\). The assertion can also fail via: \(\langle a, b, d, e \rangle\), with one delay/preemption at \(d\). However, a preemption bound of one yields 11 terminal schedules, while a delay bound of one yields only 4 (note that an assertion failure is a terminal state). Now assume that \(T2\) comprises the same statements as \(T1\), which we label as: f) x=1; g) y=1. Now, the assertion cannot fail with a delay bound of one, because two delays must occur so that \(T1\) and \(T2\) do not both execute all their statements. For example, \(\langle a, b, e \rangle\) exposes the bug, but executing \(e\) uses two delays. However, note that this schedule only has one preemption, so the assertion can still fail under a preemption bound of one. Adding an additional \(n\) threads between \(T1\) and \(T3\) (in the creation order) with the same statements as \(T1\) will require \(n\) additional delays to expose the bug, while still only one preemption will be needed. Empirical evidence [27] suggests that adversarial examples like this are not common in practice. Our results (§6) also support this.
Theoretical Complexity Upper-bounds for the number of terminal schedules produced by SCT techniques are described in [23]. In summary, assume at most \(n\) threads and at most \(k\) execution steps in each thread. Of those \(k\), at most \(b\) steps block (cause the executing thread to become disabled) and \(i\) steps do not block. Complete search is exponential in \(n\) and \(k\), and thus infeasible for programs with a large number of execution steps. With a scheduling bound of \(c\), preemption bounding is exponential in \(c\) (a small value), \(n\) (often, but not necessarily, a small value) and \(b\) (usually much smaller than \(k\)). Crucially, it is no longer exponential in \(k\). Delay bounding is exponential only in \(c\) (a small value). Thus, it performs well (in terms of number of schedules) even when programs create a large number of threads.
Finding bugs The intuition behind schedule bounding is that it greatly reduces the number of schedules, but still allows many bugs to be found [7]. The reasoning is that only a few preemptions are needed at the right places in order to enforce an ordering that causes the bug to manifest. Performing a preemption elsewhere will have little impact. A complete depth-first search becomes infeasible as the execution length increases due to the large number of context switches, many of which are likely to be irrelevant.
Iterative schedule bounding Schedule bounding can be performed iteratively [23], where all schedules with zero preemptions or delays are all executed, followed by those with one preemption or delay, etc. until there are no more schedules or a time or schedule limit is reached. In the limit, all schedules are explored. Thus, iterative schedule bounding creates a partial-order in which to explore schedules: schedule $\alpha$ will be explored before schedule $\alpha'$ if $PC(\alpha) < PC(\alpha')$, while there is no predefined exploration order between schedules with equal preemption counts. The partial order for iterative delay bounding with respect to DC is analogous. Thus, iterative schedule bounding is a heuristic that aims to expose buggy schedules before the time or schedule limit is reached, based on the hypothesis discussed above.
In this study, we perform iterative schedule bounding to compare preemption and delay bounding.
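As a rough sketch of this outer loop (not Maple's or CHESS's actual implementation), the following C++ fragment iterates the bound while respecting a global schedule limit; the `explore` callback, which enumerates all terminal schedules with exactly the given bound and reports whether a bug was exposed, is an assumed interface for illustration.

```cpp
#include <cstdio>
#include <functional>

// Assumed interface: enumerate all terminal schedules whose preemption (or
// delay) count equals `bound`, incrementing `schedulesSoFar` for each one and
// stopping early if `limit` is hit; return true if a bug was exposed.
using ExploreFn = std::function<bool(int bound, int limit, int& schedulesSoFar)>;

// Iterative schedule bounding: bound 0 first, then 1, and so on, until a bug
// is found, the schedule limit is reached, or (not modelled here) the search
// space is exhausted.
void iterativeScheduleBounding(const ExploreFn& explore, int scheduleLimit) {
    int schedulesSoFar = 0;
    for (int bound = 0; schedulesSoFar < scheduleLimit; ++bound) {
        if (explore(bound, scheduleLimit, schedulesSoFar)) {
            std::printf("bug exposed within bound %d after %d schedules\n",
                        bound, schedulesSoFar);
            return;
        }
    }
    std::printf("schedule limit reached without exposing a bug\n");
}
```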
3. Modifications to Maple
We chose to use a modified version of the Maple tool [36] to conduct our experimental study. Maple is a concurrency testing tool framework for pthread [21] programs. It uses the dynamic instrumentation library, PIN [22], to test binaries without the need for recompilation. One of the modules, systematic, is a re-implementation of the CHESS [26] algorithm for preemption bounding. The main reason for using Maple, instead of CHESS, is that it targets pthread programs. This allows us to test a wide variety of open source multi-threaded benchmarks and programs. Previous evaluations [7, 23, 26] focus on C# programs and C++ programs that target the Microsoft Windows operating system, most of which are not publicly available. In addition, CHESS requires re-linking the program with a test function that can be executed repeatedly; this requires resetting the global state (e.g. resetting the value of global variables) and joining any remaining threads, which can be non-trivial. In contrast, Maple can test native binaries out-of-the-box, by restarting the program for each terminal schedule that is explored, although a downside of this approach is that it is slower. Checking for data races is also supported by Maple; as discussed in §5, this is important for identifying visible operations. The public version of CHESS can only interleave memory accesses in native code if the user adds special function calls before each access.\footnote{See “Why does wc/chess not support /detectraces?” at http://social.msdn.microsoft.com/Forums/en-us/home?forum=chess}
Delay bounding We modified Maple to add support for delay bounding, following a similar design to the existing support for preemption bounding. At each scheduling point, Maple conceptually constructs several schedules consisting of the current schedule concatenated with an enabled thread $t$. These are added to a set and will be explored on subsequent executions. If switching to thread $t$ would cause the delay bound to be exceeded (as explained in §2), the schedule is not added to the set.
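A schematic sketch of that pruning step is shown below; the data structures are our own assumptions for illustration, not Maple's actual internals. Each enabled thread at a scheduling point yields a successor schedule unless choosing it would exceed the delay bound.

```cpp
#include <vector>

// Schematic sketch of the pruning described above (not Maple's actual code).
struct Candidate {
    std::vector<int> schedule;   // thread ids, creation order 0..numThreads-1
    int delayCount;
};

void expandSchedulingPoint(const Candidate& current,
                           const std::vector<int>& enabledThreads,
                           int numThreads, int delayBound,
                           std::vector<Candidate>& toExplore) {
    int last = current.schedule.back();
    for (int t : enabledThreads) {
        // Delays charged for switching to t: enabled threads skipped between
        // last and t in round-robin order (as defined in the previous section).
        int dist = ((t - last) % numThreads + numThreads) % numThreads;
        int d = 0;
        for (int x = 0; x < dist; ++x) {
            int skipped = (last + x) % numThreads;
            for (int e : enabledThreads)
                if (e == skipped) { ++d; break; }
        }
        if (current.delayCount + d > delayBound)
            continue;                         // exceeds the bound: not added
        Candidate next = current;
        next.schedule.push_back(t);
        next.delayCount += d;
        toExplore.push_back(next);            // explored on a later execution
    }
}
```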
Depth-first search Even with a schedule bound, there are many possible orders in which to explore schedules. Maple’s systematic mode only supports a depth-first search, as this allows a stack to be used to efficiently record which schedules still need to be explored. Since the stack is deeply ingrained in Maple’s data structures and algorithms, we did not attempt to implement other search strategies. We note that the initial terminal schedule explored by iterative preemption bounding, iterative delay bounding and unbounded depth-first search is the same for all techniques (a non-preemptive round-robin schedule). We discuss the impact of depth-first search on our study further in §5.
Random scheduler Maple also includes a naive random scheduler mode, where, at each scheduling point, one enabled thread is randomly chosen from the set of enabled threads to execute a visible operation. Unlike schedule fuzzing, where randomisation is used to perturb the OS scheduler, this yields truly (pseudo-)random schedules because scheduling nondeterminism is fully controlled. No information is saved by the random scheduler for subsequent executions, so it is possible that the same schedule will be explored multiple times over many runs. This could be rectified by modifying Maple to record a history of schedules during random scheduling, but such a change would not be straightforward due to the way in which the tool is designed. As a result, with random scheduling the search cannot “complete”, even for programs with a small number of schedules.
We include random scheduling as a baseline for non-systematic approaches, and to provide further insight on the complexity of the benchmarks.
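A minimal sketch of such a scheduler's choice function is given below (our illustration, not Maple's code); the only state it needs is a pseudo-random number generator, which is why no schedule history accumulates across runs.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Naive random scheduling: at each scheduling point, pick one thread uniformly
// at random from the currently enabled threads. No history is recorded, so the
// same terminal schedule can be explored multiple times across runs.
int pickNextThread(const std::vector<int>& enabledThreads, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> dist(0, enabledThreads.size() - 1);
    return enabledThreads[dist(rng)];
}
```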
Maple algorithm The default concurrency testing approach used by Maple (which we refer to as the Maple algorithm) is not systematic; it performs several profiling runs, recording patterns of inter-thread dependencies through shared-memory accesses [36]. From the recorded patterns, it predicts possible alternative interleavings that may be feasible, which are referred to as interleaving idioms. It then performs active runs, influencing thread scheduling to attempt to force untested interleaving idioms, until none remain or they are all deemed infeasible (using heuristics). Although the focus of our study is on SCT techniques, we also compare with the Maple algorithm since it is readily available in the tool.
4. Benchmark Collection
We have collected a wide range of pthread benchmarks from previous work and other sources. Table 1 summarises the benchmark suites (with duplicates removed), indicating where it was necessary to skip benchmarks due to the difficulty of applying SCT, or otherwise. “Non-buggy” means there were no existing bugs documented and we did not find any during our examination of the benchmark. We now provide details of the benchmark suites (§4.1) and barriers to the application of SCT identified through our benchmark gathering exercise (§4.2).
4.1 Details of benchmark suites
**Concurrency Bugs (CB) Benchmarks**
Includes buggy versions of programs such as `aget` (a file downloader) and `pbzip2` (a file compression tool). We modified `aget`, modelling certain network functions to return data from a file and to call its interrupt handler asynchronously. Many benchmarks were skipped due to the use of networking, multiple processes and signals (Apache, Memcached, MySQL).
**CHESS**
A set of test cases for a work stealing queue, originally implemented for the Cilk multithreaded programming system under Windows. The `WorkStealQueue` benchmark has been used frequently to evaluate concurrency testing tools. After manually translating the benchmarks to use `pthread` and C++11 atomic, we found a bug in two of the tests that caused heap corruption, which always occurred when we ran the tests natively (without Maple). We fixed this bug and SCT revealed another bug that is much rarer, which we use in the study.
**Concurrency Software (CS) Benchmarks**
Examples used to evaluate the ESBMC tool, including small multithreaded algorithm test cases (e.g. bank account transfer, circular buffer, dining philosophers, queue, stack), a file system benchmark and a test case for a Bluetooth driver. These tests included unconstrained inputs. None of the bugs are input dependent, so we selected reasonable concrete values. We had to remove or define various ESBMC-specific functions to get the benchmarks to compile.
**Inspect Benchmarks**
Used to evaluate the INSPECT concurrency testing tool. We skipped `swarm_isort64`, which did not terminate after five minutes when performing data race detection (see §5). There were no documented bugs, and testing all benchmarks revealed a bug in only one benchmark, `qsort_mt`, which we include in the study.
**Miscellaneous**
We encountered two individual test cases, which we include in the study. The `safestack` test case, which was posted to the CHESS forum by Dmitry Vyukov, is a lock-free stack designed to work on weak-memory models. The bug exposed by the test case also manifests under sequential consistency, so it should be detectable by existing SCT tools. Vyukov states that the bug requires at least three threads and at least five preemptions. Previous work reported a bug that requires three preemptions [17], which was the first bug found by CHESS that required that many preemptions.
The `ctrace` test case, obtained from the authors of [18], exposes a bug in the `ctrace` multithreaded debugging library.
**PARSEC 2.0 Benchmarks**
A collection of multithreaded programs from many different areas. We used `ferret` (content similarity search) and `streamcluster` (online clustering of an input stream), both of which contain known bugs. We created three versions of `streamcluster`, each containing a distinct bug. One of these is from an older version of the benchmark and another was a previously unknown bug which we discovered during our study (see §4.2). We configured the `streamcluster` benchmarks to use non-spinning synchronisation and added a check for incorrect output. All benchmarks use the “test” input values (the smallest) with two threads, except for `streamcluster2`, where the bug requires three threads.
**RADBenchmark**
Consists of 15 tests that expose bugs in several applications. The 6 benchmarks we use test parts of Mozilla (SpiderMonkey and the Mozilla Netscape Portable Runtime Thread Package), which are suitable for SCT. The others were skipped due to use of networking and multiple processes. Several tested the Chromium browser; the use of a GUI leads to nondeterminism that cannot be controlled or modelled by any SCT tools we know of. Some of the benchmarks were stress tests; we reduced the number of threads and other parameters as much as possible.
**SPLASH-2**
Three of these benchmarks have been used in previous work [4, 29]. SPLASH-2 requires a set of macros to be provided; the bugs are caused by a provided set of macros that omits the “wait for threads to terminate” macro. Thus, all the bugs are similar. For this reason, we use only the three benchmarks from previous work, even though the macros are likely to cause issues in the other benchmarks. We added assertions to check that all threads have terminated as expected. We reduced input parameter values, such as the number of particles in `barnes` and the size of the matrix in `lu`, so that the tests complete quickly under our tool without exhausting memory. We discuss this further in §6.
---
<table>
<thead>
<tr>
<th>Benchmark set</th>
<th>Benchmark types</th>
<th># used</th>
<th># skipped</th>
</tr>
</thead>
<tbody>
<tr>
<td>CB</td>
<td>Test cases for real applications</td>
<td>3</td>
<td>17 networked applications.</td>
</tr>
<tr>
<td>CHESS</td>
<td>Test cases for several versions of a work stealing queue</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>CS</td>
<td>Small test cases and some small programs</td>
<td>29</td>
<td>24 were non-buggy.</td>
</tr>
<tr>
<td>Inspect</td>
<td>Small test cases and some small programs</td>
<td>1</td>
<td>28 were non-buggy.</td>
</tr>
<tr>
<td>Miscellaneous</td>
<td>Test case for lock-free stack and a debugging library test case</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>PARSEC</td>
<td>Parallel workloads</td>
<td>4</td>
<td>29 were non-buggy.</td>
</tr>
<tr>
<td>RADBenchmark</td>
<td>Test cases for real applications</td>
<td>6</td>
<td>5 Chromium browser; 4 networking.</td>
</tr>
<tr>
<td>SPLASH-2</td>
<td>Parallel workloads</td>
<td>3</td>
<td>9 (see text)</td>
</tr>
</tbody>
</table>
Table 1: An overview of the benchmark suites used in the study.
4.2 Effort Required For SCT
We encountered a range of issues when trying to apply systematic concurrency testing to the benchmarks. These are general limitations of SCT, not of our method specifically, and all SCT tools that we know of would have similar issues.
Environment modelling System calls that interact with the environment, and hence can give nondeterministic results, must be modelled or fixed to return deterministic values. Similarly, functions that can cause threads to become enabled or disabled must be handled specially, as they affect scheduling decisions. This includes the forking of additional processes, which requires both modelling and engineering effort to make the testing tool work across different processes. For the above reasons, a large number of benchmarks in the CB and RADBenchmark suites had to be skipped because they involve testing servers, using several processes and network communication. Modelling network communication and testing multiple processes are both nontrivial tasks. We believe the difficulty of controlling various sources of nondeterminism is a key issue in applying SCT to existing code bases. In contrast, non-systematic techniques (discussed in §7) are able to handle such nondeterminism.
Isolated concurrency testing An alternative approach to modelling nondeterminism is to create isolated tests, similar to unit testing, but with multiple threads. Unfortunately, we found that many programs are not designed in a way that makes this easy. An example is the Apache httpd webserver; the server module that we inspected had many dependencies on other parts of the server and directly called system functions, making it difficult to create an isolated test case. Developers test the server as a whole; network packets are sent to the server by a script running in a separate process.
Many applications in the CB benchmarks use global variables and function-static variables that are scattered throughout several source files. These would need to be handled carefully with some SCT tools like CHESS, that require a repeatable function to test, in which the state must be reset when the function returns. This is not a problem for Maple, which restarts the test program for every schedule explored.
Memory safety We found that certain concurrency bugs manifest as out-of-bounds memory accesses, which do not always cause a crash. We implemented an out-of-bounds memory access detector on top of Maple, which allowed us to detect a previously unknown bug in the PARSEC suite, which is tested in the streamcluster3 benchmark. Detecting certain types of out-of-bound memory accesses, such as accesses to the stack or data segments, is difficult, as information about the bounds of these regions is lost during compilation. Thus, our implementation had many false positives. However, a more serious issue was that the extra instrumentation code caused a slowdown of up to 8x; Maple’s existing information on allocated memory was not designed to be speed-efficient. We disabled the out-of-bound access detector in our experiments, but we note that a production quality SCT tool would require an efficient method for detecting out-of-bound accesses to automatically identify this important class of bug. We manually added assertions to detect out-of-bound accesses in the streamcluster3 benchmark and in fsbench_bad in the CS benchmarks. Out-of-bound accesses to synchronization objects, such as mutexes, are still detected. This proved to be useful in pbzip2 from the CS benchmarks.
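The following hypothetical pthread fragment (not taken from SCTBench) illustrates why such bugs are easy to miss: on schedules where the worker reads the stale size, it writes past the buffer, but the stray writes typically land in adjacent globals and the program still exits cleanly unless bounds are checked.

```cpp
#include <cstdio>
#include <pthread.h>

// Hypothetical order-violation bug whose symptom is a silent out-of-bounds write.
int size = 16;        // meant to be lowered to 8 before the worker uses it
int buffer[8];
int padding[16];      // stray writes usually corrupt this instead of crashing

void* worker(void*) {
    for (int i = 0; i < size; ++i)   // racy read of size
        buffer[i] = i;               // out of bounds whenever size is still 16
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    size = 8;                        // too late on some schedules
    pthread_join(t, nullptr);
    std::printf("done\n");           // often prints despite the corruption
    return 0;
}
```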
Data races We found that 33 of the 52 benchmarks contained data races. There are many compelling arguments against the tolerance of data races [1], and technically, according to the C++11 standard, the existence of a data race in a C++ program means that the behaviour of the entire program is undefined. Nevertheless, in practice, programs that exhibit races are often compiled in predictable ways by standard compilers so that many data races are not regarded as bugs by software developers. A particular pattern we noticed was that data races often occur on flags used in ad-hoc busy-wait synchronisation, where one thread keeps reading a variable until the value changes. In principle the “benign” races could be rectified through the use of C++11 relaxed atomics, the “busy wait” use of data races could be formalised using C++11 acquire/release atomics, and synchronisation operations could be added to eliminate the buggy cases. However, telling the difference between benign and buggy data races is non-trivial in practice [18, 28]. We explain how we treat data races in our study in §5.
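The fragment below sketches the ad-hoc busy-wait pattern mentioned above and one way the race on the flag could be formalised with C++11 acquire/release atomics; it is an illustration we constructed, not code from any SCTBench benchmark.

```cpp
#include <atomic>
#include <pthread.h>

// Ad-hoc busy-wait synchronisation: in the racy original, `ready` would be a
// plain int written by the producer and spun on by the consumer (a data race,
// formally undefined behaviour). An acquire/release atomic flag makes the
// same hand-off well-defined.
int data = 0;
std::atomic<int> ready{0};

void* producer(void*) {
    data = 42;                                        // payload
    ready.store(1, std::memory_order_release);        // was: ready = 1;
    return nullptr;
}

void* consumer(void*) {
    while (ready.load(std::memory_order_acquire) == 0) { /* busy-wait */ }
    int observed = data;   // release/acquire ordering guarantees observed == 42
    (void)observed;
    return nullptr;
}

int main() {
    pthread_t p, c;
    pthread_create(&p, nullptr, producer, nullptr);
    pthread_create(&c, nullptr, consumer, nullptr);
    pthread_join(p, nullptr);
    pthread_join(c, nullptr);
    return 0;
}
```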
Output checking The bugs in the benchmarks CB.aget and parsec.streamcluster2 lead to incorrect output. Thus, we added extra code to read the output file and trigger an assertion failure when it is incorrect; the output checking code for CB.aget was provided as a separate program, which we added to the benchmark. Several of the PARSEC and SPLASH benchmarks do not verify their output, greatly limiting their usefulness as test cases.
5. Experimental Method
Our experimental evaluation aims to compare a straightforward depth-first search (DFS), iterative preemption bounding (IPB), iterative delay bounding (IDB) and the use of a naive random scheduler (Rand). We also test the default Maple algorithm (MapleAlg). Bugs are deadlocks, crashes or assertion failures (including those that identify incorrect output). Each benchmark contains a concurrency bug and goes through the following phases:
Data Race Detection Phase When checking safety properties, it is sound to only consider scheduling points before each synchronisation operation, such as locking a mutex, as long as execution aborts with an error as soon as a data race is detected [25]. This greatly reduces the number of schedules that need to be considered. However, treating data races as errors is not practical for this study due to the large number of data races in the benchmarks (see §4.2), which would make bug-finding trivial and arguably not meaningful.
As in previous work [36], we circumvent this issue by performing dynamic data race detection to identify a reasonable subset of load and store instructions that participate in data races. We treat these instructions as visible operations during SCT. For each benchmark, we execute Maple in its data race detection mode ten times, without controlling the schedule. Each racy instruction (stored as an offset in the binary) is treated as a visible operation in the IPB, IDB, DFS and Rand phases. We also tried detecting races during SCT, but this caused an additional slow-down of up to 8x, as Maple’s race detector is not optimised for this scenario.
Thus SCT explores nondeterminism arising due to sequentially consistent outcomes of a subset of the possible data races for a concurrent program. Bugs found by this method are real (there are no false-positives), but bugs that depend on relaxed memory effects or data races not identified dynamically will be missed. We do not believe these missed bugs threaten the validity of our comparison of IPB, IDB, DFS and Rand, since the same information about data races is used by all of these techniques; the set of racy instructions could be considered as part of the benchmark.
An alternative to under-approximation would be to use static analysis to over-approximate the set of racy instructions. We did not try this, but speculate that imprecision of static analysis would lead to many instructions being promoted to visible operations, causing schedule explosion.
**Iterative Preemption Bounding (IPB) Phase** We next perform SCT on the benchmark using iterative preemption bounding, with a schedule limit. By repeatedly executing the program, restarting after each execution, we first explore all terminal schedules that have zero preemptions, followed by all schedules that have one preemption, etc. until either the schedule limit is reached, all schedules have been explored or a bug is found. If a bug is found, the search does not terminate immediately; the remaining schedules within the current preemption bound are explored (for our set of benchmarks, it was always possible to complete this exploration without exceeding the schedule limit). As explained below, this allows us to check whether non-buggy schedules could exceed the schedule limit when an underlying search strategy other than depth-first search is used.
We use a limit of 10,000 terminal schedules to enable a full experimental run over our large set of benchmarks to complete on a cluster within 24 hours. We chose to use a schedule limit instead of a time limit because there are many factors and potential optimisation opportunities that can affect the time needed for a benchmark to complete, and the cluster we have access to shares its machines with other jobs, making accurate time measurement difficult. On the other hand, the number of terminal schedules explored cannot be improved upon, without changing key aspects of the search algorithms themselves. By measuring the number of schedules, our results can potentially be compared with other algorithms and future work that use different implementations with different overheads.
**Iterative Delay Bounding (IDB) Phase** This phase is identical to the previous, except delay bounding is used instead of preemption bounding.
**Depth-First Search (DFS) Phase** We perform a depth-first search, with no schedule bounding and a limit of 10,000 terminal schedules. This provides a point of comparison for schedule bounding.
**Random scheduler (Rand) Phase** We run each benchmark 10,000 times using Maple’s naive random scheduler mode. This allows us to compare the systematic techniques against a straightforward non-systematic technique.
**Maple Algorithm (MapleAlg) Phase** We test each benchmark using the Maple algorithm. This algorithm terminates based on its own heuristics; we enforced a time limit of 24 hours per benchmark.
**Notes on depth-first search and partial order reduction** As discussed in §3, the SCT methods we evaluate are built on top of Maple’s default depth-first search strategy. Although depth-first search is just one possible search strategy, and different strategies could give different results, we argue that this is not important in our study. First, if the depth-first search biases the search for certain benchmarks, then both schedule bounding algorithms are likely to benefit or suffer equally from this. Second, iterative schedule bounding explores all schedules with c preemptions/delays before any schedule with c + 1 preemptions/delays. This means that when the first schedule with c + 1 preemptions/delays is considered, exactly the same set of schedules, regardless of search strategy, will have been explored so far. If a bug is revealed at bound c then, by enumerating all schedules with bound c (as described above), we can determine the worst case number of schedules that might have to be explored to find a bug, accounting for an adversarial search strategy.
Partial-order reduction (POR) [11] is a commonly used technique in concurrency testing [9, 11, 24, 26]. We do not attempt to study the various POR techniques, to avoid an explosion of combinations of methods and because the relationship between POR and schedule bounding is complex and the topic of recent and ongoing work [5, 14, 24].
6. Experimental Results
**Experimental platform** We conducted our experiments on a Linux cluster running Red Hat Enterprise Linux Server release 6.4 on an x86_64 architecture, with gcc 4.7.2. Our modified version of Maple is based on the latest commit of http://github.com/jieyu/maple as of Sept 24, 2012. The benchmarks, scripts and the modified version of Maple used in our experiments can be obtained from http://sites.google.com/site/sctbenchmarks
Overview of results
The Venn diagrams in Figure 2 give a concise summary of the bug-finding ability of the techniques. When we say that a technique found $x$ bugs, we mean that the technique found each bug in $x$ benchmarks.
Figure 2a summarises the bugs found by the systematic techniques. IPB was superior to DFS, finding all 33 bugs found by DFS, plus an additional 5. IDB beat both DFS and IPB, finding all 38 bugs found by these techniques, plus an additional 7. The bugs in 7 benchmarks were missed by all systematic techniques, which we discuss below.
Figure 2b shows the bugs found by schedule bounding (IDB), a naive random scheduler (Rand) and the default Maple algorithm (MapleAlg). The bugs in 44 benchmarks were found by both IDB and Rand. IDB and Rand each found 1 additional, distinct, bug. Thus, these techniques performed similarly in terms of number of bugs found. We discuss this surprising result in detail below. MapleAlg found 31 bugs that were found by the other techniques, plus 1 additional bug. However, it missed 15 bugs that were found by the other techniques. There were 5 bugs missed by all techniques, but 3 of these are identical to benchmarks in which we did find bugs, except that they run a larger number of threads; the remaining 2 benchmarks, radbench.bug1 and misc.safestack, are discussed below.
Detailed results
The full set of experimental data gathered for our benchmarks is shown in Table 3. We use schedules to refer to terminal schedules, for brevity. As explained in §5, we focus on the number of schedules explored rather than time taken for analysis. The execution of a single benchmark during SCT varied between 1-7 seconds depending on the benchmark; there was negligible variance between runtimes for multiple executions of the same benchmark. The longest time taken to perform ten data race detection runs for a single benchmark was five minutes, but race detection was significantly faster in most cases. Race detection could be made more efficient using an optimised, state-of-the-art method. Because race analysis results are shared between all systematic techniques and Rand, the time for race analysis is not relevant when comparing these methods.
For each benchmark, # threads and # max enabled threads show the total number of threads launched and the maximum number of threads simultaneously enabled at any scheduling point, respectively, over all runs of the benchmark. The # max scheduling points column shows the maximum number of visible operations for which more than one thread was enabled, over all systematic testing. The smallest preemption or delay bound required to find the bug for a benchmark, or the bound reached (but not fully explored) if the schedule limit was hit, is indicated by bound; # schedules to first bug shows the number of schedules that were explored up to and including the detection of a bug for the first time; # schedules shows the total number of schedules that were explored; # new schedules shows how many of these schedules have exactly bound preemptions (for IPB) or delays (for IDB); # buggy schedules shows how many of the total schedules explored exhibited the bug. As explained in §5, when a bug is found, we continue to explore all buggy and non-buggy schedules within the preemption or delay bound; the schedule limit was never exceeded while doing this. An L entry denotes 10,000 (the schedule limit discussed in §5). When no bugs were found, the bug-related columns contain X. We indicate by % buggy, the percentage of schedules that were buggy out of the total number of schedules explored during DFS. We prefix the percentage with a '*' when the schedule limit was reached, in which case the percentage does not apply to all schedules.
For the Rand results, the # schedules column is omitted, as it is always 10,000. Note that # schedules to first bug and # buggy schedules may contain duplicate schedules.
For the Maple algorithm, we report whether the bug was found (found?), the total number of (not necessarily distinct) schedules explored, as chosen by the algorithm’s heuristics, and the total time in seconds for the algorithm to complete. Benchmarks 32, 33 and 34 caused Maple to livelock, so the 24 hour time limit was exceeded. We indicate this with ‘-‘.
Benchmark Properties
The # max enabled threads and # max scheduling points columns from Table 3 can be used to estimate the total number of schedules and, perhaps, the complexity of the benchmark. With at most $n$ enabled threads and at most $k$ scheduling points, there are at most $n^k$ terminal schedules. On the other hand, if most of the schedules are buggy (see the % buggy column in Table 3), then the number of schedules is not necessarily a good indication of bug complexity. For example, CS.din_phil13_sat has a relatively high number of schedules, but since 87% of them are buggy, this bug is trivial to find. Of course, the majority of benchmarks cannot be explored exhaustively, and estimating the percentage of buggy schedules from the partial
DFS results is problematic because DFS is biased towards exploring deep context switches.
Table 2 provides some further insight into the complexity of the benchmarks, using properties derived from Table 3. Bugs found with a delay bound of zero will always be found on the initial schedule for IPB, IDB and DFS, as they all initially execute the same schedule. Any technique based on this same depth-first schedule will also find the bug immediately. It could be argued that this schedule is effective at finding bugs, or that the bugs in question are trivial, since the schedule includes minimal interleaving (there are no preemptions). Benchmarks with fewer than 10,000 terminal schedules (for DFS) will always be exhaustively explored by all systematic techniques, so the bug will always be found. Techniques can still be compared on how quickly they find the bugs. Bugs that were exposed more than 50% of the time when using the random scheduler could arguably be classified as “easy-to-find”. Bugs that were exposed 100% of the time when using the random scheduler are almost certainly trivial to detect; indeed, Table 3 shows that all of these benchmarks were buggy for all schedules over all techniques, suggesting that these bugs are not even schedule-dependent.
In our view the relatively trivial nature of some of the bugs exhibited by our benchmarks has not been made clear in prior works that study these examples. We regard these easy-to-find bugs as having value only in providing a minimum baseline for any respectable concurrency testing technique. Failure to detect these bugs would constitute a major flaw in a technique; detecting them does not constitute a major achievement.
IPB vs. IDB Figure 3 compares IPB and IDB by plotting data from the following columns in Table 3: # schedules to first bug (as a cross) and # schedules (as a square). Each benchmark, for which at least one technique found a bug, is depicted as a line connecting a cross and a square. Where the bug was not found by one of the techniques, this is indicated with a cross at 10,000 (the schedule limit discussed in §5). Each square is labelled with its benchmark id from Table 3. The cross indicates which technique was faster at finding the bug (with depth-first search as the underlying search strategy); crosses below/above the diagonal indicate that IPB/IDB was faster. The square indicates how many additional non-buggy schedules were considered before a bug was found. Since the search terminated before reaching the schedule limit, we know that the bug would be found even if we were using an underlying search strategy other than depth-first search. Notice that a number of benchmarks appear at (x, 10,000), with x < 10,000: this is where IPB failed to find a bug and IDB succeeded.
The bug-finding ability of the techniques in Figure 3 is tied to the underlying depth-first search. It is possible that this might cause one of the techniques to “get lucky” and find a bug quickly, while another search order could lead to many additional non-buggy schedules being considered before a bug is found. To avoid this implementation-dependent bias, in Figure 4 we consider the worst-case bug-finding ability. Each cross plots, for IDB and IPB, the total number of schedules within the bound exposing the bug that are not buggy. This corresponds to the difference between the # schedules and # buggy schedules columns presented in Table 3, and represents the worst-case number of schedules that might
have to be explored to find a bug, given an unlucky choice of search ordering. The squares are the same as in Figure 3.
Overall, IDB finds all bugs found by IPB, plus seven that were missed. In Figure 3, most crosses fall on or above the diagonal, showing that IDB was as fast or faster than IPB in terms of number of schedules to the first bug. The same is mostly true for the squares, showing that IDB generally leads to a smaller total number of schedules than IPB. In the worst case (Figure 4), some crosses fall under the line, but most are still very close, or represent a small number of schedules (less than 100) where the difference between the techniques is negligible. An outlier is benchmark 42 where, in the worst case, IPB requires 3 schedules to find the bug, while IDB requires 1366 schedules. Table 3 shows that the bug does not require any preemptions, but requires at least one delay; this difference greatly increases the number of schedules for IDB. We believe this can be explained as follows. First, there must be a small number of blocking operations, leading to a very small number of schedules with a preemption bound of zero. Second, the bug in question requires that when two particular threads are started and reach a particular barrier, the “master” thread (the thread that was created before the other) does not leave the barrier first. With zero preemptions, the non-master thread can be chosen at the first blocking operation (as any enabled thread can be chosen). With zero delays, only the master thread can be chosen, as one delay is required to skip over the master thread. Thus, this is an example where IDB performs worse than IPB. Nevertheless, IDB is still able to find the bug within the schedule limit.
The Cs.reorder_X_bad benchmark (where X is the number of threads launched – see Table 3) is the adversarial example for delay bounding given in §2; the smallest delay bound required for the bug to manifest is incremented as the thread count is incremented. However, IDB still performs better than IPB, as the number of schedules in IPB increases exponentially with the thread count. Furthermore, this is a synthetic benchmark for which the bug is found quickly by both techniques with a low thread count.
**Effectiveness of random scheduling** Rand performed similarly to IDB in terms of bugs found (Figure 2b). Over all the benchmarks, it can be seen in Table 3 that Rand was nearly always similar to or much faster than IDB in terms of # schedules to first bug. This said, for any particular benchmark the # schedules to first bug value for Rand should be treated with caution due to the role of randomness in selecting the bug-inducing schedule.
We had not anticipated that a random scheduler would be so effective at finding bugs. A possible intuition for this is as follows. If a bug can be exposed with just one delay, say, then there is a single key preemption that exposes the bug. Any schedule where (a) the key preemption occurs, and (b) additional preemptions are irrelevant to the bug, will also expose the bug. There may be many such schedules and thus a good chance of exposing the bug through random preemptions. More generally, if a bug can be exposed with a small delay or preemption count, there may be a high probability that a randomly selected schedule will expose the bug. On the other hand, radbench.bug2 (discussed below) requires three preemptions but was still found by Rand.
The CHESS benchmarks, used for evaluation in the introduction of preemption bounding [23], test several versions of a work stealing queue. Depth-first search fails for chess.WSQ, while IPB succeeds (as in prior work). However, Rand is also able to find the bug; prior work did not compare against a random scheduler in terms of bug finding ability. The remaining CHESS benchmarks are more complex (lock-free) versions of chess.WSQ, which were also used in prior work. IPB and DFS fail on these, while IDB and Rand are, again, both successful in finding the bugs. Rand found the bugs in fewer terminal schedules than IDB and IPB for all the CHESS benchmarks.
The bug in the parsec.ferret benchmark is missed by Rand, but found by IDB. The bug requires a thread to be preempted early in the execution and not rescheduled until other threads have completed their tasks. Since Rand is very likely to reschedule the thread, it is not effective at finding this bug. For IDB, only one delay is required, but, as seen in Table 3, only one buggy schedule was found; thus, the delay must occur at a specific visible operation.
The bug in radbench.bug4 is missed by IDB, but found by Rand. The bug is caused by a shared mutex being lazily initialised by two threads at once, without synchronisation. This can lead to a double-unlock or similar error. From Table 3, it can be seen that this bug requires more than one delay. The benchmark has a relatively large number of scheduling points, such that the number of schedules with at most two delays exceeds the schedule limit.
There are several benchmarks for which the percentage of buggy schedules encountered during DFS is close to the percentage observed for Rand. For example, 4% vs. 5% in Cs.stringbuffer-jdk1.4, and 14% vs. 10% in Cs.account_bad. However, there are counter-examples: Cs.carter01_bad: 2% vs. 48%; Cs.deadlock01_bad: 6% vs. 40%. Since the majority of benchmarks cannot be explored exhaustively and DFS is biased towards exploring deep context switches, it is impossible to estimate the percentage of buggy schedules for most of the benchmarks.
**Comparison with the default Maple algorithm** As shown in Figure 2b, MapleAlg missed 15 bugs that were found by the other techniques, and found 32 bugs, including 1 that was missed by the others. MapleAlg is impressive considering the low number of schedules it explores. For example, all other techniques missed the bug in radbench.bug5 after 10,000 terminal schedules. In contrast, MapleAlg found it after just 14 schedules. MapleAlg attempts to force certain patterns of inter-thread accesses (or interleaving idioms) that might lead to concurrency bugs. This allows it to expose
many bugs quickly. It is possible that the bugs it misses require interleaving idioms that are not included in MapleAlg.
**Discussion** No technique found the bugs in benchmarks 19, 20 and 28. However, these bugs can be exposed using a lower number of threads (as shown by the other versions of these benchmarks), so these results are arguably less useful.
The schedule bounding results reveal that the bug in radbench.bug2 requires at least three delays or preemptions. The benchmark was modified to use just two threads in total, and IPB and IDB explored the same schedules. This matches the largest number of preemptions required to expose a bug found in previous work [7]. However, the misc.safestack benchmark reportedly requires five preemptions and three threads in order for the bug to manifest. We reproduced the bug using Relacy [5], a weak memory data race detector that performs either systematic or random search for C++ programs that use C++ atomics.
The bug in radbench.bug1 requires a thread to be preempted after destroying a hash table and a second thread to access the hash table, causing a crash. From the description, the bug may only require one delay, but it is likely that the large number of scheduling points is what pushes this bug out of reach of all the techniques tested.
As explained in §4.1, we reduced the input values in the SPLASH-2 benchmarks; this resulted in fewer scheduling points and allowed our data race detector to complete, without exhausting memory. Due to these changes, the results are not directly comparable with other experiments that use the SPLASH-2 benchmarks (unless parameters are similarly reduced). However, the bugs are found by all systematic techniques after just two schedules; this would be the same, regardless of parameter values. Therefore, the # schedules to first bug data are accurate.
7. Related Work
To our knowledge, ours is the first independent empirical study to compare schedule bounding techniques. Background and related work on systematic schedule bounding was discussed in §2. We now discuss other relevant approaches to reducing thread schedules in order to find bugs.
Partial-order reduction (POR) [11] reduces the number of schedules that need to be explored without missing errors. It relies on the fact that executions are a partial-order of operations, and explores only one schedule of each partial-order. Dynamic POR [9] computes persistent sets [11] during systematic search; as dependencies between operations are detected, additional schedules are considered. Happens-before graph caching [24, 26] is similar to state-hashing [13], except the partial-order of synchronisation operations is used as an approximation of the state, resulting in a reduction similar to sleep-sets [11]. The combination of dynamic POR and schedule bounding is the topic of recent research [5, 14, 24].
8. Conclusions and Future Work
We have presented the first independent empirical study on schedule bounding techniques for systematic concurrency testing. Our most surprising finding is that a naive random scheduler performs at least as well as the more sophisticated iterative schedule bounding approach, when trying to expose bugs within 10,000 terminal schedules. This may indicate that the benchmarks typically used to evaluate concurrency testing tools are not adequate, as they contain bugs that can be found fairly easily through random search. On the other hand, we have proposed an intuition for why bugs that can be exposed with few preemptions may be exposed by a high percentage of schedules, and thus are amenable to exposure through randomisation.
Our findings confirm results in previous work: that schedule bounding is superior to depth-first search; many, but not all, bugs can be found using a small schedule bound; and delay bounding beats preemption bounding.
In future work we plan to expand SCTBench to conduct larger studies, and to study additional methods, such as various partial-order reduction techniques that reduce the number of schedules explored during systematic testing, as well as non-systematic approaches to concurrency testing.
Acknowledgements We are grateful to the PPoPP reviewers for their useful comments, and especially to reviewer #1 who suggested that we try random scheduling, which led to interesting results. We are also grateful for feedback on this work from Ethel Bardsley, Nathan Chong, Pantazis Deligiannis, Tony Field, Jeroen Ketema and Shaz Qadeer.
Table legend (caption fragment): results include runs using the Maple algorithm (MapleAlg). Entries marked ‘L’ indicate 10,000, our schedule limit. An ‘X’ indicates that no bug was found. In the MapleAlg results, ‘-’ indicates that the Maple tool timed out after 24 hours. A percentage prefixed with ‘*’ applies only to the schedules that were explored via DFS before the schedule limit was reached.
References
Scaling a Dataflow Testing Methodology to the Multiparadigm World of Commercial Spreadsheets
Marc Fisher II, Gregg Rothermel
University of Nebraska-Lincoln
{mfisher, grother}@cse.unl.edu
Tyler Creelan, Margaret Burnett
Oregon State University
{creelan, burnett}@eecs.oregonstate.edu
Abstract
Spreadsheets are widely used but often contain faults. Thus, in prior work we presented a dataflow testing methodology for use with spreadsheets, which studies have shown can be used cost-effectively by end-user programmers. To date, however, the methodology has been investigated across a limited set of spreadsheet language features. Commercial spreadsheet environments are multiparadigm languages, utilizing features not accommodated by our prior approaches. In addition, most spreadsheets contain large numbers of replicated formulas that severely limit the efficiency of dataflow testing approaches. We show how to handle these two issues with a new dataflow adequacy criterion and automated detection of areas of replicated formulas, and report results of a controlled experiment investigating the feasibility of our approach.
1. Introduction
Spreadsheets are used by a wide range of end users to perform a variety of important tasks, such as managing retirement funds, performing tax calculations, and forecasting revenues. Evidence shows, however, that spreadsheets often contain faults, and that these faults can have severe consequences. For example, spreadsheet errors caused Shurgard Inc. to overpay employees by $700,000 [21] and cost Transalta Corporation 24 million dollars through overbidding [8].
Researchers have been responding to these problems by creating approaches that address dependability issues for spreadsheets, including unit inference and checking systems [1, 2], visualization approaches [6, 20], interval analysis techniques [3, 4], and approaches for automatic generation of spreadsheets from a model [9]. Commercial spreadsheet systems such as Microsoft Excel have also incorporated several tools for assisting with spreadsheet dependability, including dataflow arrows, anomaly detection heuristics, and data validation facilities.
In our own prior research, we have presented an integrated family of approaches to help end users improve the dependability of their spreadsheets, called the “What You See is What You Test” (WYSIWYT) methodology. At the core of this methodology is a testing approach that helps spreadsheet users identify problems in interactions between cell formulas – a prevalent source of spreadsheet errors [14]. We have augmented this methodology with techniques for automated test case generation [12], fault localization [19], and test reuse and replay mechanisms [10]. Our studies of the WYSIWYT methodology itself [15, 18] suggest that it can be effective, and can be applied by end users with no specific training in the underlying testing theories.
Results such as these are encouraging; however, to date, our work on spreadsheet dependability mechanisms, and our studies of them, have been performed in the context of the research spreadsheet environment Forms/3. Commercial spreadsheet environments are multiparadigm languages with features such as higher-order functions (functional paradigm), table query constructs (database query languages), user-defined functions (implemented in an imperative sublanguage), meta-program constructs, and pointers, and these features are not accommodated by prior approaches. In addition, most spreadsheets have large areas of replicated formulas which require some form of aggregation and abstraction to allow our methodologies to scale reasonably (i.e., operate sufficiently efficiently). The only previous approach to consider testing methodologies for spreadsheet regions [5] has required a form of region declaration, and thus does not provide unassisted discovery of the testing needs of the informal regions that exist in commercial spreadsheets.
In this paper, we address these two problems. To support multiparadigmatic features, we devised a generalization of our prior test adequacy criterion that considers functions in the formulas to determine their patterns of execution. For replicated formulas, we implemented a family of techniques for combining them into regions. Throughout this work, we focus on Excel, the de-facto standard commercial spreadsheet environment, but our methodology could be extended to the wide variety of Excel work-alike environments, e.g. OpenOffice/StarOffice or Gnumeric.
To assess the resulting new methodology we performed an experiment within a prototype Excel-based WYSIWYT system on a set of non-trivial Excel spreadsheets. This experiment evaluates the costs of our methodology along several dimensions, and also compares the different techniques we have devised for finding regions to a baseline (no-regions) approach. Our results suggest that our algorithms can support the use of WYSIWYT on commercial spreadsheets; they also reveal tradeoffs among the region inference algorithms.
2. Background: WYSIWYT
The WYSIWYT methodology [4, 12, 10, 17, 19] provides several techniques and mechanisms with which end-user programmers can increase the dependability of their spreadsheets. Underlying these approaches is a dataflow test adequacy criterion that helps end users incrementally check the correctness of their spreadsheet formulas as they create or modify a spreadsheet. End-user support for this approach is provided via visual devices that are integrated into the spreadsheet environment, and let users communicate testing decisions and track the adequacy of their testing efforts.
The basic computational unit of a spreadsheet is a cell’s formula. Thus, our adequacy criterion is developed at the granularity of cells. Since many of the errors in spreadsheets are reference errors, we focus on dependencies between cells. This allows us to catch a wide range of faults, including reference, operator, and logic faults.
The test adequacy criterion underlying WYSIWYT is based on a model of a spreadsheet called the Cell Relation Graph (CRG). Figure 1 shows an Excel spreadsheet, Grades, and Figure 2 shows a portion of the CRG corresponding to row 4 of that spreadsheet. In the CRG, nodes correspond to the cells in the spreadsheet. Within each CRG node there is a cell formula graph (CFG) that uses nodes to represent subexpressions in formulas, and edges to represent the flow of control between subexpressions. The CFG has two types of nodes, predicate nodes such as node 29 in R4C11, and computation nodes such as node 30 in R4C11.
The edges between CFGs in the CRG in Figure 2 represent du-associations, which link definitions of cell values to their uses. A definition is an assignment to a cell of a value; each computation node provides a definition of the cell in which it resides. A use of a cell C is a reference to C in another cell. For each use U of cell C, a du-association connects each definition of C to U. CRGs can be generated efficiently for a spreadsheet using the algorithms presented in Reference [17].
Based on the CRG model, we defined the output influencing definition-use adequacy criterion (du-adequacy) for spreadsheets. Under this criterion, a du-association is considered exercised if, given the current inputs, both the definition and the use node are executed, and the cell containing the use or some cell downstream in dataflow from it is explicitly marked by a user as containing a value that is valid given the current assignment of values to other cells. A test suite is considered adequate if all feasible (executable) du-associations in the CRG are exercised.
Spreadsheets often contain many duplicated formulas. In such cases it is impractical to require a tester to make separate decisions about each cell containing one of these duplicated formulas. Thus, in prior work [5], we extended WYSIWYT to handle regions of duplicate formulas. In that approach, a region is a set of cells explicitly identified by the user as sharing the same formula. (It is also possible that regions could be identified automatically.)
To extend the du-adequacy criterion to spreadsheets containing such regions, we grouped nodes and du-associations. Within a given region, two CFG nodes are corresponding if they are in the same location in their respective CFGs. In Figure 2, CFG nodes 11, 14, 23, and 26 are corresponding nodes. We defined an equivalence class relationship over du-associations such that two du-associations are in the same class if and only if their definition nodes are corresponding and their use nodes are corresponding. In Figure 2, du-associations (11, 30), (14, 30), (23, 30), and (26, 30) are in the same equivalence class. Our modified adequacy criterion stated that if any du-association in an equivalence class is tested, then all of the du-associations in that class are tested.
3. Supporting the Multiparadigmatic Nature of Cell Formulas
In Section 2, we presented the du-adequacy criterion that has been used in WYSIWYT research to date based on the CRG model of spreadsheets, but as outlined in Section 1, there are formula constructs in commercial spreadsheet languages that this du-adequacy criterion does not support. For example, consider cell A3 in Figure 3. With two IF expressions added together, it is unclear what the definitions for A3 are. We illustrate our new adequacy criterion by first describing how we handle this (still purely declarative) subtlety, and then demonstrate the criterion’s ability to scale to multiparadigmatic aspects of spreadsheets.
We decompose the problem of handling formulas into two steps. The first step involves identifying sources, a generalized form of definitions that represent part of a cell’s computation, and destinations, a generalized form of uses. The second step involves connecting sources to destinations to define interactions between cells that need to be tested. To show how this process works, we walk through it using Figure 3.
To determine the cell interactions for this example, we need to determine the sets of sources and destinations for each of the cells. Cells A1, A2 and B1 are simple cases that can be handled in the same fashion as in previous versions of WYSIWYT. Any formula that does not include conditional functions, functions that operate on or return references, or user-defined functions has only a single source. Any references in such a formula become destinations.
Cell A3 is more interesting. To facilitate discussion of its handling we use the AST in Figure 4. To determine sources for complex formulas such as this, we follow two steps. The first step is to identify the source components that represent different patterns of computations that can be performed by functions in the formula. The second is to combine these source components into the sources that represent the patterns of computation for the formula.
The formula for cell A3 contains two function calls that need to be considered; namely each of the IF subexpressions. All IFs have two possible patterns of evaluation, one that corresponds to the predicate evaluating to true, and one that corresponds to the predicate evaluating to false. We would like to capture these differing patterns of evaluation in the definition of our source components. One approach we considered was to convert all Excel functions into an equivalent UDF, and use the technique described later in Section 3.2 to determine source components and destinations. However, because this requires at least as much effort as considering the functions individually (since we do not have access to source code for the built-in functions, we would have to reverse-engineer UDF code for each of them), and because of imprecisions involved in the handling of UDFs, we chose to consider them individually. Consider the first IF (node 2 and its children in the AST); for this IF, we recognize two source components, (2, T) and (2, F). (The 2 indicates the AST node, and T or F indicates which “behavior” we are interested in.) Similarly, for node 3 and its children we create the source components (3, T) and (3, F).
The source components are combined to form sources for cell A3. We consider two methods for doing this. One method is to consider sets of feasible combinations of source components. For cell A3, these combinations are {(2, T), (3, T)}, {(2, T), (3, F)}, {(2, F), (3, T)} and {(2, F), (3, F)}. For the current input assignment, the source {(2, T), (3, F)} is exercised. This method captures all of the possible computation patterns for the formula and could be used when particularly rigorous testing is needed, but generates a number of sources exponential in the number of function calls in the formula.
The second method is to create a source for each source component in the formula. This creates fewer sources (in general), and on any given execution, allows multiple sources to be exercised. In our example, for the given inputs, sources (2, T) and (3, F) would be exercised. For the rest of the discussion, we assume we are using this simpler method.
Destinations for A3 are defined in the same way as uses were for du-adequacy. The destinations are (4, A1, T), (4, A1, F), (5, A1), (7, A2, T), (7, A2, F), and (8, A2).
Next we build a set of interactions that we wish to test. As we did with du-adequacy, we consider all source-destination pairs. For the example we have been considering, these are {(A1, (4, A1, T)), (A1, (4, A1, F)), (A1, (6, A1)), (A2, (7, A2, T)), (A2, (7, A2, F)), (A2, (9, A2)), ((2, T), B1), ((2, F), B1), ((3, T), B1), ((3, F), B1)}.
Since the process of generating source components, sources, and destinations is syntax-driven, it can be automated using standard parsed representations (such as ASTs) of cell formulas. In addition, determining which source components and destinations are exercised requires only execution traces of the formulas, which are easy to gather in a spreadsheet engine [17].
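As a rough illustration of how this syntax-driven process could be automated, the following sketch (with an invented AST encoding and node numbering, not the prototype's actual data structures) walks a formula shaped like that of cell A3 and collects (node, T)/(node, F) source components for IF calls, predicate-qualified destinations for references inside IF predicates, and plain destinations for other references:

```python
# Illustrative sketch only: a tiny AST walk that collects source components
# for IF calls and destinations for cell references, in the spirit of the
# criterion described above. Node ids and the AST encoding are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    id: int
    kind: str                      # "IF", "REF", "OP", or "CONST"
    ref: str = ""                  # cell name for REF nodes
    children: List["Node"] = field(default_factory=list)

def collect(node, sources, destinations, in_predicate=False):
    if node.kind == "IF":
        # two patterns of evaluation: predicate true / predicate false
        sources.append((node.id, "T"))
        sources.append((node.id, "F"))
        pred, then_branch, else_branch = node.children
        collect(pred, sources, destinations, in_predicate=True)
        collect(then_branch, sources, destinations)
        collect(else_branch, sources, destinations)
    elif node.kind == "REF":
        if in_predicate:
            # a reference in an IF predicate yields one destination per branch outcome
            destinations.append((node.id, node.ref, "T"))
            destinations.append((node.id, node.ref, "F"))
        else:
            destinations.append((node.id, node.ref))
    else:
        for child in node.children:
            collect(child, sources, destinations, in_predicate)

# A formula of the shape IF(A1 > 0, A1, 0) + IF(A2 > 0, A2, 0),
# with node ids invented for this example.
a3 = Node(1, "OP", children=[
    Node(2, "IF", children=[
        Node(3, "OP", children=[Node(4, "REF", ref="A1"), Node(5, "CONST")]),
        Node(6, "REF", ref="A1"),
        Node(7, "CONST"),
    ]),
    Node(8, "IF", children=[
        Node(9, "OP", children=[Node(10, "REF", ref="A2"), Node(11, "CONST")]),
        Node(12, "REF", ref="A2"),
        Node(13, "CONST"),
    ]),
])
sources, destinations = [], []
collect(a3, sources, destinations)
print(sources)       # [(2, 'T'), (2, 'F'), (8, 'T'), (8, 'F')]
print(destinations)  # [(4, 'A1', 'T'), (4, 'A1', 'F'), (6, 'A1'), (10, 'A2', 'T'), ...]
```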
An additional question involves the interaction of our new du-adequacy criterion with the region mechanism for handling duplicated formulas described in Section 2. In that description we defined corresponding definitions and uses, and used those to define corresponding du-associations. For our new du-adequacy criterion we can use a similar process, defining corresponding source components and destinations based on the locations of the constructs in the cell formulas. Then two sources, S₁ and S₂, are corresponding if for each source component Cᵢ in S₁ there is at least one corresponding source component in S₂, and for each source component Cᵢ in S₂ there is at least one corresponding source component in S₁. Interactions are considered corresponding if their sources and destinations are corresponding.
3.1. Handling Built-in Excel Functions
The previous section described our new adequacy criterion, but we still have to demonstrate how it can be applied to the built-in Excel functions that give rise to the multiparadigmatic nature of the language. To facilitate consideration of this, we partition the built-in functions into a small number of classes according to language features to which they relate: higher-order functions, meta-programming constructs, pointers, querying, and matrix operations. These partitions include all of the functions listed in the Excel 2003 documentation that are purely functional (Excel also includes functions such as NOW that access the state of the environment) and are not strictly computational (functions such as SUM and AVERAGE that perform simple arithmetic procedures on their parameters).
3.1.1. Handling higher-order functions. Although higher-order functions are often considered to be a programming language feature commonly associated with functional programming languages, there is support for a form of higher-order functions in Excel formulas. More precisely, Excel has a small number of functions that allow the dynamic construction of predicate expressions used for simple iterative computations, including SUMIF, COUNT, COUNTA, COUNTBLANK, and COUNTIF. To show how our approach handles these, we consider the formula = SUMIF(A1 : A2,”>0”). The first parameter of SUMIF is a reference to a range of cells. The second parameter is a predicate to be applied to each of the cells referred to by the first parameter, which, if it evaluates to true, causes that cell’s value to be added to the running total. We can convert the SUMIF into a corresponding formula using addition and IF. For our example, this would be = IF(A1 > 0,A1,0) + IF(A2 > 0,A2,0). Notice that this transformed version is the same as the formula in cell A3 of Figure 3, and the source components and destinations are the same.
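A possible way to mechanise that rewriting is sketched below; the helper is hypothetical and handles only a single-column range with a simple comparison criterion, which is far narrower than real SUMIF semantics:

```python
# Simplified sketch: expand SUMIF over a single-column range with a plain
# comparison criterion (e.g. ">0") into an equivalent sum of IF expressions.
# Real Excel criteria (wildcards, text equality, multi-column ranges) are
# not handled here.
def expand_sumif(col, first_row, last_row, criterion):
    terms = [
        f'IF({col}{row}{criterion},{col}{row},0)'
        for row in range(first_row, last_row + 1)
    ]
    return "=" + " + ".join(terms)

print(expand_sumif("A", 1, 2, ">0"))
# =IF(A1>0,A1,0) + IF(A2>0,A2,0)
```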
One issue with this method is that it generates sources and destinations for each of the IF functions, without consideration for the symmetry between the IF expressions. To address this, we can exploit the symmetry in a fashion similar to that used for regions. By defining sets of corresponding source components and destinations, and applying the modified du-adequacy criterion, we can greatly reduce the number of interactions. In the above example, (2, T) and (3, T) are one set of corresponding source components, and (4, A1, F) and (7, A2, F) are one set of corresponding destinations.
3.1.2. Handling meta-programming constructs. Excel includes a class of functions that allow meta-programming constructs. Meta-programming constructs allow programming logic based on attributes of the source code rather than attributes of the data. These include ISBLANK, CELL, AREAS, COLUMN, COLUMNS, ROW, and ROWS. ISBLANK is a predicate that returns true if and only if the referenced cell’s formula is blank. CELL allows a user to query for cell formatting, protection, and address information. AREAS, COLUMNS, and ROWS return information about
the number of areas, columns, or rows included in a cell reference. COLUMN and ROW return the position (column or row) of the first cell in a cell reference. For each of these functions, the important thing to note is that they do not operate on values, and instead operate on features of the spreadsheet akin to the source code of most other languages. Consider the formula = ROW(A1). This formula returns the value 1, regardless of the value in cell A1. Therefore we do not create destinations for the references in parameters to these functions or propagate testing decisions to the referenced cells.
3.1.3. Handling pointer constructs. Excel has three functions that are similar to pointer arithmetic as found in some imperative languages such as C: INDIRECT, OFFSET, and INDEX. Consider the formula = OFFSET(A1, B1, C1). Assume that cells B1 and C1 have values 1 and 2 respectively. In this case, the call to OFFSET returns a reference to cell C2 (1 row down and 2 columns right from cell A1). There are two potential issues with these functions. First, they can use references in their arguments. For INDEX and OFFSET, the first argument is a reference to a cell or range that is used as a starting point, and the additional arguments provide an offset relative to the original cell or range. Since the value in the range referred to in the first argument (A1 in the example) is not used, we do not create any destinations for this reference or propagate testing decisions back to the referencing cells. However, any references used in the other arguments (B1 and C1) are dereferenced, and the corresponding values (1 and 2) are used in the calculation, therefore we can create destinations for these references and propagate testing decisions to the referenced cells just as we do for computational functions.
The second issue with these functions is the handling of the returned reference (C2 in the example). For purposes of propagating testing decisions, it makes sense to treat the returned reference as we would a regular reference. The issue of generating destinations for the returned reference is more complicated. In general, these functions allow a reference to any cell in any spreadsheet ever created, although in practice their use will be much more limited (for INDEX we know the returned reference will be in the range provided in the first parameter, and for OFFSET we know the returned reference will be in the worksheet referenced in the first parameter). Since in many cases it may be intractable to calculate all of the references that can be returned by these functions, we require an approximation to determine which destinations to create.
There are several approaches that can be used for this. We could create no destinations for the returned reference; this minimizes the effort required of both the system and the user testing the spreadsheet, but may cause some interactions to be untested. We could generate the set of destinations based on the history of the spreadsheet by keeping track of the returned references of these functions and creating a new destination any time a cell that had not been used before is referenced. This method forces the user to make testing decisions that are influenced by each of the interactions seen by the system, but could still miss possible interactions. It also has the undesirable effect of having input cell changes potentially change the testedness of the spreadsheet (by creating new, necessarily untested, interactions). A third possibility is to create destinations for any cells that could be referenced by the function (in the case of INDIRECT, we would limit this to cells in the workbook containing the function call). This would prevent the methodology from missing any interactions, but could create a large number of infeasible interactions. Further experimentation is needed to determine which of these possibilities is best, but for now our prototype does not create any destinations for the returned references.
3.1.4. Handling query constructs. Excel has four functions, LOOKUP, HLOOKUP, VLOOKUP, and MATCH, that search for values in a range or array and return either a corresponding value or position. These are similar to standard query operations found in database query languages. Consider the formula = HLOOKUP(6, A1 : B3, 2): the function searches through the cells in the top row of the range A1 : B3, in order from left to right, until it finds a cell with a value equal to or greater than 6, and returns a corresponding value from the second row of the range A1 : B3.
For these functions, we use a method similar to that used for higher-order functions, converting the function to a series of nested IF expressions and defining the corresponding source components and destinations. The formula = HLOOKUP(6, A1 : B3, 2) is converted to IF(A1 >= 6, B1, IF(A2 >= 6, B2, IF(A3 >= 6, B3, #N/A!))). This formula has two sets of corresponding destinations, {A1, A2, A3} and {B1, B2, B3}, and three sets of corresponding source components, {(IF1, T), (IF2, T), (IF3, T)}, {(IF1, F), (IF2, F)} and {(IF3, F)}.
3.1.5. Handling matrix constructs. Excel has several matrix processing functions (Excel uses the term arrays) such as MMULT. Formulas using these functions are typically assigned to a range of cells. Although there is
function SUMGREATERTHAN(R, V)
1. total = 0
2. for each cell C in R
3. if C > V then
4. total = total + C
5. return total
Figure 5. A user-defined function
3.2. Handling Imperative Code in Spreadsheets
Excel allows imperative code to be added to spreadsheets for a variety of tasks. One of the most common uses is for creating user-defined functions (UDFs). To integrate UDFs into our new adequacy criterion, we need to statically determine the source components and destinations relevant to those UDFs, and dynamically determine which source components and destinations are exercised when tests are applied. We use program analysis techniques on the UDFs to determine the source components and destinations.
To determine the destinations in the UDF, we consider references in the parameters of the UDF. For each reference, we create a destination. To determine which destinations are executed, we use dynamic slicing on the return value of the UDF. In the case of a range being passed in as a parameter to the UDF, we create a destination for each cell in the range, and classify these destinations as corresponding destinations (similar to the corresponding destinations created for SUMIF). Therefore, for the formula = SUMGREATERTHAN(A1 : A2, 0), the destinations are {A1, A2}, and they are corresponding destinations. If the functionally equivalent formula in A3 in Figure 3 was replaced with this formula, both destinations would be considered exercised (and would in fact be considered exercised regardless of the inputs).
This difference is one of the reasons we have chosen to handle the built-in functions on a case-by-case basis rather than by converting them into equivalent UDFs.
Determining the source components of the UDF is more complicated. Since source components represent subcomputations of formulas, one approach is to consider the subcomputations, or statements (which can be generalized to flow graph nodes), of the function. Then we have a source component for each statement. For SUMGREATERTHAN, the source components are {1, 2, 3, 4, 5}, and if the formula considered above was substituted for the formula in cell A3 in Figure 3, all of these source components would be considered exercised (if A1 was changed to a value less than 0, then 4 would not be exercised); again this is weaker than the source components for the equivalent IF or SUMIF expressions.
4. Handling Replicated Formulas
The notion of aggregating cells into regions of similar cells in spreadsheets is not new. For example, Sajaniemi defines a number of methods for doing so [20], and others have extended his definitions [6]. However, prior work has focused on using these regions for visualization and auditing tasks. To use regions for our testing methodology we require that it be possible to define corresponding source components and destinations between the cells in the region, and to efficiently update regions as formulas change; neither of these requirements is met by the approaches of [6, 20].
We divide the task of inferring regions into two sub-tasks. The first subtask involves determining whether cells are similar, and the second involves grouping similar cells into regions.
4.1. Determining Whether Cells are Similar
The first step in developing a region inference algorithm is to define a criterion for determining whether two cells belong in the same region. Work by Sajaniemi [20] defines a number of equivalence relationships over cells. For our purposes, we consider his formula equivalence and similarity relationships, and define a variation on these that we call formula similarity.
Two cells are formula equivalent if and only if one cell’s formula could have directly resulted from a copy action applied to the other cell’s formula. Sajaniemi goes on to show that, under a certain referencing scheme, formula equivalence can be determined by textual comparison of the formulas. Most commercial spreadsheets include support for the necessary referencing scheme; in Excel it is called R1C1-style.
Sajaniemi defines two cells as being similar if and only if they are formula equivalent and format equivalent (two cells are format equivalent if all formatting options, e.g., font, background color, or border color, are the same), or neither contains any references to other cells and they are format equivalent. In order to find regions in the widest variety of situations, we choose to ignore format equivalence. Therefore, we define two cells as "formula similar" if and only if they are formula equivalent or neither contains any references to other cells.
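Assuming each formula is available as an R1C1-style string (which Excel can provide), the formula-similarity test reduces to a textual check; the sketch below is illustrative, and its reference-detection regex is only a rough approximation of a real formula parser:

```python
import re

# Sketch of the "formula similar" test, assuming each cell's formula is
# available as an R1C1-style string. The reference-detection regex is a
# rough approximation, not a full formula parser.
R1C1_REF = re.compile(r'R(\[-?\d+\]|\d+)?C(\[-?\d+\]|\d+)?')

def has_references(r1c1_formula):
    return bool(R1C1_REF.search(r1c1_formula))

def formula_similar(formula_a, formula_b):
    # formula equivalent: identical R1C1 text (a copy action produces this)
    if formula_a == formula_b:
        return True
    # otherwise similar only if neither formula references other cells
    return not has_references(formula_a) and not has_references(formula_b)

print(formula_similar("=R[-1]C+1", "=R[-1]C+1"))  # True (copies of one formula)
print(formula_similar("=1+2", "=3*4"))            # True (no references in either)
print(formula_similar("=R[-1]C+1", "=1+2"))       # False
```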
4.2. Finding Regions
The second issue we considered when defining our region inference techniques is the spatial relationships between cells. Prior work has focused on rectangular areas. However, it is not necessary that regions be rectangular, and by allowing non-rectangular regions we allow larger regions to be found, thereby decreasing testing and computational effort (as well as avoiding problems with updating rectangular regions). Therefore, we consider three different candidate spatial relationships for inferring regions: discontiguous, contiguous, and rectangular. For each relationship, we describe our algorithm for finding regions in an existing spreadsheet, and we then discuss mechanisms for incrementally updating regions as the spreadsheet is updated (algorithms and run time analyses are available in Reference [13]).
4.2.1. Discontiguous regions. Using formula similarity and no additional constraints yields the most general concept of what constitutes a region: all cells in a worksheet that are formula similar are in the same region. Under this concept, regions can be discontiguous, containing cells that are not neighbors.
Discontiguous regions can be identified by iterating through the cells in a spreadsheet and looking up region identifiers in a hashtable indexed by cell formula. This process is linear in the number of cells. This technique finds two regions in Grades (Figure 1): (1) the cells in the areas labeled 1, 2 and 3, and (2) the cells in the area labeled 4.
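A minimal sketch of that single pass, assuming formulas are given in R1C1 form so that copied formulas compare equal as text (whether all reference-free formulas should share one region is a policy choice; here each distinct formula text forms its own region):

```python
from collections import defaultdict

# Sketch of discontiguous region inference: one pass over the worksheet,
# grouping cells via a hashtable indexed by their (R1C1-style) formula text.
def discontiguous_regions(cells):
    """cells: dict mapping (row, col) -> R1C1 formula string."""
    regions = defaultdict(list)
    for coord, formula in cells.items():
        regions[formula].append(coord)
    return list(regions.values())

sheet = {
    (4, 11): "=IF(RC[-1]>60,1,0)",
    (5, 11): "=IF(RC[-1]>60,1,0)",   # copied formula -> same region
    (4, 12): "=SUM(R4C2:R4C10)",
}
print(discontiguous_regions(sheet))
# [[(4, 11), (5, 11)], [(4, 12)]]
```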
To incrementally update regions there are several operations to consider. A cell’s formula could be changed (through user entry or a copy/paste operation), a cell could be inserted into the spreadsheet, or a cell could be deleted from the spreadsheet. First suppose cell C’s formula is changed. In this case, C is removed from the region it is in, and if C is the only cell in its region, that region is deleted. Next the technique finds the region to which C should be added; this is done by looking up the new region in the hashtable used to find the regions initially. This is a constant time operation.
When one or more cells are added to a spreadsheet, all of the cells below (or to the right of, at the user’s discretion) the inserted cells are shifted down (or to the right). This also causes references to the shifted cells to be updated to reflect the cells’ new locations. Each cell that references a cell that is shifted must have its region information updated. References change in a similar manner when cells are deleted from the spreadsheet, and are treated similarly.
4.2.2. Contiguous regions. The discontiguous algorithm is simple and efficient; however, it is important to consider what kinds of regions end users will be able to make use of. Allowing discontiguous regions requires the creation of some device to indicate the relationship between the disconnected areas that comprise regions, which could be difficult to do in a fashion that users can understand and use. Therefore, it may be useful to require regions to be contiguous.
To find contiguous regions, our technique iterates through the cells in a spreadsheet, comparing their formulas to those of their neighboring cells, and merging formula similar cells into regions. With an efficiently implemented merge operation, the cost of this approach is linear in the number of cells in the spreadsheet. This technique finds three regions in Grades (Figure 1): (1) the cells in the areas labeled 1 and 2, (2) the cells in the area labeled 3, and (3) the cells in the area labeled 4.
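One way to realise this merge-based pass is a union-find over orthogonally adjacent cells; the sketch below is illustrative, with plain textual equality of R1C1 formulas standing in for the full formula-similarity test:

```python
# Sketch of contiguous region inference: merge orthogonally adjacent cells
# whose formulas are similar, using a small union-find structure.
def contiguous_regions(cells):
    """cells: dict mapping (row, col) -> R1C1 formula string."""
    parent = {c: c for c in cells}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]          # path compression
            c = parent[c]
        return c

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (row, col), formula in cells.items():
        for neighbour in ((row + 1, col), (row, col + 1)):   # down and right
            if neighbour in cells and cells[neighbour] == formula:
                union((row, col), neighbour)

    regions = {}
    for c in cells:
        regions.setdefault(find(c), []).append(c)
    return list(regions.values())

sheet = {
    (1, 1): "=RC[-1]*2", (2, 1): "=RC[-1]*2",   # adjacent and similar -> one region
    (9, 9): "=RC[-1]*2",                        # similar but not adjacent -> own region
}
print(contiguous_regions(sheet))
# [[(1, 1), (2, 1)], [(9, 9)]]
```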
With contiguous regions, to update regions when a formula in cell C in region R is changed, there are two factors to consider. First, C is removed from R, but then it must be determined whether C is required to keep two or more areas of R connected. This can occur only if two or more of the cells adjacent to C were in R. To determine whether R should be split, a search is performed on the cells in R starting with one of the cells adjacent to C. If all cells in R that were adjacent to C can be reached, it is not necessary to split the region. If any adjacent cells are not reached in the search, then the cells traversed in the search must be split off from the rest of the region. If two or more adjacent cells are not reached, the search process is repeated with another adjacent cell. In addition, it is also possible that changing the formula allows two neighbor regions to be merged. If the changed cell now has the same formula as two of its neighbors and those cells are in different regions, they need to be merged. Because of the need to potentially split or merge regions, this operation is linear in the size of R and of any other regions adjacent to C. A similar procedure is performed when a cell is deleted or inserted, taking into account changing references as in Section 4.2.1.
4.2.3. Rectangular regions. Forms/3 required regions to be rectangular, and Excel users may tend to think of their spreadsheets in rectangular blocks. Thus we also consider an algorithm that creates rectangular regions. To find rectangular regions, our technique first iterates through the cells, comparing their formulas to those of the cells directly above or below, creating all regions one cell wide of maximum height. It then iterates through these regions, comparing them to the regions on either side of them, and merging adjacent regions that are formula similar and have the same height. Again, assuming an efficient region merge algorithm, this technique is linear in the number of cells. This technique finds four regions in the Grades spreadsheet in Figure 1, one for each of the labeled areas.
When a formula in cell $C$ in region $R$ is changed, the region is split into five regions. This can be done in many ways, but to be consistent with our algorithm for finding regions it proceeds as follows: one region includes all cells in $R$ to the left of $C$, one region includes all cells in $R$ to the right of $C$, one includes the cells in $R$ directly above $C$, one includes the cells in $R$ directly below $C$, and the last includes only $C$ (depending on where the modified cell is located in the original region, one or more of these regions may include no cells). Each of these regions is then compared with its neighbor regions to determine whether they should be merged. The total cost of this operation depends on the number of cells in the region that is broken up and its neighboring regions.
There is one important thing to note about this approach: it does not guarantee that the regions created are the same as they would be if we re-ran the batch operation. For example, in Figure 1, if the formula of cell $I9$ was changed to match the formulas in area $3$, $I9$ would be assigned to its own region. However, if this formula had been the same as the formulas in area $3$ when the batch operation was performed, area $3$ would have been divided into two regions (one for column $I$ with $I9$ and one for column $J$). (Any update algorithm that attempted to recreate the regions that were inferred by the batch rectangular regions algorithm could potentially have wide-ranging effects on the structure of the updated regions that could be confusing to the user.) A similar procedure is performed when a cell is deleted or inserted, taking into account the issues mentioned in Section 4.2.1.
5. Assessment
Ultimately, our techniques must be empirically studied in the hands of end users, to address questions about their usability and effectiveness. Such studies, however, are expensive, and before undertaking them, it is worth first assessing the more fundamental questions of whether our techniques for handling formulas and regions scale cost-effectively to real world spreadsheets, and how our different region inference algorithms perform when applied to real spreadsheets. If such assessments prove negative, they obviate the need for human studies; if they prove positive, they provide insights into the issues and factors that should be considered in designing and conducting human studies.
More formally we consider the following research questions:
**RQ1:** How much does the use of WYSIWYT as extended slow down commercial spreadsheets, and how does this vary with region inference algorithms?
**RQ2:** How much savings in testing effort can be gained by each of the region inference algorithms?
**RQ3:** How do the different region inference algorithms differ in terms of the regions they identify?
To investigate these questions, we implemented a prototype in Excel using Java and VBA. The Java component performs the underlying analysis required for determining du-associations and tracking coverage, while the VBA component evaluates formulas and expressions and displays our visual devices. The prototype version used for this study provides support for most of the functions described in Section 3, treating unsupported functions as simple computational functions for purposes of testing. (It does not yet support imperative code in spreadsheets.)
5.1. Experimental Procedure
As objects of analysis, we drew a sample of the spreadsheets from the EUSES Spreadsheet Corpus [11], selecting from the 1826 of those spreadsheets that contained formulas and did not use macros. The 176 selected spreadsheets ranged in size from 41 to 12,121 non-empty cells, with a mean of 1,235 non-empty cells.
Our experiment involved two independent variables: region inference algorithm and spreadsheet size.
We used all three region inference algorithms described in Section 4: D-Regions, C-Regions, and R-Regions (the discontiguous, contiguous, and rectangular algorithms, respectively). As a baseline we also used a version without region inference, No-Regions.
To measure spreadsheet size we used the number of non-empty cells in the spreadsheet.
We explored three dependent variables: time required for analysis on load, number of interactions in the spreadsheet, and number of regions found.
To measure time for analysis on load, we measured the time that was spent in the analysis portion of loading the spreadsheet. This measure allows us to estimate how much overhead the use of WYSIWYT requires. This measure includes the time required to infer regions and find all interactions in the spreadsheet.
To approximate the testing effort required by the different region algorithms, we use the number of interactions in the spreadsheet. This works as an upper bound on the amount of testing required, since any adequate test suite requires, at most, the same number of tests as there are du-associations.
Due to the properties of the algorithms, we know that if two of our region inference algorithms find the same number of regions in a spreadsheet, they have found identical regions. Thus, measuring the number of regions found lets us quickly determine whether two algorithms act identically, and we can then further inspect interesting cases when this metric differs. For the No-Regions algorithm, the number of regions is equal to the number of cells.
For each spreadsheet, we ran four different executions that each sequentially opened a spreadsheet, collected our measures, and then closed the spreadsheet. We did an execution for each of the four region inference algorithms utilizing the prototype Excel interface and Java analysis engine described in [7].
5.2. Data and Analysis
RQ1. Our goal with RQ1 is to determine how much our algorithms slow down the normal operation of Excel. We chose to look at load time because it is during the loading of the spreadsheet that the most work in calculating regions and interactions must occur and because previous work has demonstrated that reasonable bounds hold on the time required to respond to other user actions within the WYSIWYT methodology [17].
Figure 6 plots analysis time on load against spreadsheet size. Looking at the different techniques, it appears that the No-Regions approach is generally slower than the other three approaches. To explore this, we considered the differences in time between techniques using paired t-tests. As suggested by the graph, there were significant differences between No-Regions and the other techniques (mean differences between 9.24 and 9.69, p-values < .05), with no significant differences between any of the region techniques. For the techniques with regions, there does not appear to be any correlation between size of spreadsheet and time; however, with the No-Regions approach it appears that such a correlation might exist. A bivariate linear correlation analysis of the data resulted in a Pearson value of .863 significant with a p-value of less than .01, indicating a reasonably strong correlation between analysis time for No-Regions and spreadsheet size.
RQ2. Table 1 shows the total number of interactions found by each technique. No-Regions has more than 14 times as many interactions as any of the other techniques on average (significant, paired t-test, p-value < .05). Both C-Regions and R-Regions had a slightly larger number of interactions than D-Regions (significant, paired t-test, p-value < .05), and approximately the same number of interactions as each other. These results suggest that testing effort could be reduced dramatically through the use of our region inference algorithms.
RQ3. Examination of the number of regions found by the different techniques shows that for 172 of the spreadsheets R-Regions found the same number of regions as C-Regions. This implies that R-Regions found regions identical to those found by C-Regions in these cases.
D-Regions found fewer regions than R-Regions and fewer than C-Regions, as indicated in Table 1 (significant, paired t-test, p-value < .05). Further examination shows that D-Regions found the same set of regions as C-Regions on only 36 spreadsheets.
5.3. Discussion
Our analysis timings show that it is feasible to perform WYSIWYT analysis on real spreadsheets, and that with region inference and our formula extensions, WYSIWYT seems to scale quite well to larger spreadsheets. In addition, from the point of view of timing, it does not seem to make much difference which region inference algorithm is used.
As expected, D-Regions found significantly fewer (therefore larger) regions than the other techniques, which led to fewer interactions in the spreadsheet, implying less testing effort. The lack of difference between C-Regions and R-Regions, however, was somewhat surprising, although useful. As mentioned in Section 4, R-Regions are difficult to update and efficient updating algorithms could lead to an inconsistent state, a problem that C-Regions does not suffer from. Since the vast majority of contiguous regions in the study are inherently rectangular in nature, there seems to be little reason to use R-Regions. However, since there is a significant difference between the regions identified by D-Regions and R-Regions, user studies are needed to determine which of these methodologies provides the best balance between usability and efficiency for users.
6. Conclusions
In this paper we have presented a new test adequacy criterion, aimed at supporting not only the usual dataflow relationships between formulas, but also the more challenging multiparadigmatic features of commercial spreadsheets. We show how the adequacy criterion can be applied to Excel’s support for higher-order functions, meta-programming constructs, pointer constructs, query language mechanisms, matrix constructs and user-defined functions. We also present algorithms to support the high degree of formula replication common in commercial spreadsheets. Finally, we report on the first studies of WYSIWYT to ever be conducted within a commercial spreadsheet environment.
In our continuing work, we are considering approaches for handling other features of commercial spreadsheets. Charts could be handled as a special form of cell that have targets for each cell whose value is used to generate the chart. External data sources are a form of input cell into the system; replacing them with temporary user-settable input cells would allow the user to test the logic of the spreadsheet. Using an anomaly detection mechanism on the data feeds themselves similar to that proposed in Reference [16] could help to ensure that the data feeds are reliable.
Through this work we hope to provide a system that can be used to further evaluate dependability devices with end users using large-scale spreadsheets, and in particular, that can be used in long-term studies.
Acknowledgements. This work was supported in part by the EUSES Consortium via NSF Grant ITR-0325273.
References
Processing Aggregates in Parallel Database Systems
Ambuj Shatdal
Jeffrey F. Naughton
Technical Report #1233
June 1994
Processing Aggregates in Parallel Database Systems*
Ambuj Shatdal Jeffrey F. Naughton
Computer Sciences Department
University of Wisconsin-Madison
{shatdal,naughton}@cs.wisc.edu
Computer Sciences Technical Report # 1233
June, 1994
Abstract
Aggregates are rife in real-life SQL queries. However, in the parallel query processing literature aggregate processing has received surprisingly little attention; furthermore, the way current parallel database systems do aggregate processing is far from optimal in many scenarios. We describe two hashing based algorithms for parallel evaluation of aggregates. A performance analysis via an analytical model and an implementation on the Intel Paragon multi-computer shows that each works well for some aggregation selectivities but poorly for the rest. Fortunately, where one does poorly the other does well, and vice versa. Thus, the two together cover all possible selectivities. We show how, using sampling, an optimizer can decide which of the two algorithms to use for a particular query. Finally, we investigate the impact of data skew on the performance of these algorithms.
1 Introduction
SQL queries in the real world are replete with aggregate operations. One measure of the perceived importance of aggregation is that in the proposed TPC-D benchmark [TPC94], 15 out of 17 queries contain aggregate operations. Yet we find that aggregate processing is an issue almost totally ignored by researchers in the parallel database community. The deceptive simplicity of aggregate processing is possibly the reason for this neglect. However, we find that although it looks straightforward, most parallel database systems (hereafter called PDBMSs) implement aggregate processing algorithms that are far from optimal. In this paper we study aggregate processing on shared nothing parallel database systems, and propose two different schemes for aggregate processing on PDBMSs.
The standard parallel algorithm for aggregation is for each node in the multiprocessor to first do aggregation on its local partition of the relation. Then these partial results are sent to a centralized coordinator node, which merges these partial results to produce the final result.
*This research was supported by NSF grants IRI-9113736 and IRI-9157357.
Briefly, the first approach we propose parallelizes the second phase of the traditional approach. The second approach is to first redistribute the relation on the GROUP BY attributes and then do aggregation on each of the nodes producing the final result in parallel. The algorithms we propose are simple enough that we hesitate to call them “new” algorithms, as we suspect that they may have already been thought of and perhaps even implemented. However, to our knowledge neither a description of these algorithms nor an analysis of their performance has appeared in the literature.
While these approaches are simple, their performance behavior is not obvious. The analytical models and the implementation on the Intel Paragon parallel supercomputer [Int93] show that the two proposed schemes are complementary in terms of performance. We find that whereas the first approach works well when the number of result tuples is small, the second approach works better when the GROUP BY is not very selective. We show that it is relatively easy for the optimizer to decide which scheme is best in a given scenario. This can be decided either on the basis of some known statistic about the relation, or by an efficient sampling based strategy. Finally, we study how these schemes perform in the presence of data skew. We first characterize the data skew problem in aggregate processing and investigate the impact of skew on the performance of these algorithms.
As mentioned, there has been little work reported in the literature on aggregate processing. Epstein [Eps79] discusses some algorithms for computing scalar aggregates and aggregate functions on a uniprocessor. Bitton et al. [BBDW83] discuss two sorting based algorithms for aggregate processing on a shared disk cache architecture. The first is somewhat similar to the proposed two phase approach in that it uses local aggregation. We study its performance and show that it fares better than the traditional approach but worse than the proposed two phase approach. The second algorithm of Bitton et al. uses broadcast of the tuples and lets each node process the tuples belonging to a subset of groups. This is impractical on today's multiprocessor interconnects, which do not support broadcasting efficiently. Su et al. [SM82] discuss an implementation of the traditional approach. Graefe [Gra93] discusses some issues in dealing with bucket overflow when using hash based aggregation in a memory constrained environment.
The rest of the paper is organized as follows. Section 2 introduces aggregation, the previous approaches to aggregate processing and describes the two proposed approaches. An analytical evaluation of the different algorithms is presented in Section 3. In Section 4 we describe the implementation of the algorithms on the Intel Paragon and show their performance results. Section 5 shows how we can decide which algorithm to use given a particular scenario. In Section 6 we discuss the effect of data skew on aggregate processing. Section 7 offers our conclusions.
2 The Algorithms
An SQL aggregate function is a function that operates on groups of tuples. Its basic form is:
```sql
select [group by attributes] aggregates
from {relations}
[where {predicates}]
group by {attributes}
having {predicates}
```
We note that in practice the aggregate operation is often accompanied by the GROUP BY operation\(^1\). Thus the number of result tuples depends on the selectivity of the GROUP BY attributes. We define GROUP BY selectivity as \(\frac{|\text{Result}|}{|\text{Relation}|}\). We find that it does indeed vary quite a lot. For example, in the TPC-D benchmark we found GROUP BY selectivities of 0.25, 0.00167 and 1e-5. The HAVING clause, when properly constructed (i.e., one that can’t be converted to a WHERE clause), is evaluated after the processing of the GROUP BY clause and it does not directly affect the performance of the aggregation algorithms we are trying to study. Hence we will assume that the query does not have a HAVING clause.
In the remainder of the paper we will assume that aggregation is always accompanied by GROUP BY and that scalar aggregation can be considered a special case where the number of groups is 1. The following simple query will serve as the running example, using the relation \(R(\text{tid,cardNo,amount})\).
```sql
select cardNo, avg(amount)
from R
group by cardNo;
```
Further, we assume a Gamma [DGS+90] like architecture where each relational operation is represented by operators. The data “flows” through the operators in a pipelined fashion as far as possible. For example, a join of two base relations is implemented as two select operators followed by a join operator. Aggregation can be implemented by one or two operators, as detailed below, which are fed by some child operator (e.g., a select or a join) and the result is sent to some parent operator (e.g., a store). In our study we assume that the child operator is a scan/select and the parent operator is a store.
We also present analytical cost models of the four approaches. The cost models developed below are quite simple and as such they should not be interpreted to predict exact running times of the algorithms. The intention is that although the models will not be able to predict the actual running times, they will be good enough to predict the relative performance of the algorithms under varying circumstances. Fortunately, as we shall see in Section 3, the results are robust under even fairly significant perturbations of the constants in the cost model, so even an approximate model is sufficient. The simplifying assumptions in the model include no overlap between CPU, I/O and message passing, sufficient network bandwidth, and that all nodes work completely in parallel, which allows us to study the performance of just one node. With a few exceptions (noted later in this paper) even this simple model generates results that are qualitatively in agreement with measurements from our implementation.
We assume that the aggregation is being performed directly on a base relation stored on disks as in the example query. The parameters of the study are listed in Table 1 unless otherwise specified. These parameters are similar to those in previous studies e.g., [BCL93]. The CPU speed and network speed were chosen to reflect the characteristics of the current generation of
\(^1\)In the TPC-D benchmark 13 out of 15 queries with aggregates have GROUP BY.
| Symbol | Description | Values |
|--------|-------------|--------|
| N | number of processors | 32 |
| Mips | MIPS of the processor | 15 |
| R | size of relation | 400 MB |
| \|R\| | number of tuples in the relation | 4 million |
| R_i | relation fragment on node i | R/N |
| P | page size | 4 KB |
| IO | effective time to read a page | 3.5 ms |
| p | projectivity of the aggregation | 25% |
| t_r | time to read a tuple | 300/Mips |
| t_w | time to write a tuple | 100/Mips |
| t_h | time to compute hash value | 400/Mips |
| t_a | time to add a tuple to current aggr value | 300/Mips |
| t_m | time to merge two aggr values in sorted streams | 150/Mips |
| t_s | time to compare and swap two keys | 500/Mips |
| t_v | time to move a tuple | 400/Mips |
| S | selectivity of the GROUP BY | $\frac{1}{N}$ to 0.5 |
| S_l | phase 1 (local) selectivity in Two Phase | $\min(S \ast N, 1)$ |
| S_g | phase 2 (global) selectivity in Two Phase | $\max(S, \frac{1}{N})$ |
| t_d | time to compute destination | 10/Mips |
| m_p | message protocol cost per page | 1000/Mips |
| m_l | message latency for one page | 1.3 ms |
Table 1: Parameters for the Analytical Models
commercially available multiprocessors (e.g. the Intel Paragon). The I/O rate was as observed on the Maxtor disk on the Paragon. The software parameters are based on instruction counts taken from the Gamma prototype and are similar to those in previous studies e.g. [BCL93]. In the following we assume that aggregation on a node is done by hashing.
In the remainder of this section we discuss previously proposed approaches to parallel aggregation and two new approaches that we have not seen discussed in the literature.
2.1 Traditional Approach

**Figure 1: Traditional Scheme**
The traditional approach, e.g. the one implemented in Gamma [DGS+90] and some commercial PDBMSs, is for each node to do aggregation on the partition of the relation on the node. These result in local (group, aggregate-value) tuples. These are sent to a central coordinator which merges the local results into the overall (global) aggregate value (Figure 1). In our example, it will result in each node computing local sum, and count for each group resulting in tuples like (cardNo = 1234, sum = 100, count = 2) being generated on each node. Assuming the second node (in a 2 node system) produces (cardNo = 1234, sum = 300, count = 3), the coordinator computes the final value to be (cardNo = 1234, avg = (100 + 300)/(2 + 3) = 80).
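To make the coordinator's merge step concrete, the following is a minimal sketch (ours, not from the paper) of how per-node (sum, count) pairs could be combined into final averages; the dictionary-based representation of partial results is purely illustrative.

```python
from collections import defaultdict

def merge_partial_averages(partials):
    """Combine per-node partial results into final averages.

    `partials` is a list of per-node dictionaries mapping a group key
    (e.g. cardNo) to a (sum, count) pair produced by local aggregation.
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [sum, count]
    for node_result in partials:
        for group, (s, c) in node_result.items():
            totals[group][0] += s
            totals[group][1] += c
    return {group: s / c for group, (s, c) in totals.items()}

# Example from the text: two nodes both saw cardNo 1234.
node1 = {1234: (100, 2)}
node2 = {1234: (300, 3)}
print(merge_partial_averages([node1, node2]))  # {1234: 80.0}
```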
As we will show, this approach works well if the number of resulting tuples is very small. But as soon as the selectivity of the GROUP BY becomes moderate, the central coordinator node starts becoming a bottleneck. In the TPC-D queries, for example, there is a GROUP BY with selectivity as high as 0.25 (i.e. on average, only 4 tuples form a group). Hence, in general, using single node global aggregation forms a serial bottleneck at that node.
The cost components of the analytical model are as follows. In the first phase each node processes the tuples residing locally.
- scan cost (IO): \((R_i/P) \times IO\)
- select cost, getting tuple out of data page: \(|R_i| \times (t_r + t_w)\)
- local aggregation involving reading, hashing and computing the cumulative value: \(|R_i| \times (t_r + t_h + t_a)\), or if using sorting then sorting and scanning/aggregating the cumulative value: \(|R_i| \times \log |R_i| \times t_s + |R_i| \times (t_v + t_a)\)
- generating result tuples: $|R_i| \cdot S_l \cdot t_w$
- message cost for sending result to coordinator: $(R_i \cdot S_l / P) \cdot (m_p + m_l)$
In the second phase these local values are merged by the coordinator. The number of tuples arriving at the coordinator is $|G| = \sum |R_i| \cdot S_l = |R| \cdot S_l$, and $G = p \cdot R \cdot S_l$.
- receiving tuples from local aggregation operators: $(G/P) \cdot m_p$
- computing the final aggregate value for each group involves reading and computing the cumulative values: $|G| \cdot (t_r + t_a)$ or using merging of sorted streams: $|G| \cdot t_m$
- generating final result: $|G| \cdot S_g \cdot t_r$
- I/O cost for storing result: $(G \cdot S_g / P) \cdot IO$
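As an illustration (ours, not the authors' code), the bullets above can be tallied directly. The 100-byte tuple size is taken from Section 4, S_l and S_g are interpreted as the Table 1 phase selectivities, all times are in microseconds, and the hash-based (rather than sorting) variant of local aggregation is assumed.

```python
def traditional_cost_us(S, N=32, mips=15.0, rel_mb=400, page_kb=4,
                        tuple_bytes=100, io_us=3500.0, proj=0.25,
                        latency_us=1300.0):
    """Rough cost (microseconds) of the traditional scheme, tallied
    straight from the bullets above; S is the GROUP BY selectivity."""
    t_r, t_w, t_h, t_a = 300/mips, 100/mips, 400/mips, 300/mips
    m_p = 1000/mips                       # message protocol cost per page
    R = rel_mb * 1e6                      # relation size in bytes
    P = page_kb * 1024                    # page size in bytes
    R_i = R / N                           # per-node partition (bytes)
    n_i = R_i / tuple_bytes               # |R_i|, tuples per node
    S_l = min(S * N, 1.0)                 # phase-1 (local) selectivity
    S_g = max(S, 1.0 / N)                 # phase-2 (global) selectivity

    # Phase 1: local aggregation on every node.
    phase1 = (R_i / P) * io_us                          # scan I/O
    phase1 += n_i * (t_r + t_w)                         # select
    phase1 += n_i * (t_r + t_h + t_a)                   # hash aggregation
    phase1 += n_i * S_l * t_w                           # emit local results
    phase1 += (R_i * S_l / P) * (m_p + latency_us)      # send to coordinator

    # Phase 2: the single coordinator merges all local results.
    G = proj * R * S_l                                  # bytes arriving
    n_G = (R / tuple_bytes) * S_l                       # |G|, tuples arriving
    phase2 = (G / P) * m_p                              # receive
    phase2 += n_G * (t_r + t_a)                         # final aggregation
    phase2 += n_G * S_g * t_r                           # generate final result
    phase2 += (G * S_g / P) * io_us                     # store result
    return phase1 + phase2

print(traditional_cost_us(S=0.01) / 1e6, "seconds (approximate)")
```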
2.2 Approach of Bitton et al. : Hierarchical Merging

Figure 2: Local Aggregation with Hierarchical Merging
The first algorithm proposed in [BBDW83] first makes each node do aggregation on locally resident data like the traditional approach. However, instead of one node doing the merging of these local aggregate results into final ones, it utilizes a (pipelined) binary merging scheme, thus off-loading some of the work from the final node (see Figure 2). However, the final node still has to generate all the final aggregate values, as it has to do the final merge.
This approach also works well if the number of resulting tuples is very small. It even handles the moderate selectivity ranges well because of the hierarchical merging. However, when the number of tuples becomes sufficiently large, its performance declines as the final merging phase becomes the bottleneck.
The central merging phase of the traditional approach is now replaced by a pipelined hierarchical merging. This necessitates the addition of a sorting step in the hash-based aggregation, which consumes $|R_i| \cdot S_l \cdot \log(|R_i| \cdot S_l) \cdot t_s + |R_i| \cdot S_l \cdot t_v$ of CPU time. Otherwise, the cost of the first phase remains identical. Assuming ideal pipelining, the cost is determined by the bottleneck node in the pipeline. Thus the cost will be the maximum of the per-node costs, which are computed as follows, where $n$ is the number of tuples arriving at that node and $f$ is the fractional reduction (at best 0.5) in the number of tuples after the merge.
- receiving tuples from previous operator in the pipeline: $(n/P) \times m_p$
- merging the two arriving aggregate streams: $n \times t_m$
- generating result tuples: $n \times f \times t_w$
- message cost for sending result to next operator: $(n \times f/P) \times (m_p + m_l)$
- storing result by the last operator: $(p \times R \times S / P) \times IO$
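The per-node work in this scheme is dominated by merging two group-sorted streams of partial aggregates (the $t_m$ term above). A minimal sketch (ours), assuming (group, sum, count) partials for an AVG query:

```python
def merge_sorted_partials(left, right):
    """Merge two streams of (group, sum, count) triples, each sorted by
    group, combining triples that share a group key."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        gl, gr = left[i][0], right[j][0]
        if gl == gr:
            out.append((gl, left[i][1] + right[j][1], left[i][2] + right[j][2]))
            i += 1; j += 1
        elif gl < gr:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

a = [(1, 100, 2), (3, 50, 1)]
b = [(1, 300, 3), (2, 10, 1)]
print(merge_sorted_partials(a, b))  # [(1, 400, 5), (2, 10, 1), (3, 50, 1)]
```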
We propose two possible alternatives for GROUP BY aggregate evaluation using hash based redistribution. The first one extends the traditional scheme by parallelizing the second phase. The second approach redistributes the data first by hashing on the GROUP BY attributes and then computes the aggregates locally thus avoiding the second phase necessary in the previous approaches.
2.3 Two Phase Parallel Aggregation
In the first phase, the tuples after being read from disk are passed to the aggregate operator on the same node. This local “message passing” is more efficient than redistribution. Each aggregate operator then computes the aggregate on the set of tuples its scan operator generates.
In the second phase, the local aggregate results are partitioned on the GROUP BY attributes and sent to one of the “global aggregation” operators that are running on each of the nodes. The global aggregation operator merges these individual local results into the final aggregate values for each group. In our example query, each node will send tuples of the form (cardNo, sum, count) to the global aggregation operators. The global aggregation operator will compute the final result by summing up the tuples belonging to the same group received from all the nodes and dividing by the total count. This operation for different groups is done in parallel by redistributing these tuples on the ‘cardNo’ attribute, so as to avoid the bottleneck of the traditional approach.
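As a concrete illustration (ours), the routing step of the second phase can be sketched as hash-partitioning each node's local (group, sum, count) results among the global aggregation operators; the node count and hash function below are illustrative.

```python
N = 4  # number of nodes (illustrative)

def route_local_results(local_results):
    """Partition one node's local (group, sum, count) results among the
    N global-aggregation operators by hashing the group key."""
    outgoing = [[] for _ in range(N)]
    for group, s, c in local_results:
        outgoing[hash(group) % N].append((group, s, c))
    return outgoing

# Every partial result for a given cardNo ends up at the same destination
# node, so each global aggregator can finalize its groups independently.
print(route_local_results([(1234, 100, 2), (7, 40, 1)]))
```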
The main performance difference from the traditional approach is that the second phase is now parallel. The first phase remains the same except that now the destination for the tuples containing local aggregation information has to be computed, adding $|R_i| \cdot S_l \cdot t_d$ to the cost. The second phase is parallelized, becoming:
- receiving tuples from local aggregation operators: $(G_i/P) * m_p$ where $G_i = p * R_i * S_l$ and $|G_i| = |R_i| * S_l$.
- computing the final aggregate value for each group arriving to this node: $|G_i| * (t_r + t_a)$ or using merging of sorted streams: $|G_i| * t_m$
- generating result tuples: $|G_i| * S_g * t_r$
- storing result to local disk: $(G_i * S_g/P) * IO$
In practice, however, there will be an additional but negligible overhead of sending control messages to all the nodes and receiving control messages after the termination of the operators.
2.4 Parallel Aggregation by Redistribution

Figure 4: Repartitioning Scheme
The second algorithm is motivated by the fast message passing in today's multi-computers. The tuples, after being read, are redistributed by hashing on the GROUP BY attributes (just as in a join operation, where tuples are redistributed on the join attribute) as in Figure 4. Each node now does a GROUP BY as in a single node system and the result produced is final for each group, because the tuples belonging to a group are on only one node. In our example, all the tuples with cardNo = 1234 will be on one node and the result of the average will be final. In this method, therefore, we avoid a second phase in aggregation at the cost of redistributing the entire relation (which is not too expensive in current day multi-computers). We know of one commercial PDBMS that uses this approach for vector aggregates. Unfortunately, this strategy does not work efficiently as given when the number of groups is comparable to or less than the number of nodes available, because then not all the nodes will be exploited. We expect this strategy to be efficient when the number of groups is larger than the number of processors available.
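A minimal single-process sketch (ours) of the repartitioning scheme for the running example: raw tuples are hash-partitioned on the GROUP BY attribute, after which each partition's local aggregation already yields final averages, so no second phase is needed.

```python
from collections import defaultdict

N = 4  # number of nodes (illustrative)

def repartition_and_aggregate(tuples):
    """tuples: iterable of (cardNo, amount).  Returns, per node, the
    final {cardNo: avg} results."""
    partitions = [[] for _ in range(N)]
    for card, amount in tuples:                       # redistribution
        partitions[hash(card) % N].append((card, amount))
    results = []
    for part in partitions:                           # local, final aggregation
        acc = defaultdict(lambda: [0, 0])
        for card, amount in part:
            acc[card][0] += amount
            acc[card][1] += 1
        results.append({card: s / c for card, (s, c) in acc.items()})
    return results

print(repartition_and_aggregate([(1234, 100), (1234, 300), (7, 40)]))
```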
The cost model for repartitioning approach is as follows.
- scan cost (IO): $(R_i/P) * IO$
• select cost involving reading, writing, hashing and finding the destination for the tuple:
\(|R_i| \times (t_r + t_w + t_h + t_d)\)
• repartitioning send and receive: \(R_i/P \times (m_p + m_l + m_p)\)
• aggregate by reading and computing the cumulative sum: \(|R_i| \times (t_r + t_a)\) or by sorting
and scanning/aggregating the cumulative value: \(|R_i| \times \log |R_i| \times t_s + |R_i| \times (t_v + t_a)\)
• generating result tuples: \(|R_i| \times S \times t_r\)
• storing result to local disk: \((p \times R_i \times S_g/P) \times IO\)
However, one must observe that if the number of groups is less than the number of available processors, then not all processors can be exploited by the basic scheme. That is, \(R_i = R \times \max(S, \frac{1}{N})\). Implemented as is, the scheme will show poor performance when the number of groups is small, i.e., when there are fewer groups than processors.
3 Analytical Results
We studied the performance of the four approaches under varying assumptions. The main performance characteristics are evident from Figure 5, which shows the performance of the algorithms for a standard configuration: 32 nodes, each with a 25 MIPS CPU, sufficient memory, 1 disk and an Intel Paragon-like network. The relation size was 4 million tuples.


**Figure 5: Relative Performance of the Approaches**

Two main observations are common to both.
1. The two phase approach can easily replace the previous approaches because it is never
worse than them and the previous approaches suffer at higher selectivities.
2. The two phase approach does much better than repartitioning as long as the number of groups is less than the number of processors. Then both algorithms perform similarly, until finally the repartitioning approach is significantly better.
The reason why the two phase scheme does better at low GROUP BY selectivities is that the overhead of repartitioning the whole relation is avoided, and since very few tuples are generated the second phase does not affect the overall performance. As mentioned earlier, the repartitioning approach does not exploit all nodes when the number of groups is less than (or comparable to) the number of nodes. For high selectivities, however, the repartitioning scheme does better because it does not duplicate the aggregation work; the two phase scheme is forced to do so, and the merging of local aggregate values in its second phase nullifies the advantage of avoiding redistribution.
The above results hold even when the parameters are changed. Figures 6, 7, 8, 9 show graphs with 4 times faster CPU, 4 times faster network, 4 times faster I/O and 4 times less memory respectively.

Most of the differences are expected. The faster CPU results in faster response time for both the algorithms; the faster network improves the relative performance of the repartitioning approach because it uses the network more than the others; the faster I/O improves the overall response time of all algorithms. However, we find that if we constrain memory a little, such that not all of the data structure for the GROUP BY information (e.g. a hash table) can be kept in memory, then all of the approaches except the repartitioning approach suffer significantly because of memory thrashing. Memory thrashing is modeled simply by assuming that a reference will miss and suffer a page fault with the probability that the group it refers to is not in memory.
This points to an interesting observation regarding the behavior of the algorithms. In the hash based aggregation a hash table entry is maintained per group. Since tuples belonging to
Figure 7: Performance with 4 times faster Network
Figure 8: Performance using 4 disks per node in parallel
a group are initially randomly distributed across the nodes, each node is likely to have tuples belonging to a particular group. Hence there will be a hash table entry for that group on all the nodes containing tuples belonging to the group. Thus the hash table is, in a way, replicated across the nodes of the system. In contrast, the repartitioning algorithm brings all tuples belonging to a group to a particular node and hence there is only one hash table entry for that group in the entire system. Hence the total memory requirement of the redistribution approach is significantly smaller. This smaller memory requirement lets it handle a larger number of groups without thrashing or resorting to alternative (e.g. multi-bucket aggregation) approaches.
Figure 10: Speedup and Scaleup Characteristics of the Approaches at $\frac{1}{2^{17}}$ Selectivity
Figure 10 shows the speedup and scaleup for a selectivity of $\frac{1}{2^{17}}$ which lies in the “middle” range of the selectivity where both the algorithms are expected to perform well. We find that
the speedup and scaleup characteristics of both the approaches are very close to ideal, the repartitioning approach being a little better. This is expected in the middle selectivity range because both the algorithms (modulo data skew) are fully parallelizable with little overhead. The two phase algorithm suffers the overhead of the second phase (which grows with the selectivity) whereas the overhead in the repartitioning scheme is the redistribution of the relation. As a system grows larger, the repartitioning algorithm will show better performance as evident from the speedup results.

**Figure 11: Speedup of the Approaches at low, $\frac{1}{2^{16}}$, and high, $\frac{1}{16}$ Selectivities**
However, as evinced by Figure 11, the speedup characteristics of the algorithms at the low and high GROUP BY selectivities are quite different. At low selectivity, the two phase algorithm clearly wins as the repartitioning approach is unable to exploit all the processors. For high selectivities the repartitioning approach shows a significantly better speedup than the two phase. This further supports our claim that we need both the approaches in a PDBMS.
## 4 Implementation Results
In order to further investigate the performance of the two proposed approaches, we implemented them on the Intel Paragon parallel supercomputer. The Intel Paragon has a shared-nothing architecture with 64 nodes, each having an i860 CPU, 16 MB memory and a Maxtor MXT-1240S disk. The nodes are connected by a high bandwidth, low latency interconnection network. We implemented the algorithms on top of the OSF/1 file system using the Paragon message passing library.
Since using multiple processes per node was not recommended for performance reasons, and we did not have a thread package, we decided to do our own mini thread management among the related processes. Our implementation had no concurrency control and did not use slotted pages. Hence the algorithms are significantly more CPU efficient than what would be found in a complete database system. As we will see, the performance numbers therefore match
those of a fast CPU with a constrained memory in the analytical model.
We used 32 nodes of the system in our experiments. The 4 million 100-byte tuples were partitioned in a round-robin fashion. Thus each node had 12.5 MB of the relation. We decided to “block” the messages into 4 KB pages because sending large messages is more efficient than sending several smaller ones.

Figure 12: Relative Performance of the Approaches
The algorithms performed almost as expected from the analytical model. Figure 12 shows the performance of the traditional, two phase, and repartitioning approaches.
As mentioned earlier, the low CPU cost of our implementation makes the performance numbers comparable to the fast CPU case in the analytical model. However, there are some differences which must be noted. First, in the low selectivity range, the repartitioning approach does not do as badly as predicted by the analytical model, even when only one processor is being used for aggregation. We investigated this and found that the reason was that the CPU utilization of the single node being used is significantly higher than the average utilization of the CPUs when all processors are being used. This shows that one disk per node is not sufficient to drive the processors at their full capacity. Second, in the high selectivity case, memory thrashing results in significantly poorer performance of the two phase and the traditional approaches. This is predicted, but not accurately, by the analytical model using constrained memory. The repartitioning approach, requiring significantly less memory, is not affected.
Figure 13 shows the speedup and scaleup for a selectivity of $\frac{1}{2^{10}}$ which lies in the “middle” range of the selectivity where both the algorithms are expected to perform well. We find that even in practice, the speedup and scaleup characteristics of both the approaches are close to ideal.
5 Selecting the Appropriate Approach
From the discussion above it is clear that the two proposed algorithms work well in their respective domains. The two phase approach works well when the number of groups is small and the repartitioning works well when the number of groups is large. In the middle ranges both algorithms show comparable performance. We also saw that this general observation does not change even with significant changes in system parameters.
Quantifying it a little, we find that the two phase scheme is much better up to a GROUP BY selectivity of about $\frac{N}{|R|}$, because in this range the repartitioning scheme cannot exploit all the processors. Between approximately $\frac{N}{|R|}$ and $\frac{1}{10N}$ both algorithms have comparable performance, though the two phase is slightly better. For selectivities above $\frac{1}{10N}$ the repartitioning approach is significantly better. Here the two phase approach is not able to achieve enough reduction in the number of tuples through local aggregation in the first phase, because each group contains only a few tuples; the second phase therefore has to do a lot of work in global aggregation, significantly increasing the overhead.
Evidently, the performance of these algorithms depends critically on the GROUP BY selectivity of the aggregate. In many cases (for example, if the aggregate is on an indexed column of a stored table) this selectivity may be available from the statistics stored in the system catalogs. However, in many cases such a statistic may not be available (for example, if the aggregate is on a column in a relation that is an intermediate result in a query evaluation plan). We propose a sampling based scheme for determining the selectivity in such cases.
The general problem of accurately estimating the number of groups (and hence the GROUP BY selectivity) is similar to the projection estimation problem. It is fairly complex and has received a lot of attention in the statistics literature [BF93].
However, in our case we only need to decide efficiently whether the number of groups in the relation is small or not because we have a lot of leeway in the middle range where both
the algorithms perform well. This does not require an accurate estimate of the number of
groups (especially when they are large) making the problem significantly simpler than the
general estimation problem. Our scheme is as follows. First the optimizer will decide what
is an appropriate switching point of the algorithms depending on the system characteristics.
A reasonable number of groups for switching may be, say, 10 times the number of processors
available (a small number likely to lie in the middle range). Call this the crossover threshold.
Then, it can use the following algorithm to decide which scheme to use for aggregation.
- sample the relation
- find the number of groups in the sample
- if (number of groups found < crossover threshold)
  - use Two Phase
- else
  - use Repartitioning
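A small sketch (ours) of this decision procedure; the sampling helper and the default sample size (taken from the 2563 figure discussed below) are illustrative.

```python
import random

def choose_aggregation_algorithm(relation_group_keys, num_procs=32,
                                 groups_per_proc=10, sample_size=2563):
    """Decide between Two Phase and Repartitioning by sampling.

    relation_group_keys: sequence of GROUP BY values, one per tuple.
    The crossover threshold follows the text: ~10 groups per processor.
    """
    crossover = num_procs * groups_per_proc
    sample = random.choices(relation_group_keys, k=sample_size)  # with replacement
    groups_found = len(set(sample))
    return "Two Phase" if groups_found < crossover else "Repartitioning"
```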
It can be shown that the number of samples required is fairly small. For example, for
a crossover threshold of 320 (assuming 32 processors and 10 times as many groups) this is
approximately 2563. This is likely to be less than 1% of any reasonably sized relation for small
crossover thresholds.
The probability that we find $G$ groups in a sample (with replacement) of size $S$ when
there are exactly $G$ groups in a uniformly distributed relation is given by the following recurrence
equation.
$$P(g, s) = P(g, s - 1) * \frac{g}{G} + P(g - 1, s - 1) * (1 - \frac{g - 1}{G})$$
where $P(1,1) = 1$ and $P(g, s < g) = 0$. Solving for $P(G, S)$ gives the desired probability. This
equation can be solved quite efficiently using dynamic programming. Note that the probability
does not depend on the size of the relation. For any fixed value of $P(G, S)$, the number $S$ grows
approximately as $G \times \log G$ [ER61]. This ensures that the number of samples required is not
very large even if the crossover threshold is significant.
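The recurrence can be evaluated bottom-up with dynamic programming. The following small sketch (ours) tabulates $P(g, s)$ in place and finds the smallest sample size meeting a target probability; the 0.9 target follows the discussion below.

```python
def samples_needed(G, target=0.9, max_samples=20000):
    """Smallest sample size S (with replacement) such that the probability
    of observing all G equally likely groups is at least `target`."""
    P = [0.0] * (G + 1)
    P[1] = 1.0                                   # P(1, 1) = 1
    for s in range(2, max_samples + 1):
        for g in range(min(s, G), 0, -1):        # in-place update, high g first
            P[g] = P[g] * g / G + P[g - 1] * (1.0 - (g - 1) / G)
        if P[G] >= target:
            return s
    return None

# For a crossover threshold of 320 groups this lands in the vicinity of the
# ~2563 samples quoted in the text.
print(samples_needed(320))
```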
What is an appropriate value of $P(G, S)$ and hence the number of samples? The graph on
the left in Figure 14 illustrates the shape of a typical probability curve, here for $P(320, S)$. It
is evident that we must choose the probability in the upward convex part of the curve so as
to ensure that we are stable in terms of estimates i.e. the estimate will not vary widely with
addition or deletion of a few samples. In other words, the probability of underestimating the
number of groups (e.g. estimating 320 groups when there are 322 groups) will fall rapidly as
exemplified by the right graph. In our case we want to minimize the probability that we will
underestimate the number of groups, since we will never overestimate. In practice
the error would be how often we estimate the number of groups as "small" when it is not so,
e.g. how often we estimate the number of groups to be less than or equal to, say 320, when
in fact there are more groups. In practice, the cost of making a minor mistake is small and
Figure 14 shows that we are not likely to make any major mistakes in estimation. Choosing
$P(\text{crossover threshold, } S) \geq 0.9$ implies that 90% of the time we will guess the number of
groups exactly, and Figure 14 suggests that even in the 10% of the time we underestimate, it
will not be by a significant amount.
All the performance measures reported so far assume that the relation is drawn from a uniform distribution and is evenly declustered across the participating nodes. In the next section we challenge that assumption and see how data skew affects performance of the two approaches.
6 The Effect of Data Skew
The two main forms of data skew that can occur in aggregate processing are the following. First, instead of an approximately equal number of tuples forming each group, the number of tuples forming a group may vary widely. That is, under the uniformity assumption a GROUP BY selectivity of 0.01 implies that each group is formed by about 100 tuples, but in a data skew scenario it is possible that a few groups are formed with, say, 5 tuples while others are formed with 1000 tuples, averaging out to a selectivity of 0.01. Upon repartitioning, such an unequal distribution of group sizes will result in different processors getting different numbers of tuples. We call this skew the selectivity skew. In the second form, the child operator of the aggregate operator on each node produces a different number of tuples. We call this the placement skew. In our case, this implies that initially different nodes have different numbers of tuples.
These two kinds of skew will manifest themselves differently depending on the algorithm. Selectivity skew alone will not affect the performance of the two phase approach as long as the tuples belonging to a group are uniformly generated by the different scan operators. This ensures that each node will get an equal number of tuples, although the number of tuples belonging to a group will vary. However, when the selectivity is high, the selectivity skew results in some groups having a large number of tuples which can be aggregated in phase one of the algorithm, thus reducing work for phase two. Hence, the two phase approach can potentially have better performance for the high skew, high selectivity case. In the repartitioning approach, selectivity skew will result in some nodes getting more tuples to process than the others because all tuples
belonging to a group will be on one node and hence its performance will be adversely affected.

Figure 15: Effect of Selectivity Skew on Performance of the Approaches
Figure 15 shows the performance of the algorithms obtained from the implementation, the selectivity skew being modeled using a Zipf distribution with parameter $\theta = 0.1$, which results in the worst-loaded node getting about two times as many tuples as in the no-skew case. It shows that the two phase approach is not affected significantly, whereas the repartitioning approach does a little worse in all cases where all processors are utilized (i.e. number of groups $\geq$ number of processors). The I/O bottleneck of a single disk, however, minimizes this difference because the CPUs wait for the disks, which finish at about the same time since all nodes still have the same I/O requirements.
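For readers who want to reproduce this kind of workload, one plausible way (ours, not necessarily the authors' generator) to produce selectivity skew is to draw group memberships from a Zipf-like distribution with parameter $\theta$, where $\theta = 0$ is the uniform case.

```python
import random

def zipf_group_sizes(num_groups, num_tuples, theta=0.1):
    """Assign tuples to groups with Zipf-like frequencies: the probability
    of group i is proportional to 1 / i**theta."""
    weights = [1.0 / (i ** theta) for i in range(1, num_groups + 1)]
    assignments = random.choices(range(num_groups), weights=weights, k=num_tuples)
    sizes = [0] * num_groups
    for g in assignments:
        sizes[g] += 1
    return sizes

sizes = zipf_group_sizes(num_groups=1000, num_tuples=100_000)
print(max(sizes), min(sizes))   # skewed: largest group noticeably bigger
```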
Next, we turn to placement skew, which we modeled by generating the database so that one node had twice as many tuples as the other nodes. Placement skew will affect the CPU performance of the two phase approach as all the tuples generated must be processed locally thus increasing the amount of work for the skewed node. The second phase of the algorithm should not be affected by skew as the number of tuples arriving there will be a function of the number of groups and the number of nodes and not the number of tuples in a group or initial placement.
Additionally, in our case, placement skew determines the amount of disk I/O a node has to perform. Hence, both algorithms are equally hit in I/O performance. This is brought out by the implementation results in Figure 16. The placement skew is modeled by having twice as many tuples on the skewed node as in the no-skew case, implying that the worst case I/O cost is twice as much.
In summary, selectivity skew affects the repartitioning algorithm more than the two phase algorithm, while placement skew affects the two phase algorithm more than the repartitioning algorithm. However, over the ranges of skews that we tested, the impact of skew was not significant enough to change the relative performance of the algorithms in any important way.
Figure 16: Effect of Placement Skew on Performance of the Approaches
7 Conclusions
We have shown that aggregate processing, though apparently simple, has trade-offs that are not obvious. We show that each of the two proposed approaches does well in a limited domain determined by the selectivity of the GROUP BY attributes, and that it is easy for an optimizer to use sampling to decide which algorithm to use. We also showed that the estimate of the selectivity need not be very accurate in order to get good performance, but that poor performance will result if we do not use the appropriate algorithm towards the fringes of the selectivity range. We also showed what problems can be caused by data skew in aggregate processing and how it affects the relative performance of the two approaches.
References
Abstract
Shared state access conflicts are one of the greatest sources of error for fine grained parallelism in any domain. Notoriously hard to debug, these conflicts reduce reliability and increase development time. The standard task graph model dictates that tasks with potential conflicting accesses to shared state must be linked by a dependency, even if there is no explicit logical ordering on their execution. In cases where it is difficult to understand if such implicit dependencies exist, the programmer often creates more dependencies than needed, which results in constrained graphs with large monolithic tasks and limited parallelism.
We propose a new technique, Synchronization via Scheduling (SvS), that uses the results of static and dynamic code analysis to manage potential shared state conflicts by exposing the data accesses of each task to the scheduler. We present an in-depth performance analysis of SvS via examples from video games, our target domain, and show that SvS performs well in comparison to software transactional memory (TM) and fine grained mutexes.
Categories and Subject Descriptors D.1.3 [Concurrent Programming]: Parallel programming; D.3.4 [Processors]: Compilers; D.3.4 [Processors]: Run-time environments
General Terms Design, Languages, Measurement, Performance, Reliability
Keywords parallel programming, shared state management, Synchronization via Scheduling, Dynamic Reachability Analysis
1. Introduction
Shared state access conflicts are the cause of the majority of errors in parallel programming. Race conditions and corruption of shared data are common. These bugs can be notoriously difficult to track down as they often manifest rarely, depending on the state of not just one, but several threads of execution. Unfortunately, most existing frameworks don't provide mechanisms to automatically protect shared state. Runtime systems such as the one driving Cilk [12], OpenMP [4] and the venerable pthreads are largely concerned with dispatching code for execution. Systems such as Intel's TBB [6] do provide a vast array of synchronization primitives, including a number of distinct types of mutexes, for the programmer to construct their own state protection. Unfortunately, they contain very little in the way of support for orchestrating these schemes. Software Transactional Memory (STM) [25] does address this problem, and as STM is the closest in effect to our proposed technique, we will discuss it further in the context of our experiments in Section 3.
Very few domains have been as profoundly affected by the multicore revolution as video games. The complexity and high level of interaction of their systems, which often rivals that of operating systems, a nearly inexhaustible demand for better performance, large programming teams and tight deadlines make video games an ideal testbed for parallel techniques. The volume of this software and its commercial appeal would make the problems worth solving even if they were unique to the domain, but any technique developed has applications in many other domains. For these reasons we have chosen to let the needs of this domain drive our research. Our assumptions and choice of experiments reflect this.
Programmers faced with the difficulties of managing the data accesses of a large collection of tasks tend to leave many tasks monolithic, comprised sometimes of thousands of lines of code. The embarrassingly parallel kernels, those without complex state interaction, will generally be dispatched in patterns similar to parallel_for. This has led to the structure of the current generation of video games. Often the work of an entire subsystem or one of its major components will be assigned to a single processing context, co-scheduled with other components that are guaranteed to be conflict free. Interspersed between the execution of these groups will be the execution of the embarrassingly parallel sections and a number of explicit synchronization points. This structure is common in the industry [1]. The lack of parallel width in many phases of execution leaves resources idle, and this approach will not scale as the number of cores increases.
The parallel structure of the program is often represented in the standard task graph model where tasks that have not had a dependency declared between them can be scheduled concurrently. There are cases where ensuring ordering of tasks with dependencies is necessary for correctness. However, there remain many cases where there is not an explicit logical ordering between the tasks and a dependency is declared to prevent two tasks that touch the
same data from running concurrently. This serializing of tasks is necessary even if the two tasks touch the same data rarely.
When a program is executed and tasks are serialized unnecessarily because of unneeded dependencies, parallelism is reduced and performance can suffer. This work proposes a mechanism to correct this deficiency. Our model only requires that the programmer state explicit inter-task dependencies, i.e., those that are required by the program logic. In cases where tasks that are not explicitly dependent on each other may touch the same data, our system automatically inserts a new implicit dependency between them, which will prevent these tasks from running concurrently and corrupting shared state. We use conventional static analysis to determine when implicit dependencies are necessary. During runtime, when we can determine more precisely what data the task will actually access, we detect and remove any overly constraining implicit dependencies – to accomplish this we propose new dynamic analysis techniques. By making the shared state access patterns of each applicable task available to the scheduler, we are able to safely schedule tasks with potential conflicts. We call this technique Synchronization via Scheduling (SvS).
The general concept of SvS is not overly complicated. The chief difficulty lies in utilizing the information provided by the static analysis and refinement without adding much overhead. To achieve the highly desirable performance of 60 frames per second (FPS), a frame must be constructed in just over 16 ms. For optimal parallelization many tasks will have runtimes in the tens or hundreds of microseconds. This strict time budget does not allow for much extra computation. One of the major contributions of this work is the description of algorithms that achieve this high speed organization, and a demonstration that they work in practical contexts.
SvS differs from optimistic techniques such as STM in that the work of evaluating the admissibility of a task is done prior to its execution. Though some of the mechanisms used to realize SvS are similar to those in some STM implementations, this difference means that there are no expensive rollbacks and there is a much smaller requirement for extra bookkeeping during state access. Additionally, STM requires programmers to define atomic transactions while SvS is an automatic technique completely handled by the compiler and runtime components. We will show a comparison between SvS and STM in Section 3.
The rest of the paper is organized as follows. In Section 2 we discuss the model, algorithms and implementation of SvS. Experimental results, drawn from several applications in our domain, are presented in Section 3. In section 4 we will discuss related work and in Section 5 we will conclude and give a short discussion of our future work.
2. SvS Model and Implementation
2.1 Motivation and Overview
Task graph models are a standard pattern for structuring parallelism in programs [20]. A prevailing problem with this model is the lack of automatic shared state management between tasks. Consider an example of skeletal character animation. Typically, multiple animations are applied to the bones of a single character to produce realistic looking motion [2]. For example, to produce a character that is walking and limping we may blend the “walking” and “limping” animations. The mathematical operations performed by these two routines are commutative, so there is no explicit ordering between them. However, it is unsafe to execute these animation routines in concurrent tasks, because they may touch the same bones of the same character. In this case there exists a special type of dependency between tasks, which we term implicit. An implicit dependency exists when there is no logical ordering between the tasks imposed by data or control dependencies, but the tasks may access the same shared state and so it is unsafe to run them concurrently.
Without automatic shared state management, programmers must manage shared state by inserting explicit dependencies where implicit dependencies exist. Protecting shared state via explicit dependencies has two problems. First, this is prone to programmer error, especially when considering dynamic, pointer-based memory accesses. Second, this unnecessarily constrains parallelism in cases where the tasks may incur conflicting accesses of the shared state, but do not actually perform them at runtime.
We address the issue of shared state management in task graph models by introducing a new technique called Synchronization via Scheduling (SvS). SvS provides automatic shared state management by combining static and dynamic analysis to determine if two tasks can potentially access shared state. The result of static analysis is a task graph with dependencies that guarantee the protection of shared state. Dynamic analysis then utilizes run-time information to potentially remove unnecessary dependencies between tasks, allowing for increased parallelism. In this way, SvS determines the set of possible memory accesses a task makes before it is executed and schedules tasks such that no two tasks concurrently access the same memory. In this section, we will provide the details behind the model and implementation of SvS. In section 2.2 we outline the framework for SvS, followed by a brief discussion of relevant background information in sections 2.3 and 2.4. Starting in section 2.5, we will go into detail on the model and implementation of SvS.
2.2 Framework
Figure 1 shows the four main components that comprise the SvS framework: an SvS compatible language, static analysis, dynamic analysis, and the task scheduler. An SvS compatible language allows a programmer to group blocks of code into units called tasks. Programmers can provide a logical ordering between tasks but do not need to manage shared state between them. Beyond providing a task-graph abstraction, an SvS compatible language must be type safe and disallow pointer arithmetic. We describe our prototypical implementation of an SvS compatible language in section 2.4.
At compile time, a program written in an SvS compatible language is passed through a static analysis phase that generates information pertaining to symbols (linguistic abstractions for memory accesses) and task dependencies. This information defines a static task graph which provides an initial scheduling of tasks that ensures the protection of shared state. Information from static analysis is stored for use during dynamic analysis.
We purposely visualize the static analysis in figure 1 as a “black box” because SvS is indifferent to the implementation of static analysis. Static analysis techniques that produce a correct list of task dependencies and a list of all symbols that a task may access are suitable for use within the SvS framework. Static analysis in SvS is formally defined as task dependency analysis in section 2.5; this section also explains how task dependency analysis can be implemented using existing techniques. The classic limiting factor of static analysis when applied to parallelization is that it is limited to compile time information. As a result, it often is forced to create dependencies that are potentially unnecessary, thus restricting parallelism. This problem can be alleviated using dynamic analysis.
Throughout the execution of an SvS program, dynamic analysis maintains dynamic reachability information (i.e. potential memory accesses) for symbols accessed by tasks. As tasks are considered for scheduling, this information is used to generate and compare read/write sets of tasks in order to remove any implicit dependencies that were deemed necessary by the static analysis, but were found to be non-existent when dynamic reachability information...
became available at runtime. We call this process of removing unnecessary static dependencies **refinement**.

Finally, the scheduler respects the set of refined dependencies by ensuring that all concurrently executing tasks have distinct read/write sets (i.e. no implicit dependencies), in effect, performing synchronization via scheduling.
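A minimal sketch (ours, not the paper's implementation) of this admission rule, using the conventional conflict definition in which at least one writer touches data another task reads or writes; the task representation and names below are illustrative.

```python
def conflicts(a, b):
    """True if one task may write something the other reads or writes."""
    return bool(a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & (a["reads"] | a["writes"]))

def can_run_concurrently(candidate, running):
    """The scheduling rule: admit `candidate` only if it conflicts with
    no task that is currently executing."""
    return not any(conflicts(candidate, t) for t in running)

blend_walk = {"reads": {"walk_anim"}, "writes": {"bone_3", "bone_4"}}
blend_limp = {"reads": {"limp_anim"}, "writes": {"bone_4"}}
print(can_run_concurrently(blend_limp, [blend_walk]))  # False: both write bone_4
```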
While the SvS framework is the new contribution of this work, its implementation relies on both new and existing techniques. In particular, our framework allows for the use of established static analysis techniques, but the algorithms used in dynamic analysis and in scheduling are new to this work.
### 2.3 Task Graph Model
In this section, we define the task-graph model that is assumed by the current implementation of SvS. In task-graph based execution, code is divided into discrete units called **tasks** and a **task graph** defines a static scheduling of these tasks through directed edges. If there is an edge \((A, B)\) then the task \(A\), the parent, must complete before \(B\), the child, can be executed. These edges are referred to as **dependencies** and a dependency is satisfied when the parent completes execution. Our task-graph model also includes explicit dataflow where one task, a **producer**, ‘sends’ data to another task, a **consumer**. Two tasks involved in dataflow have a **dataflow dependency**. Currently, our task graph model does not allow the programmer to specify cyclic dependencies.
A task that has no parents or has all dependencies satisfied is considered runnable. Additionally, a consumer task must also have been sent data to be runnable. A task **instance** is a task running on a processor. When a task is runnable, we say that an **instance** of it can be scheduled. Task instances are generated in one of two ways. If the task is a consumer task, then a copy (i.e. instance) is executed for each data item received, and the collection of all instances define a data-parallel operation. We describe these instances as being part of a single data-parallel task. For all other tasks, an instance is generated “statically” at the beginning of the program.
Besides dependencies, there is also an implicit temporal ordering between executions of a task graph in that we execute all tasks in the graph and wait for them to finish before executing the graph again. More formally, if we define the execution of all tasks in a task-graph to be an **iteration**, then iteration \(i\) must complete before \(i + 1\) begins.
The dependencies and constructs described in this section support the task-graph requirements of an SvS compatible language, which we describe in the next section.
### 2.4 CDML
To facilitate writing programs based on the task-graph model outlined in section 2.3 and enable the static analysis required for SvS, we have developed the Cascade Data Management Language (CDML). Because C++ is the standard language for game development, CDML is similar to C++ with a few added annotations and restrictions. Note that CDML is not a requirement for SvS; as described in section 2.2, essentially any language that includes the following features is suitable for SvS: (1) syntax for articulating the task graph model described in section 2.3, (2) type safety, (3) no pointer arithmetic.
Additionally, because the current specification for CDML does not yet support object-oriented programming, we assume no inheritance or polymorphism in our current implementation of SvS, but this is not a requirement. We plan to address inheritance and polymorphism in future work. Because we are not presenting CDML as a contribution of this work, the full syntax and features of CDML will not be discussed here.
The grammar for CDML tasks is shown in figure 2. There are two task types in our current language specification: **transform** and **itemizer**. A transform is just a (static) single instance task. An itemizer is used to implement data-parallel tasks, where multiple instances of the task’s body are executed to process items received at run-time. An instance is created for each item received.
Our system assumes that there is no specific ordering between tasks unless the programmer explicitly specifies a dependency. Explicit (i.e. “ordering”) constraints are expressed in a task's **constraints**. At run-time, explicit constraints specified by the programmer are never broken. In many cases, a programmer does not need to specify an explicit ordering between tasks, because the same outcome will be achieved regardless of the ordering. This is especially the case for video game engines and scientific computing applications where many computations are commutative. The programmer can also specify data-flow dependencies using the **send** constraint.
Programmers do not have to manage shared memory accesses between tasks. Static and dynamic analysis are used to automatically detect when two tasks can access the same memory. In the case where tasks may perform conflicting accesses to the same data, SvS chooses an arbitrary ordering for the tasks and runs them sequentially. Otherwise, tasks can be run concurrently.
We implemented a translator that converts CDML code into C++. The translator also performs the static analysis to detect implicit dependencies, as described in the next section.
```plaintext
task        := task_type task_name ':' constraints? body
task_type   := 'itemizer' | 'transform'
constraints := ( send | receive | explicit )+
body        := '{' statements '}'
```

### 2.5 Static Analysis
We term the static analysis performed in SvS as **task dependency analysis**. The goal of task dependency analysis is to statically find implicit dependencies between tasks – that is, determine whether two tasks (or task-instances) can potentially access the same memory location. The collection of implicit and explicit dependencies define a task graph that ensures the protection of shared state. Because task dependency analysis is essentially a form of dependency analysis, we will present the definition of dependency analysis and derive from it a formal definition for task dependency analysis.
In traditional dependency analysis [9], the fundamental goal is to determine whether a statement \(T\) depends on a statement \(S\). \(T\) depends on \(S\) if there exists an instance \(S'\) of \(S\), an instance \(T'\) of \(T\), and a memory location \(M\) such that:
1. Both \(S'\) and \(T'\) reference \(M\), and at least one reference is a write
2. In the serial execution of the program, \(S'\) is executed before \(T'\)
3. In the serial execution, \(M\) is not written between the time that \(S'\) finishes and the time \(T'\) starts
As mentioned in the previous section, the ordering between tasks in SvS, and thus the ordering of their accesses, is assumed to be commutative unless the programmer enforces an ordering between tasks by inserting explicit dependencies. For the remaining pairs of tasks, we are not concerned with the order in which they execute. Because of this, conditions 2 and 3 are not applicable to SvS. It follows that task dependency analysis is not concerned with whether a dependency is flow-dependent, anti-dependent, or output-dependent. Therefore, task dependency analysis in SvS can be restated as follows: A dependency exists between a task $T$ and a task $S$ if there exists an instance $S'$ of $S$, an instance $T'$ of $T$, and a memory location $M$ such that:
Both $S'$ and $T'$ reference $M$, and at least one reference is a write. A task references $M$ if there exists a statement $X$ in the body of the task that references $M$.
In modern programming languages, a reference to a memory location $M$ might be represented as a scalar variable, array, or pointer. In SvS, we refer to these abstractions for memory locations as symbols. The syntax for a symbol in CDML is provided in figure 3 and mirrors the syntax of C/C++ expressions for array, variable, and member access. Since symbols abstract references to memory, task dependency analysis becomes collecting the symbols in the body of a task and determining if a symbol $x$ in task $S$ can reference the same memory location $M$ as symbol $y$ in task $T$ where at least one of the references is a write. The problem of determining if two symbols can reference the same memory has been thoroughly explored by research in static analysis including pointer analysis [8], array dependence analysis [21], shape analysis [19], and disjoint heap analysis [15].
Because it is not our goal to expand upon work that has already been done in static analysis, our current implementation is very conservative. As a result, our current approach performs the necessary symbol collection but produces a set of dependencies that generally places a dependency between each pair of tasks. In this case, dynamic analysis is exclusively responsible for uncovering parallelism. As will be discussed in sections 2.6.4 and 2.7, this is achieved by collecting run-time information describing the potential memory accesses of the symbols extracted during static analysis in order to “recalculate” (i.e. refine) dependencies. We will show in section 3 that SvS is feasible even with dynamic analysis performing most of the work. However, we hypothesize that more sophisticated static analysis would only reduce the number of dynamic checks (i.e. dependency refinements), thus decreasing run-time overhead. Expanding the role of static analysis and implementing more sophisticated static analysis is a definite part of future work for SvS.
SvS is indifferent to which techniques are used to solve task dependency analysis as long as the output is a set of symbols for each task, and a set of dependencies between tasks that guarantees no two task instances can concurrently access the same memory location (which will be the case if the techniques correctly solve the task dependency analysis problem). Therefore it is not a goal of SvS to expand upon the work that has already been done in static analysis, but rather to address the deficiencies associated with static analysis.
### 2.6 Dynamic Analysis
Due to the limitations of compile-time information, static analysis is often forced to make conservative assumptions. This may result in unnecessary dependencies, thus hindering parallelism. The goal of dynamic analysis is to remove such dependencies at run-time. To achieve this, we use information available at run-time to generate more precise read/write sets for tasks. Then, as task instances are considered for scheduling, we efficiently compare their read/write sets to see whether a dependency actually exists (a process we call refinement) and subsequently schedule non-dependent tasks to execute concurrently.
To calculate read/write sets, we monitor the connectivity and reachability properties of memory objects, our online abstraction for memory accesses, to determine the set of all addresses that can possibly be reached by a memory object. We call this set of accesses the reachability of a memory object and its connectivity properties are represented as a reachability graph (section 2.6.1). We use dynamic reachability analysis (sections 2.6.4 and 2.6.3) to maintain dynamic changes to reachability graphs as memory objects are created and linked together. Since symbols reference memory objects at run-time, this enables us to determine the reachability of symbols accessed by tasks and therefore more precise sets of potential reads and writes.
We will also introduce signatures (section 2.6.2), which are used to compactly represent read/write sets and efficiently determine which tasks have non-overlapping memory accesses. Tasks with non-overlapping read/write sets can then be scheduled concurrently. Two new scheduling algorithms that efficiently accomplish this goal will be presented in section 2.7.
#### 2.6.1 Memory Objects, Links, Reachability and Reachability Graphs
We use the notion of a memory object to abstract memory accesses in SvS. In the simplest case, a memory object is a single primitive (e.g. int) and provides a direct access to memory. In general, memory objects may contain one or more primitives or other memory objects. Primitives and/or other memory objects that compose it are called its members. Memory objects may also contain links. A link ‘points-to’ a child memory object, which allows the parent memory object that contains the link to access all the memory addressable by the child memory object. The difference between members and links is that members are static – they cannot be removed from the object and their memory addresses within the enclosing object cannot be modified, whereas links are dynamic. The child that a link points to can be changed at any time, thus changing the set of memory addresses that a memory object can access. Links can also exist on their own, in that they do not need to be declared as a member of a memory object. Therefore, SvS tries to solve the problem of determining what memory objects a task can possibly access before the task runs, where memory objects can be accessed directly through members and indirectly through links.
We formalize these definitions by representing the problem as a graph. Memory objects represent nodes in the graph. A member edge is a directed edge defined as $(X, Y)$ where memory object $Y$ is a Member of $X$. A ‘link’ edge is a directed edge defined as $(A, B)$ where $A$ is a memory object that contains a link $L$ that points to memory object $B$. We say that $A$ is the parent of $L$ and $B$ is its child. If a link does not have a parent, it is essentially just an alias for the memory object it points to. Changing $L$ to point to a different memory object $C$ effectively removes the edge $(A, B)$ and adds the edge $(A, C)$. This graph represents the dynamic reachability of the memory object and is called the reachability graph.
```plaintext
symbol     := identifier accessor*
identifier := [a-zA-Z_][a-zA-Z0-9_]*
accessor   := '->' identifier
            | '[' expression ']'
```
Figure 3. CDML symbol syntax
Given any node (i.e. memory object) in the graph, the set of memory addresses that can be reached (i.e. accessed) by the node is called its reachability and is defined as the set of all leaf nodes reachable by performing a breadth (or depth) first search starting at the given node. Because leaf nodes are primitives, they directly correspond to addresses in memory, and thus define a set of memory addresses. Figure 4 provides an example of a graph that would be defined by a typical binary tree. The leaf nodes inside the dashed boundary represent the static reachability (unique, static set of memory accesses) of the root node of the tree.
By keeping track of the structure of the reachability graph for each memory object (a process we call dynamic reachability analysis), we are able to dynamically monitor reachability information providing significant insight into the potential memory accesses of tasks. This is particularly useful when dealing with dynamic data structures that allow for ambiguous accesses to memory. Implementation of reachability graphs and pertinent algorithms are described in the next sections.
#### 2.6.2 Signatures: Representing Memory Accesses
Sets of memory accesses in SvS are represented as signatures: constant length bitstrings. When two signatures have the same bit set, it means they represent access to the same memory location (or memory object) and are said to overlap. To build a signature, id’s (i.e. memory object id’s) representing reads or writes are passed to a hash function to determine the bit to set in the signature. Signature overlap is checked using simple and efficient bit-wise operations.
Note that signatures are effectively Bloom filters [11] using a single hash function. Also, because signatures are constant in length and use hashing, false positives can occur when comparing signatures. This does not affect correctness, and its impact on performance can be greatly reduced by using larger signatures at negligible computational cost, as will be discussed in section 3.1.
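As a concrete illustration, a minimal C++ sketch of such a signature follows; the width, the word-based storage, and the use of std::hash as the single hash function are illustrative choices rather than the paper's actual implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>

// A fixed-length bitstring: each recorded memory-object id sets one bit
// chosen by a single hash function (effectively a Bloom filter).
template <std::size_t BITS = 2048>
class Signature {
public:
    // Record an access by the memory object with the given id.
    void add(std::uint64_t objectId) {
        std::size_t bit = std::hash<std::uint64_t>{}(objectId) % BITS;
        words_[bit / 64] |= (1ULL << (bit % 64));
    }

    // Merge another signature (bitwise or), e.g. when combining the
    // reachability of several symbols into one task signature.
    void merge(const Signature& other) {
        for (std::size_t i = 0; i < words_.size(); ++i)
            words_[i] |= other.words_[i];
    }

    // Two signatures overlap if any bit is set in both; this may yield
    // false positives but never false negatives.
    bool overlaps(const Signature& other) const {
        for (std::size_t i = 0; i < words_.size(); ++i)
            if (words_[i] & other.words_[i]) return true;
        return false;
    }

    void clear() { words_.fill(0); }

private:
    std::array<std::uint64_t, BITS / 64> words_{};
};
```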
#### 2.6.3 Implementing Reachability Graphs and Dynamic Reachability Analysis
As discussed in the previous section, there are two main components to a reachability graph: memory objects and links. The goal of the implementation for these structures is to provide the metadata and meta-functions necessary to efficiently maintain reachability graphs and extract the reachability of a memory object.
**Memory Objects** Memory objects are implemented as classes that inherit from a MemoryObject class template. The MemoryObject class stores the id of a memory object that is generated inside the class’s constructor.
The getSignature function of the MemoryObject class returns a signature representing the reachability of a memory object. A straightforward way to implement this function is to simply perform a breadth first search by calling the getSignature function of each member, or the getSignature function of the memory object a link points to, and combine the returned signatures using a bitwise-or operation. For large reachability graphs, a breadth first search will be too expensive. Instead, we have implemented a more efficient method that utilizes the implementation of links as described in the next section.
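A minimal sketch of that straightforward breadth-first version is shown below, building on the Signature sketch above; the (non-template) class layout and the member/link containers are illustrative stand-ins, and a visited set is added so that cyclic graphs terminate.

```cpp
#include <cstdint>
#include <queue>
#include <unordered_set>
#include <vector>

class MemoryObject {
public:
    explicit MemoryObject(std::uint64_t id) : id_(id) {}

    // Naive getSignature: breadth-first search over members and link
    // children, or-ing together the static reachability of every node found.
    Signature<> getSignature() const {
        Signature<> sig;
        std::queue<const MemoryObject*> frontier;
        std::unordered_set<const MemoryObject*> visited;
        frontier.push(this);
        visited.insert(this);
        while (!frontier.empty()) {
            const MemoryObject* node = frontier.front();
            frontier.pop();
            sig.add(node->id_);                                 // this node's own memory
            auto enqueue = [&](const MemoryObject* next) {
                if (next != nullptr && visited.insert(next).second)
                    frontier.push(next);
            };
            for (const MemoryObject* m : node->members_) enqueue(m);       // direct accesses
            for (const MemoryObject* c : node->linkChildren_) enqueue(c);  // indirect accesses
        }
        return sig;
    }

private:
    std::uint64_t id_;
    std::vector<const MemoryObject*> members_;
    std::vector<const MemoryObject*> linkChildren_;
};
```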
**Links** Links are implemented in SvS as a smart pointer template class. Since a link is just an edge in the reachability graph, the smart pointer representing the link stores pointers to a parent and child memory object. The child pointer represents the memory object that a link “points-to” whereas the parent pointer denotes the memory object that the link is a member of. A null parent represents the case where a link is just a reference or alias to the memory object it points to and is not considered to be an edge in the reachability graph.
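A skeletal sketch of such a smart pointer follows; the hook invoked from the overloaded assignment operator stands in for the dynamic reachability analysis of algorithm 1 below, and the names are illustrative rather than the paper's actual API.

```cpp
// Placeholder for algorithm 1: update reachability bookkeeping for the new
// edge (parent, child). Left empty in this sketch.
inline void onLinkAssigned(MemoryObject& /*parent*/, MemoryObject& /*child*/) {}

// T is expected to be a MemoryObject subclass.
template <typename T>
class Link {
public:
    // A null parent means the link is a free-standing alias rather than an
    // edge in the reachability graph.
    explicit Link(MemoryObject* parent = nullptr) : parent_(parent) {}

    // Re-pointing the link replaces the edge (parent, old child) with
    // (parent, new child); this is where dynamic reachability analysis runs.
    Link& operator=(T* newChild) {
        child_ = newChild;
        if (parent_ != nullptr && child_ != nullptr)
            onLinkAssigned(*parent_, *child_);
        return *this;
    }

    T* operator->() const { return child_; }
    T& operator*()  const { return *child_; }
    T* get()        const { return child_; }

private:
    MemoryObject* parent_ = nullptr;
    T* child_ = nullptr;
};
```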
We now discuss how smart pointers are used to calculate and maintain the reachability of a memory object. The reachability of a memory object is changed when we change the child node of a link edge. This is equivalent to link assignment, and thus by overloading the assignment operator for the smart pointer class, we can detect a change in reachability and perform the necessary updates. The algorithm that performs these updates is called dynamic reachability analysis. First, consider the situation where each memory object stores a signature that accurately represents its reachability. Initially, the reachability of a memory object is just the signature created from its object id (i.e. the signature representing its static reachability). When a link L is changed to point to a memory object B, it means that the memory object A = parent(L) can now access all memory objects reachable by B. It also means that all memory objects that can reach A can also reach memory objects reachable by B. Therefore, during link assignment, we could perform a reverse breadth first search starting at A, recursively updating the signature of each node to include the signature for the reachability of B. However, we want to reduce the cost of this breadth first search. To do this, we introduce the concept of master nodes.
A master node M represents a bounded set of reachable nodes, i.e. a set of nodes X such that a path $M \rightsquigarrow X$ exists. We call the set of nodes X the domain of M. A master M maintains a signature that accurately represents its reachability; this signature is shared by all nodes in the domain of M. M is also responsible for propagating changes in the reachability of nodes in its domain to all other masters that can reach M.
All other nodes are called internal nodes. An internal node X can belong to multiple domains and keeps track of which masters (i.e. domains) it belongs to. X is responsible for notifying each of its masters when its reachability changes. We call the first domain an internal node is assigned to its primary master. An internal node belonging to multiple domains has multiple signatures that conservatively (but correctly) represent its reachability, so we arbitrarily
choose the signature of the primary master to represent its reachability.
By introducing master and internal nodes, we essentially establish a tree of masters that is smaller than the original reachability graph and only maintain precise reachability information for masters. This decreases the cost of the reverse breadth first search required to monitor reachability. Under this implementation, the getSignature function of a memory object just returns the signature of its primary master. We are currently investigating more efficient methods for maintaining dynamic reachability information, but algorithm 1 provides our current implementation of dynamic reachability analysis, including how masters are created. In this algorithm, Node.owningMasters is the list of masters to whose domains Node belongs. Master.notifyMasters is the list of masters that Master must notify when its reachability changes, because those masters can reach the nodes in Master’s reachability. Note that our algorithm also performs cycle detection in the reachability graph, but we omit the pseudo-code due to space limitations.
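The per-node metadata implied by this description and by algorithm 1 can be pictured roughly as follows; the field names mirror the pseudo-code, while the container choices and the recursive propagation are illustrative (and, as in the paper's listing, cycle handling is omitted).

```cpp
#include <cstddef>
#include <vector>

struct MasterMeta;

// Metadata kept for every node (memory object) in the reachability graph.
struct NodeMeta {
    std::vector<MasterMeta*> owningMasters;  // domains this node belongs to
    MasterMeta* primaryMaster = nullptr;     // first domain it was assigned to
};

// Extra metadata for master nodes.
struct MasterMeta : NodeMeta {
    Signature<> reachability;                // precise reachability, shared by the domain
    std::size_t domainSize = 0;              // bounded by the parameter K in algorithm 1
    std::vector<MasterMeta*> notifyMasters;  // masters that can reach this master's domain

    // Fold new reachability into this master and propagate it to every
    // master that can reach nodes in this domain.
    void updateReachability(const Signature<>& delta) {
        reachability.merge(delta);
        for (MasterMeta* m : notifyMasters)
            m->updateReachability(delta);
    }
};
```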
#### 2.6.4 Implementation of Dynamic Refinement
The goal of dynamic refinement is to remove unnecessary implicit dependencies created by static analysis. To this end, the dynamic refinement process determines which memory objects the task may actually access by using the reachability of the referenced symbols and retrieving the corresponding signatures. The implementation of generating signatures for tasks during refinement is shown in algorithm 2.
Our algorithm is only concerned with the reachability of global symbols or received symbols (those that were received as an argument) (see line 4). Symbols that are local to the task are not of concern since they are invisible outside of task boundaries, unless they alias global or received symbols. Our implementation of static analysis keeps track of aliasing, so the potential shared accesses of local symbols would be represented as the reachability of the corresponding global or received objects.
Note that the signature generated by our algorithm is guaranteed to represent all possible memory accesses that a task will make during execution, even if the task performs link assignment, i.e., changes the reachability of a memory object. This is because we account for the reachability of all memory objects a task can access, and link assignment just changes the reachability of one memory object to include the reachability of another memory object, and thus does not change the cumulative reachability of all the memory objects in the task.
Each task in SvS has a makeSignature function that implements algorithm 2; the code for makeSignature is generated by the translator using symbols collected during static analysis. As link assignments occur during program execution, dynamic reachability analysis maintains signatures representing reachability (possible memory accesses) of memory objects. Because makeSignature builds a composite signature of symbols (which reference specific memory objects at run-time), the signature returned represents a description of memory objects that a task can access at that time. Therefore, the process of refining a dependency between two tasks is basically just calling makeSignature for each task and comparing the resulting signatures to see if a dependency in fact exists. By performing this check, we can dynamically re-calculate dependencies between tasks. This process is only performed for implicit dependencies. If an explicit dependency was specified by a programmer, then this dependency will not be removed.
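Putting these pieces together, refining an implicit dependency amounts to the following check; the Task layout is an illustrative stand-in for the translator-generated code and reuses the sketches above.

```cpp
#include <vector>

struct Task {
    // Non-local (global or received) symbols collected by static analysis;
    // at run-time each references a concrete memory object.
    std::vector<const MemoryObject*> nonLocalSymbols;

    // Corresponds to the generated makeSignature (algorithm 2): combine the
    // current reachability of every non-local symbol.
    Signature<> makeSignature() const {
        Signature<> s;
        for (const MemoryObject* obj : nonLocalSymbols)
            s.merge(obj->getSignature());
        return s;
    }
};

// An implicit dependency between two tasks survives refinement only if their
// possible read/write sets may overlap; explicit dependencies are never dropped.
bool implicitDependencyRemains(const Task& a, const Task& b) {
    return a.makeSignature().overlaps(b.makeSignature());
}
```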
Also note that false positives can occur during signature comparison. Besides the false positives caused by using signatures, there are two additional causes for false positives. The first is due to the conservative assumption that a task accesses the entire reachability of a memory object. The second is due to multiple memory objects sharing the signature of a master node, as described in section 2.6.3.
While refinement is conceptually a separate component in the SvS framework, its implementation is integrated with scheduling, as described in the next section.
Algorithm 1: Link Assignment
Input: A link lhs with parent memory object A, and a link rhs whose child is B
Output: New edge (A, B)

```plaintext
begin
  if A.owningMasters = ∅ then
    /* A is its own master */
    A.owningMasters.add( A )
    A.domainSize = 0
  end
  if (A.primaryMaster.domainSize < K) or not(B.owningMasters = ∅) then
    if B.owningMasters = ∅ then
      A.primaryMaster.domainSize++
    end
    foreach a ∈ A.owningMasters do
      foreach b ∈ B.owningMasters do
        if b.notifyMasters.add( a ) then
          a.updateReachability( getSignature(B) )
      end
    end
  else
    B.owningMasters.add( B )
    foreach a ∈ A.owningMasters do
      B.notifyMasters.add( a )
    end
  end
end
```
Algorithm 2: Run-time calculation of a signature to represent all possible memory accesses that a task will make
Input: Task T
Output: Signature S

```plaintext
1 begin
2   Let L_T = symbols(T)
3   foreach symbol ∈ L_T do
4     if symbol is not local then
5       S += getSignature(symbol)
6     end
7   end
8   return S
9 end
```
### 2.7 Scheduling Tasks
The key job of the task scheduler is to efficiently dispatch tasks with non-overlapping signatures to be executed concurrently. This effectively ensures that each task’s set of refined dependencies is respected.
Note that up to this point, we have discussed the process of refinement as comparing signatures between tasks in order to potentially remove dependencies. However, in cases where static analysis generates “many” dependencies (which is currently the case in our system), performing pair-wise comparisons between tasks for each dependency as described in section 2.6.4 may not be very efficient. As mentioned in section 2.5, our current static analysis is extremely conservative and essentially ends up placing a dependency between each pair of tasks/task instances. In this case, refinement would be faced with approximately \(\binom{T}{2}\) comparisons, where \(T\) is the number of task instances, which can be large when dealing with data-parallel tasks. Therefore, instead of removing static dependencies, the scheduler essentially ignores this information and uses tasks’ signatures (algorithm 2) to efficiently determine which task instances can be executed together concurrently. This is particularly useful for data-parallel tasks; data-parallel task instances are generated dynamically, making it difficult for static analysis to generate “meaningful” or “efficient” dependencies (e.g. if the operation is not obviously embarrassingly parallel, static analysis may just end up serializing all potential instances of the data-parallel task). In future work, we intend to incorporate dependency information in order to reduce the number of dynamic checks required to perform dynamic refinement and scheduling of tasks.
We designed and implemented two scheduling algorithms, which we present next. Due to space constraints we omit the pseudo-code for these algorithms and provide only textual descriptions.
#### 2.7.1 Generations
The goal of generations scheduling is to create groups of task instances such that no two instances have overlapping signatures; we call these groups generations. A single thread is elected to build generations. This thread tries to add a task to a generation in delayList, where each generation has a signature representing all tasks currently in the generation. A task can be added to a generation if the task’s signature does not overlap with the generation’s signature. If a task cannot be added to any of the generations in delayList, a generation from delayList is released for processing and a new generation is added to delayList. The task is then added to this new generation and the signature of the new generation is initialized. Worker threads concurrently process tasks in a generation and ensure that tasks from different generations are never run concurrently by waiting for all threads to complete before advancing to the next generation in scheduleList.
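A simplified sketch of the generation-building loop run by the elected thread might look as follows, reusing the Task and Signature sketches from section 2.6; worker-thread management is omitted and the choice of which generation to release (the oldest here) is illustrative.

```cpp
#include <deque>
#include <utility>
#include <vector>

struct Generation {
    Signature<> combined;         // union of the signatures of all tasks in the generation
    std::vector<Task*> tasks;
};

class GenerationsBuilder {
public:
    void submit(Task* task) {
        Signature<> sig = task->makeSignature();
        // Try to place the task into an existing generation it does not conflict with.
        for (Generation& gen : delayList_) {
            if (!gen.combined.overlaps(sig)) {
                gen.combined.merge(sig);
                gen.tasks.push_back(task);
                return;
            }
        }
        // No compatible generation: release one for processing and open a new one.
        if (!delayList_.empty()) {
            scheduleList_.push_back(std::move(delayList_.front()));
            delayList_.pop_front();
        }
        Generation fresh;
        fresh.combined = sig;
        fresh.tasks.push_back(task);
        delayList_.push_back(std::move(fresh));
    }

    // Workers drain this list one generation at a time, waiting for each
    // generation to finish before starting the next (not shown here).
    std::deque<Generation>& scheduleList() { return scheduleList_; }

private:
    std::deque<Generation> delayList_;
    std::deque<Generation> scheduleList_;
};
```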
#### 2.7.2 Progressive
Progressive scheduling attempts to execute tasks/task-instances as soon as possible without violating dependencies. To do this, we maintain a signature, workingSig, that represents all tasks/task-instances currently running. For each task, a signature (currentSig) is created and atomically compared to workingSig. If there is a conflict (i.e. the signatures overlap), workingSig is not updated and the task is put back onto the queue for later execution. Otherwise, no dependencies exist between the current task and any currently executing tasks, so workingSig is atomically updated to include currentSig and the task is dispatched.
Because it is not possible to “subtract” from signatures when a task is completed, workingSig will eventually become stale. This does not affect correctness, but it can affect performance in the form of false positives. To address this issue, we also keep track of the total number of signature updates and of consecutive conflicts. If either of these values reaches a threshold, we wait for all worker threads to finish executing tasks and then reset workingSig and all flags.
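A sketch of the progressive check-and-update step is shown below; the paper performs the comparison and update atomically, whereas this sketch simply serializes them with a mutex, and the reset thresholds and worker-drain hook are illustrative placeholders.

```cpp
#include <cstddef>
#include <mutex>

class ProgressiveScheduler {
public:
    // Returns true if the task may be dispatched now; false means it
    // conflicted with a running task and should be re-queued.
    bool tryDispatch(const Task& task) {
        Signature<> current = task.makeSignature();
        std::lock_guard<std::mutex> lock(mutex_);
        if (workingSig_.overlaps(current)) {
            ++consecutiveConflicts_;
            maybeReset();
            return false;
        }
        workingSig_.merge(current);
        ++updates_;
        consecutiveConflicts_ = 0;
        maybeReset();
        return true;
    }

private:
    void maybeReset() {
        // Signatures cannot be "subtracted", so workingSig_ goes stale; once it
        // has absorbed too many updates or caused too many back-to-back
        // conflicts, drain the workers and start from scratch.
        if (updates_ > kMaxUpdates || consecutiveConflicts_ > kMaxConflicts) {
            waitForWorkersToFinish();    // illustrative stub
            workingSig_.clear();
            updates_ = 0;
            consecutiveConflicts_ = 0;
        }
    }

    void waitForWorkersToFinish() {}     // stub for the sketch

    static constexpr std::size_t kMaxUpdates = 1024;    // illustrative threshold
    static constexpr std::size_t kMaxConflicts = 16;    // illustrative threshold

    std::mutex mutex_;
    Signature<> workingSig_;
    std::size_t updates_ = 0;
    std::size_t consecutiveConflicts_ = 0;
};
```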
### 2.8 Ensuring Correctness
It is important to underscore that SvS always generates a correct parallelization of the code written in CDML. The first step of this is the static task dependency analysis of the code, which builds a task graph that may contain unnecessary dependencies, but guarantees that shared memory accesses are protected. At run-time, SvS will dynamically recalculate dependencies and schedule tasks to prevent conflicting memory accesses between task-instances at runtime, enabling a greater degree of parallelism while still ensuring correctness.
### 3. Evaluation
Video games are a collection of tightly integrated systems (rendering, gameplay, physics, simulation, AI, animation, audio, user input, networking, GUI, etc.) [1] that operate in concert on a rapid and repetitive timeline. Given the significant amount of code involved in a full game engine, studying one in its entirety is a difficult proposition. The complexity of commercial game engines and their attendant tool chains and development environments means that even building the project can be a daunting task. Converting an engine from the traditional sequential model into a modern task-based model is thus almost insurmountable and is rarely attempted, even in industry, where it is usually preferable to re-implement from the ground up. It is therefore necessary to isolate a particular facet or subset of features in order to study the effects of a particular technique. Accordingly, we evaluate SvS using a collection of existing benchmarks and real applications.
We present two game-based experiments. First, Cal3D [2] is a third-party open-source skeletal animation engine used in several video games. Chosen because it has a relatively compact and clean code base, Cal3D represents typical computations performed in modern game engines. Second, QuakeSquad, our own video game benchmark, focuses on spatial partitioning, entity management, AI and managing numerous agents.
While not strictly game related, we also present three benchmarks from the PARSEC suite [10]: Canneal, Fluidanimate and Blackscholes. We chose these benchmarks because PARSEC is a well known and respected benchmark suite and will help put our results in context.
To provide an evaluation of the primary parameters and costs associated with SvS, we developed micro-benchmarks and several experiments which are presented in the next section.
All SvS tests were written in CDML and executed using the Generations scheduling algorithm, which our initial testing shows performs slightly better than Progressive. Further optimization of these algorithms is future work. For the sake of comparison, we also parallelized Cal3D and QuakeSquad using Intel TBB 3.0 [6] and software transactional memory (STM) using the Dresden TM Compiler [14] and TinySTM++. In each case we found that the encounter time locking (ETL) algorithm performed the best for STM. The PARSEC benchmarks we used were already available parallelized with pthreads and in some cases TBB. Our experiments were run on a machine with two Intel Xeon E5405 chips with four cores each. Each pair of cores shares a 6MB L2 cache, for a total of 12MB per chip.
### 3.1 SvS Overhead
In this section, we provide an evaluation of the primary parameters and costs associated with SvS. SvS has two main run-time costs: false positives and the absolute cost of performing dynamic reachability analysis during link assignment. The key parameters governing these associated costs are signature and master domain sizes. In the following sections, we break down our analysis into two categories: signatures and dynamic reachability analysis.
#### 3.1.1 Signatures
As mentioned in section 2.6.2, false positives can occur during signature comparison, potentially limiting parallelism. We define parallel width to be the number of tasks that are able to execute concurrently at a given time. In the simplest case where a task accesses a single memory object, using signatures limits the
theoretical maximum parallel width to the size (in bits) of the signature.

To demonstrate how signature size affects parallel width, we have designed an experiment consisting of a single producer task that sends unique memory objects to a data-parallel consumer task. The consumer task simply writes to a field of a received object. Note that the objects sent by the producer are single memory objects with no links as members (i.e. their reachability is static). Therefore, when an object is queried for its reachability, it just returns a signature representing its static reachability: a signature with a single bit set by hashing the id of the memory object. This means that there will be no false positives due to master nodes or conservative assumptions during refinement. Therefore, since all objects are unique, any detected conflicts are strictly due to false positives caused by signature size.
Figure 5 provides the average parallel width (y-axis) for varying signature sizes (x-axis) when the producer sends 128,000 objects. (This number was chosen to reflect the number of particles involved in modern fluid dynamics simulations). Note that because we use the generations algorithm, the parallel width at any given time is the size of the currently executing generation. Therefore to measure parallel width, we just record the sizes of each generation. The average parallel width was calculated over 100 executions of the producer and consumer.
Note that for all signature sizes, we (approximately) achieve the theoretical maximum parallel width and therefore the graph shows a linear increase in parallel width as signature size increases. This demonstrates that when conflicts occur, the generations algorithm is often successful in finding a generation in delayList with a signature that does not conflict with the current object.
It is also important to note that the computational cost of increasing signature size is negligible. We have experimentally determined the cost of setting a bit to be about 10 cycles, and the cost of checking overlap on a 64-bit machine to be about \( \frac{n}{10} \) cycles, where \( n \) is the number of bits in the signature. This cost is further minimized by the fact that some signature operations can happen concurrently with executing tasks. Finally, the bitwise operations used when comparing/calculating signatures are prime candidates for vectorization.
Because parallel width increases linearly with signature size and the computational cost of increasing signatures is small, the overall cost of using signatures does not have a significant impact on the performance of SvS.
#### 3.1.2 Dynamic Reachability Analysis
Dynamic reachability analysis has two primary costs associated with it that contribute to the overhead of SvS. The first cost is the absolute cost of dynamic reachability analysis, i.e. performing a link assignment. The second cost is false positives that occur due to memory objects in a domain sharing the same reachability signature: the signature of the master node representing that domain. Any false positives will in turn affect parallel width.
In general, absolute cost and parallel width are affected by the size (number of memory objects and links) and shape (i.e. layout/connectivity) of reachability graphs. In the case of absolute cost, larger reachability graphs potentially (although not necessarily) lead to more expensive reverse breadth first searches during link assignment. Also, since memory objects share the signature of a master and the reachability of a master is greater than the reachability of its successors, the larger the graph, the larger the potential for false positives due to sharing master-node signatures. The effective size of reachability graphs is regulated by the size of a master node’s domain: the larger the domain, the fewer the master nodes in a reachability graph.
The following experiments demonstrate how absolute cost and parallel width are affected by the size of a reachability graph and the size of master domains. Because dynamic reachability analysis is also affected by the shape of reachability graphs, it is important to give consideration to the data-structures that we used for these experiments. The micro-benchmark that we implemented builds a binary space partitioning (BSP) tree of depth \( d \). BSP trees are commonly used data-structures in computer graphics algorithms and are generated by continuously bisecting a space and creating nodes to represent each resulting bisection. It is also common for the leaves of a BSP tree to store pointers to all the objects (e.g. game entities or polygons) that are located in the space represented by each leaf. Therefore each leaf also contains a linked list of objects (in our case game entities). If the spaces represented by leaves are small enough, each leaf will likely point to one or zero objects. The entities pointed to by leaves are also stored in a global linked list and each entity contains a list of “items”.
To simulate the assignment of entities to partitions represented by the leaves of a BSP tree, the producer sends out \((\text{leaf}, \text{entity})\) pairs and the consumer performs the associated link assignment, along with synthetic work. The \((\text{leaf}, \text{entity})\) pairs sent by the producer ensure that each entity is assigned to a unique leaf. In this case, no synchronization is actually required to protect the assignment of the entity to the leaf.
Using this micro-benchmark, we perform three experiments, which respectively demonstrate how absolute cost, parallel width, and overall overhead varies as the number of memory objects, and the size of domains change. In all experiments, we demonstrate results for approximately 20,000 \((d = 10, \text{entities} = 1000)\) and 40,000 \((d = 11, \text{entities} = 2000)\) total memory objects.
**Absolute Cost** For absolute cost, we measured the time it takes a consumer to perform a link assignment under varying domain sizes. The results are shown in figure 6, with the cost in microseconds on the y-axis and domain sizes (maximum number of objects per domain) on the x-axis. Figure 6 demonstrates that as domain sizes increase, the cost decreases from about 7.8µs to 4.7µs for 20,000 objects and from about 8.5µs to 4.9µs for 40,000 objects. There is also a slight overall increase (4%-7%) in cost going from 20,000 objects to 40,000 objects. Therefore, domain size appears to have a more significant effect on cost than the size of reachability graphs.
Note that it is important to put the absolute cost of dynamic reachability analysis into perspective. For example, acquiring a mutex lock (that does not actually protect any code) can take anywhere from a hundred cycles to as much as 20 microseconds, depending on the level of contention. The cost of dynamic reachability analysis (and SvS in general) is not affected by the amount of contention/sharing in an application. Also, although we are paying a cost during link assignment, SvS does not pay the cost of conflict resolution paid by other techniques such as TM. In the following sections we demonstrate, with real applications, that the benefits produced by SvS outweigh its costs.
Figure 6. Cost of link assignment under varying domain and reachability graph sizes.
Figure 7. Parallel width under varying domain and reachability graph sizes.
Figure 8. Overall run-time overhead (normalized) of the consumer for varying domain and reachability graph sizes.
**Parallel Width** Figure 7 demonstrates the change in parallel width (y-axis) as we increase domain sizes (x-axis). We used a signature size of 8192 and measured the parallel width as described in section 3.1.1. Here we see that parallel width is dramatically affected by the size of master domains. As master domains increase, more memory objects share the same signature and the number of master nodes decreases. As the number of master nodes decreases, their respective reachability increases, thus increasing the chances of conflict between the reachability of master nodes. This accounts for the dramatic decreases in parallel width demonstrated by both curves in figure 7. As in the previous section, the size of the reachability graph does not appear to have a significant effect on parallel width.
**Overall Overhead** For this experiment, we again used a signature size of 8192. Also, because no sharing actually occurs, we can compare the run-times of our system with SvS enabled and disabled in order to get a worst-case overhead for SvS. This overhead not only includes the cost of false positives and link assignment, but also any costs associated with refinement and scheduling, thus providing an overall worst-case cost of the dynamic analysis performed at run-time.
Figure 8 demonstrates the overall run-time (y-axis, normalized to the number of (leaf, entity) pairs sent by the producer) for different domain sizes (x-axis). We see that there is essentially no change between domain sizes 2 and 10, since the decrease in absolute cost is negated by the decrease in parallelism when domain sizes increase. After a domain size of 10, we see a sharp increase due to the sharp decrease in parallelism that figure 7 demonstrated. The dotted-line labeled “baseline” is the run-time of the benchmark when SvS is disabled. This means that the overall overhead of SvS, for a domain size of 2, is about 5% for 20,000 objects and about 6% for 40,000 objects.
#### 3.1.3 Discussion
One crucial characteristic of SvS is that its overhead is not dependent on the amount of sharing in the system. Rather, it depends on a few internal parameters and, more predominantly, the size and shape of data structures and their resulting reachability graphs. This is fundamentally different from existing techniques, where performance decreases as the amount of sharing increases (e.g., contention over shared locks, cost of transaction aborts). In fact, since SvS knows the memory accesses of tasks before they execute, it can mitigate sharing conflicts by grouping together non-conflicting tasks. This is what the generations algorithm accomplishes by using look-ahead. This is an important distinction between SvS and existing techniques. We demonstrate in the next sections that this distinction leads to SvS being able to perform as well as, or better than, several existing techniques, with the added benefit that it performs shared state protection automatically.
### 3.2 PARSEC
PARSEC is a parallel benchmark suite designed to represent state-of-the-art parallel workloads [10]. While the majority of these benchmarks do not need SvS, we converted Fluidanimate and Canneal which do have shared state conflicts. We also converted Blackscholes, which has no conflicts, in order to show the performance of SvS even when not required.
Blackscholes is a benchmark from the financial domain which calculates prices for stock options. Option prices can be calculated independently from one another and the results are stored in an array. SvS is used to protect the array from having the same array slot written to simultaneously.
Fluidanimate, as the name implies, performs fluid simulation. The existing parallel implementation divides the 3D space of fluid cells into partitions. During an update, a cell only needs to modify the values of adjacent cells. Therefore the internal cells of a partition can be processed without any synchronization. Cells on a common border, ghost cells, require locking before being modified. In the SvS implementation, ghost cells are protected from race conditions automatically. For the SvS version we used a number of partitions much larger than the number of cores; SvS is then able to process partitions that do not share ghost cells in parallel.
The Canneal benchmark is a place-and-route simulation that uses simulated annealing to minimize the routing cost on a chip. The algorithm iteratively finds a better routing by picking two elements at random and swapping them if this is determined to be beneficial. A third-party TBB implementation for this benchmark was not available. The pthread implementation uses a construct called an atomic pointer in order to swap two elements, relying on compare-and-swap (CAS) operations to ensure atomicity. The implementation purposefully allows data races to occur [10]; however, the algorithm is designed to recover from those race conditions. We replace the use of atomic pointers with SvS in order to provide a safe way of swapping the elements in parallel. To accomplish this, SvS is applied to pairs of elements to automatically determine which swaps can safely execute concurrently.
We report runtimes calculated over an average of five runs using the simlarge dataset. Times reported for Canneal and Blackscholes are for the parallel section of the code. Fluidanimate has parallel sections throughout and so the total execution time is reported. Standard error was negligible in all cases except the Canneal pthread version with eight threads, where it was 22%, which we believe to be caused by the unpredictable latency of CAS operations.
In Figure 9 we show the performance with different numbers of threads for SvS and the third-party pthreads and TBB implementations. In the pthreads and TBB implementations, fine-grained mutexes are used to provide synchronization unless otherwise noted. The results demonstrate that SvS is able to match the performance of the pthreads and TBB implementations even though it does not require the programmer to explicitly protect access to shared state. This suggests that SvS may accomplish similar performance to other models with less programming effort, although user studies would be needed to confirm this statement.
### 3.3 Cal3D
The Cal3D library implements a typical character animation algorithm. Shown in Figure 11, the algorithm iterates through all character models, blending several animations on each. Animations are blended by iterating through a model’s bones and modifying a bone’s position and rotation according to the current state of the animation. Animations typically modify some bones of a model, but not all of them. For example, an animation of a running motion updates the positions and rotations of bones of the legs and the arms, but not the chest bones. A waving animation updates the bones of one arm.
In order to correctly parallelize animation, two animations must not update the same bone concurrently. Different models do not share bones, so the iterations of the first loop in Figure 11 can run in parallel. However, because different animations may touch
the same bone, the second loop cannot be parallelized without protecting against concurrent accesses. Thus, there are four ways to parallelize character animation: restrict parallelism to models, or process models and animations in parallel and protect accesses to bones with locks, transactional memory or SvS.
Figure 10 compares performance and scalability of four parallel implementations of the main animation loop in Cal3D. To drive the loop we use the Cally animation example included in the Cal3D distribution, with 4 models and 8 animations per model. These results reveal several interesting facts.
TBB-Models processes models in parallel, and since we are processing 4 models, reaches maximum performance at 4 threads. TBB-ModelsAnims parallelizes processing models and animations, protecting accesses to bones with locks. Despite extra parallelism, TBB-ModelsAnims performs similarly to TBB-Models due to high lock contention over shared bones.
For the STM version, we used our parallel runtime system to create a transactional task for each model, animation and bone combination. We present the best performing STM algorithm. Since each transaction is a guaranteed write, it seems that STM performance suffers due to a high conflict rate between transactions writing to the same bone.
To use SvS, we created tasks as in STM, but each task is scheduled using SvS. SvS achieves better performance than other implementations due to the greater parallelism uncovered. Despite many potential conflicts, demonstrated in TBB and STM, the large number of runnable items available to be scheduled allows SvS to achieve good parallelism and performance.
Performance for SvS and STM stops improving when we have more than four threads. The reason has nothing to do with the method of synchronization, but is rooted in the very fine-granular nature of tasks in this example. Each task only takes 3500 cycles to complete, and the overhead of work-stealing dominates the computation. We found that if we implement a semi-static version of the scheduling algorithm that places tasks into thread-local queues and restricts work-stealing we are able to achieve scaling beyond four threads and improve performance in the eight-threaded case by more than a factor of three (these results are not shown).
This suggests an important direction for future research: investigation of semi-static scheduling techniques (in contrast to traditional work-stealing) in order to accomplish good performance for systems with very fine-granular parallelism, or automatically determining the right task granularity in order to minimize the overhead of handling fine-grained tasks. Although there are hardware proposals aiming to reduce the overhead of task scheduling [24], we believe that maximum efficiency can be obtained when software is also structured to avoid the overhead.
3.4 QuakeSquad
Artificial Intelligence (AI), determining the actions of game entities, and Entity Management, managing the movements and interactions of game objects, together make up one common game subsystem and are notoriously difficult to parallelize [5].
Figure 10. Scalability comparison between SvS, TBB and STM implementations of character animation.

Figure 11. The character animation algorithm using a conventional loop notation.
```c
foreach (model in modelList) {
    foreach (animation in model.animList) {
        animation->calculateBonePositions(timeDelta);
        foreach (bone in animation.bones) {
            skeleton.bones[bone.ID].blend(bone);
        }
    }
}
```
This difficulty has two main sources. First, AI logic tends to be arbitrary and complex, being defined by game programmers to fit the circumstances rather than any mathematical formalism. Second, the large number of interactions involved in Entity Management means that several modifications may be made to a single object in one frame. These interactions can often set off chains of interactions that cluster in unforeseen ways. These two complicating factors, and the large amount of shared state that can potentially be affected, make this system a primary concern for parallelization.
We took the approach of Lupei et al. [18] with their SynQuake benchmark and created an application, QuakeSquad, that captures the essential computational patterns and data structures of video games while remaining simple enough for meaningful testing. QuakeSquad consists of a two-dimensional world with four types of entities: bombs, walls, citizens and techs. These are governed by a few simple rules:
- bombs explode reducing the health of citizens and techs within a set radius and not obstructed by a wall.
- bombs ‘project’ fear onto citizens and technicians who are within a set distance and in the line of sight of the bomb.
- fearful citizens will move away from the closest source of fear while a calm citizen will move randomly.
- calm citizens will not move into an area where they would be subject to fear.
- techs will move toward the closest source of fear, and if a tech touches a bomb, the bomb is disarmed.
With a large number of entities in the system, the ‘line-of-sight’ tests for occlusion are by far the most expensive. Without further optimization each test would have to consider every entity in the world. To reduce these tests, the world is divided into a grid where each cell is associated with an unordered list of every entity in that area. When an entity moves from one cell to another it will remove itself from its current list and add itself to the new one. The cell size is set such that when making a line-of-sight calculation only the current cell and adjacent cells need be considered. This division of entities, analogous to the spatial partitioning structures used in 3D environments, reduces the number of tests by at least two orders of magnitude. However, even with this optimization, occlusion testing still dominates the computation and so would benefit most from parallelization. These tests occur most frequently when citizens move and when bombs project fear onto citizens and technicians. The same tests also occur when a bomb explodes, but bombs explode infrequently and so we focus on parallelization of citizen movement and fear projection.
When these aspects are transformed into data parallel operations where entities are concurrently updated, potential shared state conflicts are exposed. During bomb updates it is common for bomb radii to overlap and they may modify the same entity simultaneously. During citizen movement a large number of citizens cross grid boundaries and thus expose the associated entity lists to potential concurrent modifications. In both cases SvS can be used to ensure that no race conditions occur, which we detail below. We perform each test with 35 bombs, 100 citizens, 40 techs and 120 walls. 2048 bit signatures are used for SvS.
#### 3.4.1 Sending Lists (Updating Bombs)
First, we focus on updating the bomb entities and determining which human entities are affected. The runtime of this operation is completely dominated by line-of-sight checks. A producer task determines the entities in each bomb’s radius and builds temporary linked lists of potential techs and citizens to scare. These lists of potential candidates are then sent to a data-parallel consumer which performs line-of-sight testing. If an entity is not occluded, the entity becomes fearful. The reachability of the underlying list is queried to return a signature representing its contents, which is passed to the scheduler. Figure 12(a) shows the execution times averaged over 100 frames. With one thread, the processing takes 1944 $\mu$s. At 8 threads the execution time drops to 675 $\mu$s. The standard deviation in all cases is below 100 $\mu$s.
#### 3.4.2 Modifying Lists (Updating Citizens)
When a citizen moves, it will avoid moving into areas with bombs. This update is also dominated by line-of-sight checks. A producer task determines the potential new location for each citizen and then sends this to a data-parallel consumer, which performs a line-of-sight check and moves the entity if no bomb is visible. This case is complicated by the grid of entity lists. When an entity moves from one grid cell to another, a reference is removed from one list and added to another. If two entities move into the same cell simultaneously, the linked lists will be subject to concurrent modification. Additionally, the line-of-sight checks require reading the eight cells adjacent to the current cell and the eight cells adjacent to the prospective destination. Errors will occur if the structure of one of these lists is modified while it is being read. We use SvS to prevent these potential state access conflicts. Again, the reachability of the list is queried to produce a signature for the scheduler. Figure 12(b) shows the execution times of this phase averaged over 100 frames. Execution times go from 4617 $\mu$s with one thread to 866 $\mu$s with eight. The standard deviation in all cases is below 90 $\mu$s.
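The operation that must be protected can be pictured as the following move between two cells' entity lists; in the SvS version these lists are built from linked memory objects, so the scheduler can detect via their reachability signatures when two such updates may touch the same list. The types below are illustrative, not QuakeSquad's actual code.

```cpp
#include <list>

struct Entity;  // game entity; details not needed for the sketch

struct GridCell {
    std::list<Entity*> entities;   // unordered list of entities located in this cell
};

// Moving an entity between cells mutates two shared lists; if two such moves
// (or a move and a line-of-sight scan over adjacent cells) run concurrently
// on the same cell, the list structure can be corrupted.
void moveEntity(GridCell& from, GridCell& to, Entity* e) {
    from.entities.remove(e);
    to.entities.push_back(e);
}
```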
#### 3.4.3 Putting It Together
The previous discussion has shown that both of these major tasks scale well in isolation. We now consider results for entire frames of QuakeSquad, which combine modifying lists and sending lists. For comparison we created a version using TBB with mutexes and another using STM. The results, averaged over 100 frames, are shown in figure 13. Scaling from one to eight threads in the SvS version reduces the frame execution time from 7633 $\mu$s to 1442 $\mu$s. Shared state accesses conflict approximately 10% of the time on average, meaning SvS detects and manages a conflict in roughly one of every 10 accesses. While the TBB version performs similarly to the SvS version, the STM version fails to benefit from extra threads. A closer examination showed that roll-backs were causing the data-parallel instances to lengthen and increase total runtime.
QuakeSquad is a comprehensive example representing a previously difficult to parallelize subsystem of modern game engines. The performance and scalability achieved by SvS in the results demonstrate its ability to utilize reachability graphs and dynamic reachability analysis to efficiently determine the reads/writes of tasks that access linked data structures and subsequently concurrently schedule tasks with non-overlapping read/write sets.
4. Related Work
SvS was previously introduced by us in a short workshop paper, which gave only a high-level overview of the idea; the system was not fully specified or implemented at that time. This paper contains the first self-contained presentation of the model and implementation of SvS, as well as a detailed specification of the algorithms and an evaluation with multiple benchmarks.
There is a great need in the video game industry for domain-appropriate parallelization techniques. Developers at major game studios such as EA [1], Epic [5] and Valve [26] have expressed the need for comprehensive and efficient parallelism and have cited shared state management as a major roadblock.
There are an ever-increasing number of parallel environments and language/runtime combinations such as Chapel [13], Cilk [12], OpenMP [4], Gossamer [23], Intel's TBB [6], Ct\textregistered{} and RapidMind [3]. However, they do not provide automatic mechanisms for shared state protection, generally focusing instead on providing tools for the programmer to manually manage state. SvS or an SvS-like technique could be implemented in a number of these systems.
The Jade [22] language and the Prometheus [7] package both address shared state protection. Jade proposes a set of parallel extensions to C where a programmer denotes blocks of code as tasks and specifies their data constraints. Although Jade also schedules tasks based on their constraints, there are fundamental differences. Jade is based around task parallelism and constraints must be specified by the programmer, whereas in SvS they are derived automatically, freeing the programmer from the need to concentrate on implicit and hard-to-spot data dependencies. A task's scheduling is based entirely on the information available before the task runs. Prometheus' Serialization Sets work similarly to Jade, but they are applied to an object-oriented language and protect against races within an object. Shared state protection using SvS is more general.
While there is a large body of existing work on static dependency analysis, OoOJava [16] represents recent work in this field that, similar to SvS, attempts to combine static and dynamic analysis. OoOJava abstracts collections of objects as heap region nodes and uses disjoint reachability analysis [15] to statically infer connectivity between objects. The result is a set of reachability states that are used to determine whether two objects $x$ and $y$ are disjoint, i.e., cannot reference the same heap node. If it is determined that they might reach the same heap node, in very specific cases they are able to check at run time whether $x = y$ in order to test for disjointness. Otherwise, they are forced to conservatively assume a dependency between $x$ and $y$, since they do not have full reachability information at compile time. SvS addresses this issue by introducing the concepts of reachability and reachability graphs and using dynamic reachability analysis to provide an efficient way to maintain and extract complete reachability information.
Many techniques that address shared state are optimistic, in that they attempt to perform computations without explicit synchronization and 'roll back' or undo conflicting operations. Software transactional memory (STM) is the most prominent of these techniques and provides database-like transactional atomicity. A programmer wraps code that requires protection in an atomic block and the STM system automatically handles conflicts. The key difference between TM and SvS is that SvS determines whether two tasks might conflict before they are executed, whereas TM detects conflicts during execution. This means that TM is less conservative but may be subject to expensive rollbacks. Since rollback costs are high, STM performs well when most transactions are able to complete successfully. STM may thus be advantageous over SvS in cases where actual conflicts between tasks are extremely rare but SvS would serialize them to avoid potential races. This suggests an interesting opportunity for combining STM and SvS: using STM when actual conflicts are rare and SvS when conflicts are frequent. Such adaptive use of synchronization primitives may make it possible to exploit the best of both models and is an interesting direction for future work.
There has also been some work in the STM community on deliberately co-scheduling transactions that appear (based on static or dynamic information) to be unlikely to conflict with one another [27]. However, this work uses the history of previous conflicts and performs co-scheduling for performance; SvS performs scheduling for correctness. SvS also relies on static and dynamic analysis to determine the potential memory accesses of a task before it executes, rather than on conflict history recorded after the fact.
Galois [17] is an optimistic framework, focusing on data parallelism, that falls outside of STM. Galois targets the parallelization of 'irregular applications', those with interdependent loop iterations. This framework differs from SvS in that it focuses on commutativity analysis as opposed to dependency analysis, and, being optimistic, must contend with the overhead created by roll-backs.
5. Conclusion and Future Work
We presented SvS – a new framework for automatic protection of shared state in task graph models. We demonstrated that SvS performs comparably to other synchronization techniques, without requiring the programmer to explicitly manage shared state.
While this work demonstrates the feasibility of SvS, there are many opportunities for further research. First of all, there is an opportunity to incorporate more powerful static analysis, including disjoint reachability analysis [15]. In doing so, we may be able to extract greater parallelism statically, which will reduce the complexity and overhead of the algorithms we use at runtime. Second, as we discovered in the character animation example, there are opportunities for investigating better software techniques for handling fine-grained tasks, including scheduling and determining the optimal task size. Finally, we are interested in expanding the scope of the SvS codebase and performing user studies in order to fully understand the impact of SvS on programmer productivity.
Figure 13. Scalability comparison between SvS, TBB and STM versions of QuakeSquad.

References
Abstract:
Many critical services are nowadays provided by large and complex software systems. However, the increasing complexity introduces several sources of non-determinism, which may lead to hang failures: the system appears to be running, but part of its services are perceived as unresponsive. On-line monitoring is the only way to detect and promptly react to such failures. However, when dealing with Off-The-Shelf based systems, on-line detection can be tricky since instrumentation and log data collection may not be feasible in practice.
In this paper, a detection framework to cope with software hangs is proposed. The framework enables the non-intrusive monitoring of complex systems, based on multiple sources of data gathered at the Operating System (OS) level. Collected data are then combined to reveal hang failures. The framework is evaluated through a fault injection campaign on two complex systems from the Air Traffic Management (ATM) domain. Results show that the combination of several monitors at the OS level is effective at detecting hang failures, in terms of both coverage and false positives, with a negligible impact on performance.
Keywords: Failure Detection, Hang Failures, On-line Monitoring, Critical Software Systems, Operating Systems
1 Introduction
Software faults today represent a major dependability threat for complex software systems. Testing and static code analysis are widely adopted techniques to remove such defects, or "bugs", in a system under development. However, as shown by field data studies (Sahoo et al., 2010; Chillarege et al., 1995; Sullivan and Chillarege, 1991), a large share of software faults are activated during the operational phase, when transient conditions occur (e.g., overload, timing errors, and race conditions). Static analysis and testing techniques fail when dealing with this kind of fault, since its activation conditions cannot be reproduced systematically. This is especially true in the case of complex concurrent applications, where multi-threading and shared resources represent a source of non-determinism in the application behavior.
For these reasons, faults have to be treated during the use phase of the system, by detecting the occurrence of the failures due to their activation. To this aim, the execution state of the system has to be continuously monitored in order to reveal whether one or more components are not running correctly.
However, there is a class of failures, namely hang failures, that poses serious issues for failure detection. These failures cause the system to be partially or totally unresponsive, although it appears to be running; they can be due to infinite loops and indefinite wait conditions.
Existing detection techniques simply poll the health status of system components (i.e., heartbeat mechanisms), analyze system log files to uncover error messages and their correlation with failures, or monitor the levels of CPU utilization. It is clear that the nature of hang failures prevents traditional techniques from being effective. For instance, a process may still be able to communicate even if the service is not delivered properly; this might be the case of a multi-threaded process in which the thread that answers queries is not the one in which the hang actually occurred. At the same time, a stuck process may not be able to log events.
These problems are exacerbated when dealing with complex mission- and safety-critical software systems. Today these systems are developed as the composition of several Off-The-Shelf (OTS) software modules and complex multi-threaded components. The unavailability of the source code complicates the detection task, since no extra code can be added to observe the execution state. In addition, due to their particular criticality, these systems pose stringent requirements on failure detection:
- maximize the number of detected failures, in order to avoid catastrophic consequences;
- minimize the number of false positives, in order to avoid unnecessary (and costly) recovery actions;
- minimize the latency of the detection, in order to trigger the proper countermeasures in a timely manner;
- minimize the overhead of the detection framework, limiting the impact on the performance of the system.
To these ends, this paper proposes a lightweight and non-intrusive failure detection framework to reveal the occurrence of software hangs. It relies on several
simple monitors which exploit the Operating System (OS) support to trigger alarms when the behavior of the system differs from the nominal one. For instance, we indirectly infer the state of the system by monitoring different variables such as the waiting time on semaphores or the holding time in critical sections. The nominal behavior has been modeled experimentally by means of a training phase. To combine alarms from the detectors we use Bayes' rule, and a detection event is triggered if the likelihood that a hang failure has occurred exceeds a given threshold. Our experimental results show that this framework increases the overall capacity of detecting hang failures (it exhibits 100% coverage of observed failures) while keeping low the number of false positives (less than 6% in the worst case), the latency (about 0.1 seconds on average) and the impact on performance (less than 10% in the worst case). Moreover, it can be used even when OTS modules are present, because there is no need to modify the source code of the application.
The proposed framework has been implemented for the Linux OS by means of dynamic probes placed in the kernel code. To show the effectiveness of the approach, we applied the framework to two complex systems from the ATM domain, which are based on OTS and legacy components; we performed fault injection experiments to accelerate the process of data collection.
This paper extends our previous results on OS-level hang detection presented in Carrozza et al. (2008). In particular, (i) we propose a sounder combination scheme to trigger detection events, (ii) we introduce additional monitors to collect events related to network socket status, and (iii) we perform a more extensive experimental evaluation. In more detail, in order to generalize the previous results, we analyze one more case study: the SWIMBOX. This case study is a complex, OTS-based system which has been implemented in the framework of the SWIM SUIT FP6 European project.
The rest of the paper is organized as follows. Section 2 presents the related work on hang detection, while the proposed detection approach is described in Section 3. Implementation details are provided in Section 4, and the results of the experiments are presented in Section 5. Finally, Section 6 ends the paper with conclusions and directions for future work.
2 Related work
The problem of hang failures can be mitigated by removing software faults in advance. Debugging techniques use static and dynamic source code analysis to identify hang root causes. In Shen et al. (2005), the disk I/O subsystem is modeled analytically; the model is then compared to execution traces to identify workload conditions under which performance is suspiciously low, and to fix anomalies (e.g., by improving disk I/O scheduling heuristics). In Wang et al. (2008), runtime traces are exploited to search for potential hang points within source code, to avoid unnecessary end-user waits. In Engler et al. (2000), developers' knowledge about the system is exploited to formulate coding assertions and to check the source code for violations. Assertions enforced on the Linux kernel concern memory management errors, temporal ordering of operations, and deadlocks. Debugging techniques are useful to avoid the occurrence of hangs only when the root cause can be easily pinpointed in the source code. Unfortunately, they are not suitable for identifying
failures that occur during the use phase of the system because of the activation of complex and transient conditions. On-line monitoring and failure detection are thus the only way to uncover these residual faults.
One approach to hang failure detection is represented by **query-based techniques**. They rely on probing the health status of the monitored component (either locally or remotely) to discover a failure (Chen et al., 2002). The query can be performed by periodically sending a "heartbeat" request and waiting for an "alive" reply to that message, or a timeout can be enforced to detect anomalously slow responses. In Herder et al. (2006), a query-based technique is adopted to detect stalled OS processes in the Minix 3 OS, by using heartbeat requests. This approach requires that the monitored process is a "server" process, i.e., the process performs some work when it receives a request from Inter-Process Communication (IPC) channels. Moreover, it assumes that, at a given time, the process can only serve a request or respond to the heartbeat. This approach has been extended in Cotroneo et al. (2010) by adapting the timeout at run-time on the basis of past heartbeat replies. Unfortunately, this approach has some drawbacks. On one hand, when dealing with multi-threaded systems, the hang might be localized in a different thread than the one that replies to the heartbeat; hence, the component answers heartbeats correctly while other parts of it are stuck. On the other hand, the approach is not suitable for OTS-based and legacy systems, because it requires (i) specifying heartbeat requests in a format that can be managed by the system, and (ii) modifying the application in order to send replies.
Traditional failure detection approaches include **log based techniques**. They perform on-line analysis of the log messages produced by the system to infer the occurrence of a failure. In particular, they are often adopted to diagnose failures due to hardware faults, by using statistical analysis and heuristic rules (Iyer et al., 1990; Lin and Siewiorek, 1990). Data mining and language processing techniques have also been adopted to automatically analyze log files (Bose and Srinivasan, 2005). These techniques assume the occurrence of certain events in the log file in order to detect a failure; unfortunately, we cannot rely on the availability of log messages when dealing with hang failures, since the system may be unable to execute and thus to produce log messages (e.g., a stuck component cannot return an error code or throw an exception).
**Hardware monitoring** techniques are also used in hang failure detection. These techniques require special extra hardware, such as watchdog timers, to detect software hangs. Timers are periodically reset in failure-free conditions; otherwise, an alarm is triggered (a Non-Maskable Interrupt) to signal that the timer has expired (David et al., 2007). However, these approaches are not able to detect infinite loops in which the application is not completely stuck and therefore does not prevent the monitored events (e.g., resetting the timer) from occurring. Moreover, hardware support may not be available.
Our approach belongs to the class of **anomaly based detection techniques**. These techniques rely on (i) the continuous monitoring of the status of system variables (e.g., CPU consumption) and (ii) on the comparison of these data with traces of normal and anomalous executions. Anomaly based detection has been adopted in several contexts, such as intrusion detection (Forrest et al., 1996; Lee and Stolfo, 1998) and hardware failure detection (Zheng et al., 2007; Pelleg et al., 2008), by exploiting data collected at the network layer (e.g., about TCP connections) and at the hardware layer (e.g., CPU, I/O, and memory usage).
Monitoring OS-level variables is also exploited in Podgurski et al. (2003). The authors propose to use system behavior information (e.g., system call traces, I/O requests, call stacks, context switches) and a multi-class classifier to build a diagnosis tool. However, this approach has a non-negligible overhead (all system call parameters are recorded) and is not suited to failures that cannot be reliably reproduced, such as hang failures.
The work appearing in Wang et al. (2007) is the closest to ours; it proposed a detection approach at the OS level using CPU hardware counters. On the one hand, application hangs are detected by estimating an upper bound on the number of instructions executed in each code block of the application. On the other hand, system hangs are detected by counting the number of instructions executed between two consecutive context switches (if the system is stuck it does not schedule any other process, and the counter value increases indefinitely). The proposed approach is effective against livelocks and infinite loops, but it does not allow indefinite wait conditions to be detected. The approach also requires the analysis of the application code (to identify the code blocks), thus it may not be suitable for OTS-based and legacy systems.
3 The proposed detection approach
3.1 System and Failure Assumptions
The detection framework is designed to address complex and distributed software systems relying on OTS components. We assume that the system can be decomposed as a set of Detectable Units (DUs in the following). A DU represents the atomic software entity that can be monitored to detect failures. In this work detection is performed at process level, i.e., we consider OS processes, either single-threaded or multi-threaded, as DUs. OS processes are often adopted for architecting complex and distributed systems, by allocating a set of functionalities to each process (e.g., in the client-server paradigm, a server process listens for processing requests from clients); some examples of complex systems based on OS processes are represented by the case studies in this work (Section 5.1). DUs can be located in the same node or in different nodes, as shown in Figure 1.
This work focuses on hang failures, i.e., a DU does not provide its services anymore or services are delivered unacceptably late. When a process terminates unexpectedly (e.g., due to run-time exceptions), we assume a crash of the DU. Detecting such a failure is fairly simple, since the OS promptly deallocates the structures associated with processes that have crashed. This does not happen with hang failures, since they do not result in process termination; the DU rather survives, behaving as if halted. Hang failures can be further distinguished into active and passive hangs:
- **Active Hang.** It occurs when a process is still running but its activity may be no longer perceived by other processes because one of its threads, if any, consumes CPU cycles improperly;
- **Passive Hang.** It occurs when a process (or one of its threads) is indefinitely blocked, e.g., it waits for shared resources that will never be released (i.e., it encounters a deadlock).
Hangs might be either silent or non-silent. In the former case the hang compromises the communication capabilities of the process, e.g., it cannot reply to heartbeats. In the latter case, the process is still able to communicate, e.g., it responds to heartbeats or it generates log entries, even if the service is not delivered properly. In complex systems it is hard to tell whether a process (thread) is currently subject to a passive hang, because it may be deliberately blocked waiting for some work to perform (e.g., this happens when pools of threads are used in multi-threaded server processes). Difficulties are also encountered with active hangs, because a process (thread) can deliver late heartbeat responses due to stressful workload and working conditions.
Along with crash and hang failures, systems may suffer value failures as well (Avizienis et al., 2004). These do not cause the system to halt or to be delayed, but the delivered service comes with erratic outputs. Awareness of the application logic and domain would be required to detect such failures, as well as user involvement in the detection process. For this reason, we do not take these failures into account in our detection framework, which is instead meant to be transparent to end users.
Figure 1 System model.
3.2 Detection Framework
We propose to leverage the OS support to perform system monitoring and to infer the health of DUs by observing their behavior and interactions with the external environment.
As stated in Section 1, the proposed detection framework aims to achieve:
- high coverage, i.e., the ability to notify a failure when the system is actually affected by a hang;
- a low false positive rate, i.e., the ability to avoid false alarms when the DU is actually working properly;
- low latency, in order to trigger alarms in due time;
- low overhead, in order to minimize the impact on the mission of the system as a whole.
To pursue these objectives, we propose to detect failures by leveraging several sources of information, through monitors placed at the OS level. Monitors concern resources used by the application and are realized by inserting software probes into the OS that are in charge of catching events.
Each monitor is in charge of observing a single resource and it is linked to an alarm generator ($\alpha_i$) which triggers the alarm in case of anomalies in the monitored resource. Monitors and alarm generators compose the overall detection system, named detector, depicted in Figure 2.
The final detection of a failure is performed by combining multiple alarms. As intuition suggests, combining the alarms coming from multiple sources allows a higher number of failures to be detected, compared to detectors based on a single source. For instance, a passive hang does not lead to system call errors, but it can suspiciously increase the holding time in a critical section. This assumption is experimentally validated in Section 5.
Let $N$ be the number of monitors in charge of observing the resource usage for each DU. An alarm generator $\alpha_i$ collects the output of the $i$th monitor. An alert is produced by this monitor if the value of the observed variable ($v_i$) is out of a range $r_i = [r_i^-, r_i^+]$. The variable $v_i$ represents an event occurred for an OS resource (e.g., a thread waited for 10ms before entering a critical section), and the...
range \( r_i \) models the expected behavior of the DU with respect to the monitored resource. Moreover, we also take into account the bursty behavior of some events on OS resources, i.e., the events suddenly occur for a short time period and then disappear. To model this behavior and to detect anomalies in the burst length, the alarm generator also checks that \( v_i \) is out of the range \( r_i \) for \( L_i \) consecutive times in a period \( T_i \). Therefore the output of each \( \alpha_i \) is a binary variable defined as:
\[
F_i = \begin{cases}
1 & \text{if } v_i \notin r_i \text{ for } L_i \text{ times in the period } T_i \\
0 & \text{otherwise}
\end{cases}
\]
(1)
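As an illustration of equation (1), the following C++ sketch implements a single alarm generator $\alpha_i$; the class name and interface are our own and are not taken from the authors' implementation.

```cpp
// Sketch of one alarm generator: fires (F_i = 1) when the monitored value
// has been out of range for at least L consecutive observations, all of
// which fall within a window of length T.
#include <chrono>
#include <cstddef>

class AlarmGenerator {
public:
    using Clock = std::chrono::steady_clock;

    AlarmGenerator(double r_lo, double r_hi, std::size_t L, Clock::duration T)
        : r_lo_(r_lo), r_hi_(r_hi), L_(L), T_(T) {}

    bool observe(double v, Clock::time_point now) {
        if (v >= r_lo_ && v <= r_hi_) {   // value back in range: burst ends
            run_length_ = 0;
            return false;
        }
        if (run_length_ == 0) run_start_ = now;   // new burst begins
        ++run_length_;
        return run_length_ >= L_ && (now - run_start_) <= T_;
    }

private:
    double r_lo_, r_hi_;                  // range r_i learned during training
    std::size_t L_;                       // burst length threshold
    Clock::duration T_;                   // burst observation period
    std::size_t run_length_ = 0;
    Clock::time_point run_start_{};
};
```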
To combine the outputs of all the monitors, we use Bayes' rule as the global detection logic (see equation 2). It allows existing beliefs (\textit{a priori} probabilities) to be revised in the light of new evidence (\textit{a posteriori}), i.e., it combines new data with existing knowledge about the occurrence of a given event.
\[
P(F|\alpha) = \frac{P(\alpha|F)P(F)}{P(\alpha|F)P(F) + P(\alpha|\neg F)(1 - P(F))}
\]
(2)
Applied to alarms and failures, equation 2 can be read this way:
- \( F \) represents the event “faulty DU”;
- \( \alpha \) is a vector containing the output of the alarm generators \( \alpha_i \), i.e., \((F_1, F_2, ..., F_N)\).
The final detection event is triggered when \( P(F|\alpha) \) is greater than a given threshold value. The following probability distributions are estimated during the training phase:
- \( P(\alpha|F) \), represents the probability of detection. It is estimated as the number of occurrences of the \( \alpha \) vector under faulty executions, over the total number of vectors collected.
- \( P(\alpha|\neg F) \), represents the probability of false alarms. It is the number of occurrences of \( \alpha \) during fault-free executions.
- Finally, \( P(F) \) is the \textit{a priori} probability of having a faulty DU. It can be estimated as \( T/\text{MTTF} \) (i.e., on average, the DU becomes faulty once every \( \text{MTTF}/T \) detection periods, where \( T \) is the detection period and MTTF stands for Mean Time To Failure), if field data exist. Otherwise, it can be taken from the literature, where typical failure rates of complex software systems are provided. In our experiments, we assumed \( P(F) = 10^{-6} \) (Chillarege et al., 1995).
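A minimal sketch of this detection logic (equation 2) is shown below. The table-based representation of \( P(\alpha|F) \) and \( P(\alpha|\neg F) \), the threshold value of 0.5, and all names are assumptions made purely for illustration.

```cpp
// Posterior probability of a hang given the alarm vector alpha, using lookup
// tables estimated during the training phase.
#include <bitset>
#include <unordered_map>

constexpr int N = 8;                        // number of monitors (Section 3.3)

struct DetectionModel {
    // P(alpha | F) and P(alpha | not F), keyed by the alarm bit pattern.
    std::unordered_map<unsigned long, double> p_alpha_given_fault;
    std::unordered_map<unsigned long, double> p_alpha_given_ok;
    double p_fault = 1e-6;                  // a priori P(F), as in the paper
    double threshold = 0.5;                 // detection threshold (assumed)

    bool detect(const std::bitset<N>& alpha) const {
        const double pf  = lookup(p_alpha_given_fault, alpha.to_ulong());
        const double pok = lookup(p_alpha_given_ok, alpha.to_ulong());
        const double num = pf * p_fault;
        const double den = num + pok * (1.0 - p_fault);
        const double posterior = den > 0.0 ? num / den : 0.0;
        return posterior > threshold;
    }

    // Alarm patterns never seen during training default to probability 0
    // (a simplification of how unseen vectors would really be handled).
    static double lookup(const std::unordered_map<unsigned long, double>& t,
                         unsigned long key) {
        auto it = t.find(key);
        return it != t.end() ? it->second : 0.0;
    }
};
```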
The parameters \( r_i, L_i \) and \( T_i \) are tuned during a preliminary training phase. The detection framework assumes that the parameters obtained during the training phase also apply during the operational phase of the system. Therefore, the parameters have to be gathered after observing the system execution for a time period \textit{long enough} to obtain representative estimates that also apply in the operational phase. This is a reasonable assumption with respect to the critical systems we are addressing, since a significant amount of time is devoted to system validation, which could be exploited to derive representative parameter estimates.
The training of the parameters should account for the variations in the monitored variables that occur during fault-free runs. The following heuristic approach has been adopted: the distribution of $v_i$ (i.e., the frequency of values of $v_i$) is analyzed first, then a range $r_i$ that includes most of the distribution is selected. For instance, the range can be selected by considering first-order statistics (see Figure 3a), such as the mean ($m_{v_i}$) and the standard deviation ($\sigma_{v_i}$):
$$r_i = [m_{v_i} - k\sigma_{v_i}, m_{v_i} + k\sigma_{v_i}].$$
(3)
An alternative approach, which has been adopted in our experiments, is to select the minimum (min $v_i$) and the maximum (max $v_i$) value in the distribution, namely:
$$r_i = [\text{min } v_i, \text{max } v_i].$$
(4)
After selecting the range $r_i$, the parameter $L_i$ is set by taking into account the size of the bursts (see Figure 3b). These thresholds have to be set so as to keep the number of false positives low; for this reason, it is desirable to avoid false positives when training the monitor, i.e., during normal executions of the workload. Finally, the parameter $T_i$ is chosen empirically, i.e., by trying several candidate values and selecting the best one with respect to faulty and fault-free runs during training (e.g., minimizing false positives or latency, or maximizing coverage).
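For illustration, the two range-selection heuristics of equations (3) and (4) can be written as follows (a sketch assuming a non-empty training trace of observed values):

```cpp
// Derive the range r_i for a monitor from a fault-free training trace,
// either from mean and standard deviation (eq. 3) or from min/max (eq. 4).
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

std::pair<double, double> range_from_stats(const std::vector<double>& v,
                                           double k) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double var = 0.0;
    for (double x : v) var += (x - mean) * (x - mean);
    const double sigma = std::sqrt(var / v.size());
    return {mean - k * sigma, mean + k * sigma};          // equation (3)
}

std::pair<double, double> range_from_min_max(const std::vector<double>& v) {
    auto [lo, hi] = std::minmax_element(v.begin(), v.end());
    return {*lo, *hi};                                    // equation (4)
}
```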
3.3 Monitors
Bearing in mind the complexity of the target systems, in terms of concurrency and node distribution over a network, we consider the following variables for the detection process:
1. System call error codes;
2. OS signals;
3. Task scheduling timeouts;
4. Waiting time for critical sections;
5. Holding time in critical sections;
6. Process and thread exit codes;
7. Network sockets timeouts;
8. I/O throughput.
Hence, we implemented a set of monitors in charge of observing the above variables for each monitored DU; outputs are provided in the form of log files, formatted according to well-defined rules, and processed by the alarm generators. Although the monitors have been implemented for the Linux OS, we believe that they can be adapted to other environments, since the monitored variables are not strictly dependent on the working environment.
3.3.1 System calls monitor
In UNIX environments, system calls are associated with numerical error codes which are returned when exceptional events occur. Hence, the presence of error codes can be symptomatic of anomalous system behavior.
All combinations of system call IDs and error codes (i.e., their Cartesian product) are considered; however, only a subset of these pairs is meaningful. Each time an error code is returned, the monitor records (i) the PID (TID) (Process (Thread) IDentifier) of the calling process (thread), (ii) the system call ID, and (iii) the error code.
3.3.2 UNIX signals monitor
Signals are commonly used to notify the occurrence of a given event, both by processes and by the kernel. In the former case, they have coordination purposes, e.g., a signal could be sent to wake a waiting process or to notify exceptional conditions. In the latter case, signals are used either to inform a process about hardware and/or software exceptions, e.g., an invalid memory access or the loss of a socket connection, or to signal normal events, e.g., that I/O data became available. In UNIX environments, for example, signals explicitly report the crash of a process (e.g., SIGSEGV). Additionally, they can be used to signal application-specific conditions (e.g., SIGUSR1 or SIGUSR2) or they could represent the symptom of a failure, e.g., due to the loss of a network connection. Therefore we believe that monitoring signals is relevant for hang detection.
When a signal occurs, the monitor logs the following data: (i) PID (TID) of the sender and the receiver of a signal, and (ii) the type of the signal.
3.3.3 Critical sections waiting times monitor
A long wait for a given mutex to be released can reasonably be considered a symptom of indefinite waiting. In other words, the mutex is likely never to be released, hence the waiting process (thread) is likely to remain blocked. Measuring the waiting time can be useful for the detection of passive hangs. It represents the time that a process waits before actually entering a critical section. A critical section is defined as a piece of code that must not be accessed by more than one thread or process at a time, and it is implemented in UNIX using synchronization primitives (in particular, UNIX semaphores and the PThread library).

When waiting times exceed a given timeout, the monitor records the following data: (i) the PID (TID) of the waiting process (thread), (ii) the waiting time, and (iii) the time from the beginning of the waiting interval.
3.3.4 Critical sections holding times monitor
A process holding a critical section for a long time is likely to preclude shared resource usage to all the processes which are waiting for it. This greedy behavior can reasonably be considered a potential cause of passive hangs.

For this reason, when holding times exceed a given timeout, the following data are logged: (i) the PID (TID) of the process (thread) holding the critical section, (ii) the holding time, and (iii) the time of entry into the critical section.
3.3.5 Task scheduling monitor
Another source of information for detecting a hang failure is the last time a process or thread was scheduled; a hang may have occurred if too much time has elapsed since its last execution. In particular, this monitor is helpful for detecting hang conditions which are not due to deadlock, e.g., a process may be waiting for messages coming from a sender process which has failed. For this reason, scheduling timeouts represent a complementary measure with respect to the previous two.

Similarly to the previous monitors, this monitor takes into account time values, i.e., scheduling delays. When the timeout is exceeded, the monitor logs the following data: (i) the PID (TID) of the delayed process (thread), (ii) the scheduling delay, and (iii) the last de-scheduling time.
3.3.6 Processes and threads exit codes
In long-running application scenarios, unexpected process (thread) exits can be considered exceptional conditions deviating from the system's normal behavior. In fact, these events may be the symptom of crash failures or of overload conditions which forced the OS to kill the process (thread) unexpectedly. In turn, the exit of a process may cause an indefinite wait in other processes.

This monitor takes into account all process (thread) deallocation events and records data each time a process (thread) is deallocated. In particular, the following data are logged: (i) the PID (TID) of the exiting process (thread), and (ii) the return code.
3.3.7 Network sockets monitor
The delay between two consecutive packets sent on a given TCP/IP socket (both from and to the monitored task) is measured, for each thread and individual socket. A timeout is enforced to detect processes (threads) that are suspiciously silent when communication is not taking place.

The following data are logged when the timeout is exceeded: (i) the PID (TID) of the process (thread), (ii) the port number of the socket, and (iii) the IP address of the remote process that communicates with the monitored process.
3.3.8 I/O throughput monitor
A decrease in the number of I/O operations represents another potential symptom of hang failures. For instance, the hang of a process may prevent I/O operations usually performed by the process (e.g., writing to a log file). Therefore, we argue that monitoring the I/O operations rate may help in the detection of hangs. In particular, we monitor the aggregate throughput of I/O operations with respect to reads and writes on files and socket descriptors. The monitor periodically samples the rate of I/O operations with period $T$, then the sampled value is compared to the bounds for this monitor.
This monitor logs the following data when the bound on I/O rate is exceeded: (i) the value of the I/O sample which caused the triggering, (ii) the I/O operation (read/write), (iii) the exceeded bound (lower/upper). I/O operations are monitored with respect to processes, hence the PID is also recorded, without distinguishing between single threads.
Monitors are schematically summarized in Table 1. The table reports the triggering condition for each monitor, i.e., the condition which causes the monitor to log an alert. The entries are then analyzed by the alarm generators to produce alarms if $L_i$ alerts are produced within the period $T_i$.
Table 1 Monitors at the operating system level.
<table>
<thead>
<tr>
<th>Monitor</th>
<th>Triggering condition</th>
<th>Domain</th>
</tr>
</thead>
<tbody>
<tr>
<td>UNIX system calls</td>
<td>An error code is returned</td>
<td>Syscalls × ErrCodes</td>
</tr>
<tr>
<td>UNIX signals</td>
<td>A signal is received by the process</td>
<td>Signals</td>
</tr>
<tr>
<td>Task scheduling</td>
<td>Timeout exceeded (since the task is preempted)</td>
<td>(0, $\infty$)</td>
</tr>
<tr>
<td>Waiting time for critical sections</td>
<td>Timeout exceeded (since the task begins to wait)</td>
<td>(0, $\infty$)</td>
</tr>
<tr>
<td>Holding time in critical sections</td>
<td>Timeout exceeded (since the task acquires a lock)</td>
<td>(0, $\infty$)</td>
</tr>
<tr>
<td>Process and thread exit codes</td>
<td>Task allocation or termination</td>
<td>Lifecycle event</td>
</tr>
<tr>
<td>Timeout on a socket</td>
<td>Timeout exceeded (since a packet is sent over a socket)</td>
<td>(0, $\infty$)</td>
</tr>
<tr>
<td>I/O throughput</td>
<td>Bound exceeded</td>
<td>(0, $\infty$)</td>
</tr>
</tbody>
</table>
4 Implementation issues
Monitors have been implemented by means of dynamic probing. To this aim, we used the KProbes framework to place breakpoints (i.e., special CPU instructions which "break" the execution of kernel code by means of interrupts) into the kernel code. Breakpoints have been placed in the kernel functions providing the monitored measures. When a breakpoint is hit, a handler routine is launched and executed just before the kernel code, in order to quickly collect data (e.g., input parameters or return values of the called function). This does not interfere with program execution, except for a short delay.
The complete detection system has been implemented as a loadable kernel module. To this aim we exploited the SystemTap tool (http://sourceware.org/systemtap/), which allows breakpoint handlers to be programmed by means of a high-level scripting language. SystemTap scripts are then translated into C code, encompassing also the KProbes framework. Synchronization issues between threads have been tricky to monitor: we were not able to obtain a complete view of all the lock/unlock operations on shared resources by tracing kernel code alone, because kernel system calls are often not invoked at all during operations on mutexes when there is no contention between threads. For this reason we implemented a shared library that wraps the PThread API provided by the standard glibc library and, in effect, overloads the PThread functions we want to monitor.
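As a rough illustration of the user-space wrapping technique just described (not the authors' actual library), the following C++ snippet could be compiled into a shared object and preloaded into the monitored process to intercept `pthread_mutex_lock` and log suspiciously long waiting times. The timeout value and the log format are assumptions.

```cpp
// Hypothetical LD_PRELOAD wrapper for pthread_mutex_lock: measures how long
// the caller waits for the lock and logs an alert when an assumed timeout is
// exceeded. The real monitors also cover holding times, semaphores, etc.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE               // needed for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <pthread.h>
#include <cstdio>
#include <ctime>

namespace {
double now_seconds() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}
}  // namespace

extern "C" int pthread_mutex_lock(pthread_mutex_t* m) {
    using lock_fn = int (*)(pthread_mutex_t*);
    // Resolve the real glibc implementation the first time we are called.
    static lock_fn real_lock =
        reinterpret_cast<lock_fn>(dlsym(RTLD_NEXT, "pthread_mutex_lock"));

    const double start = now_seconds();
    const int rc = real_lock(m);
    const double waited = now_seconds() - start;

    const double timeout_s = 1.0;  // assumed threshold, tuned during training
    if (waited > timeout_s) {
        // On Linux, pthread_t is an integer type; the cast is for logging only.
        std::fprintf(stderr, "ALERT long-wait tid=%lu mutex=%p wait=%.3fs\n",
                     (unsigned long)pthread_self(), (void*)m, waited);
    }
    return rc;
}
```

Such a library would typically be built with `g++ -shared -fPIC -o libmon.so mon.cpp -ldl` (file names are placeholders) and activated via the `LD_PRELOAD` environment variable, so that no application source code needs to be modified.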
5 Experimental results
5.1 Case studies
In this section, we evaluate the proposed framework with respect to two complex applications from the ATM domain.
5.1.1 FDP Case Study
The first case study is a complex distributed application for Flight Data Processing (FDP). It is in charge of processing aircraft data produced by Radar Track Generators, updating the contents of Flight Data Plans (FDPs), and distributing them to flight controllers. The overall (simplified) architecture is depicted in Figure 4; it is based on CARDAMOM, a CORBA middleware for developing mission- and safety-critical applications compliant with the OMG Fault-Tolerant CORBA specification. CARDAMOM is jointly developed by SELEX-SI and THALES, the two leading industries in the European ATM scenario; in this work we rely on the open source community edition, which is available at http://cardamom.objectweb.org. CARDAMOM makes use of OTS software items, such as the Data Distribution Service (DDS) implementation provided by RTI (http://www.rti.com) for publish-subscribe communication among components, and the ACE ORB (http://www.aceorb.com) as Object Request Broker. The architecture we refer to in this paper is made up of several components:
- **Facade**: the interface between the clients (e.g., the flight controller console) and the rest of the system (conforming to the Facade GoF design pattern); it provides a remote object API for the atomic addition, removal, and update of FDPs. The Facade is replicated according to the warm-passive replication scheme. It stores the FDPs along with a lock table for FDP access serialization.
- **Processing Server**: it is in charge of processing FDPs on demand, by taking into account information from the Correlation Component and the FDPs published by using DDS. This component is replicated several times on different nodes, and FDP operations are balanced among servers with a round-robin policy.
- **Correlation Component**: it collects flight tracks generated by radars and associates them to FDPs, by means of Correlation Managers (CORLM in Figure 4).

Figure 4 Architecture of the FDP case study.

This case study includes a workload generator that sends random requests to the system, both for flight tracks and FDP updates.
5.1.2 SWIMBOX Case Study
The SWIMBOX case study was developed in the framework of the European-wide initiatives aiming at pursuing global interoperability in the Air Traffic Management (ATM) domain. SWIM (System Wide Information Management) is the world-recognized initiative (both in Europe and in the USA, in the context of the SESAR and FAA programmes respectively) aiming to enable several stakeholders, i.e., airports, airlines, military air defense, Area Control Centers (ACC) and Air Navigation Service Providers (ANSP), to share information on a very large scale. It is meant to be the software infrastructure able to provide the one-for-all information model for data exchange and interoperability, as well as common interfaces to access specific services, at domain level. To this aim, it is going to define a common dictionary in terms of data and services, as well as to use Commercial Off-The-Shelf (COTS) hardware and software to support a SOA aiming to facilitate the dynamic composition of systems and to increase common situational awareness.

The proposed case study is actually a pilot prototype for SWIM, the SWIMBOX, which has been implemented in the framework of the SWIM SUIT FP6 European project (http://www.swim-suit.aero/swimsuit/).

The overall system is a grid of SWIM nodes, physically deployed at stakeholders' premises and referred to as "legacy" nodes, which are the users of the SWIM common infrastructure and which are allowed to access the SWIM bus through the SWIM-BOX. Only SWIM-BOX instances can directly exchange data and invoke services over the net, acting as mediators between legacy nodes and the SWIM bus. The high-level endpoint perspective is shown in Figure 5, in which the role of Adapters can be appreciated. These have been implemented to keep legacy nodes unaware of the SWIM semantics until all of them are aligned to SWIM in the near future.

Figure 5 End to end communication scenario between SWIM nodes.
The prototype architecture (see Figure 6) is organized in the following layers:
- **domain level.** It (a) defines a standard data representation embracing well-defined models and collaborative approaches (i.e., FOIPS, ICOG2) and translates it into a flexible format (XML in the prototype), (b) exposes the external interfaces which define the domain-specific operations on Flight, Surveillance and Aeronautical Data, e.g., create/update a flight plan and handover operations, and (c) defines services to manage these domain-specific components;
- **core level.** It implements synchronous/asynchronous communication patterns (i.e., request/reply, publish/subscribe), security services (i.e., encryption, authentication, access control), data storing (i.e., a transparently distributed and transactional storage mechanism allowing users to access shared data) and a services registry.
It is worth noting that, in order to assure technology transparency, the Publisher/Subscriber component actually provides an abstraction layer able to mask the underlying technology without impacting the uppermost domain-level components. From a technological point of view, data distribution tasks can be accomplished by means of two different solutions: the Data Distribution Service (DDS) and the Java Messaging Service (JMS). The former is an OMG standard specification widely used in large-scale networked applications. It allows data transfer in accordance with QoS policies that can be customized according to the application needs. Commercial and open source implementations of the DDS standard are available. The SWIMBOX prototype is based on two different implementations of DDS: (i) the open source edition of OpenSplice DDS (OSPL) by PrismTech ([http://www.opensplice.com](http://www.opensplice.com)) and (ii) RTI DDS by Real-Time Innovations ([http://www.rti.com/](http://www.rti.com/)). Fault injection campaigns have been carried out to evaluate
the effectiveness of the proposed approach. Due to the crucial role played by data distribution tasks in the most common SWIMBOX application scenarios, Publish/Subscribe communication has been chosen as the injection target, in order to understand how failures in the DDS components may propagate to the rest of the system. In fact, the communication layer may represent a dependability bottleneck for the whole system if faults at that layer are not properly coped with.
From a technical point of view, the application case study has been evaluated exploiting the FDD domain services of the SWIMBOX. The OpenSplice implementation has been used to accomplish DDS tasks. The application consists of two legacy entities, named the Contributor and the Manager respectively. Figure 7 describes an example of the interaction between the legacy systems. The Contributor acts as the subscriber, waiting for information on Flight Object (i.e., a single entity including different information related to a flight) updates to be published. It also periodically reads all the available Flight Object summaries. Conversely, the Manager is in charge of (i) executing a given number of operations (e.g., Flight Data Object creation and update) at a fixed rate (20 ops/sec), as well as of (ii) distributing data over the SWIM network exploiting the Pub/Sub middleware facilities. Once the operations have been completed, the Contributor requests to unsubscribe from the FDD subsystem.
5.2 Fault injection campaigns
In order to evaluate the detection framework, we conducted fault injection experiments, i.e., we corrupted application source code in order to emulate software faults. We refer to the injection framework described in Duraes and Madeira (2006) to inject software faults. It defines the 17 most representative classes of software faults; for instance, fault classes frequently occurring in real systems are “missing function calls” (MFC) and “wrong value assigned to a variable” (WVAV). These fault classes are defined with respect to the Orthogonal Defect Classification schema (Sullivan and Chillarege, 1991). The distribution of the injected software faults is provided in Table 2.
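As an illustration of how such source-code faults look in practice, the snippet below shows hand-written examples of the two fault classes mentioned above, MFC and WVAV, applied to a fictitious update handler. The method and class names are hypothetical and the code is not produced by the actual injection tool.

```java
// Illustrative (hypothetical) examples of two injected fault classes from
// the Duraes and Madeira classification, applied to a fictitious handler.
class FaultInjectionExamples {

    // Original, fault-free version of a fictitious update handler.
    void handleUpdateOriginal(Flight flight) {
        flight.setStatus("UPDATED");
        notifySubscribers(flight);   // propagate the update over the bus
    }

    // MFC - Missing Function Call: the notification call has been removed,
    // so subscribers never receive the update (a typical cause of passive hangs).
    void handleUpdateMFC(Flight flight) {
        flight.setStatus("UPDATED");
        // notifySubscribers(flight);   <-- call removed by the injector
    }

    // WVAV - Wrong Value Assigned to Variable: a wrong constant is assigned,
    // which may silently corrupt the data exchanged between nodes.
    void handleUpdateWVAV(Flight flight) {
        flight.setStatus("CREATED");  // should have been "UPDATED"
        notifySubscribers(flight);
    }

    void notifySubscribers(Flight flight) { /* publish over the middleware */ }

    static class Flight {
        private String status;
        void setStatus(String status) { this.status = status; }
    }
}
```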
Injected faults resulted in different failures, which are reported in Figure 8. “Wrong” means that content-type failures occurred, which are not considered in this work. “OK” means instead that the injected fault did not result in a failure. The analysis of the hang detection framework focused on experiments in which hang failures, either active or passive, have been observed. Results reveal that software faults frequently result in hang failures; in particular, hangs account for the majority of failures in the SWIMBOX case study. Passive hangs have usually been the effect of message loss or corruption, leading to an “indefinite wait” condition. Experiments were divided into two sets of equal size, namely the training set and the test set; the former has been adopted to tune the detectors whereas the latter to evaluate their effectiveness.
Figure 7 Application interaction schema of the SWIMBOX case study.
Table 2 Source-code faults injected in the case study application.
<table>
<thead>
<tr>
<th>ODC type</th>
<th>Fault Nature</th>
<th>Fault Type</th>
<th>Case study (FDP)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Assignment</td>
<td>MISSING</td>
<td>MVIV - Missing Variable Initialization using a Value</td>
<td>8</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MVAV - Missing Variable Assignment using a Value</td>
<td>5</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MVAE - Missing Variable Assignment using an Expression</td>
<td>5</td>
</tr>
<tr>
<td></td>
<td>WRONG</td>
<td>WVAV - Wrong Value Assigned to Variable</td>
<td>26</td>
</tr>
<tr>
<td></td>
<td>EXTRANEOUS</td>
<td>EVAV - Extraneous Variable Assignment using another Variable</td>
<td>2</td>
</tr>
<tr>
<td>Checking</td>
<td>MISSING</td>
<td>MIA - Missing IF construct Around statement</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>WRONG</td>
<td>WLEC - Wrong logical expression used as branch condition</td>
<td>3</td>
</tr>
<tr>
<td>Interface</td>
<td>MISSING</td>
<td>MLPA - Missing small and Localized Part of the Algorithm</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>WRONG</td>
<td>WPFV - Wrong variable used in Parameter of Function Call</td>
<td>1</td>
</tr>
<tr>
<td>Algorithm</td>
<td>MISSING</td>
<td>MFC - Missing Function Call</td>
<td>13</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MIEB - Missing If construct plus statement plus Else Before statement</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MIFS - Missing IF construct plus Statement</td>
<td>1</td>
</tr>
<tr>
<td>Function</td>
<td>MISSING</td>
<td>MFCT - Missing Functionality</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>WRONG</td>
<td>WALL - Wrong Algorithm (Large modifications)</td>
<td>1</td>
</tr>
<tr>
<td>Total</td>
<td></td>
<td></td>
<td>72</td>
</tr>
</tbody>
</table>
5.3 Results
The goal of a detection system is to uncover as many failures as possible while at the same time keeping the false positive rate low. In order to evaluate our detection framework, we adopted the following quality metrics (a minimal computation sketch follows the list):
- **Coverage**: the conditional probability that, if there is a failure, it will be detected. It can be estimated from the ratio of the number of experiments in which the failure is detected to the number of experiments with a fault activated;
- **False positive rate**: the conditional probability that an alarm will be issued during fault free executions (i.e. application execution where no fault has been injected). It can be estimated from the ratio of false alarms (i.e. alarms triggered during correct execution) to the number of normal events collected.
- **Latency**: time interval between the fault activation (i.e. the time when the fault-injected code is executed) and detection (i.e. the time when an alarm is triggered);
- **Overhead**: the difference in the average execution time of application methods, by comparing executions with and without monitoring.
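The following is a minimal sketch of how these metrics could be computed from per-experiment records; the `Experiment` record and its fields are hypothetical and do not reflect the authors' tooling.

```java
// Minimal sketch (not the authors' tooling) of how the quality metrics could be
// computed from per-experiment records; field and class names are hypothetical.
import java.util.List;

class DetectionMetrics {

    record Experiment(boolean faultActivated, boolean detected,
                      long activationMs, long detectionMs) {}

    // Coverage: detected failures / experiments with an activated fault.
    static double coverage(List<Experiment> faulty) {
        long detected = faulty.stream().filter(Experiment::detected).count();
        return (double) detected / faulty.size();
    }

    // False positive rate: alarms raised in fault-free runs / normal events observed.
    static double falsePositiveRate(long falseAlarms, long normalEvents) {
        return (double) falseAlarms / normalEvents;
    }

    // Mean latency: average (detection time - fault activation time) over detected failures.
    static double meanLatencyMs(List<Experiment> faulty) {
        return faulty.stream().filter(Experiment::detected)
                .mapToLong(e -> e.detectionMs() - e.activationMs())
                .average().orElse(Double.NaN);
    }
}
```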
We first evaluated the performance of individual monitors for both the case studies, with respect to the metrics mentioned above. For each monitor, a sensitivity analysis has been made, to tune the $T_i$ parameter. We considered timeouts within the range $[0.1s, 4s]$. The best performance and corresponding parameters for all monitors are shown in Tables 3 and 4.
Different monitors achieve different performance in terms of coverage, since they focus on failures impacting different resources (e.g., a process may be indefinitely waiting for a mutex or for a message). Actually, monitors are unable to achieve full coverage while keeping the false positive rate and latency low (e.g., Mutex Timeout and Sockets). Monitors also provide different rates of false positives, which are remarkably high in some cases (e.g., UNIX semaphores hold timeout in the FDP case study). For this reason, it is important to filter false positives in order to include those monitors within the system (this is useful to increase the amount of covered faults). To take this problem into account, the combination rule (explained in Section 3.2) has been adopted to prevent false alarms.
It is worth noting that, even if the detection framework can be applied to any application (it relies on several simple monitors at O.S. level), the performance of single monitors varies with the specific case study. For example, in the SWIM-BOX case study, the monitors on Unix Semaphores seem not to be helpful because the application does not call Unix semaphore primitives; instead, these monitors revealed some failures in the FDP case study. Therefore, we cannot claim that there is an individual monitor able to effectively detect hang failures in all scenarios. However, the inclusion of several monitors in the framework provides the potential for detecting hang failures in different scenarios; this goal can be achieved by tuning the combination rule, which accounts for the effectiveness of the individual monitors.
Table 3 Coverage, false positive rate, and latency provided by the individual monitors in the FDP case study.
<table>
<thead>
<tr>
<th>Monitor</th>
<th>Ti</th>
<th>Coverage</th>
<th>False positive rate</th>
<th>Mean Latency (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>UNIX semaphores hold timeout</td>
<td>4 s</td>
<td>64.5%</td>
<td>36.08%</td>
<td>1965.65</td>
</tr>
<tr>
<td>UNIX semaphores wait timeout</td>
<td>2 s</td>
<td>67.7%</td>
<td>1.7%</td>
<td>521.18</td>
</tr>
<tr>
<td>Pthread mutexes hold timeout</td>
<td>4 s</td>
<td>64.5%</td>
<td>4.01%</td>
<td>469.51</td>
</tr>
<tr>
<td>Pthread mutexes wait timeout</td>
<td>-</td>
<td>0%</td>
<td>0%</td>
<td>-</td>
</tr>
<tr>
<td>Scheduling threshold</td>
<td>4 s</td>
<td>74.1%</td>
<td>3.25%</td>
<td>1912.22</td>
</tr>
<tr>
<td>Syscall error codes</td>
<td>1 s</td>
<td>45.1%</td>
<td>0.6%</td>
<td>768.97</td>
</tr>
<tr>
<td>Signals</td>
<td>1 s</td>
<td>45.1%</td>
<td>0%</td>
<td>816.57</td>
</tr>
<tr>
<td>Process/Thread exit</td>
<td>1 s</td>
<td>45.1%</td>
<td>0%</td>
<td>830.64</td>
</tr>
<tr>
<td>Process/Thread Creation</td>
<td>1 s</td>
<td>35.4%</td>
<td>0.05%</td>
<td>375.7</td>
</tr>
<tr>
<td>I/O throughput network input</td>
<td>3 s</td>
<td>77.3%</td>
<td>0.4%</td>
<td>4476.67</td>
</tr>
<tr>
<td>I/O throughput network output</td>
<td>3 s</td>
<td>77.3%</td>
<td>0.2%</td>
<td>2986.4</td>
</tr>
<tr>
<td>I/O throughput disk reads</td>
<td>3 s</td>
<td>70.9%</td>
<td>0.4%</td>
<td>4930</td>
</tr>
<tr>
<td>I/O throughput disk writes</td>
<td>2 s</td>
<td>67.6%</td>
<td>0.05%</td>
<td>6168.57</td>
</tr>
<tr>
<td>Sockets</td>
<td>4 s</td>
<td>100%</td>
<td>3.47%</td>
<td>469.58</td>
</tr>
</tbody>
</table>
To correlate the alarms of all the different monitors, we adopted the Bayesian combination rule explained in Section 3.2. The conditional probabilities have been estimated by counting the frequency of the alarms in faulty and fault-free experiments of the training set. Table 5 shows the performance achieved by the joint detector in the FDP and SWIMBOX case studies. The results seem to confirm the benefits of using a combined detector: it is able to achieve full coverage, while keeping both the false positive rate (comparable to the best rates in Tables 3 and 4) and the mean latency low.
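A plausible reading of such a combination rule is a naive-Bayes style detector, sketched below. This is only an illustration under the assumption of conditionally independent monitors, not the exact formulation of Section 3.2; the conditional probabilities would be the alarm frequencies observed in faulty and fault-free training runs.

```java
// Hedged sketch of a naive-Bayes style combination of monitor alarms; this is
// one plausible reading of the combination rule, not its exact formulation.
class BayesianCombiner {

    private final double priorFailure;       // P(failure)
    private final double[] pAlarmGivenFail;  // P(alarm_i | failure), per monitor
    private final double[] pAlarmGivenOk;    // P(alarm_i | no failure), per monitor

    BayesianCombiner(double priorFailure, double[] pAlarmGivenFail, double[] pAlarmGivenOk) {
        this.priorFailure = priorFailure;
        this.pAlarmGivenFail = pAlarmGivenFail;
        this.pAlarmGivenOk = pAlarmGivenOk;
    }

    // Posterior probability of a hang given the current alarm pattern,
    // assuming conditional independence of the monitors.
    double posterior(boolean[] alarms) {
        double likeFail = priorFailure;
        double likeOk = 1.0 - priorFailure;
        for (int i = 0; i < alarms.length; i++) {
            likeFail *= alarms[i] ? pAlarmGivenFail[i] : 1.0 - pAlarmGivenFail[i];
            likeOk   *= alarms[i] ? pAlarmGivenOk[i]   : 1.0 - pAlarmGivenOk[i];
        }
        return likeFail / (likeFail + likeOk);
    }

    // A global alarm is raised when the posterior exceeds a tunable threshold.
    boolean raiseGlobalAlarm(boolean[] alarms, double threshold) {
        return posterior(alarms) >= threshold;
    }
}
```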
Finally, the overhead of continuously monitoring DUs at the OS level has been measured for both the FDP and SWIM-BOX applications, by comparing the execution time with and without monitoring of representative methods provided by the case studies; moreover, we varied the request rate and the number of operations. Figures 9, 10 and 11 show the execution time observed with and without the detection framework. It should be noted that the overhead was lower than 10% in every case (in the SWIM-BOX case it is just over 2%), even during the most intensive workload periods.
Table 4 Coverage, false positive rate, and latency provided by the individual monitors in the SWIMBOX case study.
<table>
<thead>
<tr>
<th>Monitor</th>
<th>$T_i$</th>
<th>Coverage</th>
<th>False positive rate</th>
<th>Mean Latency (sec)</th>
</tr>
</thead>
<tbody>
<tr><td>UNIX semaphores wait timeout</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>UNIX semaphores hold timeout</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Pthread mutexes hold timeout</td><td>0.1 s</td><td>100%</td><td>9.7%</td><td>0.1</td></tr>
<tr><td>Pthread mutexes wait timeout</td><td>0.1 s</td><td>38%</td><td>0%</td><td>0.1</td></tr>
<tr><td>Scheduling threshold</td><td>2 s</td><td>100%</td><td>24.1%</td><td>2</td></tr>
<tr><td>Syscall error codes</td><td>0.1 s</td><td>12.5%</td><td>8.2%</td><td>15.41</td></tr>
<tr><td>Signals</td><td>0.1 s</td><td>0%</td><td>1.0%</td><td>76.65</td></tr>
<tr><td>Process/Thread exit</td><td>0.1 s</td><td>50%</td><td>2.9%</td><td>0.1</td></tr>
<tr><td>Process/Thread creation</td><td>0.1 s</td><td>50%</td><td>5.4%</td><td>0.53</td></tr>
<tr><td>I/O throughput network input</td><td>0.1 s</td><td>0%</td><td>1.6%</td><td>17.9</td></tr>
<tr><td>I/O throughput network output</td><td>0.1 s</td><td>75%</td><td>0.5%</td><td>10.7</td></tr>
<tr><td>I/O throughput disk reads</td><td>0.1 s</td><td>75%</td><td>1.2%</td><td>3.97</td></tr>
<tr><td>I/O throughput disk writes</td><td>0.1 s</td><td>75%</td><td>0.5%</td><td>7.72</td></tr>
<tr><td>Sockets</td><td>2 s</td><td>100%</td><td>23.3%</td><td>2</td></tr>
</tbody>
</table>
Table 5 Coverage, false positive rate, and latency provided by the joint detector.
<table>
<thead>
<tr>
<th></th>
<th>FDP</th>
<th>SWIMBOX</th>
</tr>
</thead>
<tbody>
<tr>
<td>Coverage</td>
<td>100%</td>
<td>100%</td>
</tr>
<tr>
<td>False positive rate</td>
<td>4.85%</td>
<td>5.4%</td>
</tr>
<tr>
<td>Mean Latency</td>
<td>100.26±135.76 ms</td>
<td>100±33.33 ms</td>
</tr>
</tbody>
</table>
6 Conclusions
This paper proposed a framework for detecting hang failures in complex systems. The framework is based on monitors inserted at the OS level, in order to enable failure detection in the presence of OTS and legacy components. The monitors collect events related to OS resources (e.g., I/O devices, synchronization primitives), which are then analyzed by alarm generators using an anomaly detection technique.
Figure 9 Overhead imposed to the execution of facade’s update_callback method.
Figure 10 Overhead imposed to the execution of facade’s request_return method.
Figure 11 Overhead imposed to the execution of SWIM-BOX’s main method.
The proposed approach was evaluated by an experimental campaign on two real-world case studies. The non-intrusiveness of the approach allowed us to deploy the detection framework even in the presence of OTS and legacy components. We noticed that the approach provides the best results when several monitors are combined. The combination of several monitors proved to be effective with respect to coverage by detecting all hang failures, thus confirming that monitoring at the
OS level is a good strategy for hang failure detection. Moreover, the approach is able to keep the number of false positives and the computational overhead due to on-line monitoring low (less than 6% and 10% in the worst case, respectively). Therefore, we believe that the proposed framework can effectively be deployed in real-world scenarios, in order to develop recovery strategies to be triggered when a failure is detected. The development of complex recovery strategies based on failure detection is thus a future research direction we aim to pursue.
References
The Privacy and Security Behaviors of Smartphone App Developers
Rebecca Balebako, Abigail Marsh, Jialiu Lin, Jason Hong, Lorrie Faith Cranor
Carnegie Mellon University
{balebako, acmarsh, jialiu, jasonhong, lorrie}@cmu.edu
Abstract—Smartphone app developers have to make many privacy-related decisions about what data to collect about end-users, and how that data is used. We explore how app developers make decisions about privacy and security. Additionally, we examine whether any privacy and security behaviors are related to characteristics of the app development companies. We conduct a series of interviews with 13 app developers to obtain rich qualitative information about privacy and security decision-making. We use an online survey of 228 app developers to quantify behaviors and test our hypotheses about the relationship between privacy and security behaviors and company characteristics. We find that smaller companies are less likely to demonstrate positive privacy and security behaviors. Additionally, although third-party tools for ads and analytics are pervasive, developers aren’t aware of the data collected by these tools. We suggest tools and opportunities to reduce the barriers for app developers to implement privacy and security best practices.
I. INTRODUCTION
Smartphones such as Android and iPhone offer an array of capabilities to users through a broad and extensive selection of apps. App developers can take advantage of the various sensors on the phones, such as GPS, accelerometer, or camera, to provide entertaining or useful services to the user. Mobile devices are typically always with the user and always on, allowing app developers unprecedented access to information about their users. However, along with these capabilities come great privacy and security risks. While research has looked at smartphone users’ perceptions of smartphone privacy and security, there has been a dearth of work about the perspectives of app developers.
Apps are developed by a broad array of companies and individuals. As the space for innovation is huge, and the barrier to entry is low, many small to medium size app development companies have been able to publish apps. Over 200,000 active developers contribute to the Apple store [1]. There is no training or certification process for app development designed to protect the client. Furthermore, app developers may feel pressure to develop quickly and be the first to market. However, in the race to innovate, privacy and security might not be the top priority for time- and resource-constrained app developers.
In this paper, we examine the ways app developers make decisions and the steps they take to protect security and privacy. Through in-depth interviews with 13 developers, we explored the trade-offs app developers make, how they get information when they need it, and barriers to implementing privacy and security best practices. Informed by the results of these interviews, we formulated several hypotheses about the privacy and security behaviors of app developers. We ran an online survey of 228 app developers to examine factors that predict good privacy and security behaviors, such as encrypting data and providing privacy policies. Our two-step research process is similar to that used in other work examining human subjects’ motivations [2].
We first begin by discussing previous work on smartphones and privacy. Then, we describe the interviews and the themes that emerged. In the following section, we describe the online survey and the results of testing specific hypotheses about privacy and security. We find that many developers lack awareness about privacy, and identify a number of barriers to improved privacy and security behaviors. These include the lack of resources in smaller companies and the difficulty of understanding third-party collection of user data. We identify where developers seek privacy and security advice, and point to intervention points and improved tools to help developers.
II. RELATED WORK
We first describe the smartphone app ecosystem, including major platforms and how apps are submitted. We also discuss users’ perceptions of smartphone privacy and security. We then describe public policy efforts to guide app developers when making privacy and security decisions and previous efforts to inform app developers about privacy and security.
A. App Development Ecosystem
The two most popular smartphone platforms are Apple’s iPhone and Google’s Android, with Blackberry and Microsoft holding a smaller market share. Apple and Google both have app markets that allow independent developers to distribute or sell apps, which users can download from their devices. This has allowed many independent developers to sell smartphone software directly to users, and has resulted in a huge variety of apps, with over 800,000 apps on each of the iOS and Android platforms as of October 2013 [3].
Previous work has found a relationship between data collection and advertising as a revenue model. The ad-based
revenue model, which often relies on targeted ads, is currently popular [4]. Apps may provide ads through third-party code, such as that provided by Flurry\(^1\) or Google AdSense.\(^2\) Targeted advertising requires collecting information about users, and therefore the targeted advertising revenue model may require more permissions and therefore be more privacy-invasive [5], [6]. Apps may also include third-party code for analytics, whose primary goal is to collect information about the users’ interactions with the app.
Some previous work has examined app developer security behaviors, such as that by Egele et al. [7] and Fahl et al. [8], which found that significant portions of apps have security failures or substandard implementations of security code. Throughout our work, we explore app developers’ perceptions of their work, including self-reported intentions.
B. User Concerns about Privacy and Security
A wealth of previous work has examined users’ perceptions and desires for smartphone privacy and security [9]–[13]. Users are often surprised by what permissions are requested by apps [14], the frequency of data collection, and the data recipients [15]. Furthermore, they often do not understand existing privacy notices, particularly in Android phones [16], [17]. While users are concerned about privacy and security, they are neither informed nor empowered to protect themselves. Therefore, the decisions made by app developers have great impact.
Previous work has examined users’ reactions to privacy policies. While privacy policies offer the illusion of notice to users, the reality is that the required time [18], reading level [19], and vague language [20] pose significant usability barriers. Our work indicates that app developers have similar troubles with privacy policies.
C. Public Policy and Tools
There have been several efforts to educate app developers about privacy and security. We reviewed five privacy guidelines for app developers: three were published by government agencies in Australia [21], Canada [22], and California [23]; one by an industry consortium in Europe [24]; and one by consumer privacy advocacy groups [25]. These guidelines typically offered clear and readable advice and avoided “legalese.” While they were lengthy (14-32 pages), some offered privacy and security checklists for developers. These guidelines often suggest that privacy policies can help developers think through their data collection practices in addition to notifying users.
There were five recommendations made by all of the above-cited guidelines, which we paraphrase as follows:
1) Someone must be responsible for privacy.
2) The app should have a clear and easy to find privacy policy.
3) The app should encrypt data during transmission.
4) The app should encrypt data it stores.
5) The app should limit data collection to what is needed.
These are the five main privacy and security behaviors we explored quantitatively in our online survey, and we describe them in greater detail in Section V.
Tools have been developed to help developers practice privacy and security behaviors. Many open-source databases, such as mySQL, allow encryption of stored data. Several free or low cost privacy policy generators\(^3\) exist that allow developers to create a policy by answering questions about their app’s behaviors. Our interviews examined whether developers were aware of or used these tools.
III. INTERVIEW METHOD
We conducted semi-structured interviews with 13 smartphone app developers in August and September of 2013. Our research goals were to understand what decisions app developers make that they consider privacy and security related, and to better understand what resources they were aware of to help them make those decisions.
Interviewees represented a variety of app types and company sizes, as shown in Table I. We asked “What type of service does your app provide,” and the choices were based on a taxonomy developed by Hyrynsalmi et al. [26]. Interviews lasted approximately one hour. The interviews were usually conducted remotely, with only one in-person interview. The audio was recorded for transcription, although participants had the option to refuse audio recording, as some said it made them uncomfortable or unlikely to be forthcoming. Interviewees received $20 as compensation. Our interviewees were overwhelmingly male, which is in-line with evidence that 94% of app developers are male [27].
<table>
<thead>
<tr>
<th>Participant Company ID</th>
<th>Size</th>
<th>Revenue Model</th>
<th>Service</th>
<th>State</th>
</tr>
</thead>
<tbody>
<tr>
<td>P1</td>
<td>10-30</td>
<td>Advertising, Free trial</td>
<td>Digital, Physical, Service, Contents</td>
<td>CA</td>
</tr>
<tr>
<td>P2</td>
<td>2-9</td>
<td>Advertising, Subscription</td>
<td>Digital, Service</td>
<td>CA</td>
</tr>
<tr>
<td>P3</td>
<td>2-9</td>
<td>Free trial, Other</td>
<td>Digital, Service</td>
<td>PA</td>
</tr>
<tr>
<td>P4</td>
<td>2-9</td>
<td>Pay-per-user</td>
<td>Physical, Service</td>
<td>WA</td>
</tr>
<tr>
<td>P5</td>
<td>2-9</td>
<td>Free trial</td>
<td>Digital</td>
<td>WA</td>
</tr>
<tr>
<td>P6</td>
<td>100+</td>
<td>Subscription</td>
<td>Other</td>
<td>PA</td>
</tr>
<tr>
<td>P7</td>
<td>1</td>
<td>None</td>
<td>Contents</td>
<td>TX</td>
</tr>
<tr>
<td>P8</td>
<td>10-30</td>
<td>Subscription</td>
<td>Digital, Service</td>
<td>CA</td>
</tr>
<tr>
<td>P9</td>
<td>2-9</td>
<td>Other</td>
<td>Service</td>
<td>CA</td>
</tr>
<tr>
<td>P10</td>
<td>1</td>
<td>None</td>
<td>Contents</td>
<td>PA</td>
</tr>
<tr>
<td>P11</td>
<td>2-9</td>
<td>Advertising, None</td>
<td>Physical, Personalized information</td>
<td>IL</td>
</tr>
<tr>
<td>P12</td>
<td>2-9</td>
<td>None</td>
<td>Personalized information</td>
<td>PA</td>
</tr>
<tr>
<td>P13</td>
<td>100+</td>
<td>None</td>
<td>Physical</td>
<td>MI</td>
</tr>
</tbody>
</table>
TABLE I. INTERVIEW PARTICIPANT MOBILE APP AND COMPANY DEMOGRAPHICS.
<table>
<thead>
<tr>
<th>Service</th>
<th>Examples</th>
</tr>
</thead>
<tbody>
<tr>
<td>Digital</td>
<td>games, MP3, Ebooks</td>
</tr>
<tr>
<td>Physical</td>
<td>selling books</td>
</tr>
<tr>
<td>Service</td>
<td>e-mail, banking, ticketing</td>
</tr>
<tr>
<td>Stock Information</td>
<td>stock prices</td>
</tr>
<tr>
<td>Contents</td>
<td>news, weather, entertainment</td>
</tr>
<tr>
<td>Personalized information</td>
<td>location information</td>
</tr>
</tbody>
</table>
TABLE II. SERVICE CATEGORIES BASED ON CLASSIFICATIONS BY HYRYNSALMI ET. AL. [26].
We recruited participants for interviews through a number of methods, including in-person recruiting at local meetups for smartphone app developers, online postings on sites such as Craigslist and Backpage, and through our social networks. Recruitment text said, “Participate in an interview to understand and improve smartphone app development.” Security and privacy were not mentioned in the recruitment to avoid participant bias. We asked interested parties to first fill out a screening survey to see if they qualified. We included two technical questions to determine whether the applicant had credible knowledge of app development. Valid applicants were invited by email to set up an interview time with one of two researchers. We contacted 20 developers, and 13 completed the interview. Five of the invited developers who did not complete the interview failed to respond to the email invitation, and two invitees were unable to find a suitable interview time.
\(^1\)www.flurry.com
\(^2\)www.google.com/adsense/
\(^3\)freeprivacypolicy.com, generateprivacypolicy.com, appprivacy.net
We did not collect identifying information, such as given name or company name, from participants unless it was volunteered. The interviewed developers ranged from 26 to 58 years old, and were from six states. Most worked in groups of 2-9 developers, but company size ranged from 1 to 100+ employees. Most interviewees were programmers, but one was a product manager. Several interviewees played multiple roles in their company, such as CEO, manager, or quality assurance. Their apps represented a variety of business models and services, and were at various stages of maturity. Some apps were not yet released to the app market, and others had already had several versions on the app market.
Questions included, “What, if any, online resources do you use to help make privacy and security decisions?” and “Have you ever decided not to collect certain information from users due to privacy concerns?” While we generally followed a script, we iterated on the script as each interview informed the next. Participants were asked what subjects we should have addressed, which revealed gaps in our questions and allowed us to improve the interviews.
IV. Interview Results
We describe the themes that emerged from our interviews. We discuss how app developers learn about privacy and security, whether they are aware of regulation and third-party data collection, and where they seek advice and resources for privacy and security decisions. We discuss developers’ perceptions of privacy policies and the trade-offs that app developers confront when making privacy and security decisions.
A. Education and Advice about Privacy and Security
Only a few of the developers we interviewed had formal training on privacy and security, typically received through corporate training or certification. Other developers rely on online research to find answers to specific questions. They are not accessing the guidelines published by government agencies, and instead are more likely to rely on their social networks, or specialists within their companies for information.
Many participants did not have formal privacy and security training. This suggests that many developers learn about security and privacy when they are confronted with these issues in the course of their work, at which point they may seek out further education. The lack of education on security and privacy available at the introductory levels was not lost on developers. P3 stated, “Most classes in computer science…there isn’t much of a focus on security. That could have a very big impact on how this stuff [implementation of secure code] happens.” On the other hand, some participants were confident that they were learning what they needed to know, or had a good background. P13 said “I have no formal training with privacy and security, but I feel that I am a journeyman in privacy knowledge, and pretty expert at security knowledge.” Similarly, P10 stated that his privacy and security learning, “is pretty much internal knowledge based on my experience in Web.”
Some participants discussed receiving formal training from a variety of sources. Certain businesses have specific training or certification requirements. For example, the Payment Card Industry (PCI) has security standards for handling credit card information. P11 states, “When you work at E-Commerce, they want you to be what they call PCI compliant.” In less regulated areas, participants reported education including certifications, previous work experience, and conferences such as the RSA Conference.
When asked about current and upcoming privacy and security regulations, participants showed little knowledge. While a few app developers brought up issues of the government requesting user data as a concern, none were aware of guidelines such as those discussed in Section II. The exceptions were apps that were marketed to children under 13 or used health information; these developers were aware of the privacy laws specifically related to their cases.
Participants were asked to discuss what resources they used when they needed advice on security- and privacy-related decisions. We received a variety of responses, which could be grouped into a number of common themes, including searching online, consulting friends, and seeking legal or specialist advice.
One of the most common responses was that developers simply searched online when they were looking for advice. As P10 put it, “I would Google it, to be honest, and I would look for articles from developers who have focused on building secure systems and kind of start my research there.” Developers consulted Hackernews, TreeHouse, Stack-Exchange, Lynda.com, Google, Facebook’s Terms of Use, and various smartphone developer forums to search for advice and examples from other developers.
Many developers also consulted their friends and social networks for advice: P7, a professional developer and part-time student, consulted a “Facebook group with... some 300 students,” many of whom do mobile development. Others consulted with fellow developers in person, like P5 who said, “I go to a couple meetups, especially if I’m looking for a technical element, or I want to get more into usability.” Participants also consulted with contacts who had experience in security or privacy: P10 stated, “I would also talk to my social network, if I knew anyone who has a background in security, about what they would recommend. I fortunately know one or two people.”
Lawyers were also consulted when they were available to developers. Some participants worked for companies with dedicated legal staff, such as P13 who stated, “I try to raise [privacy concerns] up to my management level and let them interact with whatever back-end legal that needs to happen.
I try to avoid directly communicating with the lawyers.” P12 makes it clear that privacy awareness was the legal division’s domain: “Ultimately the legal staff is responsible for making sure that we get the right and accurate information.” Generally, the interviews suggest that developers who had access to legal teams seemed to be less personally involved in the understanding of privacy and security regulations.
Some developers relied on terms of service documents provided by the app markets, with P4 stating, “I would expect that those guidelines fall into the realm of what is legally expected in the United States.” P8 depended on lawyers to understand regulations that affect app development, leading to less personal knowledge: “The only times we had to change anything, lawyers are on top of it. The reason I didn’t bother to know [is that I] depend on a lawyer.” As P3 observed, “Unfortunately, I very rarely have time to actually sift through [privacy and security regulations] and try to digest everything that’s going on, so I primarily rely on other people to let me know.”
B. Security Tools Used More than Privacy Tools
App developers seemed to use and rely on off-the-shelf or third-party tools for security, but did not have as many tools for privacy. The use of third-party tools could also introduce additional privacy concerns, as these tools may collect information that the app developer was unaware of.
Some developers rely on specific tools to help with security. These tools could include encryption built into the database, SSL code built into the platform, or authentication methods such as Facebook authentication. The tools were perceived as being more secure than hand-rolling implementations themselves. For example, P4 discussed the use of Facebook for authentication, “The expectation is that all the crafty security stuff has been handled by them, because I assumed they’d be smart enough to have that locked down, given that they probably hired security people.” However, participants noted that tool usage could be a double-edged sword. For example, participants who used Facebook for authentication had access to much of their users’ Facebook profile. Developers discussed weighing the advantages of collecting this information in case it might be useful against the privacy concerns of the user.
Very few interviewees used or knew about existing tools specifically for privacy, such as privacy policy generators, or security audits. One interviewee described his experience with a privacy policy generator as being “good enough” for the time, but not able to handle complex cases. Security audits were only considered by one interviewee; he handled health information and was working with businesses that required audits.
Participants also relied on third-party tools for other uses, such as analytics or various other features. Participants seemed generally unaware of the privacy and security practices employed by third-party utilities used in the development of their apps. Many developers had not personally read the terms of service, were unsure if their lawyers or legal departments had done so, and may have even forgotten the names of the ad networks or web traffic analysis companies they had used. P3 described the need for more digestible information, saying, “if either Facebook or Flurry had a privacy policy that was short and concise and condensed into real English rather than legalese, we definitely would have read it.”
C. Privacy Policies Are Not Considered Valuable
App developers find creating privacy policies to be a low priority or of low value, believing they only offer legal coverage and may turn off users.
Participants were particularly unconcerned about providing privacy policies. In one interview, P4 said, “I haven’t even read [our privacy policy]. I mean, it’s just legal stuff that’s required, so I just put in there.” Both P10 and P11 explicitly stated that they were not concerned because they worked for small companies, with P10 saying, “I have not heard of any startups or small companies getting into trouble for privacy policies,” while P11 noted, “Big companies want to [cover your ass], no one is going to go after a small guy like me. I don’t generate enough revenue, so if you do sue me you won’t get any money.” Other developers stated that they did not collect personally identifiable information, and therefore were less concerned about transparency.
Most participants said that while their privacy policies can be accessed on the app website, they were not directly accessible from within the app. In addition, the type of information collected from users would be difficult to find: P8 admits, “We don’t make it very obvious, exactly what data we’re collecting. I guess it’s kind of in the terms of use or privacy policy or something.” Paired with the difficulty of quickly accessing an app’s privacy policy, this suggests that users will find it tough to determine how their data is being collected and used by apps [11], [15], [17].
Furthermore, some developers were not convinced that users want privacy policies. P7 said users have “been groomed [into] thinking ... [data] is not private... Because it’s all anonymous.” They felt that as a result, data collected by their app would not surprise users or cause privacy concerns. P3 described the app developer and user relationship in stark terms: “we have consumers as customers. They either trust us or they don’t.” Some developers were aware of user concerns, noting, for example, the sensitivity of location data. As P8 put it: “it’s definitely important to the user to know that their information is safe with [the app].”
When participants put an effort toward alerting users about information collection, they reported lower user retention. Two interviews reported this concern. “We’ve gone through pretty great lengths to try to make sure that people know exactly what we’re collecting and why we’re collecting it,” describes P3, “So we end up losing out on some number of users because of warnings...they don’t take the time to actually read...so they just sort of see this warning and they’re like, oh, it must be something bad.”
D. Trade-offs Between Privacy, Security, and Resources
Balancing the need for good security and privacy practices with the cost of actually implementing those practices was a struggle for participants in our interviews. Many discussed privacy and security as being part of the development process but not a top priority, and concerns like monetizing the app or limited resources often trump the desire to follow rigorous privacy and security standards. Some manage to support privacy
and security, like P5, who states: “We are trying to balance where that line [between user concerns and the need to store information] gets drawn. I favor privacy.”
P10 tellingly struggles with this trade-off when discussing his company’s practice of borrowing from other privacy policies, saying, “I don’t see the time it would take to implement that over cutting and pasting someone else’s privacy policies. I don’t see the value being such that’s worth it.”
When questioned about whether their personal feelings towards privacy affected their development decisions, participants gave mixed responses. Some made strong statements, such as P10 who said, “I personally have very strong feelings about user privacy,” and P5 said that as a supporter of privacy rights, he made an effort to collect as little user information as necessary for his app. Even self-described privacy advocates and security experts grappled with implementing privacy and security protection with limited time and resources.
Others, while voicing personal concern about privacy, discussed the need to work with clients’ wishes. In reference to the privacy of user data in apps developed for his clients, P11 says, “What they want is what they want.” Another developer was very invested in privacy protection, but expressed concern that with the threat of his app being copied, advertising was a safer bet for earning revenue than pay-to-download.
This suggests that developers have to weigh their personal desire to respect privacy against the ability to monetize or sell their app, and in particular, developers who work as part of a larger company or who work on commission may be less free to implement good privacy practices than self-employed developers and those who work for small companies. Furthermore, developers consistently discussed the constraints such as time, effort, and money it would take to implement best privacy and security practices.
The cost of collecting and storing data is perceived as minimal. At the same time, interviewees indicated that the cost of developing the code or policies to delete old data or accounts is not prioritized. This is not a question of tools; many of the same tools that allow users to encrypt data also allow them to delete data. Instead, this is a pervasive belief that data may become useful in the future and is therefore worth the resources required to collect and store.
V. Survey Method
Based on the interview results, we formed two hypotheses about privacy and security behaviors in app development. We hypothesized that company size would be related to privacy and security behaviors and that revenue models would also be related to privacy and security. In order to test these hypotheses quantitatively, we performed an online survey of 228 United States app developers and product managers. The survey gathered relevant demographics about the developers and their companies, and examined how developers make decisions about privacy and security.
Our survey was designed to take less than 30 minutes, and participants were compensated with a $5 Amazon gift card. Participants were recruited though several online forums, such as reddit subgroups, technical Facebook pages, and through six United States cities on backpage.com. To avoid biasing participation, it was not advertised as a security or privacy survey.
We included four knowledge and attention check questions in our surveys to help us eliminate non-developers and invalid responses. Due to our stringent requirements, we discarded 232 results that either did not have valid responses or were outside the United States. We were left with 228 valid responses from within the United States.
The privacy and security behaviors we examined are those that were recommended by all five of the privacy and security guidelines for app developers that are discussed in Section II. We describe the questions used to measure the privacy and security behaviors.
Security Behaviors
- SSL usage: By encrypting data going over the network, app developers can protect users from data snooping on insecure connections. We measured SSL usage with the question, “Do you use SSL when transmitting data?”
- Encrypting collected data: Encrypting data stored by the app, either in a database or on the phone, protects the user in the case of data breaches. We considered two variables: whether data was encrypted in the database, and whether it was encrypted when stored on the users’ phones (a brief illustrative sketch of both security behaviors follows).
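The sketch below illustrates, in plain Java, what the two surveyed security behaviors amount to in code: transmitting data over SSL/TLS and encrypting data before storing it. It is an illustrative example only, not taken from any surveyed app, and the key handling is deliberately simplified (a production app would typically keep the key in a platform keystore).

```java
// Illustrative sketch of the two surveyed security behaviors; key management
// is deliberately oversimplified for brevity.
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.net.ssl.HttpsURLConnection;

class AppSecurityBehaviors {

    // SSL usage: transmit data over HTTPS instead of plain HTTP.
    static int sendOverSsl(String httpsEndpoint) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL(httpsEndpoint).openConnection();
        conn.setRequestMethod("GET");
        return conn.getResponseCode();   // TLS is negotiated by the platform
    }

    // Encrypting collected data before storing it (AES-GCM via the JCA).
    static byte[] encryptForStorage(String plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        // The IV must be kept alongside the ciphertext; concatenation is one option.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    static SecretKey newAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }
}
```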
Privacy Behaviors
- Having a Chief Privacy Officer or equivalent: The existence of a CPO or equivalent indicates that the company is paying attention to privacy and has a specialist who is accountable for privacy. We measured this with the question, “Does your company have a Chief Privacy Officer (or equivalent)?”
- Providing a privacy policy: Privacy policies may indicate that the app company has considered their practices and is being transparent to the user. We measured this with the question, “How does your app inform users about what information it collects?” and the response “Privacy policy on website.”
We recognize that there are concerns with self-reported data [28]. We present the results as app developers’ own conceptions of their work, not as ground truth. Our findings may differ from those of previous research based on scans of the app stores. For example, our questions are on a per-developer basis, and developers may have created more than one app. Our results are for all platforms, and both free and paid apps. Furthermore, our survey was done in August 2013 and may be more recent than published papers’ results.
<table>
<thead>
<tr>
<th>Behavior</th>
<th>percent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Use SSL</td>
<td>83.8%</td>
</tr>
<tr>
<td>Encrypt data on phone</td>
<td>59.6%</td>
</tr>
<tr>
<td>Encrypt data in database</td>
<td>53.1%</td>
</tr>
<tr>
<td>Encrypt everything (all data collected)</td>
<td>57.0%</td>
</tr>
<tr>
<td>Revenue from advertising</td>
<td>48.2%</td>
</tr>
<tr>
<td>Have CPO or equivalent</td>
<td>78.1%</td>
</tr>
<tr>
<td>Privacy Policy on website</td>
<td>57.9%</td>
</tr>
</tbody>
</table>
TABLE III. PERCENTAGE OF RESPONDENTS WHO REPORTED VARIOUS PRIVACY AND SECURITY-RELATED BEHAVIORS. PARTICIPANTS COULD SELECT MULTIPLE OPTIONS.
VI. Survey Results
We first present the demographics of our survey participants, including their training in privacy and security, and where they look for advice when making privacy and security decisions. We then discuss the app companies they work for, including size, revenue model, and use of third-party ad and analytics tools. We also present some exploratory work on data collected by app developers, including data types that have not been measured in previous work. We then describe our hypotheses about security and privacy behaviors; that they are correlated to each other, correlated to company size and correlated to revenue, and report our results.
A. Participant Demographics
Most of our respondents were programmers, product managers, or quality assurance testers. The average age was 30 years old (range: 18-50, SD = 5.6). We did not collect additional personal demographics such as gender. Participants selected their professional role from a multi-select list. Our recruitment stated specifically that we were looking for app developers or product managers, so it was not surprising that 78% of participants were programmers or software engineers, product managers, or both. Other participants were testers, managers, and CEOs. The role breakdowns are shown in Table IV.
We asked participants to describe their formal privacy and security training. Our results directly contradicted our interviews, in which few people claimed to have formal privacy or security courses. However, most interview participants worked for small companies. In the survey, only 7.3% claimed to have no formal privacy or security training. 62.9% of respondents claimed to have taken a privacy or security training course. Many also stated that they had received corporate training on privacy or security (62.5%) or attended a professional development seminar or workshop (43.5%). App developers in companies of size 31-100 were the most likely to receive corporate training, and companies with only one employee were the least likely to receive training.
In order to determine how app developers were making privacy and security decisions, we asked participants from whom they sought advice about privacy and security. This is useful for two reasons: first, it provides some insight into the level of expertise available to developers, and second, it may allow better framing of educational campaigns for app developers about privacy and security. Figure 1 shows from whom participants sought advice, based on their company size. The company size significantly affected whether participants sought advice from their social network, security or privacy experts in their company, or no one (Kruskal-Wallis test, p < .001). Participants from companies with under 9 employees were more likely to get advice from their social network, or to ask no one. Developers in larger companies (31-100 employees or 100+ employees) were more likely to ask a privacy or security specialist within their company.
B. App Company Characteristics
We discuss the categories of app companies represented by the survey participants. We do not claim that this is a proportionate sample of app development companies in the United States. Instead, we discuss the characteristics to put our other findings into context.
Equal numbers of participants were building or planning to build iOS (142) and Android (142) apps, with much smaller numbers for other platforms (38 for Windows, 10 for Blackberry and 6 for Palm, and 1 other). Over one quarter of participants (63) said they were developing for both Android and iOS. The survey participants represented different size companies and development groups. The percentages of developers in companies sized 1, 2-9, 10-30, 31-100, and 101 or more were 4.9%, 14.8%, 19.7%, 48.4%, and 12.1% respectively. However, the size of app development groups (employees working directly on the app) were typically between 2-30 people.
Participants were asked to categorize their app, using a list that was a combination of Apple iTunes store and Android
Play store categories. All categories were represented, with Games (17.7%), Entertainment (12.5%), and Finance (10.8%) appearing most frequently.
Apps may collect data for their own use, but interviewees also indicated data is collected for secondary uses such as advertising or analytics. Our interviews indicated that app developers were not always aware of the data collection of the third-party APIs or toolkits they were using. Table VII shows respondents’ knowledge of third-party data collection practices. Just over one-third of app developers claimed that they knew exactly what data is collected by third-party tools. These responses may represent more of the developers’ self-perception than reality. For example, of the developers who claimed they did not use third-party APIs, the majority answered separate questions about third-party tools differently: 70% said they used at least one ad company.
Overall, most app developers (87.4%) used at least one analytics company, with one in five using two or more analytics companies. Table VI shows which companies were used by app developers. Most apps also used an advertising company: 86.5% selected one or more advertising companies in use, with 1.78 ad companies used on average (SD=1.33). Interestingly, app developers were likely to use an ad company regardless of whether they relied on advertising for revenue. Of the app developers who did not select advertising as a revenue source, 82% still reported using at least one advertising company. We speculate that app developers may be including advertising APIs without earning money from ads; however, this merits further exploration.
In our survey, 41.7% of developers self-report that they do not use a third-party tool. It is important to understand app developers’ self-perception, as it will likely influence their need to consider third-party tools’ data collection when creating privacy policies or handling data. If developers are not aware of or fail to consider some libraries, they will not report on their behavior when making privacy decisions. Our 2012 scan of free Android apps indicates that 50.2% of free Android apps did not use ads, analytics, social networks, and payment APIs, which is higher than our survey findings suggest [6]. We find that 36.3% of developers reported using exactly one ad library.
C. Collection of Sensitive Data
As we did not discuss the collection of sensitive data in our interviews, we did not formulate specific hypotheses to test. Therefore, we show the results of some exploratory analysis. Table VIII shows which data the app collected or stored. Due to an error with the survey, 5 participants did not answer this question. They were removed from the analysis of this question.
We asked about data that may be privacy or security sensitive. Several data items corresponded to Android or iOS permissions and warnings (such as location), but other data can be collected without warning the user. This includes which apps are installed, or sensor data from accelerometers. The user would only know about this data collection if it were included in a complete privacy policy. Other data that don’t trigger permission notifications are credit card information and passwords; these are input by the user but require that the app developer handle them securely.
An average of 5.5 out of the 10 sensitive variables we asked about were collected. Based on our interviews, we
were not surprised that most apps did not collect or store users’ passwords or credit card information. Instead, apps that need this information may often rely on third-parties such as Facebook to do authentication or to handle credit card information. Unsurprisingly, apps collected information pertinent to their app, such as level attained in a game. It is startling that three quarters of app developers collected which other apps are installed on the user’s device. Apps may do this to explicitly collaborate with other apps or services, such as a todo list app accessing a calendar app. However, information about installed apps can have privacy implications, such as family or health status if related apps are installed.
<table>
<thead>
<tr>
<th>Data Type</th>
<th>Collect or Store (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Parameters specific to my app</td>
<td>83.9%</td>
</tr>
<tr>
<td>Which apps are installed</td>
<td>73.9%</td>
</tr>
<tr>
<td>Location</td>
<td>71.6%</td>
</tr>
<tr>
<td>Advertising ID</td>
<td>70.6%</td>
</tr>
<tr>
<td>Sensor information not location-related</td>
<td>63.0%</td>
</tr>
<tr>
<td>Phone ID</td>
<td>54.5%</td>
</tr>
<tr>
<td>Contacts</td>
<td>54.0%</td>
</tr>
<tr>
<td>Phone Number</td>
<td>44.1%</td>
</tr>
<tr>
<td>Password</td>
<td>35.5%</td>
</tr>
<tr>
<td>Credit card information</td>
<td>30.3%</td>
</tr>
</tbody>
</table>
**TABLE VIII. PERCENTAGES OF RESPONDENTS WHO COLLECTED OR STORED SELECTED DATA.**
The category of app also significantly affected the amount of data collected (ANOVA p=.007). Of the categories with 10 or more responses, finance used the most sensitive variables on average (µ=6.36) while entertainment collected the least (µ=4.73). Only 20% of respondents with a finance app had advertising revenue, while 57% of entertainment apps had advertising revenue.
Leontiadis et al. found that free apps required more permissions than pay-to-download apps [4]. Our findings support this. We find statistical differences in the amount of data collected by revenue (ANOVA, p<0.001), and find that the amount of data used by developers with paid-download revenue models only (µ=3.78, SD 3.03) is significantly different from the amount of data collected by advertising-only revenue models (µ=6.48, SD 2.40) (ANOVA multiple comparison, p=0.013 with Bonferroni correction).
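As a rough illustration of this analysis, the sketch below runs a one-way ANOVA across revenue-model groups and then Bonferroni-corrected pairwise comparisons. The group labels and counts are invented for illustration; they are not the survey data, and the paper's exact post-hoc procedure may differ.

```python
# Hypothetical illustration of the ANOVA described above; the data are invented,
# not the survey responses.
from itertools import combinations
from scipy import stats

# Number of sensitive data types collected, grouped by revenue model (fake data).
groups = {
    "paid_download_only": [2, 4, 3, 5, 1, 6, 4],
    "advertising_only":   [7, 6, 5, 8, 6, 7, 5],
    "in_app_purchase":    [5, 6, 4, 7, 5, 6, 3],
}

# One-way ANOVA across all revenue-model groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Pairwise comparisons with a Bonferroni-corrected significance level.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction over the number of comparisons
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha else "n.s."
    print(f"{a} vs {b}: p={p:.4f} ({verdict} at alpha={alpha:.4f})")
```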
**D. Hypothesis Testing and Results**
In this section, we describe our hypotheses about privacy and security behaviors and the results of testing each hypothesis. Table III summarizes the percentages of respondents who claimed to engage in each privacy and security behavior. Table VIII summarizes the number of respondents who collected or stored the data types we examined.
1) **Hypotheses 1: Behaviors are correlated:** First, we hypothesized that security and privacy behaviors would be positively correlated, and that there would be developers who were generally privacy and security concerned and demonstrated all or most behaviors, while others would not display any such behaviors.
**H1:** Security and privacy protective behaviors are correlated.
Our hypothesis is mostly supported; all behaviors are significantly and positively correlated at the p=.05 level except Privacy Policy and SSL, as shown in Table IX.
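For two yes/no behaviors, the phi coefficient reported in Table IX is the Pearson correlation of the two binary variables and can be computed directly from the 2x2 contingency table. A minimal sketch with made-up responses (not the survey data):

```python
# Minimal sketch of the phi coefficient between two binary behaviors (made-up data).
import math

def phi_coefficient(x, y):
    """Phi for two 0/1 vectors, computed from the 2x2 contingency table."""
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

has_cpo      = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # hypothetical yes/no answers
encrypts_all = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]
print(f"phi = {phi_coefficient(has_cpo, encrypts_all):.3f}")
```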
For hypotheses H2 and H3 we run eight χ² tests separately. We conservatively correct the standard p-value of .05 with a Bonferroni correction, and use a significance level of 0.006 (0.05 divided by the number of tests).
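A sketch of one such test: a χ² test of independence on a company-size-by-behavior contingency table, compared against the Bonferroni-corrected threshold. The counts are hypothetical, not the survey data.

```python
# Sketch of one chi-squared test with the Bonferroni-corrected threshold.
# The contingency table below is invented, not the survey data.
from scipy.stats import chi2_contingency

# Rows: company-size buckets; columns: [has a CPO, no CPO] (hypothetical counts).
table = [
    [ 0, 20],  # 1 employee
    [10,  7],  # 2-9 employees
    [43,  5],  # 10-30 employees
    [25,  2],  # 31-100 employees
]

chi2, p, dof, expected = chi2_contingency(table)
alpha = 0.05 / 8  # eight tests overall, corrected level of roughly 0.006
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}, significant={p < alpha}")
```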
2) **Hypotheses 2: Company size:** We are aware that startups or app development companies with small teams and little investment may not have the resources, in terms of time or money, to invest in privacy and security. Therefore, we suspected that small companies may be less likely to engage in the privacy and security behaviors that require additional employees (a CPO), additional time (creating a privacy policy), or additional resources. For example, encryption may require more equipment or software. Using SSL may require additional developer time or experience.
**H2a:** Company size correlates to having a CPO.
**H2b:** Company size correlates to having a privacy policy.
**H2c:** Company size correlates with encrypting everything.
**H2d:** Company size correlates with using SSL.
We found that the size of a company does help determine whether they have a CPO (χ² test p<0.001), whether they have a privacy policy (χ² tests p=.002), and whether they encrypt everything (χ² tests p<0.001). However, the company size was not correlated with SSL using the conservative corrected significance level (χ² tests p=.009). As one respondent wrote in an open-text field, “We are a small, two-person shop. Although we don’t have CxO positions, we do understand the need to protect the privacy of our users. Our app embeds a privacy statement in an easily identifiable location.”
The percentage of companies engaging in the above privacy and security behaviors grows as the company size grows, up to the 31-100 employee companies. For example, all of the respondents with a company size of 1 said they did not have a CPO or equivalent, while only 58.8% of respondents in companies of 2-9 employees had someone responsible for privacy, compared to 89.6% and 92.6% of companies of size 10-30 and 31-100, respectively. This is shown visually in Figure 2, and the pattern is similar for the other privacy and security behaviors. However, this trend of improved privacy and security practices does not hold for company sizes greater than 100. We speculate that app developers in larger companies may not be as aware of all their company's practices.
3) **Hypotheses 3: Revenue model:** We were curious about the impact of the revenue model on privacy and security behaviors, and hypothesized that certain revenue models, such as advertising, were less likely to show privacy and security behaviors.
<table>
<thead>
<tr>
<th></th>
<th>CPO</th>
<th>Encrypt Everything</th>
<th>Privacy Policy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Encrypt Everything</td>
<td>φ=.272* (p<.001)</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Privacy Policy</td>
<td>φ=.159* (p=.018)</td>
<td>φ=.228* (p=.001)</td>
<td></td>
</tr>
<tr>
<td>SSL</td>
<td>φ=.257* (p<.001)</td>
<td>φ=.217* (p=.005)</td>
<td>n.s.</td>
</tr>
</tbody>
</table>
**TABLE IX. CORRELATIONS BETWEEN THE SECURITY AND PRIVACY BEHAVIORS. THE PHI COEFFICIENTS (Φ) INDICATE THAT THE BEHAVIORS ARE GENERALLY POSITIVELY BUT WEAKLY CORRELATED. * INDICATES SIGNIFICANT CORRELATION AT THE P=.05 LEVEL.**
**H3a:** Revenue model is correlated to having a CPO.
**H3b:** Revenue model is correlated to having a privacy policy.
**H3c:** Revenue model is correlated with encrypting everything.
**H3d:** Revenue model is correlated with using SSL.
Since 47 unique combinations of revenue models were reported, we examine the most common models and combinations, which are shown in Table V. All other combinations (with fewer than 10 responses) were combined into an "Other" category. At our conservatively corrected p-value, none of the results were significant (CPO p=.035, encrypt p=.029, SSL p=.037, privacy policy p=.019). However, we note a few interesting cases. An advertising revenue model indicates low adoption of privacy policies, but is average on the other measures. It is disconcerting that in-app purchases, which might transmit payment information, have the lowest adoption of SSL. However, we note that all 17 of the developers who used every model except subscription also claimed to implement all the privacy- and security-sensitive behaviors. The only common feature we found across all 17 of these developers is that they all received corporate privacy and security training as well as college classes.
VII. DISCUSSION
Our results indicate that many developers lack awareness of privacy measures and make decisions in an ad hoc manner. While most developers claimed to be using SSL and to have a CPO or equivalent, only slightly over half of our survey participants claimed to employ the other recommended privacy and security measures, such as encrypting everything or having a privacy policy on their website. Our interview respondents discussed encrypting some, but not all, of their data, and having little belief that privacy policies were useful. The survey respondents indicated a high level of data collection. Roughly three-quarters of developers collected information about the other apps installed on the user's device. Some interviewees discussed collecting data that they didn't need, but thought might be useful in the future.
While several government agencies, non-profit groups, and industry groups have developed guidelines for app developers on suggested privacy and security practices, the app developers we interviewed were not aware of and had not read these documents. This suggests that public policy around privacy and security is not reaching developers. In this section, we discuss hurdles to better privacy and security behaviors, and provide recommendations to encourage privacy-sensitive behaviors.
A. Third-Party Tools Should be More Transparent about Data Collection
Most app developers in our survey used third-party advertising or analytics services. Previous work shows that these libraries have permission to collect sensitive data [5], [6]. The developers we interviewed discussed their difficulties reading the policies and terms of use for the third-party APIs or services that they integrated into their apps. Popular ad and analytics companies should provide information about their data collection to app developers in an easy-to-read format. They should explain both what they collect and the purpose of that collection. This information could be provided in two places: as part of a quick-start guide, so developers can review it before integrating the code, and after the developer has configured the third-party settings, so they can review how their choices impact their users' privacy and write their privacy policies.
Unfortunately, third-party tools may collect information about the smartphone user while having little or no relationship with the user, and thus have little incentive to protect user privacy. This may indicate a need for legislation to incentivize third-parties to provide clear information about their data collection to app developers and the end users.
In addition, survey participants demonstrated some confusion about whether they were using third-party tools, providing contradictory responses in different questions. This indicates that tools that automatically detect and describe third-party data collection may be helpful for developers.
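One plausible shape for such a tool is a scan of an app's bundled package names against known advertising and analytics SDK namespaces. The sketch below is illustrative only: the prefix list is an example rather than a vetted catalogue, and a real tool would operate on the app's bytecode or manifest.

```python
# Illustrative sketch of detecting bundled third-party SDKs by package prefix.
# The prefix list is an example only, not a complete or authoritative catalogue.
KNOWN_SDK_PREFIXES = {
    "com.google.ads": "advertising",
    "com.flurry":     "analytics",
    "com.facebook":   "social / advertising",
}

def detect_third_party_sdks(app_packages):
    """Return {sdk_prefix: category} for every known SDK found in the app."""
    found = {}
    for pkg in app_packages:
        for prefix, category in KNOWN_SDK_PREFIXES.items():
            if pkg.startswith(prefix):
                found[prefix] = category
    return found

# Hypothetical package list extracted from an app.
packages = ["com.example.todo", "com.flurry.android", "com.google.ads.AdView"]
print(detect_third_party_sdks(packages))
```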
B. With a Little Help From my Friends
App developers often mentioned searching for resources about security and privacy on the web. In addition, app developers in small companies rely on their friends and social networks for advice about privacy and security, while developers in larger companies may have experts within their company or legal counsel to turn to. Security and privacy advocates may find traction by intervening at a social level, such as by meeting with developers to discuss and improve their practices.
C. Legalese Hinders Reading and Writing of Privacy Policies
Less than half of small companies (fewer than 10 employees) informed their users about data collection through privacy policies on their websites. Several of our app developer interviewees had never read their own policy, and many others did not view it as a tool to communicate with users. Privacy policies were perceived as a tool that might protect against lawsuits, though interviewees believed that small companies would not be targeted for lawsuits. This suggests that there is a need to emphasize that privacy policies need not be legalese, and can be an opportunity to communicate with users. Furthermore, some interviewees expressed concern that full disclosure scares users away. This suggests that required,
standardized privacy notices might be a benefit for privacy-protective apps. Efforts of the government to develop such notices may provide guidance [30]. If all apps are required to provide notices, those who have good practices would not be punished for transparency.
Several interviewees believed that complying with the app stores’ policies would provide sufficient legal protection, or that the app store would be monitoring them for compliance. This suggests that platform developers and market controllers are well-placed to encourage privacy and security behaviors. Platforms can highlight best practice notices and checklists, making them clear and accessible to app developers.
D. Small Companies Need Privacy and Security Tools
The smaller companies were the least likely to engage in privacy and security behaviors. Companies with fewer resources are less able to devote time or money to privacy and security issues. Therefore, small companies may need additional help or resources so they can overcome the hurdles to developing privacy policies and encrypting data. We suggest that privacy and security tools should be specifically targeted at small development companies with few resources. OS developers or open-source developers could focus on providing free tools to developers. These tools should be usable and not require legal expertise. In addition, companies of all sizes could be nudged to minimize data collection with tools that help developers decide what data to collect and when to delete it.
VIII. Conclusion
While there was general awareness of the need for security measures, such as encrypting information or using SSL, there was a lack of understanding around privacy best practices. Small companies rely on social networks and search engines for privacy and security advice. Privacy and security tools for developers must be quick, simple, and cheap, so that they can be used by time- and resource-constrained small companies. Platforms should make sure that it is easy to implement good security practices. App stores should provide privacy and security checklists, as they are uniquely positioned to reach developers. Third-party tools should make their data collection clear to developers and end users. More work is needed to make developing clear privacy policies a simple and routine part of app development.
Acknowledgements
This research was funded in part by Google, NQ, John and Claire Bertucci Fellowship, and NSF grants DGE0903659, CNS1012763, and CNS1228813.
References
Structured and Object-Oriented Methodologies: A Comparative Analysis Based on a Case Study
Gary Brian Warren
Recommended Citation
https://louis.uah.edu/honors-capstones/647
THE UNIVERSITY OF ALABAMA IN HUNTSVILLE
Honors Program
HONORS SENIOR PROJECT APPROVAL FORM
(To be submitted by the student to the Honors Program with a copy of the Honors Project suitable for binding. All signatures must be obtained.)
Name of Candidate: Gary Brian Warren
Department: Computer Science
Degree: Bachelor of Science in Computer Science
Full Title of Project: Structured and Object-Oriented Software Methodologies: A Comparative Analysis Based Upon a Case Study
Approved by:
[Signatures and dates]
Honors Program Director for Honors Council
Date
Structured and Object-Oriented Methodologies: A Comparative Analysis Based on a Case Study
by
Gary Warren
CS 495
Dr. Carl Davis
September 2, 1993
ABSTRACT
Since the Garmisch Conference of 1968, software engineering has become the standard way to try to deliver larger software packages both on budget and within time constraints. Software engineering involves two fundamental paradigms: life cycle, or organizational technique, and methodology, or the technique employed to model the real-world system under consideration. The two most critical components of the life cycle are analysis and design. Two popular methodologies for employing these techniques are Structured Analysis and Design and the newer Object-Oriented Analysis and Design. Structured Methodology focuses on processing while Object-Orientation focuses on data. In an attempt to determine which was the better methodology, a system was analyzed and designed under both of the methodologies according to the specific methods of an author who had switched from the structured to the object-oriented method. Comparative results indicate that both had significant advantages and weaknesses, but that Object-Orientation had intractable flaws, mainly as a result of its relative newness and immaturity with respect to definition, meaning, and usage. Though using structured methods is recommended for now, the advantages of object-orientation will make it the better model once it has reached maturity.
# TABLE OF CONTENTS

List of Figures
List of Symbols
I. Prologue
II. Introduction
III. Methodologies
IV. Results
V. Conclusions
VI. Bibliography
VII. Appendix A
    A. Structured Analysis
    B. Structured Design
VIII. Appendix B
    A. Object-Oriented Analysis and Design Notations
    B. Object-Oriented Analysis (PDC)
    C. Object-Oriented Design (HIC)
LIST OF FIGURES
Figure 1. The Classic Life Cycle
Figure 2. The Prototype Life Cycle
Figure 3. The Costs of Errors in the Life Cycle
Figure 4. A Context Diagram
Figure 5. An Entity-Relationship Diagram
Figure 6. A Balanced Data Flow Diagram Fragment
Figure 7. A Transform Center
Figure 8. A Transaction Center
Figure 9. A Transaction-Centered System
Figure 10. Objective of Object-Oriented Analysis and Design
Figure 11. An Object-Oriented Analysis of a Vehicle Registration System
Figure 12. The Multi-Component, Multi-Layer Object-Oriented Analysis and Design Model
Figure 13. Layer Models of Object-Oriented Analysis/Object-Oriented Design
Figure 14. Class-&-Object Specification Template
Figure 15. Object State Diagram Notation
Figure 16. Service Chart Notations
LIST OF SYMBOLS
AD: analysis and design
DD: data dictionary
DFD: data flow diagram
DMC: data management component
ERD: entity-relationship diagram
HIC: human interaction component
OO: object-oriented, object orientation
OOA: object-oriented analysis
OOAD: object-oriented analysis and design
OOD: object-oriented design
OOPL: object-oriented programming language
PDC: problem domain component
SA: structured analysis
SAD: structured analysis and design
SD: structured design
TMC: task management component
Prologue
On November 9, 1979, civilization as we know it came unusually close to its end. That day, the Strategic Air Command had an alert that scrambled our nuclear forces. The reason our forces scrambled was a mistake. It was not a political or military decision, though, but a computer mistake. The WWMCCS computer mistook a simulated attack for a real one because of a software fault, and it signaled our forces that the Soviet Union had launched missiles aimed at the United States. The movie War Games, which appeared five years later, though largely over-dramatized, did show with chilling effect that computer errors, even one error on one computer, can be detrimental to us all [17;4].
This incident should be proof enough that computers and their software affect some of the most fundamentally important aspects of our lives. Yet life-threatening situations involving national defense are only a small aspect of our lives into which computers have been introduced. Computers have become invaluable to high-tech medicine, avionics and shipboard navigation, banking, word processing--almost all types of business, in fact; we continue to rely on them for technical superiority in the competitive world in which we live. With such a tremendous amount of reliance on computers, it has become increasingly necessary that they operate more reliably, from both a safety and an economic standpoint.
Two factors are involved in computer performance:
hardware and software. The hardware is the physical, functioning electronic circuitry; it is the computer chips and bus lines. The instruction set that drives the computer is software. By the time it reaches the market, hardware can be considered fairly reliable because of its rigorous development and precise automated manufacturing techniques. Software, on the other hand, is delivered by humans. Therefore it is only as accurate as the skills of the programmer or programmer team that creates it, the methods used to develop it, and the tools available to help the process.
It is not easy to program computers. As John Guttag of MIT explains,
Anyone with substantial programming experience knows that building software always seems harder than it ought to be. It takes longer than expected, the software's functionality and performance are not as wonderful as hoped, and the software is not particularly malleable or easy to maintain... In its purest form, it is the systematic mastery of complexity [9;9].
With such a task before them, it is not surprising that early computer programmers floundered when it came to producing software, especially on the increasingly complex systems being demanded. In fact, by the 1970's, "the computer world had become famous for failures: dangerous system errors, late deliveries, spectacular budget overruns, and abandoned projects" [6;115].
A key turning point in the software development process came in 1968. That year, NATO held the Garmisch Conference, also called the NATO Software Engineering Conference, a meeting of government officials and some of the most prominent people of the international software market. Here, the three major problems of large software development were noted: (1) projects were greatly exceeding their deadlines, (2) costs were over budget, and (3) systems were not meeting their expected performance [6;102]. Among its many recommendations on what to do about what by now was called the "software crisis" or "software bottleneck," the conference endorsed the idea of software engineering, coined by a NATO study group the previous year. Software engineering implied that the process of producing software was comparable to other engineering processes [17;5]. The important feature introduced was that programming as purely an art gave way to a methodology with a corresponding discipline [6;127]. So software engineering got its start and has since become critical for producing large software packages.
**Introduction**
A good definition for software engineering today is "the technological and managerial process concerned with the systematic production and maintenance of software products that are developed and modified on time and within cost estimates." Software engineering as a process can best be understood in terms of two paradigms: the life cycle paradigm and the methodology paradigm.
The life cycle paradigm is the larger example into which the methodology paradigm fits. Life cycle really is the organizational technique used to obtain a finished product. It is the way that available methodology and tools are put into use, hopefully to achieve the most efficient construction for a specified product. The classic life cycle is the most common. It is often known as the "waterfall" approach because of the way it looks (Figure 1). In this paradigm, steps are taken in sequential order to get to the final product. Another popular life cycle model is called prototyping (Figure 2). Prototyping is the more rapid method of the two. With it, programmers quickly construct a minimally functional shell of a program that can perform the most required behaviors of the system under consideration, to test whether they are meeting the requirements in a fashion suitable to the customer. When the customer's requirements are reasonably met, they then complete functionality of the software package [16;11-18]. Other models exist, but these two are the ones that are important to us.
Figure 1: The Classic Life Cycle [16;13].
Figure 2: The Prototype Life Cycle [16;16].
Two particularly important steps within the life cycle are analysis and design, which are the domain of the methodology paradigm. They are the planning stages that occur before implementation (programming) of a system, and overall system quality depends on their effective and efficient completion. Averting mistakes in these phases greatly reduces error correction time for software, and hence project costs (Figure 3). Several paradigms for accomplishing these steps exist, but two are of particular importance; these methodologies are Structured Analysis and Design (SAD), and Object-Oriented Analysis and Design (OOAD).
It is the purpose of this project to examine a given system with both of these AD methods to see which yields a "better" solution. The system under consideration is the RECLAIM system (See Appendix A). I completed a SAD of this system in the spring term of 1993 as part of a group project. Now, as of the summer of 1993, I have completed an OOAD of the system. Following, I shall give an explanation of the methodologies involved, and compare their strengths and weaknesses, in an attempt to determine which of these two major analysis and design strategies yields a better result.
Figure 3: The costs of errors in the life cycle [10;16].
Methodologies
Of the two methods, Structured Analysis and Design is the older, established AD technique. It is a functional method. "In the functional view a software system consists of data items that represent some information, and a set of procedures that manipulate the data" [10,159]. SAD requires that functions, i.e., processing, are the active elements of the software, whereas data elements are in themselves totally passive "containers of information" [13;136].
The specific SAD approach applied in this project was documented by Edward Yourdon in his books Modern Structured Analysis and Structured Design, which build on Ward and Mellor's methods from Structured Development for Real-Time Systems. This AD technique is a synthesis of the more useful techniques surrounding the data flow diagram (DFD) description method, but more on that later.
The primary goal in SA is to build an Essential Model, a model of what the system is to do to satisfy the user's requirements, disregarding how it will be done. It has two components, the Environmental Model and the Behavioral Model. The Environmental Model defines the system against the rest of the world, with consideration for the interfaces, or boundaries, between them. It includes a statement of purpose, a context diagram (a special DFD that shows data flow between non-system components and the system), an event list that describes actions that the system is responsible for responding to, a data dictionary (DD), and an entity-relationship diagram (ERD), which is a network model for describing data at a high level of abstraction.
The Behavioral Model describes the system itself, or what it must do functionally, with Data Flow Diagrams (DFDs). In DFDs, named bubbles (circles) represent processes, and parallel lines with a name between them represent stores. Data flows, or unidirectional named lines with arrows, represent the transfer of data between processes and stores. The first set of DFDs produced describes the bottom level of the system; they correspond to the event list.
Figure 4: A context diagram. The entire system is represented by the bubble [20,339].
Once these models are obtained, they must be reconciled. First, the DFDs and dictionary are put in balance. The DFDs are upward leveled and the context diagram is lower leveled. Leveling means either combining bubbles (processes) and their data flows into one bubble with data flows into and out of it (upward), or taking a bubble and its flows and breaking it down into multiple bubbles and flows (downward). The idea is to do this from the top (context diagram) and bottom (lowest level DFDs), and meet, balancing the successive levels of diagrams. In total, a view of the system is achieved in which deeper and deeper levels of complex behavior can be viewed by looking deeper and deeper into the diagram levels (see figure 6). A last step is to make sure that the DFDs, ERD and DD are complete and balance against each other. Since the system should be the same from any perspective, items in one diagram should not be missing from another; for example a data store in a DFD must appear in the data
dictionary with its definition in order for the models to agree.
Figure 6: A balanced DFD fragment [20;170]
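As a toy illustration of that balancing rule, the sketch below represents a DFD fragment and a data dictionary as plain data structures and flags any store or flow that lacks a dictionary definition. The element names are invented and do not come from the RECLAIM models.

```python
# Toy sketch of balancing a DFD fragment against the data dictionary.
# All element names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataFlowDiagram:
    processes: set = field(default_factory=set)  # named bubbles
    stores: set = field(default_factory=set)     # named data stores
    flows: set = field(default_factory=set)      # named data flows

dfd = DataFlowDiagram(
    processes={"Generate Maintenance Schedule"},
    stores={"Cell Database"},
    flows={"maintenance-requirement", "maintenance-schedule"},
)

data_dictionary = {
    "Cell Database": "collection of park cells and their characteristics",
    "maintenance-requirement": "task plus frequency for a cell or object",
}

# Every store and flow appearing in the DFD must be defined in the dictionary.
missing = (dfd.stores | dfd.flows) - data_dictionary.keys()
print("Unbalanced items:", missing or "none")
```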
Structured Design takes the DFDs of Structured Analysis and turns them into modules (represented by boxes) with top-down hierarchy. The DFDs are analyzed at the bottom level. They are divided into transform and transaction designs. In transform analysis, the bubbles are divided into afferents, or input bubbles, transform centers, or data processors, and efferents, or output bubbles (Figure 7). The transforms are factored so that the afferent and efferent bubbles become child modules of the transform center; thus top-down hierarchy is created. Transaction design is similar except that a transaction center has few inputs and based on those inputs it calls any one of a number of processor flows to handle the signified action (Figure 8). Here again hierarchy is created by the transaction centers that become the parents in the structure.
Figure 7: A Transform Center [21;193].
Figure 8: A Transaction Center [21;226].
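A minimal sketch of the factoring step, under the simplifying assumption that the bubbles have already been classified: the transform center becomes the boss module, and the afferent and efferent bubbles become its subordinate modules. The bubble names are illustrative, not taken from the RECLAIM structure charts.

```python
# Minimal sketch of transform-analysis factoring (illustrative bubble names only).
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    children: list = field(default_factory=list)

def factor_transform(afferents, transform_center, efferents):
    """Promote the transform center to a boss module; inputs and outputs become children."""
    boss = Module(transform_center)
    boss.children = [Module(n) for n in afferents] + [Module(n) for n in efferents]
    return boss

chart = factor_transform(
    afferents=["Get Cell Data", "Get Maintenance Requirements"],
    transform_center="Compute Maintenance Schedule",
    efferents=["Write Schedule Report"],
)
print(chart.name, "->", [m.name for m in chart.children])
```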
Object-orientation has reached maturity. Virtually all areas of software science and technology have now recognized its significance and effectiveness. Even the COBOL community, one of the most conservative software communities, is now engaged in designing object-oriented COBOL languages [13;v].
With such avid enthusiasm, one would think that object-orientation is the cure for all software projects.
As one might guess, object-orientation is fundamentally different both in how it views the system to be modeled and the procedures of the modeling process. As far as viewing the system, OOAD tries to provide a more concrete attachment to the real world by using "Objects" as the primary building blocks of the model. These Objects provide behaviors that manipulate data that they contain. In this way, real world objects should more easily be mapped onto the Objects of the model [13;136].
The OO model provides both procedural and data abstraction through association of procedures and data exclusively with an object. These differences might not strike the casual observer as great, but they represent a different way of thinking about engineering software. Ed Yourdon, who put in writing the SAD methodologies now widely practiced, has completely converted to the OO philosophy for not only AD but implementation, or programming, as well. The techniques he and Peter Coad developed in the books Object Oriented Analysis and Object Oriented Design are considered
the premier techniques of the practice. This methodology was used exclusively in the OOAD of RECLAIM presented here.
Figure 10: Object-oriented modeling attempts to capture reality as closely as possible [4;32].
The first step in OOA is to look at the problem domain, or system under consideration, and identify what are called Class-&-Objects. In the sense that the analyst is trying to "match the technical representation of a system more closely to the conceptual view of the real world," Class-&-Objects in the system are approximately the items in the real world that anyone would call objects. There is an extensive list of procedures by which to derive Class-&-Objects, some of which might be less obvious. These include: structures, other systems, devices, things or events remembered, roles played, operational procedures, and sites. Rules are provided by which to challenge the Class-&-Object candidates.
Once Class-&-Objects have been found, structures are identified. Mankind has developed two basic structures over history, the generalization-specialization or gen-spec structure, and the whole-part structure. In gen-spec structures, the Class is the generalization, and Objects that belong to the Class are instantiations, or the
specializations. In this hierarchy, Objects inherit the characteristics of their parents (Attributes and Services), with perhaps modifications for their own inherent needs. Objects may belong to multiple classes, thus affording for greater inheritance and wider specialization. In the whole-part hierarchy, the whole is an Object and the part is another Object that is considered part of, or to belong to, the whole in some quantity. As a last form of hierarchy, in larger systems with many Class-&-Objects, Subjects are created to maintain readability. These subjects help to add scale, allowing for better visibility of the model and its components.
The final steps, which involve most of the work, are to add Attributes and Services to each Class-&-Object. Attributes are the data, or states, needed to understand the Object in question, including how it should behave. When an Object must associate with another Object to fulfill its duties, an Instance Connection is created to relay this information through the model. An Attribute whose values cause fundamental changes in what Services can be performed on an Object requires production of a State Transition Diagram for its Object. Services are the processes that need to be carried out on the Attributes of an Object. They are triggered by messages sent from other Objects. When an analyst wants to show this mapping, he uses a Message Connection. Services are specified in detail in Service Charts. These are the complete steps in OOA (see Appendix
B). It should be noted that they may be carried out in parallel and iteratively as well as sequentially.
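The following small sketch maps these OOA ideas onto code, using the park system only for flavor: gen-spec corresponds to inheritance, whole-part to composition, Attributes to fields, and Services to methods invoked by messages (here, plain method calls). The class and attribute names are illustrative and do not reproduce the thesis's actual model.

```python
# Illustrative sketch of OOA concepts; names do not reproduce the thesis's model.
class ParkObject:                        # generalization in a gen-spec structure
    def __init__(self, name):
        self.name = name                 # an Attribute

class Facility(ParkObject):              # specialization inherits from the generalization
    def __init__(self, name, needs_painting=False):
        super().__init__(name)
        self.needs_painting = needs_painting

class Cell:                              # the "whole" in a whole-part structure
    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.objects = []                # the "parts" contained in this cell

    def add_object(self, obj):           # a Service invoked by a message from another Object
        self.objects.append(obj)

    def maintenance_tasks(self):         # a Service that inspects the parts' Attributes
        return [o.name for o in self.objects
                if isinstance(o, Facility) and o.needs_painting]

cell = Cell("A-17")
cell.add_object(Facility("picnic shelter", needs_painting=True))
print(cell.maintenance_tasks())          # -> ['picnic shelter']
```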
Figure 11: A complete OOA for a vehicle registration system [4;170].
OOD is the same process as OOA, except OOA is applied to different components of what is viewed as the entire model. Whereas in strict OOA the Problem Domain Component (PDC) was analyzed, in OOD the Human Interaction Component (HIC), Task Management Component (TMC), and Data Management Components (DMC) are analyzed. The HIC accounts for how humans will interact with the system. The TMC belongs to models where multi-processing, or the execution of multiple tasks, must occur (near) simultaneously. The DMC is "the infrastructure for the storage and retrieval of objects from a data management system;" it isolates data management. The OOD of these components can be carried out as (simultaneous) activities as opposed to sequential steps. OOAD is a seamless AD method because of the sameness of the processes involved in implementing it.
Figure 12: The seamless "multicomponent, multilayer" OOAD model [4;26].
**Results**
Structured Analysis has some inherent strengths in its methodology. SA is based upon the fundamental concept of sequential flow, a concept well understood by computer programmers. This makes modeling with SA a relatively painless task.
Structured Analysis is a complete model. With the various diagrams and the data dictionary, the analyst is less likely to make a mistake. The reason for this is balancing the models against one another has been made a critical component of the system. In practice, my group found this quite effective, especially since we divided the diagrams and dictionaries among us. When we sat down and put them together, we saw that each of us had discovered in our models aspects of the system which others had not in theirs. This checking and balancing brought us much closer to a complete representation of the system.
However, this finding uncovered a deficiency in style which Coad and Yourdon note that SA supports: too much emphasis on the DFDs. The DFDs took by far the most amount of time and effort to complete; however, we had no problem with this because the DFDs actually were the backbone of the representation of the system. What gave us the most trouble was that balancing came after all the models were fully developed. This made it almost instinctive to reject and find fault in the other models when they did not agree. So it was only with great effort that I as the analysis quality
assurance person could persuade the group that fundamental changes be made to the DFDs.
In some cases the other parts of the SA model had to suffer from problems with the DFD. My primary example for this is what I like to call the "merge syndrome." During the upward leveling process, when the lower level bubbles are merged to get a higher level bubble, we almost invariably had to merge for the inputs and outputs of the grouping. This is because the data flow inputs and outputs for the encompassing bubble the next level up must be balanced with those below in that they must convey the same data. We could not leave the flows unmerged because then all the parent bubbles of the upper level with their data flows intact from below would make a diagram that looked like spaghetti and meatballs. Hence in the data dictionary many definitions can be found that are simply conglomerations of other data dictionary items.
All of this is hinting at the inherent weakness of Structured Analysis, which is its near total lack of regard for data in the system. The tools of SA force the software engineer to concentrate almost exclusively on the necessary processing of the system, what the system must do. Obviously it is not a bad thing to understand what the system should accomplish; rather, it is necessary. However, when a method like SA advocates jumping in and representing the system almost solely with processing, it creates volatility in the resultant model [4;22-30].
My team was easily aware of this. We carefully crafted the Event List, checking and double checking, to make sure that no events were left out. Why? Because we could see that any change in the Event List would cause fundamental changes in the DFDs; in some cases perhaps a complete reworking of all the tedious diagrams we were to produce. We had no problem with this in the class, because Dr. Davis did not change the system specifications on us. But in the real world where the customer is likely to omit system specifications, the results could be devastating to the models of analysts. In my mind, the lesson to be learned here is that analysts spare as much time as possible to encompass their systems.
Structured Design has an exclusive strength: modularity. Its aim is to convert the DFDs of the SA into modules. I know from experience that programming by modules works, and not only does it work, but it is efficient. The difficult part is to get from the bubbles (DFDs) to boxes (modules). In general this cannot be an easy task but I would submit that it is not as insurmountable as Coad and Yourdon would have us believe. In my work group's case, during the analysis phase we had inadvertently modeled the whole RECLAIM system into a transaction form. Though Yourdon warned against such a move in Modern Structured Design, we felt compelled to model the entire system as a transaction. We were then able to proceed with the factoring of the system.
That is, until we ran into a truly insurmountable
problem. We had from the beginning intended to implement our system in Visual BASIC, a programming environment that greatly automates creating GUI (graphical user interface) systems. After the first factoring, we realized that the parts of the system that would be handled through Visual BASIC could not be separated from the structure chart; these items corresponded in some cases to complete modules but in other cases not. So the Hierarchical Structure Chart and Interface Structure Chart that we developed for RECLAIM do not methodically follow from the previous charts. Coad and Yourdon refer to a magic transition that often takes place from SA to SD because of their inherent irreconcilability; in our case, I now believe that this is not so much the result of changing methods from SA to SD but because in SAD the software engineer is not forced to consider a Human Interaction Component. As we shall see, this is not the case in OOAD.
Comprehension and stability are the great strengths of OOA. One invariably can see the real world reflected in the Class-&-Objects of the system. Names and the fundamental hierarchical relationships that mankind uses to classify objects are preserved. Beyond that, and more important, is the stable model of the system under consideration that the software engineer gains. I related previously that SAD is flawed in that it tries to model the system almost entirely by processing; with this perspective, any changes in required system behavior might wreck the entire lot of DFDs modeling
the system. Take the same change in OOA's perspective. If items are added to or removed from the system, then they are added to or removed from the Class-&-Objects listing. I initially had Objects for maintenance history and maintenance schedule until I realized that they were computed items and not Objects; getting rid of them only involved deleting them from the list. If behavior of an Object changes, change the Services. This localization of the effect of changes in the system on the model is what makes it stable and coherent; the model can weather indecisive requirements and fickle customers better than any method that I have seen.
Unfortunately beyond these two things that encompass Subject, Class-&-Objects, and Structure layers, the model is both incomplete and at times unviable. Attributes present problems in the OOA methodology. Coad and Yourdon insist on atomic attributes. As Ian Graham says,
Under the influence of the relational database movement, they insist that attributes should be atomic. This is wrong. Object-orientation is about modeling complex objects which need not be atomic...[8;233].
Without the ability to have records and arrays in Objects, it is unclear how to continue except to explode the number of Objects in the park, making it unnecessarily large. In the case of Object Park, I included list structures anyway. This example falls under a more general problem category inherent to OOA: when is something an Attribute as opposed to an Object? This is an especially troubling problem when trying
to distinguish between an Attribute and a Whole-Part Structure.
Services create even bigger problems. They are divided into categories of simple and complex. The "simple" Services are create, connect, access, and release; they are implicit OOA charts and are not directly shown (a good way of hiding coupling). No explicit guidance is given on how these Services function in behavior or what to do with them, but yet Coad and Yourdon state that 80-95% of all behavior is handled here in many systems. Hence, in my OOAD of RECLAIM references to these Services are like functional calls for lack of better understanding. Most of it is connecting a part and whole; it is never stated whether this is implicit within the creation of the part, so I had to assume that I was responsible for it.
A simple Service which Coad and Yourdon neglected entirely was the "WhoAmI" service. With it, Objects can be made aware of other Objects. Because I was unable to use this Service, I had to specify Attributes ObjectType, EcoType, EnvType, and ReqType. This made the Object, EcologicalCharacteristic, EnvironmentalFactor, and MaintenanceRequirement Objects "knowable" so that error checking could be performed.
Another major problem I had with Services is that they are only initiated by messages from other Services, but then those are only initiated by messages from other Services... Couple this with the fact that it is unknown whether Services
of the same Object may call each other. Then it is perhaps easier to understand why my OOAD model carries out actions in a top-down manner, wholes calling parts and handling the necessary processing based on the results, often of observations of Attribute values in the parts.
The strength of OOD lies in the fact that Coad and Yourdon take into account the components of the system that, though apart from the PDC, are equally necessary to the functionality of the system. Whereas the RECLAIM structured design fell apart when we factored in the GUI, I had no problems with a RECLAIM GUI in OOD because of the HIC. These components are developed the same as the PDC in OOA. So OOA and OOD are seamless processes, OOD building directly on the result of OOA. In fact, in retrospect I think it better to carry out OOAD simultaneously. This way, the system develops evenly and problems between components can be discovered earlier.
What is the major problem with Coad and Yourdon's OOD? They simply do not tell enough about the HIC, DMC and TMC. Only a highly empathic reader could understand fully what these components are and how to implement their AD when an average of 11 pages, all generalities, are spent on each one. Hence a major problem that one has is how the different components interact. For instance, my OOAD PDC is enslaved to the HIC, in part because of the recursive Service syndrome (note that the HIC itself is assumed to be called up by the System Management facilities), but also because there are
no guidelines on how to interface the components.
All the stated problems with OOAD are indicative of the wider problem with it. It is great at organizing static structures, but poor with dynamics. Dynamics are the processing and data passing parts of the system. Of course, this is what SAD models well. It is fairly clear that these two paradigms are like opposite sides of the coin: the one the information-oriented model, the other the processing-oriented model.
Conclusions
If I were asked to choose between SAD and OOAD for now, then I would choose SAD in a classic life cycle environment. Though it has its problems, most of them can be overcome. The strong emphasis on the DFDs can be tuned down in favor of reiterating the importance of the ERD and DD. The ERD is actually a component part of the OO diagrams, and its emphasis would help stabilize SA. Also, the gap between SA and SD can be overcome; it is not easy but not impossible. Components like the HIC can be incorporated with practice; real-time, critical-thread processing has already been introduced into SAD.
On the other hand, I do see OOAD, probably in a prototyping environment, gradually replacing SAD through its evolution. It is the more competent, stable model. At this point however, terminology and meaning are still too undefined; methodology is not specific enough. Coad and Yourdon's method is regarded as the best OOAD model available, yet next to no helpful information is available in the OOD book. An undefined method leads analysts and designers into pushing details off until implementation (onto someone else's shoulders), a result of which could be a project failure. Even Coad and Yourdon are quick to point out that OOAD is not a silver bullet, and has yet to reach maturation. Through practice and experimentation I believe it will be refined and brought to acceptable standards.
An important point to remember is that design style
tends to derive from the language, and analysis style in turn from the design. So knowing how to program in the regular languages made me familiar with SAD. On the other hand, I have no experience with OOPLs, and therefore the aims of OOAD are all the more mysterious. Some of the problems I encountered with OOAD might be the result of not understanding where the AD results will fit into an implementation scheme. Yet even object-oriented languages differ widely, as there are no concrete standards yet for them to practice.
An important factor then will be the spread of OOPLs in the computer world. SAD is familiar because its aims are those learned and understood by programmers: modules, top-down and bottom-up design, processing orientation; these are the results of the command imperative languages. As languages actually become object-oriented, instead of just claiming it, and institutions such as the universities acquaint future programmers with object oriented features, object orientation will gain not only acceptance but may surpass process orientation as the language of choice. In this case, OOAD will certainly replace SAD, as there is no reasonable way to move from SAD to OOPLs.
In sum, producing large, reliable software packages is a complex process that requires much skill and is still, even with software engineering, mostly an art. No clear paths exist to methods that will guarantee results, nor are they ever likely to. Yet, as everyday we come to rely more and
more on machines and their software for simple as well as critical services, software engineers must continue the process of uncovering new and better methods of software creation, for everyone's sake.
BIBLIOGRAPHY (WORKS CONSULTED)
19. STROUSTRUP, BJARNE. *What is Object-Oriented*
Appendix A
Structured Analysis
ORIGINAL PROBLEM STATEMENT
LITTLE RIVER CANYON NATIONAL PARK MANAGEMENT SYSTEM
The Little River National Recreation Area is a new tract of several thousand acres in the National Park Service System which was created recently by an Act of Congress. Since the national parks have been receiving excessive numbers of visitors, it has been decided that a computer based system should be developed to manage this and similar areas across the country. The recreation area consists of a winding river and surrounding canyon, heavily forested areas, and a large lake plus meadowland. Many of the forested areas are traversed by old logging trails and are maintained for hiking and bicycling.
The product to be produced will be called the Recreation Area Loading and Management Information (RECLAIM) system and its requirements are listed below.
1. The system shall be able to display and store descriptions of various natural parts of the recreation area. A map of the area will be normally divided into 10,000 square foot cells, and ecological characteristics shall be associated with each cell. If a particular cell has diverse ecological characteristics it may be further divided in order to better describe its characteristics. The characteristics include; soil type, type of cover, water table height, and water quality in the lake and streams. Individual objects within a cell such as a lake shall also be able to be identified separately if required.
2. The system shall be able to display and store descriptions of any constructed facilities in the area, including barns, lodges, restrooms, log bridges, lake docks, picnic tables, shelters, and tennis courts.
3. The system shall allow for easy addition of new ecological characteristics and facilities to the database as required.
4. The system shall be able to associate a maintenance requirement for each cell or selected object in the park database. Using this data the system shall generate a maintenance schedule for the entire recreation area.
5. The system shall allow the entering of the effects of the use of the facility by the public on the recreation area environment. This may also be on a cell by cell basis or an object by object basis. This shall enable administrators to best accommodate the many visitors by adjusting traffic levels to minimize damaging the environment.
6. The system shall be user friendly allowing users the ability to easily generate the maintenance schedules, alter the database and determine environmental effects.
MODIFIED SYSTEM SPECIFICATIONS
This modified system specifications listing consists of the original problem statement, altered according to changes made through requirement change forms. Alterations are denoted by underline.
1. The system shall be able to display and store descriptions of various natural parts of the recreation area. A map of the area shall be divided into various cell sizes, each square in shape so there are no gaps. The largest cell will be the entire park. Every cell may be broken down into four smaller square cells, down to a minimum of 10,000 square feet (100 feet on each side); a small code sketch of this subdivision scheme follows this list. Each cell may have distinct characteristics. The characteristics include: soil type, type of cover, water table height, and water quality in the lake and streams. Individual objects within a cell such as a lake shall also be able to be identified separately if required.
2. The system shall be able to display and store descriptions of any constructed facilities in the area, including barns, lodges, restrooms, log bridges, lake docks, picnic tables, shelters, and tennis courts.
3. The system shall allow for easy addition of up to six new, user-defined ecological characteristics in addition to the four specified in the original problem statement (soil type, type of cover, water table height, and water quality), for a total of ten.
4. The system shall be able to associate a maintenance requirement for each cell or selected object in the park database. These requirements will consist of cutting grass, cleaning, picking up trash, painting, fertilizing, and raking, plus up to nine user-defined maintenance requirements, for a total of fifteen. Using this data the system shall generate a maintenance schedule for the entire recreation area. Furthermore, a history of maintenance done will be kept by the system for a rolling year. (That is, there will always be one year of data.) This history will be in the form of a list of dates the maintenance requirement was done.
5. The system shall allow the entering of the effects of the use of the facility by the public on the recreation area environment. This may also be on a cell by cell basis or an object by object basis. This shall enable administrators to best accommodate the many visitors by adjusting traffic levels to minimize damaging the environment.
6. The system shall be user friendly allowing users the ability to easily generate the maintenance schedules, alter the database and determine environmental effects.
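The quad-split cell structure described in requirement 1 is essentially a quadtree. The following minimal sketch illustrates that idea; it is not part of the original specification or design, and the class name, park dimensions, and method names are assumptions made for this example.

```python
# A minimal quadtree sketch of the cell scheme in requirement 1 (illustration
# only; the class name, park dimensions, and method names are assumptions).

MIN_SIDE_FT = 100  # 100 ft x 100 ft = 10,000 sq ft, the smallest allowed cell


class Cell:
    def __init__(self, x, y, side_ft):
        self.x, self.y = x, y        # lower-left corner in park coordinates
        self.side_ft = side_ft       # cells are always square, so no gaps
        self.children = []           # empty until the cell is subdivided
        self.characteristics = {}    # e.g. {"soil type": "clay"}

    def subdivide(self):
        """Split this cell into four equal square sub-cells."""
        half = self.side_ft // 2
        if half < MIN_SIDE_FT:
            raise ValueError("cell size not divisible")  # cf. data dictionary message
        self.children = [
            Cell(self.x,        self.y,        half),
            Cell(self.x + half, self.y,        half),
            Cell(self.x,        self.y + half, half),
            Cell(self.x + half, self.y + half, half),
        ]
        return self.children


# The largest cell is the entire park; smaller cells are created on demand.
park = Cell(0, 0, side_ft=1600)   # a hypothetical 1600 ft x 1600 ft park
quadrants = park.subdivide()      # four 800 ft quadrants
```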
EVENT LIST
F 1. Request for object display.
F 2. Request for cell display.
F 3. Request for object data.
F 4. Request for ecological data for cell.
F 5. Request for ecological data change for cell.
F 6. Request for ecological data addition.
F 7. Request for maintenance schedule change for a cell.
F 8. Request for a maintenance schedule for a cell.
F 9. Request for additions to maintenance schedule for cell.
F 10. Request for environmental data for cell.
F 11. Request for environmental data change for cell.
F 12. Request for an addition to environmental data for cell.
F 13. Request for maintenance schedule change for objects.
F 15. Request for additions to maintenance schedule for objects.
F 16. Request for environmental data for an object.
F 17. Request for environmental data change for an object.
F 18. Request for environmental data addition for an object.
F 19. Request for object addition.
F 20. Request for subdivision of cell.
F 21. Request for change object in a cell.
CONTEXT DIAGRAM
User → RECLAIM
user-input
terminal-display
RECLAIM → Printer
maint-schedule
2 Add Item
2.1
Add Item to Object
2.2
Add Item to Cell
- add-object-command
- add-data
- add-cell-command
Cells
- system-message
2.1 Add Item to Object
Diagram:
- **2.1.1** Find Object and Get New Data
- add-data
- system-message
- env-data
- requested-object
- main-data
- **2.1.2** Add Environmental Data to Object
- system-message
- env-data
- requested-object
- **2.1.3** Add Maintenance Requirement to Object
- system-message
- add-ms-to-object-command
- Cells
2.2 Add Item to Cell
2.2.6 Add Children to Cell
3.2 Change Cell
- Get New Data
- change-data
- message
- Change Object in Cell
- change-object-in-cell-command
- update-display
- object-data
- Change Environmental Data of Cell
- change-env-data-of-cell-command
- env-data
- update-display
- Change Ecological Data of Cell
- change-eco-data-of-cell-command
- eco-data
- update-display
- Change Maintenance Schedule of Cell
- change-ms-of-cell-command
- update-display
5 Display
5.1 Determine Means
5.2 Manage Terminal Display
5.3 Print Data
output
requested-terminal-display-data
maintenance-schedule-data
terminal-display
maintenance-schedule
5.2 Manage Terminal Display
- **5.2.1 Display Graphics**
- Input: requested-terminal-display-data
- Output: graphics-display
- **5.2.2 Display Text**
- Input: requested-terminal-display-data
- Output: text-display
DATA DICTIONARY
CONVENTIONS:
*Italic* = control signal
*Bold Italic* = terminal control signal
*Underline* = data-flow
*Bold Underline* = terminal data-flow
+ = logical AND
| = logical OR
() = grouping
(add-command = add-object-command | add-cell-command)
(add-data = (requested-object + (env-data | maint-data)) | object-data | env-data | eco-data | maint-data)
(add-object-command = add-env-data-to-object-command | add-ms-to-object-command)
(cell = the cell-data currently selected by the user for viewing)
(child-reference = reference to a sub-cell by its parent cell)
(Cells = storage for cell-data)
(cell-size = 10,000 sq. ft | 40,000 sq. ft | 160,000 sq. ft | TBD)
(change-command = change-object-command | change-cell-command)
(change-data = (requested-object + (env-data | maint-data)) | object-data | env-data | eco-data | maint-data)
(change-object-command = change-env-data-of-object-command | change-ms-of-object-command)
(command = input that generates (add-command | change-command | retrieve-command))
date = when maintenance requirement was last performed
eco-data = type of cover | soil type | water table height | water quality | [user specified]
env-data = environmental impact data to be specified by user
frequency = how often a maintenance requirement is to be performed
graphics-display = symbolic representation of cell-data
maint-data = (frequency | date) + maintenance requirement + maintenance history
maintenance history = rolling year of previously entered dates
maintenance requirement = clean | cut grass | fertilize | paint | pick up trash | rake | [user defined]
maintenance-schedule = printout of maintenance-schedule-data
maintenance-schedule-data = (object | cell) + maint-data
message = "cell size not divisible" | "object not found" | "update complete" | TBD
object = (barns | lodges | restrooms | log bridges | lake docks | picnic tables | shelters | tennis courts | user specified)
object-data = object | env-data | maint-data | user specified
output = system-message | cell-data
requested-object = object to be changed or retrieved by user
requested-terminal-display-data = cell data | system-message
retrieve-cell-data-command = retrieve-object-in-cell-command |
retrieve-command = retrieve-cell-data-command | retrieve-object-data-command
system-message = update-display | message
terminal-display = graphics-display + text-display
text-display = textual description of cell-data
user-input = command + (add-data | change-data | requested-object)
Structured Design
Original DFD Mapping to Struct. Chart
Selector
Processor
Add
Retrieve
Change
Add Cell Data
Add Object Data
Retrieve Object Data
Retrieve Cell Data
Change Object Data
Change Cell Data
Add Object Data to Cell
Add Environmental Data to Cell
Add Biological Data to Cell
Change Object Data of Cell
Change Environmental Data of Cell
Find Object
Get Data
Determine Means
Print Data
Manage Terminal Display
Display Graphics
Display Text
Design Document
Appendix B
Object Oriented Analysis & Design Notations
Figure 13: Layer Models of OOA/OOD [5].
Figure 14: Class-&-Object Specification Template [5].
```plaintext
specification
attribute
attribute
attribute
externalInput
externalOutput
objectStateDiagram
additionalConstraints
notes
service
service
service
and, as needed,
traceabilityCodes
applicableStateCodes
timeRequirements
memoryRequirements
```
Figure 15: Object State Diagram Notation [5].
```
[-------------------] State
\ \ Transition
```
Figure 16: Service Chart Notations [5].
```
< ____,> Condition (if; pre-condition; trigger, terminate)
[ ] Text block
[ ] Loop (while; do; repeat; trigger/terminate)
| Connector (connected to the top of the next symbol)
```
Object Oriented Analysis
Class-&-Object Layer
- Park
- Map
- Object
- Cell
- Environmental Factor
- Maintenance Requirement
- Ecological Characteristic
Structure Layer
[Diagram showing relationships between Park, Map, Cell, Object, Ecological Characteristic, Environmental Factor, and Maintenance Requirement]
specification Park
attribute ParkName
attribute Location
attribute ObjectList
ObjectList details a list of
• every object type
• the icon for each object type
• a list of maintenance requirements performable on object
• a list of associable environmental factors
attribute EnvList
EnvList is a master list of EnvTypes
attribute EcoList
EcoList is a master list of EcoTypes
attribute MaintList
MaintList is the master list of ReqTypes
service Create
While number of Cell <> 256
Creates a Cell and connects to it
Returns
specification Cell
attribute XCoordinate
XCoordinate is the location of Cell along X axis of park grid.
attribute YCoordinate
YCoordinate is the location of Cell along Y axis of park grid.
attribute CellMap
CellMap is a 16x16 matrix of icons of objects in the cell.
service AddObject (in: Object values, out: result)
For every object in (connected to) this cell
Is the Object's CoordinateList completely covered by CoordinateList of new Object?
yes
Sends message DelObject to delete the old Object
no
Is there overlap between the CoordinateLists of the Objects?
yes
Accesses CoordinateList of old Object and removes shared coordinates
no
Does exact Object exist already?
yes
Adds coordinates to CoordinateList of the existing Object
no
Creates and connects to new Object
Accesses ParkObjectList.Icon
Updates CellMap with Object's Icon
Returns success
service DelObject (in: Object.id, out: result)
Does Default Object (Type Grass, Name NIL) exist?
- no
  - Creates default Object
- yes
Accesses Object.CoordinateList of Object to be deleted
Gives CoordinateList to default Object
Sends message Object.Release
Returns Success
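The AddObject and DelObject services above read as flow charts; the sketch below restates the AddObject decision sequence in code. The class names follow the specifications, but the coordinate-set representation, the simplified DelObject, and all other method and attribute names are assumptions for illustration.

```python
# Illustrative sketch of Cell.AddObject following the service chart above.
# CoordinateList is modelled as a set of (x, y) grid positions; the helper
# classes and method/attribute names are assumptions made for this example.

class ParkObject:
    def __init__(self, object_type, name, coordinates):
        self.object_type = object_type
        self.name = name
        self.coordinates = set(coordinates)   # stands in for CoordinateList


class Cell:
    def __init__(self, icon_table):
        self.objects = []              # Objects connected to this cell
        self.cell_map = {}             # (x, y) -> icon, i.e. the CellMap
        self.icon_table = icon_table   # stands in for Park.ObjectList icons

    def del_object(self, obj):
        # The full DelObject service hands freed coordinates to a default
        # 'Grass' object before releasing; simplified here to a plain removal.
        self.objects.remove(obj)

    def add_object(self, new_obj):
        for old in list(self.objects):
            if old.coordinates <= new_obj.coordinates:    # completely covered?
                self.del_object(old)
            elif old.coordinates & new_obj.coordinates:   # partial overlap?
                old.coordinates -= new_obj.coordinates
        for existing in self.objects:                     # exact Object exists?
            if (existing.object_type, existing.name) == (new_obj.object_type, new_obj.name):
                existing.coordinates |= new_obj.coordinates
                new_obj = existing
                break
        else:
            self.objects.append(new_obj)                  # create and connect
        icon = self.icon_table.get(new_obj.object_type, "?")
        for xy in new_obj.coordinates:
            self.cell_map[xy] = icon                      # update CellMap with icon
        return "success"


cell = Cell({"picnic table": "P", "Grass": "."})
cell.add_object(ParkObject("picnic table", "table 1", {(3, 4), (3, 5)}))
```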
specification Object
attribute ObjectType
Identifies Object; comes from ObjectList of Park.
attribute ObjectName
What the user calls the object.
attribute ObjectDescription
How the user describes the object.
attribute CoordinateList
The (x,y) coordinates this object occupies in the Cell it belongs to.
service Release
While there is a connected Environmental Factor
Releases an EnvironmentalFactor based on cid
While there is a connected MaintenanceRequirement
Releases MaintenanceRequirement based on cid
Does normal Release
service AddEnv (in: Env data, out: result)
Accesses ObjectList.ObjEnvList
Type of env to add is element of ObjectList.ObjEnvList?
yes
Create Env and Connect to it
Returns success
no
returns "EnviromentalFactor Mismatches Object" failure
service AddReq (in: Req data, out: result)
Accesses ObjectList.ObjReqList
Type of req to add is element of ObjectList.ObjReqList?
yes
Creates MaintenanceRequirement and connects to it
Returns success
no
returns "Maintenance Requirement Mismatches Object" failure
specification EcologicalCharacteristic
attribute EcoType
The type of ecological characteristic (from EcoList of Map).
attribute EcoInfo
User specified ecological information (text).
specification EnvironmentalFactor
attribute EnvType
What type environmental factor this is (from EnvList of Map).
attribute EnvInfo
User description of environment.
specification MaintenanceRequirement
attribute ReqType
What type of requirement this is (from MaintList in Map).
attribute Frequency
How often to perform the requirement.
attribute BeginDate
The day on which to begin performing the requirement.
Object Oriented Design
NOTES ON THE OBJECT-ORIENTED DESIGN
1. HIC
The author was not able to provide a full development for the HIC of RECLAIM. Limited experience with object-style interfaces in conjunction with time constraints has led to a very simple version of a HIC that is more descriptive than implementative. For instance, windows and menus are used, but attributes and services for these classes are left undefined. All buttons, scroll-bars, etc. are not represented as objects. Also, though the windows for map, ecological, environmental, and maintenance data displays are almost exactly alike in the design, their implementations would actually be quite different, so there is no justification for a class to which they all belong; this problem is a result of the descriptive nature of the HIC models.
The HIC presented here was meant for a Macintosh running Apple's system software. The system software consists of Event, File, Menu, Dialog, Window, Memory, and Resource Managers. In reality, these work in tandem with Mac applications. For the purposes of this design, however, the system software was viewed as a driver.
2. There was no TMC for RECLAIM.
3. There was no DMC implemented for RECLAIM in OOD (or SD).
specification FileMenu
notes
The FileMenu responds to requests for the normal file and printing operations. It depends on File Manager to handle saving files. Other operations such as Page Setup and Printing are assumed to be part of the system's capabilities and as such it just passes on the requests to the system. The assumption is that the active window can be printed through the Print routines of the system.
externalInput
• called by Event Manager
service New
- A RECLAIM window is already open?
  - yes
    - Opens a dialogbox and informs user only one RECLAIM window is allowed open at one time
  - no
    - Opens a dialogbox that returns a ParkName and Location
    - Dialogbox returns values?
      - yes
        - Creates park giving it initial values
      - no
- Returns to the Event Manager
service Open
- Calls system to return a RECLAIM file
- System found a file?
  - yes
    - Makes necessary system calls to File Manager to load the file
    - Pops up a RECLAIM Window
  - no
- Returns to the Event Manager
service Save
Is a Park open?
- yes
- Calls File Manager to open a file with name of park
- Saves each object in the park through calls to the File Manager
- Calls File Manager to close Park
- Returns to the Event Manager
- no
service PageSetup & service Print
Calls standard system routine to handle the service
Returns to the Event Manager
specification EditMenu
notes
The EditMenu responds to requests for editing items in the park. These include add and delete. It checks for the active window, i.e. the one that the user has most recently clicked, and calls the appropriate routines within those windows that handle additions and deletions. Any failure is reported to the user.
externalInput
• called by Event Manager
service Add
- Gets active window from the Window Manager
- Is active window the RECLAIMWindow?
  - yes
    - accesses MapWindow.ParkLevel
    - ParkLevel = bottom?
      - yes
        - Selects add service for the active window and sends message
        - Error returned?
          - yes
            - Informs user with messagebox
          - no
      - no
        - Informs user that he cannot add at this level through a messagebox
  - no
    - returns to the Event Manager
service Delete
- Gets active window from the Window Manager
- Is active window the RECLAIMWindow?
  - yes
    - accesses MapWindow.ParkLevel
    - ParkLevel = bottom?
      - yes
        - Selects delete service for the active window
        - Error returned?
          - yes
            - Informs user with messagebox
          - no
      - no
        - Informs user that he cannot delete at this level with a messagebox
  - no
- Returns to the Event Manager
specification MaintenanceMenu
notes
The user generates maintenance schedules and views maintenance histories through this menu. Maintenance schedules are for the entire park. Once a maintenance schedule has been generated, it replaces the maintenance schedule for its particular month, and hence a rolling year of history is kept provided the user does not change the internal date that the computer uses.
externalInput
• called by Event Manager
service GenMaintSchedule
Opens an inactive window titled Maintenance Schedule
For every Cell in the Park
For every Object in the Cell
For every maintenance schedule that the object has
Calculates what days of the current month that the MaintenanceSchedule falls on
Prints report to the Maintenance Schedule window
Saves window contents to this month's maintenance history file
activates window
Returns to the Event Manager
service ViewMaintHistory
1. Opens a dialogue box to get a valid month
2. Pops up an inactive window titled month+" Maintenance History"
3. Calls system to open file for that month
4. Calls system to read file contents into the window
5. Activates window as sole active window
6. Returns to the Event Manager
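The GenMaintSchedule service above is a simple nested traversal of the park. The following sketch restates it in code; the in-memory data structures, the interpretation of frequency as a repeat interval in days, and all names here are assumptions for illustration, not taken from the design.

```python
# Illustrative sketch of the GenMaintSchedule traversal: walk park -> cells ->
# objects -> maintenance requirements and list the days of the current month
# on which each requirement falls. The data structures, the interpretation of
# 'frequency' as a repeat interval in days, and all names are assumptions.

import calendar
from datetime import date


def days_due_this_month(begin, frequency_days, today):
    """All days in the current month on which a requirement is due."""
    _, last_day = calendar.monthrange(today.year, today.month)
    due = []
    for day in range(1, last_day + 1):
        d = date(today.year, today.month, day)
        if d >= begin and (d - begin).days % frequency_days == 0:
            due.append(d)
    return due


def gen_maint_schedule(park, today=None):
    today = today or date.today()
    lines = []
    for cell in park["cells"]:                  # For every Cell in the Park
        for obj in cell["objects"]:             # For every Object in the Cell
            for req in obj["requirements"]:     # every maintenance requirement
                due = days_due_this_month(req["begin"], req["frequency_days"], today)
                lines.append((obj["name"], req["type"], due))
    return lines   # printed to the window, then saved as this month's history
```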
specification RECLAIMWindow
notes
The RECLAIMWindow is the full-screen window. It contains a pop-up menu that has as its selections ObjectTypes, ObjectRequirement, ObjectEnvFactor, EnvList, EcoList, and MaintList. This pop-up menu determines what action to take when Add or Delete is requested on the RECLAIMWindow. RECLAIMWindow also contains quadrant buttons used to navigate down into the park, scroll buttons to navigate at the cell level, an up button to move up a level in park view, and a park button to jump to the top of the park (see entire park). It also has a text display window directly underneath the MapWindow to allow for display of various data, especially Object data.
service AddtoParkList (out: result)
Opens dialogbox based on the item selected in the pop-up menu
Dialogbox returns values?
yes
Puts new information in the proper Park list
no
Returns success
service DelfromParkList (out: result)
Opens dialogbox based on the item selected in the pop-up menu
Dialogbox returns values?
yes
Removes information from the proper Park list
no
Returns success
service QuadButtonClicked
Accesses MapWindow.ParkLevel
ParkLevel = bottom?
- no
  - Send Message MapWindow.Down with direction based on which quadrant button clicked
- yes
  - Calls system to "beep" the user to remind that he is at the bottom level
Returns to Event Manager
service UpButtonClicked
- Accesses MapWindow.ParkLevel
- ParkLevel = top?
  - no
    - Send Message MapWindow.Up
  - yes
    - Calls system to "beep" the user to remind that he is at the top level
- Returns to Event Manager
service ScrollButtonClicked
- Accesses MapWindow.ParkLevel
- ParkLevel = bottom?
  - yes
    - Sends message MapWindow.Down with direction based on which scroll button clicked
  - no
    - Calls system to "beep" the user to remind that he is at the bottom level
- Returns to Event Manager
service ParkButtonClicked
- Option Button held during click?
  - no
    - Accesses MapWindow.ParkLevel
    - While ParkLevel <> top
      - Sends message MapWindow.Up
      - Accesses MapWindow.ParkLevel
  - yes
    - Sends Message MapWindow.Park
- Returns to Event Manager
service WriteText (in: Text)
Calls system to print Text to TextBox of RECLAIMWindow
Returns
specification MapWindow
attribute ParkLevel
ParkLevel keeps track of what level of the park the user is viewing in MapWindow.
attribute CurrentCell
CurrentCell is either the upper-right-hand Cell used to build the current display or else it is the Cell shown in the display when at the bottom, or individual cell, level of the park.
attribute PreviousCellStack
Holds a list of previous CurrentCells so that it is possible to backtrack level by level all the way back to the top of the park.
attribute CurrentObject
The object the user has indicated for manipulation.
attribute ObjectRef
The information necessary to reference an object in a Cell
notes
MapWindow is a smaller window that occupies the upper-left-hand corner of RECLAIMWindow. Its title is the name of the Park. It displays a 16x16 grid of the icons associated with park objects based on park level and quadrant selections of the user.
service MapClicked
accepts information on what was clicked from the system
ObjectRef <> NIL?
yes
Updates CurrentObject by comparing ObjRef against where system reports MapWindow was clicked
Highlights the icons in grid corresponding to CurrentObject
Accesses CurrentObject's ObjectType, ObjectName, and ObjectDescription
Sends message RECLAIMWindow WriteText, passing the data
Sends message EnvironmentalWindow.EnvBuild
Sends message MaintenanceWindow.MaintBuild
no
Returns to Event Manager
service Rebuild
Rebuilds park (uses algorithm from Structured A&D results)
ParkLevel = bottom?
yes
Builds ObjectReference based on CoordinateLists of Objects in CurrentCell
no
Sets CurrentObject and ObjectRef to NIL
Sends message EcologicalWindow.EcoClear
Sends message EnvironmentalWindow.EnvClear
Sends message MaintenanceWindow.MaintClear
Returns
service Down
- Updates ParkLevel, CurrentCell, and PreviousLevelStack based on quadrant selected
- Calls Rebuild
- Returns
service Up
- Updates ParkLevel, CurrentCell, and PreviousLevelStack
- Calls Rebuild
- Returns
service Park
- Updates ParkLevel, CurrentCell, and PreviousLevelStack
- Calls Rebuild
- Returns
service Move
- Updates ParkLevel, CurrentCell, and PreviousLevelStack
- Calls Rebuild
- Returns
service ObjAdd (out: result)
1. Opens dialogbox and gets Object information
2. Sends message ReclaimWindow.WriteText "Select Cells for Object, then Press Return"
3. Performs necessary calls to System to allow and track selection of grids in MapWindow on an individual basis
4. Takes coordinates returned from system and sends message Cell.AddObject
5. Returns results returned to it
service ObjDel(out: result)
Sends message Cell.DelObj
Returns result returned to it
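The Down, Up, Park, and Move services above all revolve around ParkLevel, CurrentCell, and PreviousCellStack. The sketch below shows one hedged way that stack-based navigation could work; the level encoding (0 = top) and the method bodies are assumptions for this example, not part of the design.

```python
# Illustrative navigation sketch for MapWindow: PreviousCellStack lets the
# user backtrack level by level to the top of the park. The level encoding
# (0 = top) and the method bodies are assumptions for this example.

class MapWindowNav:
    def __init__(self, park_cell):
        self.park_level = 0               # 0 = top (entire park)
        self.current_cell = park_cell
        self.previous_cell_stack = []     # stands in for PreviousCellStack

    def down(self, quadrant):
        """Descend into one quadrant of the current view."""
        self.previous_cell_stack.append(self.current_cell)
        self.current_cell = self.current_cell.children[quadrant]
        self.park_level += 1
        self.rebuild()

    def up(self):
        """Back up one level using the stack."""
        if self.previous_cell_stack:
            self.current_cell = self.previous_cell_stack.pop()
            self.park_level -= 1
            self.rebuild()

    def park(self):
        """Jump straight back to the top of the park."""
        while self.previous_cell_stack:
            self.up()

    def rebuild(self):
        pass   # redraw the 16x16 icon grid for the current level
```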
**specification EcologicalWindow**
**attribute EcoRef**
EcoRef tracks the EcologicalCharacteristics displayed in the EcologicalWindow.
**attribute ChosenEco**
ChosenEco is the EcologicalCharacteristic displayed in EcologicalWindow which the user has chosen, when defined.
**notes**
EcologicalWindow is placed at the lower left-hand corner inside the RECLAIMWindow. When the park is viewed at the cell level, it will display the EcologicalCharacteristics associated with that particular cell. The user may choose an ecological characteristic and delete it, or choose to add one, by clicking the window to make it active, then selecting Add/Delete from EditMenu.
service EcoHighlightClick
Accepts Highlighted line numbers from Window Manager
EcoRef <> NIL
yes
Cross references Ecological Characteristic from EcoRef using first highlighted line
Sets ChosenEco to the selected EcologicalCharacteristic
Calls Window Manager to highlight just that ecological characteristic's listing in EcologicalWindow
no
Returns to Event Manager
service EcoAdd (out: result)
- Pops up a dialog box that obtains the EcoType and the EcoInfo
- Accesses MapWindow.CurrentCell
- Creates EcologicalCharacteristic and connects to CurrentCell
- Returns success
service EcoDel (out: result)
- ChosenEco = NIL?
  - yes
    - Returns failure
  - no
    - Sends destroy to ChosenEco
    - Returns success
service EcoClear
Sets EcoRef to NIL
Sets ChosenEco to NIL
Calls Window Manager to clear EcologicalWindow
Returns success
service EcoBuild
Accesses MapWindow.CurrentCell
For each EcologicalCharacteristic of Current Cell
Accesses EcologicalCharacteristic.EcoType and EcologicalCharacteristic.EcoInfo
Calls Window Manager to print them
References them by location in EcologicalWindow in EcoRef
Returns
specification EnvironmentalWindow
attribute EnvRef
EnvRef tracks EnvironmentalFactors displayed in the EnvironmentalWindow.
attribute ChosenEnv
ChosenEnv is the EnvironmentalFactor displayed in EnvironmentalWindow which the user has chosen, when defined.
notes
EnvironmentalWindow is placed at the upper right-hand corner inside the RECLAIMWindow. When the park is viewed at the cell level and a particular object has been selected, it will display the EnvironmentalFactors associated with that particular object. The user may choose an environmental factor and delete it, or choose to add one, by clicking the window to make it active, then selecting Add/Delete in the EditMenu.
service EnvHighlightClick
- Accepts Highlighted line numbers from Window Manager
- EnvRef <> NIL?
  - yes
    - Cross references Environmental Factor from EnvRef using first highlighted line
    - Sets ChosenEnv to the selected EnvironmentalFactor
    - Calls Window Manager to highlight just that environmental factor's listing in EnvironmentalWindow
  - no
- Returns to Event Manager
service EnvAdd (out: result)
- Pops up a dialog box that obtains the EnvType and the EnvInfo
- Accesses MapWindow.CurrentObject
- Sends message Object.AddEnv
- Returns result returned to it
service EnvDel (out: result)
ChosenEnv = NIL?
yes
Returns failure
no
Sends destroy to ChosenEnv
Returns success
service EnvClear
Sets EnvRef to NIL
Sets ChosenEnv to NIL
Calls Window Manager to clear EnvironmentalWindow
Returns
service EnvBuild
Accesses MapWindow.CurrentCell
For each Environmental Factor of Current Cell
Accesses EnvironmentalFactor.EnvType and EnvironmentalFactor.EnvInfo
Calls Window Manager to print them
References them by location in EnvironmentalWindow in EnvRef
Returns
specification MaintenanceWindow
attribute MaintRef
MaintRef tracks MaintenanceRequirements displayed in the MaintenanceWindow.
attribute ChosenMaint
ChosenMaint is the MaintenanceRequirement displayed in MaintenanceWindow which the user has chosen, when defined.
notes
MaintenanceWindow is placed at the lower right-hand corner inside the RECLAIMWindow. When the park is viewed at the cell level and an object has been selected, it will display the MaintenanceRequirements associated with that particular Object. The user may choose a maintenance requirement and delete it, or choose to add one, by clicking the window to make it active then selecting Add/Delete in the EditMenu.
service MaintHighlightClick
- Accepts Highlighted line numbers from Window Manager
- MaintRef <> NIL?
  - yes
    - Cross references Maintenance Requirement from MaintRef using first highlighted line
    - Sets ChosenMaint to the selected MaintenanceRequirement
    - Calls Window Manager to highlight just that maintenance requirement's listing in MaintenanceWindow
  - no
- Returns to Event Manager
service MaintAdd (out: result)
- Pops up a dialog box that obtains the ReqType, Frequency, and BeginDate
- Accesses MapWindow.CurrentObject
- Sends message Object.AddReq
- Returns success
service MaintDel
ChosenMaint = NIL?
yes
Returns failure
no
Sends destroy to ChosenMaint
Returns success
service MaintClear
Sets MaintRef to NIL
Sets ChosenMaint to NIL
Calls Window Manager to clear MaintenanceWindow
Returns
service MaintBuild
Accesses MapWindow.CurrentCell
For each MaintenanceRequirement of the CurrentCell
Accesses MaintenanceRequirement.ReqType, MaintenanceRequirement.Frequency, and MaintenanceRequirement.BeginDate
Calls Window Manager to print them
References them by location in MaintenanceWindow in MaintRef
Returns
Delft University of Technology
Software Engineering Research Group
Technical Report Series
Revisiting the Practical Use of Automated Software Fault Localization Techniques
Aaron Ang, Alexandre Perez, Arie van Deursen, and Rui Abreu
Report TUD-SERG-2017-016
Revisiting the Practical Use of Automated Software Fault Localization Techniques
Aaron Ang∗§, Alexandre Perez†, Arie van Deursen∗, Rui Abreu‡
∗Delft University of Technology, The Netherlands
†University of Porto, Portugal
‡University of Lisbon, Portugal
§Palo Alto Research Center, USA
a.w.z.ang@student.tudelft.nl, alexandre.perez@fe.up.pt, arie.vandeursen@tudelft.nl, rui@computer.org
Abstract—In the last two decades, a great amount of effort has been put in researching automated debugging techniques to support developers in the debugging process. However, in a widely cited user study published in 2011, Parnin and Orso found that research in automated debugging techniques made assumptions that do not hold in practice, and suggested four research directions to remedy this: absolute evaluation metrics, result comprehension, ecosystems, and user studies.
In this study, we revisit the research directions proposed by the authors, offering an overview of the progress that the research community has made in addressing them since 2011. We observe that new absolute evaluation metrics and result comprehension techniques have been proposed, while research in ecosystems and user studies remains mostly unexplored. We analyze what is hard about these unexplored directions and propose avenues for further research in the area of fault localization.
Index Terms—Software Fault Localization; Debugging; Literature Survey.
I. INTRODUCTION
Software systems are complex and error-prone, likely to expose failures to the end user. When a failure occurs, the developer has to debug the system to eliminate the failure. This debugging process can be described in three phases: fault localization, fault understanding, and fault correction [1]. This process is time-consuming and can account for 30% to 90% of the software development cycle [2]–[4].
Traditionally, developers use four different approaches to debug a software system, namely program logging, assertions, breakpoints and profiling [5]. These techniques provide an intuitive approach to localize the root cause of a failure, but, as one might expect, are less effective in the massive size and scale of software systems today.
Therefore, in the last decades a lot of research has been performed on improving and developing advanced fault localization techniques [5] such that they are applicable to the software systems of today. Specifically, the most prominent techniques are spectrum-based fault localization (SBFL) techniques. SBFL techniques pinpoint faults in code based on execution information of a program, also known as a program spectrum [6]. It does this by outputting a list of suspicious components, for example statements or methods, ranked by their suspiciousness. Intuitively, if a statement is executed primarily during failed executions, then this statement might be assigned a higher suspiciousness score. Conversely, if a statement is executed primarily during successful executions, then this statement might be assigned a lower suspiciousness score.
While advanced fault localization techniques have proven to be able to pinpoint faults in code, many studies have ignored their practical effectiveness [7]. This issue was raised in 2011 in a study by Parnin and Orso [1], in which they perform a preliminary user study and show evidence that many assumptions made by advanced fault localization techniques do not hold in practice. For example, many studies adopt a metric that is relative to the size of the codebase to evaluate the performance of a debugging technique. If a faulty statement is assigned a rank of 83, while the total lines of code amounts to 4408, then the evaluation metric suggests that the developer has to inspect 1.8% of the codebase, which appears as a positive result. However, Parnin and Orso observed in their user experiment that developers were not able to translate the results into a successful debugging activity [1].
In this paper, we seek to understand the response of the software fault localization (SFL) research community with regard to Parnin and Orso’s pioneering study, in which multiple directions are proposed for future research in the area of fault localization. To that end, we conduct a literature survey analyzing papers that build upon Parnin and Orso’s study. We assess the progress that has been made since the original study appeared, identify areas that are still open, and give recommendations for future research regarding the practical use of fault localization.
II. BACKGROUND
To set the scene for our study, we first provide an overview of the four most studied software fault localization techniques and identify existing surveys on such techniques.
Today’s most important fault localization techniques can be grouped into four categories: slice-based, spectrum-based, model-based, and information retrieval-based techniques. The first three techniques are discussed because most research has been performed on them compared to other techniques [5]. We discuss information retrieval-based fault localization techniques because they are inherently designed to work on natural languages, which can be useful in providing more context to developers when using SFL techniques in practice.
A. Slice-based Techniques
Static slicing was first introduced by Weiser [8], where irrelevant components of a program are removed from the original set of components to obtain a reduced executable form. This creates a smaller search domain for the developer to locate a fault.
Due to the fact that static slices include every statement that can possibly affect the variables of interest, a constructed slice may still contain statements that are not useful for locating a bug. To deal with this problem, Korel and Laski proposed dynamic program slicing [9]. In dynamic slicing, a slice is constructed based on the execution information of a program for a specific input.
B. Spectrum-based Techniques
A spectrum was first introduced by Reps et al. [6]. A program spectrum consists of execution information from a perspective of interest. For example, a path spectrum may contain simple information such as whether a path has been executed, also known as the hit spectrum. This kind of information was used to tackle the Y2K problem by Reps et al. [6] by comparing multiple path spectra to identify paths that are likely date-dependent.
With this in mind, Collofello and Cousins [10] performed one of the first studies where multiple path spectra are used to localize faults in code. Collofello and Cousins proposed a theory, called relational path analysis, which requires a database that stores correctly executed paths according to test cases that pass successfully. Then, by contrasting a failing execution with the database, execution paths can be pinpointed that are likely to contain the fault.
Collofello and Cousins’ work formed the basis for hit spectrum-based fault localization. To formalize their idea, we define the finite set \( C = \{e_1, e_2, \ldots, e_M\} \) of \( M \) system components, and the finite set \( T = \{t_1, t_2, \ldots, t_N\} \) of \( N \) system transactions, such as test executions. The outcomes of all system transactions are defined as an error vector \( e = (e_1, e_2, \ldots, e_N) \), where \( e_i = 1 \) indicates that transaction \( t_i \) has failed and \( e_i = 0 \) otherwise. To keep track of which system components were executed during which system transactions, we construct a \( N \times M \) activity matrix \( A \), where \( A_{ij} = 1 \) indicates that component \( c_j \) was hit during transaction \( t_i \). Given these definitions, SBFL techniques compute statistics such that the suspiciousness score of a system component can be computed.
A popular SBFL technique to compute the suspiciousness score of each system component is Tarantula, proposed by Jones et al. [11]. Tarantula was developed to visualize fault localization results based on suspiciousness scores to improve the developer’s ability to locate faults.
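Using the activity matrix \(A\) and error vector \(e\) defined above, the Tarantula suspiciousness of each component can be computed directly. The small sketch below is a straightforward rendering of that computation; the variable names and the toy spectrum are ours, not drawn from the original tool.

```python
# Tarantula suspiciousness from a hit spectrum. A is the N x M activity
# matrix (A[i][j] = 1 if component j was hit in transaction i) and e is the
# error vector (e[i] = 1 if transaction i failed), as defined above.

def tarantula(A, e):
    n_fail = sum(e)
    n_pass = len(e) - n_fail
    scores = []
    for j in range(len(A[0])):                       # for each component c_j
        failed = sum(A[i][j] for i in range(len(A)) if e[i] == 1)
        passed = sum(A[i][j] for i in range(len(A)) if e[i] == 0)
        fail_ratio = failed / n_fail if n_fail else 0.0
        pass_ratio = passed / n_pass if n_pass else 0.0
        denom = fail_ratio + pass_ratio
        scores.append(fail_ratio / denom if denom else 0.0)
    return scores                                    # higher = more suspicious


# Toy example: the third component is hit only in the failing transaction.
A = [[1, 1, 0],   # passing test
     [1, 1, 1],   # failing test
     [1, 0, 0]]   # passing test
e = [0, 1, 0]
print(tarantula(A, e))   # the third component receives the highest score
```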
C. Model-based Techniques
Model-based software fault localization is an application of model-based diagnosis (MBD). MBD was first introduced by Davis [12] and was primarily intended for fault diagnosis in hardware, such as faulty gates in electrical circuits. Subsequently, various studies [13], [14] have refined this area. The underlying theory assumes that there exists a model that defines the correct behavior of a system. Faults are diagnosed when the actual observed behavior differs from the specified behavior.
In 1999, Mateis et al. [15] performed the first study where MBD is applied to Java, an imperative programming language. As opposed to models for physical systems, software programs written in an imperative language seldom come with a complete and up-to-date behavioral model. Therefore, for software systems, the model is generated from source code based on the semantics of the programming language. However, this model can be faulty as the source code is likely to contain bugs. Hence, expected results from a test case and its execution are used together with the generated model to diagnose bugs [16].
D. Information Retrieval-based Techniques
Information retrieval (IR) has been most apparent in web search engines but has recently been applied to SFL. The purpose of IR is to retrieve relevant documents given a query [17]. In IR-based SFL (IRBSFL), bug reports are used as a search query and source code represents the document collection. To retrieve relevant documents, IRBSFL techniques make use of retrieval models, that essentially return documents that are most similar to the search query. Specifically, retrieval models define how documents and queries are characterized such that, ultimately, the representation of a document and query can be compared to find the most relevant documents. The five generic retrieval models that are used to perform SFL are [18]: Vector Space Model (VSM) [19], Smoothed Unigram Model (SUM) [18], Latent Dirichlet Allocation (LDA) [20], [21], Latent Semantic Indexing (LSI) [22], [23], Cluster Based Document Model (CBDM) [18].
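As an illustration of VSM-style retrieval, the following sketch scores source files against a bug report with TF-IDF and cosine similarity. It uses scikit-learn purely for brevity, with toy file contents; it is not taken from any of the surveyed tools.

```python
# Sketch of IR-based fault localization with a Vector Space Model: the bug
# report is the query and source files are the documents. scikit-learn is
# used purely for brevity; the file contents here are toy strings.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_files = {
    "Parser.java":  "parse token stream syntax error recover position",
    "Printer.java": "print page format margin layout",
    "Cache.java":   "cache entry evict stale lookup miss",
}
bug_report = "crash when recovering from a syntax error in the parser"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(source_files.values())   # the documents
query_vec = vectorizer.transform([bug_report])                 # the query

scores = cosine_similarity(query_vec, doc_matrix)[0]
ranking = sorted(zip(source_files, scores), key=lambda p: p[1], reverse=True)
for name, score in ranking:
    print(f"{score:.3f}  {name}")   # Parser.java should rank first
```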
E. Surveys on Software Fault Localization
Several literature surveys [5], [24] have been performed to help the community get a better understanding of all advances made in SFL.
Recently, Wong et al. [5] published a comprehensive literature survey where the body of literature comprises studies published from 1977 to November 2014. The fault localization techniques are categorized into eight groups, namely slice-based, spectrum-based, statistics-based, program state-based, machine learning-based, data mining-based, model-based and miscellaneous techniques. Further, Wong et al. discussed several metrics and fault localization tools that are proposed since 1977 and concluded their survey by addressing nine critical aspects in fault diagnosis. This work differs from Wong et al. in that we mainly focus on the improvements in SFL regarding its practical issues.
Souza et al. [24] presented a fault localization survey, where they addressed the shortcomings of current SBFL techniques to be applied in industry. The authors do this by addressing five aspects of fault localization: techniques, faults, benchmarks, testing information, and practical use. Although Souza et al.
focused on the practicality of SBFL, which is similar to this survey, we also survey studies that propose SFL ecosystems.
III. PARNIN AND ORSO’S STUDY
In this section, we first highlight the essence of Parnin and Orso’s study [1]: “Are Automated Debugging Techniques Actually Helping Programmers?”. Then, we generalize Parnin and Orso’s research directions.
A. Summary
Parnin and Orso performed a preliminary user study to examine the usefulness of a popular automated debugging technique in practice to gain insight on how to build better debugging tools. An additional goal was to identify promising research directions in this area.
The authors defined the following hypotheses and research questions.
- Hypothesis 1: Programmers who debug with the assistance of automated debugging tools will locate bugs faster than programmers who debug code completely by hand.
- Hypothesis 2: The effectiveness of an automated tool increases with the level of difficulty of the debugging task.
- Hypothesis 3: The effectiveness of debugging when using a ranking based automated tool is affected by the rank of the faulty statement.
- Research Question 1: How do developers navigate a list of statements ranked by suspiciousness? Do they visit them in order of suspiciousness or go from one statement to the other by following a different order?
- Research Question 2: Does perfect bug understanding exist? How much effort is actually involved in inspecting and assessing potentially faulty statements?
- Research Question 3: What are the challenges involved in using automated debugging tools? What issues or barriers prevent their effective use? Can unexpected, emerging strategies be observed?
Their experiments involved 34 developers divided into four experimental groups: A, B, C, and D. Each participant was assigned two debugging tasks — debug a failure in Tetris (easy) and NanoXML (difficult) — and each group had to use Tarantula [11] for one of the tasks or both. During the experiment, the authors recorded a log of the navigation history of the participants that used Tarantula and made use of a questionnaire in which participants were asked to share their experience and issues.
In the analysis of the results, Parnin and Orso categorized the participants as low, medium, or high performers. The average completion time of the high performers in group A is significantly shorter than the average completion time of the high performers in group B for Tetris, and thus Hypothesis 1 is supported, but only for experts and simpler code. For Hypothesis 2 and Hypothesis 3 no support was found.
Based on the recorded logs and questionnaires, the authors found that developers do not linearly traverse the ranked list, produced by Tarantula. Instead, the participants exhibited some form of jumping between ranked statements, searched for statements in the list to confirm their intuition, or skipped statements that did not appear relevant. In addition, the recorded logs showed evidence that perfect bug understanding is not a realistic assumption. On average, developers spent ten additional minutes on searching the diagnosis report after the first encounter with the faulty statement. Regarding Research Question 3, the participants indicated that they prefer more context, e.g. runtime values, or different ways of interacting with the data.
Besides the hypotheses and research questions, Parnin and Orso made several observations and derived research implications as follows.
- Observation 1: An automated debugging tool may help ensure developers correct faults instead of simply patching failures.
- Observation 2: Providing overviews that cluster results and explanations that include data values, test case information, and information about slices could make faults easier to identify and tools ultimately more effective.
- Implication 1: Techniques should focus on improving absolute rank rather than percentage rank.
- Implication 2: Debugging tools may be more successful if they focused on searching through or automatically highlighting certain suspicious statements.
- Implication 3: Research should focus on providing an ecosystem that supports the entire tool chain for fault localization, including managing and orchestrating test cases.
B. Generalization
The first implication states that future research should improve absolute rank instead of percentage rank. Percentage rank is used in many studies to evaluate the performance of the fault localization technique. However, percentage rank does not scale with the size of a codebase. For example, when a faulty statement is ranked in the 83rd position as a result of the fault localization technique and the codebase consists of 8300 lines of code, the percentage rank is \( \frac{83}{8300} \times 100\% \approx 1\% \), a seemingly good result that still requires the developer to inspect 83 statements. From this example, we can conclude that percentage rank is not a practical evaluation metric for the software systems of today, possibly consisting of millions of lines of code, which is also confirmed by the authors' preliminary study. To observe whether, and to what extent, the community has improved in this area, we include all papers that adopt absolute evaluation metrics.
Observation 2 and Implication 2 mention that future research should focus on searching through suspicious statements and providing more contextual information such that it is easier for the user to interpret the fault localization results. This implication has also been confirmed by Minelli et al. [25], who performed an empirical study that strongly suggests that the importance of program comprehension has been significantly underestimated by prior research. In our opinion, searching through the fault diagnosis is too specific, and ultimately focuses on improving result comprehension.
Therefore, to generalize this implication, we include studies in our survey that focus on result comprehension.
The third implication suggests future research to focus on creating complete ecosystems. Therefore, in this survey, we include work that propose or improve existing ecosystems.
Finally, Parnin and Orso mention that more research has to be performed in the form of user studies, as they did themselves. Hence, we give an overview of user studies in the field of fault localization techniques.
**IV. IMPACT OF PARNIN AND ORSO’S STUDY**
In this section, we discuss the selection methodology and give an overview of studies for each research direction proposed by Parnin and Orso as discussed in Section III-B. In Table I, an overview of studies is provided sorted by the year of publication, indicating the problems that each study tackles.
**A. Selection**
In this survey, the initial body of literature comprises work that refers to Parnin and Orso’s study, amounting to 104 published studies on Scopus\(^1\) at the time of writing. These papers were obtained with Scopus because it contains only peer-reviewed papers. Next, papers that are not written in English or not accessible were removed from the set of literature. Finally, we read the abstract and the sections that refer to Parnin and Orso of each study, and determined whether it attempts to address one of the observations or implications made by Parnin and Orso. This results in a body of literature of 19 papers. Studies that mention Parnin and Orso’s work but do not build on their findings referred to it for various reasons: (1) Parnin and Orso’s findings are mentioned as a potential threat to validity, or (2) the authors are referred to as related work.
**B. Absolute Evaluation Metrics**
Jin and Orso [28] proposed F\(^3\) that extends BugRedux, a technique for reproducing failures observed in the field, with fault localization capabilities. In their study, the authors evaluate F\(^3\) using wasted effort, indicating the number of non-faulty components that have to be inspected on average before a faulty component is found in the diagnostic report. In their study, the authors use the following formula to compute wasted effort.
\[
\text{wasted effort} = m + n + 1
\]
where \(m\) is the number of non-faulty components that are assigned a strictly higher suspiciousness score than the faulty component, and \(n\) is the number of non-faulty components that are assigned an equal suspiciousness score as the faulty component. Note that the formula used to compute wasted effort can vary. For example, Laghari et al. [37] compute wasted effort as follows.
\[
\text{wasted effort} = m + (n + 1)/2
\]
The wasted effort is also used in [26], [31], [39], [40].
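To make these definitions concrete, the following Python sketch (our own illustration, not code taken from any of the surveyed papers) computes both wasted-effort variants, as well as the acc@n metric discussed below, from a map of suspiciousness scores; all names are illustrative.

```python
# Illustrative sketch (not from the surveyed papers): wasted effort in its two
# variants, and acc@n, computed from suspiciousness scores.

def wasted_effort(scores, faulty, variant="jin_orso"):
    """scores: dict of component id -> suspiciousness; faulty: id of the faulty component."""
    m = sum(1 for c, s in scores.items() if c != faulty and s > scores[faulty])
    n = sum(1 for c, s in scores.items() if c != faulty and s == scores[faulty])
    if variant == "jin_orso":          # formula used by Jin and Orso [28]
        return m + n + 1
    return m + (n + 1) / 2             # formula used by Laghari et al. [37]

def acc_at_n(rankings, faulty_sets, n):
    """Number of bugs whose faulty component appears within the top-n entries."""
    return sum(1 for ranking, faults in zip(rankings, faulty_sets)
               if any(c in faults for c in ranking[:n]))

# Example: 'c3' is faulty; 'c1' ranks strictly higher, 'c2' and 'c4' tie with it.
scores = {"c1": 0.9, "c2": 0.7, "c3": 0.7, "c4": 0.7, "c5": 0.1}
print(wasted_effort(scores, "c3"))                 # 4
print(wasted_effort(scores, "c3", "laghari"))      # 2.5
```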
Lo et al. [32] proposed an approach to combine multiple spectrum-based fault localization techniques, namely Fusion Localizer. In their study, the authors investigate multiple approaches to score normalization, technique selection, and data fusion, resulting in twenty variants of Fusion Localizer. In the evaluation of the proposed Fusion Localizer variants, the authors make use of accuracy at n (acc@n), which indicates the number of bugs that can be diagnosed when inspecting the top n components in the ranked list. This metric is also used in [33].
Le et al. [38] proposed a new automated debugging technique, called Savant, that employs learning-to-rank, using changes in method invariants and suspiciousness scores, to diagnose faults. To evaluate Savant, the authors make use of three absolute rank-based metrics, namely acc@n, wef@n, and MAP. Wasted effort at n (wef@n) is a variation of wasted effort that computes the wasted effort within the top n components of the ranked list. The Mean Average Precision (MAP) [46] metric is widely used in information retrieval. MAP is computed as the mean of the average precisions (APs), where the AP of a single ranked list is computed as follows:
\[
AP = \frac{1}{M} \sum_{i=1}^{N} P(i)rel(i)
\]
where \(M\) is the number of total faulty program components, \(N\) is the total number of components in the ranked list, \(P(i)\) is the precision at the \(i\)th component in the diagnosis report, and \(rel(i)\) is a binary indicator indicating whether component \(i\) is faulty, i.e. relevant. The precision at position \(k\) \((P(k))\) is computed as follows:
\[
P(k) = \frac{\text{number of faulty components within top } k}{k}
\]
Finally, MAP is computed by averaging the average precisions of each produced ranked list.
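As a concrete illustration (our own sketch, not code from [38] or [46]), precision@k, AP, and MAP can be computed as follows for ranked diagnosis reports.

```python
# Illustrative sketch: precision@k, AP, and MAP over ranked diagnosis reports.

def precision_at_k(ranking, faulty, k):
    """Fraction of the top-k ranked components that are actually faulty."""
    return sum(1 for c in ranking[:k] if c in faulty) / k

def average_precision(ranking, faulty):
    """AP of one ranked list; `faulty` is the set of truly faulty components."""
    if not faulty:
        return 0.0
    ap = 0.0
    for i, component in enumerate(ranking, start=1):
        if component in faulty:                        # rel(i) = 1
            ap += precision_at_k(ranking, faulty, i)   # P(i)
    return ap / len(faulty)                            # divide by M

def mean_average_precision(rankings, faulty_sets):
    """MAP: mean of the APs of all produced ranked lists."""
    aps = [average_precision(r, f) for r, f in zip(rankings, faulty_sets)]
    return sum(aps) / len(aps)

# Example: two diagnoses with one faulty component each.
print(mean_average_precision([["a", "b", "c"], ["x", "y", "z"]],
                             [{"b"}, {"x"}]))          # (0.5 + 1.0) / 2 = 0.75
```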
Laghari et al. [37] also make use of wef@n to evaluate the performance of their proposed technique: patterned spectrum analysis. In their study, they use method call patterns, which are obtained by adopting the closed itemset mining algorithm [47], as hit-spectrum to perform SBFL.
Wen et al. [43] proposed an IRBFL technique, called Locus. Locus is able to locate bugs at both the software change and source file level — the latter is a common granularity used in IRBFL techniques. It leverages the information of bug reports, source code changes, and change history to localize suspicious hunks. In the evaluation of Locus, the authors made use of three metrics: Top@n, MRR, and MAP. Top@n reports how many bugs are diagnosed in the top n suspicious code entities, and is therefore identical to acc@n. The Mean Reciprocal Rank (MRR) [48] is another metric used in information retrieval to evaluate the performance. The formula of MRR is as follows:
\[
MRR = \frac{1}{Q} \sum_{i=1}^{Q} \frac{1}{rank_i}
\]
where \(Q\) is the number of queries, i.e. the number of performed fault diagnoses, and \(rank_i\) is the position of the first true positive diagnosed component in the ranked list.
\(^1\)https://www.scopus.com/
TABLE I: Overview of studies surveyed in this work.

| Year | Author | Title | Evaluation metric | Result comprehension | Ecosystem | User study |
|------|--------|-------|-------------------|----------------------|-----------|------------|
| 2013 | Campos et al. [26] | Entropy-based test generation for ... | • | • | • | • |
| 2013 | Gouveia et al. [27] | Using HTML5 visualizations in ... | • | • | • | • |
| 2013 | Jin and Orso [28] | F3: fault localization for field failures | • | • | • | • |
| 2013 | Pastore and Mariani [29] | AVA: supporting debugging with ... | • | • | • | • |
| 2013 | Qi et al. [30] | Using automated program repair ... | • | • | • | • |
| 2014 | Liu et al. [31] | Simulink fault localization: an ... | • | • | • | • |
| 2014 | Lo et al. [32] | Fusion fault localizers | • | | • | • |
| 2014 | Wu et al. [33] | CrashLocator: locating crashing ... | • | | • | • |
| 2014 | Zuddas et al. [34] | MIMIC: Locating and ... | • | | • | • |
| 2015 | Wang et al. [35] | Evaluating the usefulness of ... | • | | • | • |
| 2016 | Kochhar et al. [36] | Practitioners’ expectations on ... | • | | • | • |
| 2016 | Laghari et al. [37] | Fine-tuning spectrum based fault ... | • | | • | • |
| 2016 | Le et al. [38] | A learning-to-rank based fault ... | • | | • | • |
| 2016 | Li et al. [39] | Iterative user-driven fault localization | • | | • | • |
| 2016 | Li et al. [40] | Towards more accurate fault ... | • | | • | • |
| 2016 | Wang and Huang [41] | Weighted control flow subgraph to ... | • | | • | • |
| 2016 | Wang and Liu [42] | Fault localization using disparities ... | • | | • | • |
| 2016 | Wen et al. [43] | Locus: locating bugs from software ... | • | | • | • |
| 2016 | Xia et al. [44] | Automated Debugging Considered ... | • | | • | • |
| 2016 | Xie et al. [45] | Revisit of automatic debugging via ... | • | | • | • |
This metric evaluates the ability to locate the first faulty component.
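A minimal sketch of MRR, using the same illustrative conventions as the earlier metric sketches, is shown below.

```python
# Illustrative sketch: mean reciprocal rank over a set of fault diagnoses.

def mean_reciprocal_rank(rankings, faulty_sets):
    reciprocal_ranks = []
    for ranking, faulty in zip(rankings, faulty_sets):
        # Position of the first faulty component in this ranked list, if any.
        rank = next((i for i, c in enumerate(ranking, start=1) if c in faulty), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

print(mean_reciprocal_rank([["a", "b", "c"], ["x", "y", "z"]],
                           [{"b"}, {"x"}]))            # (1/2 + 1/1) / 2 = 0.75
```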
Qi et al. [30] analyzed the effectiveness of automated debugging techniques from the perspective of fully automated program repair. The automated program repair process can be divided into three phases: fault localization, patch generation, and patch validation. With this in mind, the authors proposed the NCP metric, the number of candidate patches that are generated in the patch generation phase. Intuitively, a well-performing fault localization technique would require a lower number of generated candidate patches because the faulty component is ranked higher in the diagnosis report.
To summarize, we observe that studies in software fault localization have adopted absolute evaluation metrics since Parnin and Orso’s study, namely wasted effort, accuracy at n, wasted effort at n, mean average precision, mean reciprocal rank, and the number of candidate patches. Moreover, wasted effort is slowly becoming the standard to evaluate the fault localization performance.
C. Result Comprehension
Gouveia et al. [27] implemented GZoltar, a plug-and-play plugin for the Eclipse Integrated Development Environment (IDE) that performs fault localization and visualizes the suspiciousness of program components. Specifically, GZoltar visualizes the results in three different ways: sunburst, vertical partition, and bubble hierarchy. The authors found evidence that the visualizations aid the developer in finding the root cause of a bug, which we discuss in more depth in Section IV-E.
Wang and Liu [42] presented an automated debugging technique using disparities of dynamic invariants, named FDDI. FDDI uses a spectrum-based fault localization technique to localize the most suspicious functions. Then, FDDI uses Daikon to infer dynamic invariant sets for the passing and failing test suites. Finally, FDDI performs a disparity analysis between the two invariant sets and generates a debugging report that comprises suspicious statements and variables. The variables are extracted from the disparity, which could assist users in finding and understanding the root cause of a bug.
As mentioned in Section IV-B, Wen et al. [43] performed fault localization based on software changes, resulting in a list of suspicious change hunks. The advantage of outputting change hunks is twofold. First, the time spent on bug triaging is reduced because developers are linked to change hunks. The authors showed in an empirical study that 70% to 80% of the bugs are fixed by the developer who introduced the bug. A possible explanation for this is that the developer, who introduced the bug, is familiar with the code. Second, change hunks consist of contextual lines (unchanged lines), changed lines, and a corresponding commit description, providing the developer with contextual information to understand the diagnosed change hunks.
Wang and Huang [41] proposed the use of weighted control flow subgraphs (WCFSs) to provide contextual information on the suspicious components in the diagnosis report. The WCFSs are constructed from the execution traces collected during the execution of the test suite, which are also used to construct the activity matrix for SBFL. For each suspicious component in the diagnosis report, the authors allow the developer to display the associated WCFS. This enables the developer to navigate or search the diagnosed components in a more natural manner.
Li et al. [39] proposed an SFL technique, named Swift, that involves the developer in the fault localization process.
Swift performs SBFL but instead of displaying a ranked list, it guides the developer through the diagnosis report by showing the developer a query for the most suspicious method. The query consists of the input and output of the method invocation, which the developer has to validate by marking it as correct or incorrect. Then, the fault probabilities are modified accordingly and Swift generates a new query for the next most suspicious method.
Zuddas et al. [34] proposed a prototype tool, called MIMIC, that identifies potential causes of a failure. MIMIC is able to do this by performing four steps: execution synthesis, monitoring points detection, anomaly detection, and filtering. The output of MIMIC does not simply consist of suspicious statements but, instead, provides code locations, their supposedly correct behavior model, and the actual values that violate the generated behavioral model. In an empirical study, the authors show that MIMIC can effectively detect failure causes.
Pastore and Mariani [29] proposed AVA, a fault localization technique that generates an explanation about why diagnosed components are considered suspicious. It does this by comparing execution traces to a finite state automaton (FSA), which is commonly inferred from successful program executions. The suspicious components are detected using KLFA [49]. KLFA is also able to classify the difference between the actual and expected behavior according to a set of defined patterns, e.g. delete, insert, replace, etc. The classification and the suspicious components are then displayed to the developer such that the developer is able to determine whether a suggested component is truly faulty.
To summarize, several studies have focused on improving result comprehension in software fault localization. However, most studies evaluate their approach with a case study, rather than with a study involving actual users.
D. Ecosystems
As mentioned in Section IV-C, Gouveia et al. [27] have developed the GZoltar toolset, which is available as an Eclipse plug-in. The toolset localizes faults by employing a spectrum-based fault localization technique, namely Ochiai [50], that takes as input the coverage information of executed test cases. By performing SBFL the toolset produces a ranked list of suspicious program components. In addition, as a response to the findings of Parnin and Orso [1], the authors have improved the plug-in by extending the toolset with visualization capabilities.
Another tool that was created as a response to Parnin and Orso’s findings is AVA. AVA [29] consists of two main components: the AVA-core library and the AVA-Eclipse Eclipse plug-in. The AVA-core library implements an API that can be invoked from third-party programs to generate interpretations from anomalies. The Eclipse plug-in provides the developer with a GUI in Eclipse to perform debugging using AVA.
We observe that the SFL research community has not yet put much effort into creating tools that can be used by developers. Therefore, we suggest that more effort be spent on developing tools that facilitate automated debugging techniques.
E. User Studies
To verify the effectiveness of the visualizations generated by GZoltar in practice, Gouveia et al. [27] performed a user study. The experiment involved 40 participants divided into two groups: a control group that is only allowed to make use of the default debugging tools provided by the Eclipse IDE and a test group that has to use GZoltar for debugging. The user experiment showed evidence that the mean time of completing the debugging task of the test group is significantly shorter than the mean time of the control group. In fact, the test group took on average 9 minutes and 17 seconds less than the control group to find the injected fault.
Xie et al. [45] reproduced a user study similar to Parnin and Orso’s work [1], but with substantially more participants and debugging tasks, namely 207 participants and 17 debugging tasks. The experiments were performed on a platform, called Mooctest, that is able to localize faults, track user behavior such as mouse position, and analyze the produced logs. The main finding of the user study is that, regardless of its accuracy, spectrum-based fault localization does not reduce the time spent debugging a fault. Also, inaccurate fault localization results may even lengthen the debugging process. Based on these results, the authors corroborated the findings of Parnin and Orso — more research should be performed on result comprehension.
Kochhar et al. [36] performed a user study by means of a survey involving 386 practitioners from more than 30 countries. In the survey, the authors found that practitioners have high thresholds for adopting automated debugging techniques. A comparison between the expectations of practitioners and the state-of-the-art fault localization techniques showed that research should primarily focus on improving reliability, scalability, result comprehension, and IDE integration, such that practitioners’ expectations can be met.
Wang et al. [35] evaluated IR-based fault localization techniques by means of an analytical study and one involving human subjects. In the analytical study, Wang et al. showed evidence that the performance of IRBFL techniques is determined by the quality of bug reports. However, the authors also found that a large portion of the bug reports does not contain enough identifiable information, and therefore IRBFL techniques are less effective in the majority of cases. In the user experiment, the authors found evidence that IRBFL techniques are helpful when bug reports do not contain rich information but are unlikely to be effective otherwise.
Xia et al. [44] also reproduced a user study similar to the work of Parnin and Orso and Xie et al. [45]. Their user study involved 36 professionals and 16 real bugs from 4 reasonably large open source projects. However, unlike Parnin and Orso and Xie et al., Xia et al. show evidence that SBFL does reduce the time spent debugging.
To summarize, we observe that the research community has performed a couple of user studies to understand the users’ needs. However, besides the mentioned user studies, almost no study evaluates their technique with a user study, which
would be particularly useful in determining its effectiveness.
V. RESEARCH IMPLICATIONS
In Section IV, we observed that more studies are adopting an absolute metric to measure the performance of SFL techniques. In particular, we see that wasted effort is slowly becoming the absolute metric to measure the performance of SFL techniques.
Although there are a few studies that propose a solution for better result comprehension, almost none of them evaluate their solution with a user study. In the case of GZoltar, its visualizations are evaluated with a user study, and the authors have shown evidence that debugging with GZoltar reduces the time spent on debugging. However, while the debugging time is reduced, no study has yet analyzed the debugging process with an SFL tool in depth. For example, does a developer run an SFL tool multiple times before fixing a bug? Or is a bug fixed after the first analysis? How many suspicious locations identified by an SFL tool are typically visited by the developer? For what reasons? To answer such questions, we need a theory describing the successful use of software fault localization techniques — developing such a theory calls for extensive qualitative studies [51] with developers interacting with such techniques.
To perform studies that focus on how to improve result comprehension, we need tooling. Parnin and Orso have pointed out that the SFL community needs to focus on tooling, but in our survey we have not seen significant advancements in this area. Although creating a tool requires a lot of effort, we are not able to push forward SFL research if we do not spend time on developing SFL tools. Therefore, we call for an open source community for SFL tooling such that development efforts can be distributed among researchers. Creating an open source community for SFL also has the benefit that replication studies are easier to perform and therefore allows comparisons to be made. This tooling environment should also provide an integrated, rich source of additional data that diagnostic techniques can leverage. Using historical data to assess multiple-fault prevalence [52] and constructing prediction models from issue trackers to improve SFL diagnoses [53] are two examples of work benefitting from such integration.
When tooling exists, we are able to perform more user studies. Since Parnin and Orso’s study, only five user studies [27], [35], [37], [44], [45] have been performed. A possible cause is that the required tooling does not yet exist and takes a lot of effort to develop. However, user studies are essential to fully understand how to improve the current state of SFL techniques and how to get SFL techniques adopted in the software development cycle.
VI. CONCLUSION
In the past two decades, substantial effort has been put into improving software fault localization techniques. However, Parnin and Orso were among the first to perform a user study, and they found that the assumptions made by SFL techniques do not actually hold in practice. As an example, the common assumption of perfect bug understanding does not hold in practice. For this reason, Parnin and Orso suggested a number of research directions, which we generalized into absolute evaluation metrics, result comprehension, ecosystems, and user studies.
In our survey, we found that since Parnin and Orso’s study, the SFL research community has slowly been adopting absolute evaluation metrics. Furthermore, it has proposed several techniques to improve result comprehension. Unfortunately, substantially less effort has been put into developing ecosystems and performing user studies, which play essential roles in closing the gap between research and practice.
Based on these observations, we recommend that the SFL research community focus on creating an ecosystem that can be used by developers during debugging activities. Such an ecosystem can serve as a framework for SFL, such that researchers can easily implement their techniques in the framework and evaluate them in user studies. While current studies mostly evaluate their SFL technique using absolute metrics, actual adoption requires insights that can only be obtained from user studies of automated debugging techniques used in practice.
ACKNOWLEDGMENTS
This material is based upon work supported by the scholarship number SFRH/BDI/95339/2013 and project POCI-01-0145-FEDER-016718 from Fundação para a Ciência e Tecnologia (FCT), by ERDF COMPETE 2020 Programme, by EU Project STAMP ICT-16-10 No.731529 and by 4TU project “Big Software on The Run”.
REFERENCES
Obfuscated VBA Macro Detection Using Machine Learning
Sangwoo Kim, Seokmyung Hong, Jaesang Oh and Heejo Lee∗
Korea University
Seoul, Republic of Korea
Email: {sw_kim, canasta, jaesangoh, heejo}@korea.ac.kr
Abstract—Malware using document files as an attack vector has continued to increase and now constitutes a large portion of phishing attacks. To avoid anti-virus detection, malware writers usually implement obfuscation techniques in their source code. Although obfuscation is related to malicious code detection, little research has been conducted on obfuscation with regards to Visual Basic for Applications (VBA) macros.
In this paper, we summarize these obfuscation techniques and propose an obfuscated macro code detection method using five machine learning classifiers. To train these classifiers, our proposed method uses 15 discriminant static features, taking into account the characteristics of VBA macros. We evaluated our approach using a real-world dataset of obfuscated and non-obfuscated VBA macros extracted from Microsoft Office document files. The experimental results demonstrate that our detection approach achieved an F2 score improvement of more than 23% compared to related studies.
I. Introduction
Attacks using macros have been a constant threat since “Concept”, a wide-spread macro virus written in Visual Basic for Applications (VBA), appeared in 1995 [1]. Macro malware was a major threat from the late 1990s to the early 2000s, but it declined after the security mechanism of Microsoft Office was enhanced in 2000 [2], [3]. However, according to the statistics and security news of Anti-Virus (AV) companies, attacks using VBA macros have been increasing again since the second half of 2014 [4], [5]. Since the release of Microsoft Office 2000, the execution of VBA macros has been disabled by default, but attackers began to deploy simple social engineering techniques that lure users into enabling the execution of macros.
The threat reports of AV companies also confirmed the comeback of script-based malware such as VBA macro malware. According to the report released by Symantec in 2016, MS Office document file formats dominated the email attachments (73.2%), even more than executable files [6]. Furthermore, a recent Kaspersky threat report demonstrates that the Microsoft Office Word VBA macro-based attacks are included in the top 10 malware families [7]. The latest McAfee security report, published in September 2017, also covers the trends of script-based malware and reports a malware type which includes PowerShell command inside of VBA macro [8].
As mentioned above, the security reports of AV vendors have shown that script-based attacks are on the rise and can be dangerous. The most frequently mentioned scripting languages that can be used in malicious code are JavaScript, Visual Basic Script, PHP, and PowerShell. Among these, VBA macro malware, which targets MS Office documents, should not be ignored. Owing to the fact that MS Office document files are used by a large number of companies and institutions, malware which leverages MS Office documents as an attack vector can have a large impact. Attacks related to VBA macros are usually considered less suspicious than executable files because most people are familiar with MS Office document files, e.g., .docx or .pptx. As a result, this negligence leads to the proliferation of ongoing VBA macro attacks.
A primary quality that a successful cyber-attack must have is the ability to bypass AVs. One of the most effective strategies to bypass AVs is obfuscation, which is the intentional obscuring of code by making it difficult to understand. In many script-based malware, obfuscation techniques are fairly common, and it is generally known that obfuscation works well against AVs. There have been malicious JavaScript detection studies which categorized obfuscation techniques into four types and investigated how the detection rate changed when they were applied [9], [10]. The studies demonstrated that obfuscation techniques are effective in avoiding the AV detection.
Currently, many obfuscated VBA macro attacks are underway, but there are still few studies on obfuscated VBA macro detection. Most document malware detection research has focused on vulnerability or shellcode detection [11]–[14].
Only recently has it appeared in several studies under the name of “Downloader” or “Macro malware”. Mimura et al. [15] conducted a study to extract the Remote Access Trojan (RAT) in malicious documents files used in Advanced Persistent Threat (APT) attacks from 2009 to 2015. They classified the collected document malware as “Downloader” and “Dropper”. “Downloader” uses VBA macro, and “Dropper” includes executable files in itself. However, the focus of the study was on the “Dropper”, rather than the “Downloader”.
We have observed that the rate of VBA macro use in APT attacks has been drastically increasing since 2014, and our proposed method targets the missing area that has not been studied by the referenced research. There are few studies
that leverage machine learning to detect malicious MS Office documents.
Cohen and Nissim et al. [16], [17] proposed a method to detect malicious docx files with structural features by using active learning, which emphasizes the updatability of a detection model. It provided a 94.44% true positive detection rate by leveraging the hierarchical nature of docx files. It presented the 11 most prominent features, including 8 structural paths related to the existence of VBA macros.
Conversely, research on detecting attacks related to PDF documents has been widely carried out. VBA macros in MS Office files and JavaScript in PDF documents share similar characteristics. We can detect the obfuscation techniques in the JavaScript of PDF files, and there are many studies on the detection of obfuscated malicious JavaScript. Given that it is also a scripting language, one may think that we can apply JavaScript research to VBA macro detection, but it has never been demonstrated how this would work. There are many similarities due to the shared “scripting language in the document” setting, but the languages themselves are different, hence the obfuscated code is very different. For instance, there is a minification technique in JavaScript. Although minification can reduce code size by deleting line feeds, it often appears in malicious script code to avoid malware detection. This technique is only applicable to JavaScript, not VBA macros. Owing to the differences between JavaScript and VBA, independent research focusing on VBA macros should be conducted.
In this paper, we propose a method to detect obfuscated VBA macros in MS Office documents by using machine learning classifiers. First, we investigated the VBA macros that were actually used as malicious code, and classified the VBA obfuscation techniques into four categories by referring to related research. In our experiment, we evaluated the performance of our proposed obfuscation detection method which leverages machine learning. 773 benign and 1,764 malicious MS Office files were collected, and we conducted an experiment with the 4,212 VBA macros extracted from the collected files. All VBA macros were manually labeled as either obfuscated or normal. By performing a manual scan on large, real-world samples, we demonstrated how many malicious and benign samples were obfuscated. From this labeled dataset, we extracted 15 discriminant static features that reflect the characteristics of the VBA macros, applied them to five different classifiers, and compared the results with those of related studies. As a result, we obtained a 23% improvement in F₂ score in our comparative experiment.
The contributions of this paper are as follows.
- As the first obfuscation detection study applied to VBA macros, we have summarized the types of obfuscation techniques and shown the extent of obfuscation applied to real-world VBA macros.
- We presented 15 discriminant static features and tested them using five different classifiers. The results of the comparison with related research show that the performance was improved by 23%.
The rest of this paper is organized as follows. Section II summarizes related studies concerning the detection of document malware. Section III provides a brief explanation of VBA macros and a categorization of obfuscation techniques. In Section IV, we propose our obfuscated VBA macro detection approach together with the experiment setup. In Section V, we evaluate the classification performance of the proposed detection approach. Finally, Sections VI and VII contain the discussion and conclusion of this paper.
II. RELATED WORK
Attacks using VBA macros continue to increase. Moreover, over 98% of malicious VBA macros are obfuscated according to our manual inspection on a collected sample set (as detailed in Section IV.B). However, there is a scarcity of studies on the detection of obfuscation on VBA macros. Given that attacks using VBA macros have only just begun to increase, most research is focused on vulnerability or shellcode detection [11]–[15] rather than on the detection of VBA macros. The following are the studies that can be applied to attacks using VBA macros.
A. Malicious VBA macro detection
Until now, only a few studies have been proposed, and most of them are based on a machine learning approach. Cohen et al. [16] conducted research on malware detection for XML-based documents. This study uses the hierarchical nature of Office Open XML (OOXML) as a key feature for machine learning to detect MS Office document malware. It recognized the risks that document files can pose and organized the types of possible attacks that could result from them. In their experiment, nine different classification algorithms were used, and Random Forest classifiers demonstrated the best results among them. In addition, they proved the effectiveness of their proposed method by comparing the detection results to those of several AVs. This research using the idea of structural features has proven to be effective when dealing with OOXML file types such as .docx, .docm, or .xlsx. However, the majority of VBA macro malware are .doc or .xls files, which are not OOXML file types [6].
Subsequently, Nissim et al. [17] added Active Learning to the SFEM method in 2017. Active Learning methods are designed to assist the analytical efforts of experts; it led to a 95.5% reduction of labeling efforts. However, their proposed mechanism is limited to docx files, which is narrower than OOXML files.
Gaustad [18] presented research on malicious VBA macro detection in 2017. This study used a Random Forest ensemble classifier with over a thousand static features to detect malicious documents. However, given that its detection was performed with static features of malicious VBA macro code, it is difficult to identify how obfuscation techniques were considered in the detection process.
B. Malicious JavaScript code detection
JavaScript is one of the most popular scripting languages. JavaScript-based attacks are also taking place in PDFs, and
have similarities with VBA in that both threats utilize scripting language in document formats. By retrieving research on malicious JavaScript detection, we are able to explore appropriate ways to counteract VBA macro malware.
While malicious VBA macro detection in MS Office documents mainly relies on machine learning methods, there is a larger variety of approaches to detecting malicious JavaScript in PDFs. Moreover, a number of studies have emphasized analyzing obfuscation, some focusing on restoring obfuscated code to its original form by de-obfuscating it. In the subsections below, we introduce representative studies on the detection of malicious JavaScript.
**Static analysis approach:** In malware detection, static analysis has advantages over dynamic analysis in terms of cost for inspection, because it generally guarantees a lightweight inspection. Choi et al. [19] proposed a method to detect JavaScript obfuscation that leveraged the lexical characteristics of obfuscated strings. Detection was performed by using an N-gram distribution, entropy, the string length for all the strings used, and the parameters of the dangerous function. Xu et al. [20] analyzed the decoding process of obfuscated code to detect obfuscation. Their key idea was that obfuscated malicious JavaScript code has to be de-obfuscated before it executes its malicious actions. They identified the function calls that are related with obfuscated malicious JavaScript code.
**Dynamic analysis approach:** Liu et al. [21] proposed a method to detect malicious JavaScript through document instrumentation. This method inserts monitoring code into a PDF, so that the inspector knows the context of the runtime behaviors. Kim et al. [22] proposed J-force, which is a forced execution engine for JavaScript. J-force was introduced to detect suspicious hidden behavior, and it achieved a 95% code coverage on real-world JavaScript samples. Furthermore, there is a study focusing on the de-obfuscation of malicious JavaScript, JSDES [23]. It is an automated system for de-obfuscation and analysis of malicious JavaScript code. This study conducted an extensive survey on the available JavaScript obfuscation techniques and their usage in malicious code.
**Machine learning approach:** Likarish et al. [24] proposed a method based on the Support Vector Machine (SVM) and a decision tree to detect malicious JavaScript in web pages. They proposed the frequency of 50 keywords and 15 properties as detection features that indicate human-readable characteristics. Jodavi et al. [25] used one-class SVM classifiers to detect obfuscation. During training, they pruned the classifier ensemble using a novel binary Particle Swarm Optimization (PSO) algorithm to find a near-optimal sub-ensemble. Aebersold et al. [26] tested the machine learning approach to detect obfuscated JavaScript in 2016. This study trained four different classifiers and evaluated them with real-world PDF files. Their approach and proposed features showed promising results on a benign dataset, but only achieved a 60.6% recall score on a malicious dataset.
### III. BACKGROUND
#### A. Visual Basic for Applications
Visual Basic for Applications (VBA) is a scripting language that is implemented within host applications, such as Microsoft Office Word or Excel [27]. Users of the host applications can leverage the VBA language to write scripts that access the functionalities of those applications. The advantage of VBA is its ability to automatically and repeatedly use various functions of the host application and system. Figure 1 displays sample macro code that interacts with a system. Figure 1(a) shows the macro code for executing a program on a system via the VBA function `Shell()`. With several lines of code, any program on a computer can be executed. As shown in Figure 1(b), VBA can be used to send emails in Excel via an Outlook object. Through VBA, users can perform a variety of tasks.
The expandability of VBA is convenient for users, but it can also become an opportunity for attackers. Attackers can accomplish almost every action that can be used for malicious behavior, such as downloading or executing files, via a VBA macro. Figure 1 represents sample code of functions that are triggered by users; however, attackers prefer to take advantage of functions triggered upon opening a document, such as `workbook_open()` or `document_open()`.
### TABLE I: Type of obfuscation techniques
| # | Type | Method |
|----|------|--------|
| O1 | Random obfuscation | Randomize name |
| O2 | Split obfuscation | Split strings |
| O3 | Encoding obfuscation | Encode strings |
| O4 | Logic obfuscation | Insert and reorder code |
Fig. 2: An example of Random obfuscation
Furthermore, by using simple social engineering techniques which lure users into enabling macros, attackers are able to bypass MS Office’s security mechanism.
### B. Obfuscation Techniques in VBA
The goal of this study is to detect obfuscation with the textual characteristics of obfuscated macro code. For more effective detection, we classify obfuscation techniques into four types by target and method of obfuscation based on the studies by Collberg et al. [28] and Xu et al. [9]: 1) Random obfuscation, 2) Split obfuscation, 3) Encoding obfuscation, and 4) Logic obfuscation. Each obfuscation type has different syntactic structure and different uses of functions and operators. Therefore, we can use the unique characteristics of each type to detect obfuscation. Table I provides a summary of each obfuscation type.
Obfuscation techniques hinder the manual code inspection of human experts. Whether an AV is signature based or machine learning based, the maliciousness of code must first be judged by human experts. These obfuscation techniques are applied to slow down the analysis, which in turn delays the countermeasures taken after detection. Although each obfuscation technique is quite simple, when used in combination, they render the code visually indecipherable. In addition, attackers use obfuscation tools to create many variants of malware with different hash values. In the following subsections, each obfuscation technique and the machine learning features we use to detect it will be explained with example code.
1) **O1 Random Obfuscation**: Random obfuscation is a type of obfuscation that changes the identifiers of VBA macro code. Identifiers are the names of variables and procedures that are used in VBA macro code. Random obfuscation makes it difficult to analyze the flow from variables and function calls by changing the identifiers to random strings.
Figure 2 shows an example of random obfuscation. The names of the sub procedure and the variables are changed to random meaningless strings such as `ueiwjfdjkfdsv`, `yruehdjdnnz`. This change to random strings makes it difficult for humans to understand the actual operation of the macro code.
The identifying feature of this random obfuscation is in the naming of the identifiers. Therefore, using Entropy, a measure of the disorder of the characters of the identifiers, can be one way of detecting the characteristics of this obfuscation. Related studies already leverage the entropy of the entire code as one feature to detect malicious scripts [18], [26]. In addition to this, given that random obfuscation is applied to identifiers, it is also possible to use the variance or mean value of length of identifiers as one feature of obfuscation detection.
2) **O2 Split Obfuscation**: Split obfuscation usually performs obfuscation by dividing parameter data. The morphological changes that occur in the process of partitioning data have proven to be very effective in avoiding signature-based AVs [9]. As the data is partitioned, it has a form that is different from the detection signature hence, it is not flagged by the detection technique. However, when the macro is executed, the parameter value transferred to the function is the same, so the macro can successfully execute its malicious action. Figure 3 displays an example of macro code with split obfuscation. This conversion does not change the actual behavior of the code, but it avoids the detection of the use of “wScript.shell” or “Process” as the signature for malware detection.
Functions such as `Shell()` and `URLDownloadToFile()` are frequently used for attacks in malicious VBA macros, but legitimate users can also use them in benign VBA macros for normal programs. Therefore, in order to determine whether a VBA macro is obfuscated or not, it is necessary to verify not only the functions it uses, but also the input parameters of the functions. Split obfuscation obstructs the detection of malicious code by modifying parameter values.
In obfuscated macro code, in order to use the split data, it is essential to combine it. The combination of data is done using the join operators ‘&’ and ‘+’, as shown in Figure 3. The join operators are used in normal macros, but more often in obfuscated macros. Thus, an excessive appearance of these characters can be selected as one of the features to detect obfuscation. In addition to this, given that it also increases the number and length of string variables, we can also leverage it as a feature.
3) O3 Encoding Obfuscation: Encoding obfuscation performs obfuscation by modifying function parameters like split obfuscation. Modification is performed by converting parameter data using reversible algorithms such as Base64 or Shift. Three types of methods are used in encoding obfuscation: 1) built-in VBA functions, 2) character encoding, and 3) user-defined functions.
The first type of encoding obfuscation uses the built-in functions of VBA such as Replace(), Right(), or Left(). Figure 4(a) shows an obfuscation using Replace(), which is natively supported by VBA. As shown in the figure, the parameter “savetofile” is changed to “savteRKtofilteRK”, in which the occurrences of “e” have been replaced. This prevents the macro from being detected by the keyword “savetofile”. The second type of encoding obfuscation changes the character encoding by using VBA functions such as Asc(), Hex(), and Chr(). These functions change characters to their ASCII code numbers and vice versa. The last type of encoding obfuscation uses conversion algorithms that are manually defined by users, as shown, for example, in Figure 4(b). Many such algorithms use simple bitwise operations, such as shift or xor, or more complex encodings such as Base64.
The functions used for encoding obfuscation are used in non-obfuscated macros as well, but there is a large gap in the frequency of their appearance. This is because attackers encode as many strings as possible to prevent AVs from finding keywords. In the case of “Downloader [15]” which downloads and executes a malicious executable, the URL, path and related strings are all encoded by use of the aforementioned functions. Hence, we can leverage the appearance frequency of encoding functions as a feature to detect this type of obfuscation.
4) O4 Logic Obfuscation: Logic obfuscation changes the execution flow of macro code. It complicates the code and makes analysis difficult. This technique is done by declaring unused variables or using redundant function calls. It is not difficult to increase the code size by inserting dummy code, and this is already being done by a public VBA macro obfuscation tool [29]. If the size of the code that needs to be analyzed increases 100 times through deliberately inserted redundant dummy code, the time it takes for a code analyst to work through the obfuscated code increases by a considerable amount.
Although the logic obfuscation affects the code analysis, it often results in a significant change in code size. It also changes several characteristics of code such as the number of functions and declared variables, function parameters, string data, etc. Therefore, logic obfuscation has no effect on the detection rate in our obfuscation detection study using static features. Rather, if the characteristics of logic obfuscation are well-summarized, we can leverage them as features to detect obfuscation. In Section IV, 15 discriminant static features which reflect the above-mentioned characteristics of the obfuscation techniques will be introduced.
IV. DETECTING OBFUSCATION WITH A MACHINE LEARNING APPROACH
The obfuscation techniques in VBA macros are explained in Section III. To detect the aforementioned obfuscation techniques, we propose a method based on classification algorithms trained through supervised machine learning. Although a machine learning based detection method requires several prerequisites, such as sufficient data collection, training set labeling, and a feature selection process, it nevertheless has several advantages over alternative techniques. Static analyses, such as signature- or pattern-based detection methods, have limitations when counteracting unknown malware, and dynamic analysis has a heavy overhead. The machine learning approach, on the other hand, has been applied in numerous areas of computer science, including anomaly detection, and offers an acceptable run time. If the prerequisites are satisfied, a machine learning method can overcome the shortcomings of the above-mentioned approaches, and promising performance can be expected.
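As a rough illustration of such a supervised classification step, the following scikit-learn sketch trains one possible classifier on a feature matrix and reports the F2 score; the file names, the train/test split, and the choice of Random Forest are placeholders rather than the exact experimental setup of this paper.

```python
# Hypothetical sketch of the classification step (scikit-learn).
# X: (n_macros, 15) matrix of static features V1-V15; y: 1 = obfuscated, 0 = normal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import fbeta_score

X = np.load("vba_features.npy")   # placeholder file names
y = np.load("vba_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# F2 weighs recall more heavily than precision, penalizing missed
# obfuscated macros more than false alarms.
print("F2:", fbeta_score(y_test, clf.predict(X_test), beta=2))
```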
This section provides an overview of our experiment process. It consists of 1) Data collection, 2) Preprocessing, 3) Feature extraction & selection, and 4) Classification using
machine learning classifiers. To thoroughly evaluate the performance of our proposed machine learning method, we first explain how we collected the samples and preprocess them. After that, the entire process of extracting and selecting features to effectively detect the obfuscation techniques summarized in Section III will be described. Finally, the explanation of the machine learning classifiers will follow.
A. Data collection
Before proceeding with the experiment, we collected Microsoft Office document files which contained VBA macros. Because our study targets VBA macros, we collected ".docm" and ".xlsm" files, which are likely to contain macros, through keyword searches on Google. We also unconditionally collected all the MS Office files that were classified as malicious on the malware portals [30]–[32], to ensure that our proposed method is well-suited to malicious files. The sample collection was done from 2016 to 2017.
We verified the hash value of the collected files so that there were no duplicates, and we also excluded the files which did not have VBA macros. In the next step, we double-checked the detection results of the VirusTotal [32] and the VBA macros of files to determine the benign and malicious dataset, so that the only samples using VBA macros as an attack vector were included in the malicious dataset. As a result of the data collection, we obtained 2,537 files in which 773 are benign, and 1,764 are malicious. Table II displays the summary of our dataset with the average file size of each sample set. According to our observation, malicious files tend to be much smaller in terms of file size, which means that most of the attacks using VBA macros work to download malware from a remote address and execute it, and do not actually include malware in the file itself [15].
Although VirusTotal includes the results of about 60 different AV vendors who take advantage of individual detection mechanism, it is not 100% accurate. Because there is no conclusive criterion to determine a sample’s maliciousness, we set a threshold to divide samples into malicious/benign training dataset. We set this threshold loosely to prevent the training samples from being mislabeled. In detail, we labeled a sample as malicious if more than 25 vendors detected it as malicious, and labeled it as benign if less than or equal to 2 vendors marked it as malicious. Every sample in between was manually inspected by three security researchers who specialize in VBA macros.
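The labeling rule can be summarized by the following small sketch; the function name and return values are our own naming.

```python
# Sketch of the VirusTotal-based labeling rule described above.
def label_sample(vendor_detections):
    """vendor_detections: number of AV vendors flagging the sample as malicious."""
    if vendor_detections > 25:
        return "malicious"
    if vendor_detections <= 2:
        return "benign"
    return "manual_review"   # inspected manually by the three security researchers
```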
B. Preprocessing
The next step for detecting obfuscation is preprocessing. By preprocessing we mean to extract VBA macros from the collected MS Office document files, remove small (insignificant) and duplicated macros, and label training samples.
To obtain the VBA macros from a Microsoft Office document file, we need to open the document file directly or parse the OpenXML structure (or the OLE structure used by MS Office 2003 and earlier). Given that malicious VBA macros are often executed when documents are opened, we use oletools to extract the VBA macro code [33]. Oletools is an open source Python package to analyze Microsoft Office document files. It allows us to easily extract the VBA macros without opening the file.
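A minimal sketch of this extraction step with the olevba module of oletools might look as follows; the file path is a placeholder and error handling is omitted.

```python
# Minimal VBA macro extraction sketch using oletools (olevba).
from oletools.olevba import VBA_Parser

def extract_macros(path):
    macros = []
    parser = VBA_Parser(path)
    try:
        if parser.detect_vba_macros():
            # extract_macros() yields (filename, stream_path, vba_filename, vba_code).
            for _, _, vba_filename, vba_code in parser.extract_macros():
                macros.append((vba_filename, vba_code))
    finally:
        parser.close()
    return macros

for name, code in extract_macros("sample.docm"):   # placeholder path
    print(name, len(code), "bytes of VBA code")
```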
Although we split our dataset into benign and malicious to provide the information about the relationship between maliciousness and obfuscation, the goal of this paper is to detect obfuscation in VBA macros. VBA macros in benign
datasets could be obfuscated, and vice versa. Therefore, we manually inspected and marked the macros with obfuscating features (described in Section III) as “obfuscated”.
In this manual labeling process, we observed that the macros of less than 150 bytes are not meaningful, either malicious or benign, because they are only made up of comments or practice code that had no particular purpose. Therefore, insignificant macros with too short of a length were excluded from our dataset.
Table III shows that the majority of malicious VBA macros are obfuscated. Only 1.7% of the benign macros are obfuscated, whereas 98.4% of the malicious macros are obfuscated. With a huge gap of obfuscation rates in each of the dataset group, we verified the obfuscation tendency in benign and malicious macros: malicious macros are more likely to be obfuscated.
Also, there is a large gap in the number of extracted VBA macros. As explained in the data collection step, we already eliminated duplicate files after collecting the Microsoft Office documents, but there is still a possibility that the files contain duplicate macros. In this duplicate elimination process, we found that there were about 5k macros in the overall dataset. Finally, the number of macros was narrowed down to 3,380 and 832, respectively, in the benign and malicious dataset.
In the case of the benign dataset, the number of macros increases to more than 4 times as many as the number of files, because one file could have several macros. However, in the case of a malicious dataset, even though we only collected files that contain more than one macro in the data collection step, the number of macros is halved compared to the number of files. This means that most of the malicious documents which contains VBA macros are using the same macros.
In addition to this, we also examined the code length of the macros belonging to the non-obfuscated and obfuscated groups. The results are shown in Figure 5 (a) and (b), which display the code length distribution in normal and obfuscated VBA macros, respectively. Figure 5 (a) is uniformly distributed throughout; this could also be evidence that our dataset is well-collected and includes informative benign macros. Alternatively, in Figure 5 (b), it can be seen that the macros are somewhat grouped and form several horizontal lines. Generally, we can expect that obfuscated code is reproduced with a custom obfuscator using different options. Especially in the malicious case, malware writers are expected to create variations to avoid the signature-based detection of AVs. We can interpret the results shown in Figure 5 (b) as the consequence of this: there are a large number of macros which have a similar code length even after duplicate elimination.
C. Feature selection
We summarized the types of obfuscation techniques in Section III. After observing the results of applying the obfuscation techniques, we built a set of features based on each of the obfuscation techniques. The proposed features are depicted in Table IV. Each of the features targets obfuscation, and some of them are from related studies. Given that four types of techniques have distinct characteristics, different combinations of features are required for an effective detection.
1) Detection of O1 (Random obfuscation): The O1 obfuscation technique randomizes the identifiers in the macro code. An identifier refers to either a function name or a variable name, and O1 can be applied to both. As a result of O1 obfuscation, the randomness of the macro code increases. To measure the randomness of macros, we use the Shannon entropy of the file as feature V13 [35]. The entropy is computed over the individual characters of the macro code. If $p_i$ is the rate at which character $i$ appears in the entire macro code, the entropy $H$ follows Shannon's formula.
$$H(X) = - \sum_i p_i \log_2 p_i$$
We use two additional features, V14 and V15, to capture the characteristics of O1. Because identifiers produced by the O1 technique have varying lengths, we measure identifier length: V14 is the average length of the identifiers used in the macro code, and V15 is the variance of the identifier lengths.
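A minimal sketch of how V13-V15 could be computed is shown below; the identifier tokenizer and the keyword list are simplifications assumed for illustration, since the paper does not specify them.

```python
# Hedged sketch of the O1 features: V13 (character-level Shannon entropy),
# V14 (mean identifier length) and V15 (variance of identifier lengths).
# The tokenizer and keyword filter are illustrative simplifications.
import math
import re
from collections import Counter

VBA_KEYWORDS = {"sub", "function", "end", "dim", "as", "if", "then", "else",
                "for", "next", "set", "call", "string", "integer", "long"}

def shannon_entropy(code):                      # feature V13
    if not code:
        return 0.0
    counts = Counter(code)
    total = len(code)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def identifier_stats(code):                     # features V14, V15
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code)
    idents = [t for t in tokens if t.lower() not in VBA_KEYWORDS]
    if not idents:
        return 0.0, 0.0
    lengths = [len(t) for t in idents]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return mean, var
```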
2) Detection of O2 (Split obfuscation): In VBA macros with O2, more strings and string operators are observed than in normal macros, with the aim of evading AV detection. Such macros also contain many unused dummy strings. For this type of obfuscation, we use V5-V7. V5 counts the occurrences of string operators such as ‘+’, ‘-’ or ‘&’, which are used for string concatenation. V6 is the percentage of characters belonging to strings, and V7 is the average string length. These three features indicate the unusual prevalence of strings in obfuscated macros.
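A possible implementation of these three features is sketched below; treating everything between double quotes as a string literal (and ignoring escaped quotes) is an assumption made for brevity.

```python
# Hedged sketch of the O2 features: V5 (# of string operators), V6 (share of
# characters belonging to string literals) and V7 (average string length).
import re

STRING_OPERATORS = ("&", "+", "-")   # operators listed in the text

def o2_features(code):
    strings = re.findall(r'"([^"]*)"', code)     # naive literal extraction
    v5 = sum(code.count(op) for op in STRING_OPERATORS)
    chars_in_strings = sum(len(s) for s in strings)
    v6 = chars_in_strings / len(code) if code else 0.0
    v7 = chars_in_strings / len(strings) if strings else 0.0
    return v5, v6, v7
```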
3) Detection of O3 (Encoding obfuscation): Encoding obfuscation is related to the use of various function calls. It is often combined with O2 to hide keywords that AVs could otherwise detect, e.g., URLs or .exe. It also uses infrequent financial functions, which are normally only used for accounting and financial calculations, to create more varied variants. To capture the characteristics of O3, we use V8-V11, attempting to cover as many function types as possible. Example functions for each feature are listed below; the remaining functions can be found in the VBA language specification [27].
- **V8 (text functions):** Asc(), Chr(), Mid(), Join(), InStr(), Replace(), Right(), StrConv(), etc.
- **V9 (arithmetic functions):** Abs(), Atn(), Cos(), Exp(), Log(), Randomize(), Round(), Tan(), Sqr(), etc.
- **V10 (type conversion functions):** CBool(), CByte(), CChar(), CStr(), CDec(), CUInt(), CSShort(), etc.
- **V11 (financial functions):** DDB(), FV(), IPmt(), PV(), Pmt(), Rate(), SLN(), SYD(), etc.
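These counts can be computed with simple pattern matching, as in the sketch below; only the example functions named above are listed, whereas a full implementation would enumerate the complete categories from the VBA specification.

```python
# Hedged sketch of the O3 features V8-V11: per-category counts of calls to
# text, arithmetic, type-conversion and financial functions. Only the example
# functions named in the text are listed here.
import re

FUNCTION_CATEGORIES = {
    "V8_text":        ["Asc", "Chr", "Mid", "Join", "InStr", "Replace",
                       "Right", "StrConv"],
    "V9_arithmetic":  ["Abs", "Atn", "Cos", "Exp", "Log", "Randomize",
                       "Round", "Tan", "Sqr"],
    "V10_conversion": ["CBool", "CByte", "CChar", "CStr", "CDec", "CUInt",
                       "CShort"],
    "V11_financial":  ["DDB", "FV", "IPmt", "PV", "Pmt", "Rate", "SLN", "SYD"],
}

def o3_features(code):
    feats = {}
    for category, funcs in FUNCTION_CATEGORIES.items():
        pattern = r"\b(?:%s)\s*\(" % "|".join(map(re.escape, funcs))
        feats[category] = len(re.findall(pattern, code, flags=re.IGNORECASE))
    return feats
```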
4) Detection of O4 (Logic obfuscation): O4 changes the entire shape of the targeted code by inserting dummy code and reordering the code. As mentioned in Section III, code reordering does not affect our proposed method because we use static features. We use V1-V4 to capture dummy code insertion, which increases the code size. Before describing each feature, we define “words” as the units delimited by whitespace and VBA programming language symbols. “Words” were used as part of the features for detecting maliciousness in [24]; they are also included in our feature set as V3 and V4 because word length is a discriminative feature for separating obfuscated from non-obfuscated code. V3 and V4 are the average and the variance of word length, respectively.
To balance the effect of each feature on the trained classifiers, a normalization process is required. Aebersold et al. [26] normalized features by dividing their values by the length of the entire script. Instead, we assign the length of the comment-excluded macro code to V1 and the length of the comments to V2, and we use V1 as the normalization unit for more effective training.
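The sketch below shows one way to derive V1-V4; treating everything after a single quote as a comment is a simplification (it ignores Rem comments and quotes inside string literals), and the delimiter set is an assumption.

```python
# Hedged sketch of V1 (code length without comments), V2 (comment length),
# and V3/V4 (mean and variance of "word" length). Comment stripping and the
# delimiter set are illustrative simplifications.
import re

WORD_DELIMITERS = r"[\s(){}\[\],.:;=+\-*/&<>]+"

def o4_features(code):
    code_lines, comment_lines = [], []
    for line in code.splitlines():
        body, _, comment = line.partition("'")
        code_lines.append(body)
        comment_lines.append(comment)
    stripped = "\n".join(code_lines)
    v1 = len(stripped)                               # normalization unit
    v2 = sum(len(c) for c in comment_lines)
    words = [w for w in re.split(WORD_DELIMITERS, stripped) if w]
    lengths = [len(w) for w in words]
    v3 = sum(lengths) / len(lengths) if lengths else 0.0
    v4 = (sum((l - v3) ** 2 for l in lengths) / len(lengths)) if lengths else 0.0
    return v1, v2, v3, v4
```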
V1-V11 and V13-V15 are selected to capture the characteristics of each obfuscation technique. In addition, a few distinctive functions are observed in obfuscated macros. Obfuscation is usually applied to code that has something to hide rather than to tiny, insignificant code: it is used to protect the intellectual property of program code, or to hide malicious behavior in malware. In both cases, the obfuscated code plays a significant role that the programmer wants to hide, which often leads to the use of functions with relatively rich functionality. For example, the Shell() function can run executable programs, and CallByName() can execute methods of objects that have full functionality within the VBA macro. Including these functions, V12 counts the use of functions that can write, download, or execute files.
D. Machine learning classifiers
We choose five supervised machine learning classifiers to evaluate the performance of our proposed method: Random Forest (RF), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Bernoulli Naive Bayes (BNB), and Multi-Layer Perceptron (MLP). In addition to the four classifiers already used in previous studies [24], [26], we introduce the MLP classifier, a class of artificial neural network models. We use the Scikit-learn [36] implementations of these classifiers. Instead of describing the details of each classifier, we provide the customized parameters as well as a brief description of each classifier below.
**Support Vector Machine (SVM)** [37] finds the optimal (maximum-margin) hyperplane that separates the feature space into two classes (in our work, obfuscated and non-obfuscated). In our experiments, we use C=150 and $\gamma$=0.03 as parameters.
**Random Forest (RF)** [38] is an ensemble learning method for classification or regression. It constructs multiple decision trees in the training phase. It is known that Random Forest is less likely to have an overfitting problem than a decision tree [39].
**Multi-Layer Perceptron (MLP)** [40] is a feed-forward artificial neural network model that conducts supervised learning by backpropagation using one or more hidden layers between the input and output layer.
**Linear Discriminant Analysis (LDA)** [41], which is a form of supervised dimensionality reduction, is a generalization of Fisher’s linear discriminant [42] that finds the linear subspace which maximizes the separation between two classes.
**Naive Bayes** [43] classifiers are a set of simple probabilistic classifiers based on applying the Bayes’ Theorem with naive independence assumptions between the features used. We use Bernoulli Naive Bayes (BNB) in the evaluation of proposed method.
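In Scikit-learn, the five classifiers can be instantiated as sketched below; apart from the SVM parameters stated above, the settings shown are library defaults, not necessarily those used in the paper.

```python
# Hedged sketch: the five classifiers in scikit-learn. Only C and gamma for
# the SVM are taken from the text; everything else is a library default.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import BernoulliNB

classifiers = {
    "SVM": SVC(C=150, gamma=0.03),
    "RF":  RandomForestClassifier(),
    "MLP": MLPClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "BNB": BernoulliNB(),
}
```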
TABLE V: Evaluation results of proposed approach.
<table>
<thead>
<tr>
<th>Feature set</th>
<th>Classifier</th>
<th>Accuracy</th>
<th>Precision</th>
<th>Recall</th>
</tr>
</thead>
<tbody>
<tr>
<td>V1-V15</td>
<td>SVM</td>
<td>0.955</td>
<td>0.881</td>
<td>0.906</td>
</tr>
<tr>
<td></td>
<td>RF</td>
<td>0.965</td>
<td>0.938</td>
<td>0.848</td>
</tr>
<tr>
<td></td>
<td>MLP</td>
<td>0.970</td>
<td>0.938</td>
<td>0.915</td>
</tr>
<tr>
<td></td>
<td>LDA</td>
<td>0.901</td>
<td>0.842</td>
<td>0.64</td>
</tr>
<tr>
<td></td>
<td>BNB</td>
<td>0.891</td>
<td>0.75</td>
<td>0.713</td>
</tr>
<tr>
<td>J1-J20</td>
<td>SVM</td>
<td>0.753</td>
<td>0.445</td>
<td>0.751</td>
</tr>
<tr>
<td></td>
<td>RF</td>
<td>0.903</td>
<td>0.841</td>
<td>0.657</td>
</tr>
<tr>
<td></td>
<td>MLP</td>
<td>0.834</td>
<td>0.76</td>
<td>0.316</td>
</tr>
<tr>
<td></td>
<td>LDA</td>
<td>0.826</td>
<td>0.677</td>
<td>0.318</td>
</tr>
<tr>
<td></td>
<td>BNB</td>
<td>0.701</td>
<td>0.391</td>
<td>0.775</td>
</tr>
</tbody>
</table>
V. EVALUATION
In this section, we describe the evaluation results of the method proposed in Section IV. We extracted the feature matrix from the preprocessed dataset using the features introduced in Table IV. After training the five classifiers, we evaluate their classification performance with several evaluation metrics, which we briefly explain first.
For a more precise and quantitative assessment of classification performance, we use several evaluation metrics: accuracy, precision, recall, the Fβ score, and the AUC of the ROC curve. We use accuracy, precision and recall to evaluate the basic classification performance, and choose β=2 for the Fβ score to emphasize the security aspect. The F2 score is often used when recall is weighted more heavily than precision; by emphasizing recall, we reduce the chance that a malicious VBA macro is executed on a user's system. In addition, we use Receiver Operating Characteristic (ROC) curves and the Area Under the Curve (AUC), a standard convention, to compare classification results in a more intuitive manner.
We used 4,212 macros for the evaluation of classification performance, 877 of which are marked as obfuscated. Although our dataset is large enough to evaluate the classification performance of the proposed method, we use 10-fold Cross Validation (CV) to improve the statistical reliability. Therefore, the experimental results to be described below are the results of applying the 10-fold cross validation.
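A sketch of this evaluation protocol in Scikit-learn is shown below, assuming a feature matrix X (one row per macro, 15 columns) and a label vector y where 1 marks an obfuscated macro.

```python
# Hedged sketch of the evaluation: 10-fold cross-validation reporting
# accuracy, precision, recall and the F2 score (beta = 2) used in the paper.
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_validate

SCORING = {
    "accuracy": "accuracy",
    "precision": "precision",
    "recall": "recall",
    "f2": make_scorer(fbeta_score, beta=2),
}

def evaluate(classifier, X, y):
    scores = cross_validate(classifier, X, y, cv=10, scoring=SCORING)
    return {name: scores["test_" + name].mean() for name in SCORING}
```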
Table V shows the classification results for the basic evaluation metrics. The feature set we propose is marked as V1-V15 in the leftmost column. The SVM, RF and MLP classifiers show relatively high performance among the five classifiers. In particular, RF recorded a precision of 98.2% and MLP recorded a recall of 91.5%. However, the LDA and BNB classifiers were found to be inadequate for detecting obfuscated VBA macros.
The evaluation results with the F2 score are depicted in Figure 6; the results of the proposed method are the bars labeled ‘V feature set’. Because obfuscation detection primarily serves security purposes, we emphasize recall to minimize false negatives. As the MLP classifier showed relatively high performance in the three basic metrics (accuracy, precision, and recall), it also recorded the highest F2 score of 92%. Compared to a related study that evaluated detection performance with the F2 score [24], our method is 11.4% higher, given that 80.6% was its maximum.
We can then ask ourselves the following research question: “It has been confirmed that the proposed features and classification method are effective in detecting obfuscated VBA macro, but how effective would it be to use the malware detection features of the related studies that have already been
TABLE VI: Summary of the features used in related work.
<table>
<thead>
<tr>
<th>Features</th>
<th>Description</th>
<th>Used In:</th>
</tr>
</thead>
<tbody>
<tr>
<td>J1</td>
<td>length in characters</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J2</td>
<td>avg. # of chars per line</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J3</td>
<td>total number of lines</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J4</td>
<td># of strings</td>
<td>[24]</td>
</tr>
<tr>
<td>J5</td>
<td>% human readable</td>
<td>[24]</td>
</tr>
<tr>
<td>J6</td>
<td>% whitespace</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J7</td>
<td>% of methods called</td>
<td>[24]</td>
</tr>
<tr>
<td>J8</td>
<td>avg. string length</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J9</td>
<td>avg. argument length</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J10</td>
<td># of comments</td>
<td>[24], [26]</td>
</tr>
<tr>
<td>J11</td>
<td>avg. comments per line</td>
<td>[24]</td>
</tr>
<tr>
<td>J12</td>
<td># words</td>
<td>[24]</td>
</tr>
<tr>
<td>J13</td>
<td>% words not in comments</td>
<td>[24]</td>
</tr>
<tr>
<td>J14</td>
<td>% of lines > 150 chars</td>
<td>[26]</td>
</tr>
<tr>
<td>J15</td>
<td>Shannon entropy of the file</td>
<td>[26], [34]</td>
</tr>
<tr>
<td>J16</td>
<td>share of chars belonging to a string</td>
<td>[26]</td>
</tr>
<tr>
<td>J17</td>
<td>% of backslash characters</td>
<td>[26]</td>
</tr>
<tr>
<td>J18</td>
<td>avg. # of chars per function body</td>
<td>[26]</td>
</tr>
<tr>
<td>J19</td>
<td>% of chars belonging to a function body</td>
<td>[26]</td>
</tr>
<tr>
<td>J20</td>
<td># of function definitions divided by J1</td>
<td>[26]</td>
</tr>
</tbody>
</table>
Fig. 7: The solid and dashed curves represent the ROC curves of the MLP classifier with the proposed feature set and the RF classifier with the comparison feature set, respectively.
conducted? Would it not be more effective?”. In response to this question, we added a comparative experiment to detect obfuscated VBA macros using the same machine learning approach to the same dataset. The features used in related studies [24], [26] are listed in Table VI.
Due to the linguistic differences between JavaScript and Visual Basic for Applications, many of the features used in obfuscated JavaScript detection are not applicable to obfuscated VBA macro detection. For example, "# of eval() calls divided by entire code length" was used in the related paper [26] but was not implemented in this study, because it is difficult to map the eval() function to a corresponding VBA function. Besides, J14, originally ‘% of lines with more than 1000 characters’, was modified to reflect the fact that the minification technique of removing line feeds cannot be applied to VBA macros. The results of this comparison experiment are shown in Table V and Figure 6 as ‘J feature set’.
Table V includes the evaluation result of comparison experiment (marked as J1-J20). The accuracy and precision of RF classifier were the highest at 90.3% and 84.1% among five classifiers, respectively. However, in all aspects, the classification performance was much better when using V features, than when using J features. In order to comprehensively evaluate the classification performance, we introduced the F2 score and the result is depicted in Figure 6. The maximum F2 score was found in the MLP classifier for V feature set (0.92) and the RF classifier for J feature set (0.69).
As another comprehensive evaluation, we calculated the AUC of the ROC curves. Figure 7 shows the ROC curves of MLP and RF, which scored the maximum F2 for the proposed V features and the J features, respectively. The MLP classifier with the proposed feature set (V features) has an AUC of 0.950, whereas the comparison experiment (J features) reaches 0.812. Our proposed method therefore outperforms the previous studies by 0.138 in terms of AUC.
As a result, we obtained an F2 score of up to 92.0% with the proposed feature set when obfuscation detection was performed using the MLP classifier. This is 23% higher than the result obtained with the features proposed in the related studies. Accuracy, precision, and recall also show better results, and the AUC of the ROC curve was 0.950, showing that the proposed method and features are suitable for obfuscated VBA macro detection.
VI. DISCUSSION
A. Obfuscation detection and malicious code detection
We presented 15 static features for obfuscation detection and evaluated our proposed method using various evaluation metrics. However, this is a method for obfuscation detection, not malicious code detection. We investigated a sufficient number of MS Office document files to clarify the relationship between obfuscation and maliciousness. This obfuscation detection method can play a major role in malicious code detection, as the obfuscation rate differs greatly between the malicious dataset (98.4%) and the benign dataset (1.7%), as described in Table III.
Currently, the distinction between malicious code detection and obfuscated code detection is unclear in malware detection research. As long as there are cases where obfuscation techniques are used to protect intellectual property rights, malicious code detection should be distinguished from obfuscated code detection. However, a few related studies used the characteristics of obfuscation to detect malicious code without considering obfuscation techniques [18], [24]. Confusing maliciousness with obfuscation may lead to an increase in false alarms. Therefore, we classified the obfuscation types (O1-O4) to prevent this mistake, and designed the feature set so that it is not biased towards the characteristics of a specific obfuscation tool.
To address the need for countermeasures against the increasing amount of obfuscated VBA macro malware, we compared the J feature set and our proposed V feature set with regard to obfuscation detection. The results showed that the J feature set underperformed the proposed V feature set, but this does not mean that the research results on JavaScript are poor. Rather, for detecting obfuscation in highly obfuscated VBA macro malware (98.4%), applying existing feature sets (the J features), which do not take the characteristics of obfuscation into account, is not ideal.
B. Case studies: anti-analysis techniques in VBA
The obfuscation techniques observed in VBA macros are categorized into four types (O1-O4) in Section III. Using features based on O1-O4, we succeeded in identifying obfuscation with an accuracy of 97%. In addition to obfuscation, however, several tricks have been found whose purpose is to hinder the analysis and understanding of the code. In this subsection, we introduce these anti-analysis techniques.
Fig. 8: Example code of anti-analysis technique
(a) A sample macro that hides string data. If the code analyst has only the code above, it cannot be determined whether it is malicious before checking what ‘UYjwCZdgnz’ and ‘mambaFRUTISsIn’ contain.
(b) Inserting broken code causes an error when the code parser tries to interpret the nonexistent objects “Sel” or “Colu”.
The anti-analysis techniques to be introduced are not directly addressed or included in the proposed method. However, they also interfere with the process of analyzing the code and tend to be found together in obfuscated VBA macros. For further malware detection research, we organize the basic anti-analysis techniques observed in VBA macro as follows: 1) Hiding string data, 2) Inserting broken code, and 3) Changing the flow.
1) Hiding string data: Microsoft Office documents provide useful data spaces for storing string data. For example, one can store string data as a document property value, as the Caption value of CommandButton, Label, and Form controls, or as the ControlTipText value of UserForm controls [44]. If a malware writer hides malicious string values in these fields, or even in a cell of an Excel document, and the macro refers to them, this prevents static analysis techniques that only analyze the VBA macro source code. Figure 8 (a) shows an example of the string hiding technique.
2) Inserting broken code: This technique is frequently adopted in obfuscated VBA macros. It works by inserting broken code that would cause a run-time error. However, as Figure 8 (b) shows, the instruction pointer actually exits at line 5, before reaching the broken code starting at line 8. This anti-analysis technique therefore does not affect the actual behavior of the macro, but it is treated as a syntax error when the code is parsed.
3) Changing the flow: Another anti-analysis strategy, which can be combined with the aforementioned techniques, switches the execution flow using a conditional branching statement that checks whether a certain condition is satisfied. The condition may be an HTTP response code that verifies the connection is well established, or the number of recently opened files, used to evade sandbox analysis [45].
VII. CONCLUSION
This paper is the first to propose obfuscated VBA macro detection using a machine learning method. Attacks using VBA macros have been increasing since 2014. Given the familiarity of MS Office documents, this type of attack should not be taken lightly. Even though AV vendors are increasingly reporting attacks using VBA macros, little research has been conducted to mitigate them.
Unlike conventional malware that exploits program vulnerabilities, attacks using VBA macros abuse legitimate functions provided by MS Office documents. These threats are not caused by a programmer's mistake, nor are they mitigated by a security update. A general way to avoid this kind of cyber attack is to improve the security awareness of end users: not downloading attachments from untrusted e-mails, and recognizing the potential damage that even one malicious document can bring.
Research on identifying the obfuscation techniques applied to VBA macros in documents is one countermeasure for preventing malware infection before malicious code is executed. We collected 4,212 benign and malicious VBA macros to investigate how many macros were obfuscated: 98.4% of the malicious macros were obfuscated, whereas only 1.7% of the benign macros were obfuscated.
In this paper, we proposed obfuscated VBA macro detection with a machine learning-based approach. We classified VBA macro obfuscation techniques into four types and introduced a feature set for effective obfuscation detection. In the process of selecting detection features, several features were adopted from JavaScript-related studies after being modified to reflect the characteristics of VBA macros, or excluded if not applicable to VBA macros. We then evaluated the classification results of the five machine learning classifiers using various evaluation metrics. The evaluation results demonstrated that our detection approach achieved an F2 score improvement of more than 23% compared to related studies.
ACKNOWLEDGMENT
The authors would like to express their sincere gratitude to our shepherd, Eric Eide, and the anonymous reviewers for their valuable comments, which improved the quality of the paper. This research has been supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. 2017-0-00184, Self-Learning Cyber Immune Technology Development).
The Paralax Infrastructure: Automatic Parallelization With a Helping Hand
Hans Vandierendonck
Dept. of Electronics and Information Systems
Ghent University
Belgium
hvdieren@elis.ugent.be
Sean Rul
Dept. of Electronics and Information Systems
Ghent University
Belgium
srul@elis.ugent.be
Koen De Bosschere
Dept. of Electronics and Information Systems
Ghent University
Belgium
kdb@elis.ugent.be
ABSTRACT
Speeding up sequential programs on multicore processors is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful.
This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to auto-parallelize several SPECint benchmarks. For the benchmark with most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor.
The annotations constitute a parallel programming model that relies extensively on a sequential program representation. Hereby, the complexity of debugging is not increased and it does not obscure the source code. These properties could prove valuable to increase the efficiency of parallel programming.
Categories and Subject Descriptors
Software [Programming Techniques]: Concurrent Programming—Parallel Programming; Software [Programming Languages]: Processors—Compilers
General Terms
Algorithms, Design, Performance
Keywords
Semi-automatic parallelization, semantic annotations
1. INTRODUCTION
Parallel programming has been with us since the advent of computing. Until recently, parallelizing programs was not always worth the effort as single-threaded performance doubled every 18 to 24 months; a consequence of technology advances (scaling, frequency increase) and architectural improvements of processors. Since 2004, however, the economics have changed: by necessity, processor manufacturers have turned to multi-core processors where single-thread performance increases “only” by about 20% per year. Due to the proliferation of multi-core processors, all application domains are now confronted with thread-level parallelism, even those that are not easily amenable to such parallelism.
Many programming models exist to create parallel programs, ranging from low-level models (e.g. POSIX threads [6]), to higher-level models (e.g. OpenMP, MPI [31], Cilk [14]), to productivity languages (e.g. X10 [40], UPC [18]) and to domain-specific approaches (e.g. StreamIt [15]). Each of these languages matches particularly well to specific program structures, most often scientific computing or streaming operations. Irregular pointer-based applications are not targeted, yet parallelization of such codes is also mandatory.
The languages cited above are explicitly parallel: the programmer is burdened with the tasks of explicitly identifying parallelism, transforming the program and debugging the performance to verify the utility of the effort. In general, explicit parallel programming languages complicate program maintenance and debugging. Interactive parallelization tools [3, 20, 21, 23] aid the programmer with the parallelization and thread mapping tasks, although these tools still require significant effort from the programmer.
Ideally, programs are automatically parallelized. Research of the ’80s and ’90s has resulted in successful parallelization of DOALL and DOACROSS loops [5, 22, 30]. These techniques apply very well to array-based languages such as Fortran; but little success was obtained on irregular pointer-intensive C codes.
In this paper, we explore the semi-automatic parallelization of irregular pointer-intensive C codes. Here, we use an implicit parallel programming methodology [19], which assumes that the programmer is aware that the program will execute on parallel hardware, but he does not have to write explicitly parallel programs. Rather, an auto-parallelizing compiler turns the sequential program into a parallel one. This gives the benefit of exploiting performance improvements due to parallelism while writing code in a sequential programming model.
This paper makes the following contributions in order to make the implicit parallel programming approach work on pointer- and control-intensive C codes.
1. We present the Paralax compiler, an auto-parallelizing compiler for coarse-grain loops operating on whole data structures. This approach works well as alias analysis succeeds at grouping memory references per data structure and loop bodies are quite large. In contrast, parallelization of fine-grain loops in C programs has not been successful as this depends too much on accurate intra-data structure alias analysis and loop bodies are too small.
2. We present LWPM, a light-weight programming model based on annotations. The annotations detail semantic properties of functions, variables and function arguments. An important annotation is, e.g., the KILL annotation which states that a variable is dead at a certain program point. The unique property of this programming model is that it does not explicitly steer parallelization; parallelization follows automatically from the increased accuracy of compiler analysis.
3. We present methods for debugging the correctness of the annotations. As the annotations are not provable by our auto-parallelizing compiler (otherwise they would be redundant), we propose functionality to turn the annotations into code fragments that check their correctness during execution of the program.
4. We present a tool for proposing where to insert annotations in a program. Here, we capture dynamic dependence information during profiling executions and we compare it to statically determined dependences. The difference between the sets of dependences indicates where annotations may be applicable. Such a tool helps programmers to upgrade sequential programs to implicitly parallel ones.
The paper is structured as follows. Section 2 presents the compilation flow and the design decisions appropriate for parallelizing irregular pointer-intensive programs. Section 3 presents the lightweight programming model that conveys additional information to the compiler. Section 4 presents programmer tools to help the programmer with inserting annotations and to test the correctness of annotations. Next, we apply the techniques to benchmarks and provide numerical evaluation in Section 5. Section 6 discusses related work and Section 7 concludes the paper.
2. COMPILATION FLOW
We illustrate the compilation flow using a simplified version of the main compression loop in bzip2 as example (Figure 1). The code makes extensive use of global variables, in this case block, last and szptr. Many more variables appear in the real code but these are omitted for pedagogical reasons. The example code first allocates memory and initializes the global pointers. Then, it enters a main loop, consisting of four main stages as indicated by four function calls.
The loop can be parallelized as a parallel-stage pipeline: the loadAndRLESource() and sendMTFValues() functions carry dependences and must be executed sequentially, but the remaining function calls are highly parallel. In fact, these functions may be executing multiple times in parallel, each one operating on the data computed by a different loop iteration.
We illustrate below how the Paralax compiler combines several state-of-the-art algorithms to recognize parallelism in this code.
2.1 Memory Analysis
The first step of analyzing memory is to identify data structures. A data structure is identified by its type and a base pointer. Recognized types are primitive types such as integer types and floating point types. They can also be composite types such as a pointer to a type, a structure (an ordered collection of types) or an array (a repetition of a type). Types may also be undefined in cases where types cannot be accurately determined. We use a unifying shape analysis to determine the types of data structures, in particular Data Structure Analysis [28].
Example: Data Structure Analysis easily identifies the globals used in the program and can reconstruct from the code that block and szptr are used as pointers to heap-allocated arrays of type char and short, respectively.
2.2 Dependence Analysis
Dependence analysis tracks the pairs of statements or instructions that have dependences through data structures stored in memory. Here, algorithms based on use/def chains or static single assignment (SSA) may be used; the actual algorithm used is orthogonal to this paper. SSA however has some advantages, e.g. it simplifies privatization of data structures when generating multithreaded code.
Applying SSA to memory variables has proved tricky due to the partial updates of data structures made by word-size stores to elements or fields. Roughly speaking, existing solutions range from applying SSA to individual words in memory [9, 27, 32] to full data structure phi-nodes as in Array SSA [13, 24]. In the first case, the granularity of the representation is too fine to facilitate full-data structure transformations such as privatization. In the latter case,
the merging effect of phi-nodes must be evaluated at runtime in order to model partial updates [24].
The Paralax compiler uses a mixture of full-data structure SSA and use/def chains. The idea is to create phi-nodes only when a data structure is fully defined and to use use/def-chains on all operations on the same SSA version.\footnote{It is also feasible to use an SSA algorithm on individual words to represent accesses to the same version of a data structure.} This strategy avoids the complexity of Array SSA [24], i.e. runtime evaluation of merging phi-nodes, while providing the benefits of SSA that are most important in the present context.
**Example:** Dependence analysis is not quite precise. Figure 2 shows the program dependence graph (PDG) [12], a graph where each node represents an instruction and where edges represent control, data and memory dependences between instructions. The four nodes lined up vertically correspond to the four function calls, while the nodes on the top row correspond to the loop termination test and exit branch.
Dependence analysis conservatively assumes some non-existing dependences, in particular the dependence of loadAndRLESource() on doReversibleTransformation() and the fact that each function that initializes an array is also dependent on itself, i.e. there is a loop-carried dependence. In contrast, dependence analysis does know that the global last is re-initialized on every loop iteration as there are no loop-carried dependences on the last variable. The reason is that it is easy to see that a scalar is defined, but it is much harder to prove that every array element is defined.
### 2.3 Parallelization
We follow Allen and Kennedy in order to detect parallelism [2]. Parallelism is detected by computing the strongly connected components (i.e. cycles) on the PDG of a loop. Each strongly connected component (SCC) represents a group of instructions that are cyclically dependent. As such, they cannot be split across pipeline stages. The SCCs are clustered in pipeline stages using basic block execution frequencies and inter-SCC dependences in order to load balance the pipeline [33, 35]. Parallel-stage pipelines are possible when an SCC does not have a loop-carried dependence with itself. The compiler uses a static performance model to predict the speedup of parallelization. Only loops with significant speedups are parallelized.
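The core of this step, computing SCCs on the PDG and checking which of them lack loop-carried self-dependences, can be illustrated with the following sketch (Python and networkx are used purely for exposition; the actual Paralax implementation is an LLVM compiler pass, and the loop_carried edge attribute is an assumed name).

```python
# Illustrative sketch (not the Paralax implementation): pipeline-stage
# detection as SCC computation on a program dependence graph. Nodes are
# instructions/call sites; loop-carried edges carry loop_carried=True.
import networkx as nx

def pipeline_stages(pdg: nx.DiGraph):
    """Return SCCs in topological order; an SCC with no loop-carried
    dependence among its members may become a parallel pipeline stage."""
    condensed = nx.condensation(pdg)     # DAG whose nodes are the SCCs
    stages = []
    for idx in nx.topological_sort(condensed):
        members = condensed.nodes[idx]["members"]
        carried = any(
            pdg.edges[u, v].get("loop_carried", False)
            for u in members for v in members if pdg.has_edge(u, v))
        stages.append({"instructions": sorted(members), "parallel": not carried})
    return stages
```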
The Paralax compiler also recognizes task parallelism outside of loops, a pattern that is similar to the OpenMP sections construct. It can happen that a group of instructions is cyclically dependent on a particular data structure or variable, e.g. in a reduction or when manipulating I/O streams. Such groups of instructions are known as ordered sections [45]. The decoupled software pipelining model can handle such instructions when they occur in the loop body, but not when they are embedded in callee functions. We analyze whether ordered sections can be extracted from the callees and be executed out-of-line in the loop body. Hereto, the ordered sections are “queued up” together with the required input data. The ordered sections are then executed in original program order by reading elements from the queue and executing them in a sequential pipeline partition [45].
**Example:** The PDG of the example loop contains three SCCs (Figure 2). Each SCC has a loop-carried dependence, allowing the code to be transformed into a three-stage pipeline.\footnote{Note that the real bzip2 code contains many more dependences than the example. Consequently, the compiler cannot discover the pipeline in the real code.}
### 2.4 What’s Missing
We noted that there are a number of spurious dependences in the PDG of Figure 2, resulting from the fact that dependence analysis cannot determine precisely the last def of array elements. In particular, (i) it does not know the size of the arrays as they are heap-allocated and (ii) the arrays are not necessarily entirely overwritten on each loop iteration, although the programmer knows that array elements are used only if they have been defined in the same loop iteration. What we need is to convey this semantic information to the compiler to allow it to find more parallelism.
We propose several annotations to do just this (Section 3). One particular annotation is the KILL statement, the effect of which we describe here. The KILL statement tells the Paralax compiler that a particular data structure is dead at the point where the statement is placed. Dependence analysis picks up on this and assumes that the KILL statement fully defines the referenced data structure, so no memory dependences are carried across it; the resulting PDG for the example is shown in Figure 3.
**Figure 2:** Program dependence graph for the bzip2 code.
**Figure 3:** Program dependence graph after introducing KILL statements.
Communication and synchronization between pipeline stages is effected by means of a C-style struct called the “communication structure”. This structure gathers all variables that are passed between pipeline stages, including privatized data structures. The first stage of the pipeline is modified to set up a fresh structure for each iteration of the loop and to initialize live-in values. All references to variables passed between pipeline stages are rewritten to access the copy of the globals in the communication structure. If callee functions access variables passed between pipeline stages, then these functions are also rewritten to access the copy of the globals in the communication structure. The final pipeline stage is rewritten to copy live-out values to the original program variables and to clean up the communication structure.
The communication structure is passed between threads by means of queues. The structure is queued up only after a pipeline stage has fully executed. As the following pipeline stage can only start executing after retrieving the communication structure from the queue, the queues implement all the necessary synchronization.
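For exposition only, the following Python sketch mimics this hand-off mechanism; the generated code is C and uses lock-free single-producer single-consumer queues, and the stage bodies below are trivial stand-ins rather than real pipeline stages.

```python
# Illustrative sketch of the queue-based hand-off between pipeline stages.
# Each stage dequeues a communication structure, does its work, and forwards
# the structure; a sentinel signals the end of the loop.
import queue
import threading

SENTINEL = object()

def stage1(out_q, iterations):
    for i in range(iterations):
        comm = {"iteration": i, "block": list(range(i, i + 4))}  # live-in values
        out_q.put(comm)          # hand off once the stage has fully executed
    out_q.put(SENTINEL)

def stage2(in_q, out_q):
    while True:
        comm = in_q.get()
        if comm is SENTINEL:
            out_q.put(SENTINEL)
            break
        comm["result"] = sorted(comm["block"], reverse=True)     # stage work
        out_q.put(comm)

if __name__ == "__main__":
    q1, q2 = queue.Queue(), queue.Queue()
    threading.Thread(target=stage1, args=(q1, 4)).start()
    threading.Thread(target=stage2, args=(q1, q2)).start()
    while True:
        item = q2.get()          # final stage: consume live-out values in order
        if item is SENTINEL:
            break
        print(item["iteration"], item["result"])
```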
We currently schedule loop iterations statically, i.e. we decide during code generation what thread executes what pipeline stage and what loop iteration of that stage. This strategy is optimal for pipelines, but may be sub-optimal for parallel-stage pipelines. Static scheduling was chosen because it allows us to generate faster code and to use only lock-free single-producer single-consumer queues. When communication intensity is low, the Paralax compiler uses the more expensive POSIX locks in order to not waste processing cycles on polling the queues. This allows us to generate more threads than the number of hardware cores when pipeline stages are imbalanced.
3. PROGRAMMING MODEL
We present a light-weight programming model consisting of simple annotations of functions and program variables. Each annotation is designed to be unambiguous and to be automatically testable (e.g. during debugging runs). None of the annotations directly triggers parallelization; they only strengthen program analysis.
We propose annotations on function arguments, on functions and on memory variables. Table 1 summarizes the annotations, their semantics and the program analysis they impact. Annotations like the ones proposed here are already used frequently in compilers with the goal of conveying additional semantic information that enables or disables specific optimizations, forces particular code generation schemes, etc. Such annotations however are selected for a particular purpose. As such, annotations already in use have slightly different semantics which make them not exactly right for our purpose.
3.1 Function Arguments
It is well understood that program analysis is incomplete when calls to external functions are made. We propose function argument attributes that help improve alias analysis of external functions. Attributes for pointer arguments and the return value describe memory accesses. The REF and MOD annotations say whether a pointer argument is used for reading or writing. The NOCAPTURE annotation says that a pointer does not escape through the function, i.e. the pointer is not stored to memory. The KILL annotation indicates that a pointer argument points to a memory region that will be entirely overwritten. This is typically useful for pointers to scalars and structures. When assuming C language semantics, it is generally not possible to guarantee KILL semantics on array arguments because the callee function does not know the size of the array; at best it knows what part of the array it is allowed to overwrite.
These annotations are used by dependence analysis and by data structure analysis (DSA). When examining calls to external functions, these analyses make appropriate assumptions for each function argument, rather than assuming the default that pointer arguments are read, written and escape.
A 5th attribute, RETALIAS(#), specifies that the return value is a pointer that is computed based on a particular pointer argument, identified by #. This attribute is useful for functions that return a pointer to the same memory range as one of their arguments. We have extended data structure analysis to unify data structure nodes for the argument and return value when the annotation is present. This makes DSA more accurate in general.
3.2 Functions
A function labeled with SYSCALL may execute system calls and thus has externally visible side-effects. Dependence analysis adds mutual dependences between all SYSCALL functions in the PDG, forcing them to execute in original program order.
STATELESS functions do not maintain internal state, i.e. they do not access global variables but they access only data structures included in the argument list. STATELESS differs from GCC’s pure and const function annotations as the latter describe functions that do not modify or access memory at all allowing, e.g., common sub-expression elimination of calls to those functions. In our case, we want to indicate that escaped pointers will not be referenced or modified by a STATELESS function call.
The COMMUTATIVE annotation implies that calls to such functions may be reordered, but only one instance of the function may be running at any one time [4]. The commutativity annotation is taken into account by dependence analysis, which modifies call nodes in the PDG by removing memory dependences to data structures that are not included in the argument list. When transforming the parallel code, the function is turned into a critical section by inserting a lock.
The constructor/destructor pair of annotations describe functions that allocate and free data structures. Knowing these functions is particularly important when privatizing data structures, because it is otherwise not possible to duplicate complex (i.e. linked) data structures.
The GCC annotation alloc_size is related as it states that the return value is freshly allocated memory, but it does not mention the corresponding destructor. GCC also provides constructor and destructor annotations but these are entirely unrelated as they indicate that the corresponding functions should execute before and after executing main, respectively.
The constructor is a function that returns a pointer to new memory. The constructor may have any set of arguments. When privatizing the data structure, a new call to the constructor will be created with exactly the same arguments as the original constructor call. The destructor is a function with a single argument that is a pointer to the memory to destroy.
For data structures that do not store pointers (a property identified by Data Structure Analysis), we assume that malloc and free are the default constructor and destructor.
3.3 Data Structures
Privatization of data structures is an important prerequisite for enabling parallelization [44]. Privatization is, however, only possible when we know that a data structure is dead, e.g. at the beginning of a loop iteration. Proving this in general is very hard, so we introduce the KILL(var) annotation. KILL(var) is a statement that signifies that the variable var is dead at the program point where the annotation is inserted. It is recognized by dependence analysis.
The KILL annotation applies to the scalar, array or structure pointed to by var. It does not apply recursively to any other data structures referenced by pointers stored in var.
4. PROGRAMMING TOOLS
4.1 Discovering Locations for Annotations
The function argument annotations apply foremost to functions external to the current compilation unit, such as library functions but also application functions. While library functions are already annotated by the library writer, it is wise to correctly label the memory semantics of all other externally defined functions. Likewise, constructor/destructor pairs should be labeled. These can be identified by simply tracking calls to allocation and free functions.
Finally, we provide an algorithm for suggesting where to place the KILL attribute on variables, probably the most important attribute of LWPM. The reasoning behind the algorithm is that some memory dependence edges in the program dependence graph are not real; they are included only due to the conservatism of compiler analysis. Such candidate edges can be identified by comparing the statically computed memory dependences with the dependences measured during a profiling run of the program. The dependences observed during profiling certainly exist; the remaining memory dependences are potentially bogus. Several
<table>
<thead>
<tr>
<th>Annotation</th>
<th>Semantics</th>
<th>Influenced analysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>MOD</td>
<td>Pointed-to-memory is modified</td>
<td>Memory analysis</td>
</tr>
<tr>
<td>REF</td>
<td>Pointed-to-memory is referenced</td>
<td>Memory analysis</td>
</tr>
<tr>
<td>KILL</td>
<td>Pointed-to-memory is invalidated</td>
<td>Memory analysis</td>
</tr>
<tr>
<td>NOCAPTURE</td>
<td>Pointer is not captured (doesn’t escape)</td>
<td>Memory analysis</td>
</tr>
<tr>
<td>RETALIAS(#)</td>
<td>Return value is pointer and aliases argument number #</td>
<td>Memory analysis</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Functions</th>
<th>Semantics</th>
<th>Influenced analysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>SYSCALL</td>
<td>Function may have externally visible side-effects</td>
<td>Dependence analysis</td>
</tr>
<tr>
<td>STATELESS</td>
<td>Function does not maintain internal state</td>
<td>Dependence analysis</td>
</tr>
<tr>
<td>COMMUTATIVE</td>
<td>Function is commutative</td>
<td>Dependence analysis, code transformation</td>
</tr>
<tr>
<td>CONSTRUCTOR(fn)</td>
<td>Function is a constructor, fn is the corresponding destructor</td>
<td>Privatization</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Variables</th>
<th>Semantics</th>
<th>Influenced analysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>KILL(var)</td>
<td>Statement specifying that var is dead</td>
<td>Dependence analysis, privatization</td>
</tr>
</tbody>
</table>
Table 1: Light-weight programming model annotations
dynamic dependence profiling tools have been recently discussed in the literature [11, 38, 43]. The algorithm works as follows:
1. In the program dependence graph, memory dependence edges are labeled with the corresponding data structure. This allows us to recognize the data structures that require a KILL annotation.
2. Observed memory dependences are read from the profiling information and corresponding edges are inserted or updated in the PDG. Note that dependences with source and or destination in a callee function must be related to the corresponding call site in the analyzed loop. In the bzip2 example (Figure 6), observed memory dependences are marked with an asterisk.
3. Memory dependence edges in the PDG are re-analyzed. We define a certain dependence as either a control dependence, a data dependence or an observed memory dependence. A static memory dependence edge from node \( M \) to \( N \) is a critical memory dependence if there does not exist any path of certain dependences from node \( M \) to \( N \). Removing a critical memory dependence may break cycles (split SCCs) and expose more parallelism.
In the bzip2 example, critical memory dependences exist on the block, szptr and last variables. Removing the critical dependences on block or szptr will reduce the size of SCCs or turn a self-dependent SCC into a self-parallel SCC. Removing the critical dependence on last (from node idRLE to node sndMTF) will not break cycles as there is a path of certain dependences between these nodes.
4. For every target node \( N \) of a critical memory dependence, we check if there is an observed memory dependence on the same data structure with \( N \) as a target. If such a dependence does not exist, then the node \( N \) is a potential location for inserting a KILL annotation on the data structure specified in the critical dependence label. If however, such an observed memory dependence does exist, then node \( N \) is clearly not a suitable location to insert a KILL annotation.
We propose inserting the annotation just before a node \( N \) in the code, in the same basic block. Alternatively, for a call site node, the annotation may be inserted at the start of the called function.
These rules lead us to insert a KILL annotation in the example for block just before the call to loadAndRLESource() and one on szptr just before the call to generateMTFValues().
The algorithm produces a list of data structures and program locations where KILL annotations may be appropriate. Data structures are reported by a combination of name (global and local variables) and type. The programmer should verify these annotations and insert those that are correct. We show in the evaluation section that the number of KILL candidates is limited and that they are quite easy to verify.
The algorithm can be made more precise by filtering critical memory dependences. First, if multiple reduction operations are specified on the same variable, then memory dependences appear where the store of one reduction is dependent on the load of a different reduction. These dependences are also not reported, as they are implied by other, likely observed, dependences. Second, we do not propose annotations if either the source or the destination of the critical dependence was not executed during profiling. Third, we do not propose annotations if the data structure is read-only or write-only in the final SCC. Fourth, it is worthwhile to verify that a speedup would be obtained if the annotations were correct: the loop is analyzed as if the annotations were inserted and it is verified that a larger speedup is predicted with the annotations than without.
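For illustration, the candidate-identification step (steps 3-4 above, before the filtering just described) can be sketched as a small graph algorithm; Python and networkx are used purely for exposition, and the edge attribute names "kind", "var" and "observed" are assumptions of this sketch, not names used by the tool.

```python
# Illustrative sketch of steps 3-4: find critical memory dependences and
# suggest KILL(var) annotations at their target nodes.
import networkx as nx

def certain_subgraph(pdg):
    """Graph containing only 'certain' dependences: control edges, data
    edges, and observed memory dependences."""
    g = nx.DiGraph()
    g.add_nodes_from(pdg.nodes)
    g.add_edges_from((u, v) for u, v, d in pdg.edges(data=True)
                     if d["kind"] != "memory" or d.get("observed", False))
    return g

def kill_candidates(pdg):
    certain = certain_subgraph(pdg)
    suggestions = []
    for u, v, d in pdg.edges(data=True):
        if d["kind"] != "memory" or d.get("observed", False):
            continue                      # only static-only memory edges
        if nx.has_path(certain, u, v):
            continue                      # not critical: a certain path exists
        observed_same_var = any(
            dd["kind"] == "memory" and dd.get("observed", False)
            and dd.get("var") == d.get("var")
            for _, _, dd in pdg.in_edges(v, data=True))
        if not observed_same_var:
            suggestions.append((v, d.get("var")))  # insert KILL(var) before v
    return suggestions
```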
4.2 Checking Correctness of Annotations
Annotations are meta-information stored in the program source code. As such, they can become incorrect when the source code evolves. To minimize this effect, we designed the annotations to be easily testable automatically, e.g., during debugging or regression runs. Each of the annotations can be automatically turned into a piece of code that is executed during program execution and that tests the validity of the annotation. Most attributes are easy to check (e.g., MOD and REF, NOCAPTURE). Other attributes are specified at the library function level and can be automatically propagated up the call graph (e.g., SYSCALL). The KILL annotation can be checked using methods similar to those used in inspector threads for identifying privatizable data structures [36]. The NOCAPTURE attribute can be tested using dynamic escape analysis mechanisms [29].
5. EXPERIMENTAL EVALUATION
The Paralax compiler is built on top of the LLVM compiler framework. It was specifically constructed to auto-parallelize irregular pointer-intensive programs, such as the SPECint benchmarks.
We demonstrate the efficacy of the Paralax compiler on several benchmarks with coarse-grain parallel loops, taken from the SPECint2000 and SPECint2006 benchmark suites, complemented with clustalw, a bio-informatics benchmark. We selected these benchmarks as they exhibit coarse-grain parallelism. The compiler selects what loops to parallelize based on profiling information and performance models. Other SPECint benchmarks often require speculative parallelization [4], which is not implemented in our compiler.
The benchmark sources are first translated to non-optimized LLVM byte-codes as we generally obtain better results by parallelizing before optimizing. All byte-code files are then linked together in a single file to allow the analysis and code transformation passes to have a global view of the program (whole program analysis). After parallelizing the code, standard LLVM optimization passes are run (which are comparable to gcc -O3). We compare speedups relative to the same compilation process without the parallelization step.
Performance is measured on a dual-processor 2.3 GHz AMD Opteron 2378 system (quad-core, Shanghai architecture) running Scientific Linux 5.3, kernel version 2.6.18. The LLVM version is revision 83199. Benchmarks are executed on reference inputs.
5.1 Evaluation Results
We compiled the benchmarks using the Paralax compiler, assuming that it knows the annotated declarations for all (used) C library functions, in the style of Figure 5. Furthermore, we used the KILL annotation proposal algorithm to determine such annotations. Dynamic dependence information was captured using the training inputs. Table 2 shows the main loops in the benchmarks and the annotations proposed by our programming tool. Incorrect annotations are shown in italics.
Figure 7 summarizes the performance measurements. In some cases we generate more threads than the number of available cores, as the execution time of some sequential pipeline stages is very low; we still map each of these stages to its own thread. Loop speedups are reported in Table 3. These results are discussed next.
5.1.1 Bzip2
Bzip2 is a commonly used (de-)compression utility. The SPECint2000 benchmark repeatedly executes the compress and uncompress steps on an in-memory buffer.
For the bzip2 `compressStream()` function, 9 annotations are correctly identified (Table 2), which must all be added to the program to allow parallelization. The loop is parallelized as a 3-stage pipeline, where the second pipeline stage performs the majority of the work and multiple instantiations may be run in parallel.
The incorrectly identified data structures are related to IO operations, either C library IO data structures or the benchmark-specific structures. The annotation proposal tool identifies these data structures because there were no read-after-write dependencies in the dynamic dependence information.
A similar analysis holds for the bzip2 `uncompressStream()` function. Here, an imbalanced 2-stage pipeline with limited parallelism is recognized.
Bzip2 compression speeds up by 2.36 on 8 threads. Speedup stagnates at about 4 threads (Figure 7), which is due to input size restrictions. Furthermore, the sequential stages of the parallel-stage pipeline carry a fairly heavy load, taking about 35% of the execution time of the pipeline. The speedup of decompression is a mere 1.15 using 2 threads (Table 3). Overall, the speedup is 1.79.
5.1.2 Mcf
The mcf program (SPECint2006) performs vehicle scheduling optimization using a network simplex algorithm. Our compiler recognizes one important parallel loop and identifies a 2-stage pipeline where multiple instantiations of the first pipeline stage can run in parallel. The loop itself is highly parallel allowing a loop speedup of 6.03 on 8 threads. The loop covers only about 60% of the total execution time (Table 3), so overall speedup is limited to 2.06.
5.1.3 Clustalw
The clustalw program performs multiple sequence alignment. The source code is taken from the BioPerf benchmark suite. There are two important phases in the program: pairwise alignment and progressive alignment.
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Loop</th>
<th>Coverage</th>
<th>Best speedup</th>
<th>Threads</th>
</tr>
</thead>
<tbody>
<tr>
<td>bzip2</td>
<td>compressStream</td>
<td>69.4%</td>
<td>2.36</td>
<td>1/6/1</td>
</tr>
<tr>
<td></td>
<td>uncompressStream</td>
<td>29.9%</td>
<td>1.15</td>
<td>1/1</td>
</tr>
<tr>
<td></td>
<td>overall</td>
<td>100%</td>
<td>1.79</td>
<td></td>
</tr>
<tr>
<td>mcf06</td>
<td>prim_ref</td>
<td>81.2%</td>
<td>6.03</td>
<td>7/1</td>
</tr>
<tr>
<td></td>
<td>overall</td>
<td>100%</td>
<td>2.06</td>
<td></td>
</tr>
<tr>
<td>hmmer</td>
<td>main_loop</td>
<td>99.9%</td>
<td>7.00</td>
<td>1/8/1</td>
</tr>
<tr>
<td></td>
<td>overall</td>
<td>100%</td>
<td>7.00</td>
<td></td>
</tr>
<tr>
<td>clustalw</td>
<td>pairalign</td>
<td>44.5%</td>
<td>4.04</td>
<td>1/8/1</td>
</tr>
<tr>
<td></td>
<td>pdiff</td>
<td>55.4%</td>
<td>1.74</td>
<td>1/1</td>
</tr>
<tr>
<td></td>
<td>overall</td>
<td>100%</td>
<td>2.33</td>
<td></td>
</tr>
</tbody>
</table>
Table 3: Per-loop and overall speedups. The column ‘Threads’ shows the number of threads used for each pipeline stage.
The `pairalign()` function performs pairwise comparison of a number of DNA sequences, which is trivially parallel. For the Paralax compiler to recognize this parallelism, KILL annotations are necessary on a number of scratch arrays (Table 2). Custom allocation routines are used to create these arrays, so these routines must be labeled as constructors and destructors to allow the compiler to privatize them. Again, KILLs on IO data structures are erroneously proposed.
The `pdiff()` function contains two loop nests that may run in parallel (task parallelism). The loops operate on distinct data structures, so the parallelization transformation does not require privatization of data structures.
Performance of pairwise alignment scales very well with an increasing thread count (Figure 7). The first and last pipeline stages are mapped to their own threads although they are not compute-intensive. Therefore, we can generate code utilizing 10 threads on 8 cores, resulting in a speedup of 4.04. Progressive alignment sees a 1.74 speedup. Overall, the speedup is 2.33 (Table 3).
Higher speedups would be possible for pairwise alignment by utilizing dynamic mapping of iterations to threads instead of static mapping (cf. Section 2.5). Experiments with OpenMP versions of the code confirm this expectation.
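A sketch of what such a dynamic mapping looks like in hand-written OpenMP (this is not compiler output, and pair_compare is a stand-in for the unevenly sized pairwise-alignment work):

```c
#include <omp.h>

/* pair_compare stands in for aligning the sequence pair (i, j). */
void align_all_pairs(int nseq, void (*pair_compare)(int i, int j)) {
    /* dynamic schedule: a thread grabs the next value of i as soon as it
       finishes the previous one, so uneven work no longer leaves threads idle */
    #pragma omp parallel for schedule(dynamic)
    for (int i = 0; i < nseq; i++)
        for (int j = i + 1; j < nseq; j++)
            pair_compare(i, j);
}
```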
5.1.4 Hmmer
The SPECint2006 hmmer benchmark spends virtually all its time in applying a Hidden Markov Model to a set of randomly generated data. Each randomly generated data point can be operated on independently. Extracting this parallelism requires several annotations.
Figure 8 shows a simplified version of the annotated source code of the program’s main loop. Two function declarations were annotated. The function `CreatePlan7Matrix()` is annotated to tell the compiler that it is a memory allocation function and that the corresponding deallocation function is the function `FreePlan7Matrix()`. Also, the function `AddToHistogram()` is labeled as a commutative function; updates to the histogram may occur in any order but each update must run in isolation to avoid data races.
The annotation proposal tool suggests placing KILL annotations on 4 arrays pointed to by `mx`. These are scratch arrays for the computations made by `P7Viterbi()`. By simple extrapolation, we conclude that the entire data structure can be KILLed.
The compiler is able to infer a parallel-stage pipeline with 3 stages. The first stage is self-dependent and is concerned with the random number generation. The second stage is self-parallel and consists of `DigitizeSequence()`, `P7Viterbi()` and the subsequent steps. A final self-dependent pipeline stage is needed for implementation reasons as it hosts loop control and cleanup code.
It is also possible to annotate the random number generator as commutative. This turns the whole loop iteration into a single self-parallel partition (DO-ALL loop). It has, however, the drawback that the randomly generated sequences change from execution to execution, depending on the interleaving of calls to the random number generator; the program thereby becomes nondeterministic.
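To make the two annotation effects concrete, the toy OpenMP sketch below (an assumed illustration, not Paralax output and not hmmer's real code; the hmmer routines are replaced by stand-ins) privatizes the scratch data per thread — the effect of the KILL annotations — and serializes the commutative histogram update and the shared random-number generator in critical sections. Because the generator calls interleave freely, the samples differ from run to run, which is exactly the nondeterminism just mentioned.

```c
#include <omp.h>
#include <stdio.h>

#define NSAMPLES 1000
#define NBINS    16

static int histogram[NBINS];
static unsigned rng_state = 12345u;

/* Shared pseudo-random generator, only touched inside a critical section. */
static unsigned next_random(void) {
    rng_state = rng_state * 1103515245u + 12345u;
    return rng_state >> 16;
}

/* Stand-in for P7Viterbi(): fills and then consumes a private scratch buffer. */
static int score_sample(unsigned sample, int *scratch, int scratch_len) {
    for (int k = 0; k < scratch_len; k++)
        scratch[k] = (int)((sample + (unsigned)k) % 97u);
    int score = 0;
    for (int k = 0; k < scratch_len; k++)
        score += scratch[k];
    return score % NBINS;
}

int main(void) {
    #pragma omp parallel
    {
        int scratch[256];              /* privatized scratch data (cf. KILL)     */
        #pragma omp for
        for (int idx = 0; idx < NSAMPLES; idx++) {
            unsigned sample;
            #pragma omp critical(rng)  /* shared RNG: order depends on interleaving */
            sample = next_random();
            int bin = score_sample(sample, scratch, 256);
            #pragma omp critical(hist) /* commutative update, run in isolation      */
            histogram[bin]++;
        }
    }
    for (int b = 0; b < NBINS; b++)
        printf("bin %2d: %d\n", b, histogram[b]);
    return 0;
}
```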
Table 2: Overview of proposed KILL annotations: the insertion point and the data structure. The number of annotations and the number of correct annotations are shown. Incorrect annotations are shown in italics.
<table>
<thead>
<tr>
<th>Benchmark loop</th>
<th>Insertion point</th>
<th>Data structures</th>
<th>#Prop</th>
<th>#Corr</th>
</tr>
</thead>
<tbody>
<tr>
<td>bzip2 - compressStream</td>
<td>loadAndRLESource</td>
<td>IO_FILE, IO_FILE->buffer, block, inUse,</td>
<td>12</td>
<td>9</td>
</tr>
<tr>
<td></td>
<td>generateMTFValues</td>
<td>quadrant, ftab, zptr, spec_fd_t->buffer</td>
<td>8</td>
<td>5</td>
</tr>
<tr>
<td></td>
<td>sendMTFValues</td>
<td>unseqToSeq, seqToUnseq</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>bsPutXXX</td>
<td>spec_fd_t->buffer</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>bzip2 - uncompressStream</td>
<td>getAndMoveToFrontDecode</td>
<td>ll4, ll8, ll16, tt, unzftab, spec_fd_t->buffer</td>
<td>6</td>
<td>5</td>
</tr>
<tr>
<td>mcf06 - primal_bea_mpp</td>
<td></td>
<td></td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>hmmer - main_loop_serial</td>
<td>P7Viterbi</td>
<td></td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>clustalw - pairalign</td>
<td>forward_pass</td>
<td>HH, DD</td>
<td>9</td>
<td>7</td>
</tr>
<tr>
<td></td>
<td>diff</td>
<td>RR, SS, displ</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td></td>
<td>tracepath</td>
<td></td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td></td>
<td>fprintf</td>
<td></td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>clustalw - pdiff</td>
<td></td>
<td></td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Figure 7: Performance impact of parallelization with and without annotations. The bars show the execution time of the original sequential benchmarks (1 thread) and of parallelizing for multiple threads. Where appropriate, execution time is broken down for different phases. Numbers on top of the stacked bars indicate program speedup. The lines show the execution time obtained when no annotations are added to the program.
Performance measurements indicate a 7.00x speedup when utilizing 10 threads (Figure 7). For these measurements, the version with non-commutative random number generation was used in order to keep execution times comparable.
Note that programmers frequently “optimize” their programs to save on memory allocation time by recycling memory buffers. Had the calls to CreatePlan7Matrix and FreePlan7Matrix been placed inside the loop body, then the single-threaded execution time would increase by 2%. But, more importantly, the com-
float score;
mx = CreatePlan7Matrix(1, hmm->M, 25, 0);
for (idx = 0; idx < nsamples; ++idx) {
    do {
        sqlen = GaussRandom(lenmean, lensd);
    } while (sqlen < 1);
    seq = RandomSequence(..., sqlen);
    dsq = DigitizeSequence(seq, sqlen);
    /* KILL annotations on the four scratch arrays pointed to by mx
       (the individual array names shown here are presumed) */
    LWPM_KILL(mx->mmx_mem);
    LWPM_KILL(mx->imx_mem);
    LWPM_KILL(mx->dmx_mem);
    LWPM_KILL(mx->xmx_mem);
    score = P7Viterbi(dsq, sqlen, hmm, mx, 0);
    /* Update histogram */
    AddToHistogram(hist, score);
    free(seq);
    free(dsq);
}
Figure 8: Annotated code of hmmer.
6. RELATED WORK
Parallel execution potentially gives important performance benefits. Different approaches to obtaining parallel code have been investigated, giving different levels of exposure of the programmer to the parallelization process.
A myriad of parallel programming languages have been proposed as extensions to sequential programming languages [31, 14] or have been designed for parallelism from first principles [40, 18, 15]. Furthermore, parallel programming models aid in writing parallel code [25, 37]. These approaches however demand explicitly parallel programming, which requires significant programming efforts. In contrast, we advocate an implicitly parallel approach where the dirty process of parallelization (code transformation, thread mapping, etc.) is performed automatically.
Several systems depend on programmer-supplied annotations for optimization, e.g. the language defined by Guyer and Lin [16]. Annotations describe mod/ref behavior of library functions or they define data transfer functions, describing properties of the computation result conditionally on input values. Many systems use what we call directives, i.e. statements that steer the compiler to perform a specific action. OpenMP pragmas are in this sense directives as they direct the compiler to parallelize a specific loop. Similarly, directives steering parallelization [1] and vectorization [2] are commonly used in explicitly parallel programming languages.
Programmer support environments have been developed to aid the programmer in writing parallel code [3, 20, 21, 23]. These systems are focused on Fortran programs and array-based computations. They are also geared towards explicitly parallel programming languages. In contrast, this paper considers a different application domain and implicit parallel programming.
Compile-time automatic parallelization has been successful on array-based code, leveraging DOALL and DOACROSS parallelism [5, 22, 30]. These techniques fail however on irregular pointer-based applications, due to the absence of significant loops performing array-based computations. In this paper, we show that auto-parallelizers must be structured differently for this type of code and must search for different types of pipeline parallelism at coarser levels of granularity.
It has been proposed to apply speculative parallelization to irregular pointer-based applications [7, 17], but these systems typically require hardware support. Cord [42] is a software-only speculative parallelization technique; however, it always parallelizes speculatively, even when the parallelism is non-speculative. Decoupled software pipelining [33] is another approach for parallelizing irregular pointer-based applications at a fine-grain level. Both compiler and hardware support are assumed. The Galois system is a programming model supporting irregular data-parallel applications [26]. It too relies on speculative execution, but it is the programmer who identifies when to speculate.
Bridges et al. [4] study the performance of manually identified speculatively parallel code regions. Using trace analysis and extrapolation of performance, they obtain quite reasonable speedups, which vary greatly between benchmarks. Their approach is not practical; they present an estimate of potential speedup given extensive hardware and software support.
Dynamic parallelization is performed at runtime based on input data [39]. Data dependences are dynamically profiled and/or checked before executing in parallel [8, 34]. Software behavior oriented parallelization [10] allows the programmer to identify possibly parallel code regions and uses a runtime system for speculatively parallel execution. In [41], dynamic dependences are used to decide on the correctness of pipeline parallelism at runtime. In contrast, we rely on static compile-time parallelization. Dynamic dependence profiling can aid the programmer in inserting annotations, but it is not essential. Thus, we avoid runtime overheads.
In summary, our work is unique as it targets irregular pointer-intensive codes, presents an auto-parallelizing compiler for such applications and assumes an implicit parallel programming model. Furthermore, it presents programming tools for proposing and testing program annotations in the context of an implicit parallel programming model.
7. CONCLUSION
This paper describes an implicit parallel programming environment geared towards irregular and pointer-intensive applications such as the SPECint benchmarks. In implicit parallel programming, the goal is to write a sequential program (retaining the relative simplicity of developing non-parallel programs) that can be automatically parallelized with important performance improvements.
We present the Paralax compiler, an auto-parallelizing compiler that is constructed specifically for parallelizing irregular pointer-intensive applications. To this end, we focus on coarse-grain dependence analysis and on coarse outer program loops. We show that substantial parallelism exists at this level.
In order to aid the Paralax compiler in finding significant thread-level parallelism, we present a light-weight programming model to fill in the semantic gaps. The light-weight programming model adds annotations to a program that describe well-defined properties of functions, variables and data structures; information that a static compiler cannot infer. The annotations are designed such that verification of their correctness is fairly easy.
Furthermore, we present programming tools to support implicit parallel programming: automatically testing the correctness of annotations during debugging runs and automatically proposing annotations based on dynamic dependence information. This helps to upgrade a sequential program to an implicitly parallel one.
Application of our implicit parallel programming environment to the SPECint benchmarks shows promising results. On a dual processor system with two quad-core processors, we demonstrate overall program speedups in the range of 1.79 to 7.00 when using 8 cores, even though some benchmarks have limited parallelism.
In future work, we plan to improve the auto-parallelizing compiler by utilizing speculative parallelism and by adding intra-data structure dependence analysis.
8. ACKNOWLEDGEMENTS
The authors are grateful to Albert Cohen and the anonymous reviewers for their insightful comments. Hans Vandierendonck is a Postdoctoral Fellow with the Fund for Scientific Research–Flanders. Sean Rul is supported by the Flemish Institute for the Promotion of Scientific-Technological Research in the Industry (IWT). This research was also sponsored by Ghent University and by the European Network of Excellence on High-Performance Embedded Architectures and Compilation.
9. REFERENCES
Logic Programming: From Underspecification to Undefinedness
Lee Naish, Harald Sondergaard and Benjamin Horsfall
Department of Computing and Information Systems
The University of Melbourne, Victoria 3010, Australia
{lee,harald,brho}@unimelb.edu.au
Abstract
The semantics of logic programs was originally described in terms of two-valued logic. Soon, however, it was realised that three-valued logic had some natural advantages, as it provides distinct values not only for truth and falsehood, but also for “undefined”. The three-valued semantics proposed by Fitting and by Kunen are closely related to what is computed by a logic program, the third truth value being associated with non-termination. A different three-valued semantics, proposed by Naish, shared much with those of Fitting and Kunen but incorporated allowances for programmer intent, the third truth value being associated with underspecification. Naish used an (apparently) novel “arrow” operator to relate the intended meaning of left and right sides of predicate definitions. In this paper we suggest that the additional truth values of Fitting/Kunen and Naish are best viewed as duals. We use Fitting’s later four-valued approach to unify the two three-valued approaches. The additional truth value has very little effect on the Fitting three-valued semantics, though it can be useful when finding approximations to this semantics for program analysis. For the Naish semantics, the extra truth value allows intended interpretations to be more expressive, allowing us to verify and debug a larger class of programs. We also explain that the “arrow” operator of Naish (and our four-valued extension) is essentially the information ordering. This sheds new light on the relationships between specifications and programs, and between successive execution states of a program.
1 Introduction
Logic programming is an important paradigm. Computers can be seen as machines which manipulate meaningful symbols and the branch of mathematics which is most aligned with manipulating meaningful symbols is logic. This paper is part of a long line of research on what are good choices of logic to use with a “pure” subset of the Prolog programming language. We ignore the “non-logical” aspects of Prolog such as cut and built-ins which can produce side-effects, and assume a sound form of negation (ensuring in some way that negated literals are always ground before being called).
There are several ways in which having a well-defined semantics for programs is helpful. First, it can be helpful for implementing a language (writing a compiler, for example) — it forms a specification for answering “what should this program compute”. Second, it can be helpful for writing program analysis and transformation tools. Third, it can be helpful for verification and debugging — it can allow application programmers to answer “does this program compute what I intend” and, when the answer is negative, “why not”. There is typically imprecision involved in all three cases.
1. Many languages allow some latitude to the implementor in ways that affect observable behaviour of the program, for example by not specifying the order of sub-expression evaluation (C is an example). Even in pure Prolog, typical approaches to semantics do not precisely deal with infinite loops and/or “floundering” (when a negative literal never becomes ground). Such imprecision is not necessarily a good thing, but there is often a trade-off between precision and simplicity of the semantics.
2. Program analysis tools must provide imprecise information in general if they are guaranteed to terminate, since the properties they seek to establish are almost always undecidable.
3. Programmers are often only interested in how their code behaves for some class of inputs. For other inputs they either do not know or do not care (this is in addition to the first point). Moreover, it is often convenient for programmers to reason about partial correctness, setting aside the issue of termination.
A primary aim of this paper is to reconcile two different uses of many-valued logic for understanding logic programs. The first use is for the provision of semantic definition, with the purpose of answering “what should this program compute?” The other use is in connection with program specification and debugging, concerned with answering “does this program compute what I intend” and similar questions involving programmer intent. Our main contributions are:
- We show how Belnap’s four-valued logic enables a clean distinction between a formula/query which is undefined, or non-denoting, and one which is irrelevant, or inadmissible.
- We use this logic to provide a denotational semantics for logic programs which is designed to help a programmer reason about partial correctness in a natural way. This aim is different to the semanticist’s traditional objective of reflecting runtime behaviour, or aligning denotational and operational semantics.
- We show how four-valued logic helps in modelling the concept of modes in a moded logic programming language such as Mercury.
We argue that the semantics fits well with established practice in program debugging and verification.
We assume the reader has a basic understanding of pure logic programs, including programs in which clause bodies use negation, and their semantics. We also assume the reader has some familiarity with the concepts of types and modes as they are used in logic programming.
The paper is structured as follows. We set the scene in Section 2 by revisiting the problems that surround approaches to logical semantics for pure Prolog. In Section 3 we introduce the three- and four-valued logics and many-valued interpretations that the rest of the paper builds upon. In Section 4 we provide some background on different approaches to the semantics of pure Prolog, focusing on work by Fitting and Kunen. In Section 5 we review Naish’s approach to what we call specification semantics. In Section 6 we present a new four-valued approach to specification semantics. In Section 7 we establish a model intersection property for this semantics, and in Section 8 we show how it can be used to model modes.
Let us fix our vocabulary for logic programs and lay down an abstract syntactic form.
Definition 1 (Syntax) An atom (or atomic formula) is of the form \( p(t_1, \ldots, t_n) \), where \( p \) is a predicate symbol (of arity \( n \)) and \( t_1, \ldots, t_n \) are terms. If \( A = p(t_1, \ldots, t_n) \) then \( A \)’s predicate symbol \( pred(A) \) is \( p \). There is a distinguished equality predicate \( = \) with arity 2, written using infix notation. A literal is an atom \( A \) or the negation of an atom, written \( \neg A \). A conjunction \( C \) is a conjunction of literals. A disjunction \( D \) is of the form \( C_1 \lor \cdots \lor C_k \), where each \( C_i \) is a conjunction. A predicate definition is a pair \( (H, \exists W[D]) \) where \( H \) is an atom in most general form \( p(V_1, \ldots, V_n) \) (that is, the \( V_i \)’s are distinct variables), \( D \) is a disjunction, and \( W = vars(D) \setminus vars(H) \). We call \( H \) the head of the definition and \( W[D] \) its body.
The variables in \( H \) are the head variables and those in \( W \) are local variables. Finally, a program is a finite set \( P \) of predicate definitions such that if \( (H_1, B_1) \) and \( (H_2, B_2) \) are distinct members of \( P \) then \( pred(H_1) \neq pred(H_2) \).
We let \( G \) denote the set of ground atoms (for some suitably large fixed alphabet).
Definition 2 (Head instance) A head instance of a predicate definition \( (H, \exists W[D]) \) is an instance where all head variables have been replaced by ground terms and local variables remain unchanged.
3 Interpretations and models
In two-valued logic, an interpretation is a mapping from \( G \) to \( \{t, f\} \). To give meaning to recursively defined predicates, the usual approach is to impose some structure on \( G \rightarrow \{t, f\} \), to ensure that we are dealing with a lattice, or a semi-lattice at least. Given the traditional “closed-world” assumption (that a formula is false unless it can be proven true), the natural ordering on \( \{t, f\} \) is this: \( b_1 \leq b_2 \) iff \( b_1 = f \lor b_2 = t \). The ordering on interpretations is the natural extension of \( \leq \), equipped with which \( G \rightarrow \{t, f\} \) is a complete lattice.
Three-valued logic is arguably a more natural logic for the partial predicates that emerge from pure Prolog programs, and more generally, for the partial functions that emerge from programming in any Turing complete language. The case for three-valued logic as the appropriate logic for computation has been made repeatedly, starting with Kleene (1938), pursued by the VDM school (see for example Barringer, Cheng & Jones (1984)), and others.
The third value, \( u \), for “undefined”, finds natural uses, for example as the value of \( p(b) \), given the program in Figure 1.
With three- and four-valued logic, an interpretation becomes a mapping from \( G \) to \( \{t, f, u\} \) or \( \{t, f, u, i\} \) (we discuss the role of the fourth value \( i \) shortly). For compatibility with the way equality is treated in Prolog, we constrain interpretations so that \( x = y \) is mapped to \( t \) if \( x \) and \( y \) are
Figure 1: Small program to exemplify semantics.
\[
\begin{align*}
&\text{p(a).} \\
&\text{p(a) :- p(b).} \\
&\text{p(b) :- p(b).} \\
&\text{p(c) :- not p(c).} \\
&\text{p(d) :- not p(a).}
\end{align*}
\]
identical (ground) terms, and \( f \), otherwise. This is irrespective of the set of truth values used. There are different choices for the semantics of the connectives. Based on the natural “information content” orderings shown in Figure 2(b) and (d), the natural choices are the strongest monotone extensions of the two-valued connectives. This gives rise to Kleene’s (strong) three-valued logic \( K_3 \) (Kleene 1938) and Belnap’s four-valued logic (Belnap 1977). We denote the ordering depicted in Figure 2(b) by \( \sqsubseteq \), that is, \( b_1 \sqsubseteq b_2 \) if \( b_1 = u \lor b_1 = b_2 \), and we overload this symbol to also denote the ordering in Figure 2(d) (that is, \( b_1 \sqsubseteq b_2 \) iff \( b_1 = u \lor b_1 = b_2 \lor b_2 = i \)), as well as the natural extensions to \( \mathcal{G} \to 3 \) or \( \mathcal{G} \to 4 \). We shall also use \( \sqsupseteq \), the inverse of \( \sqsubseteq \). In some contexts we disambiguate the symbol by using a superscript: \( \sqsubseteq^3 \) or \( \sqsubseteq^4 \). Similarly, we use \( \leq \) for the truth ordering on two values, and \( =^2 \), \( =^3 \) and \( =^4 \) for equality of truth values in the different domains.
The structure in Figure 2(d) is the simplest of Ginsberg’s bilattices (Ginsberg 1988). The diamond shape can be considered a lattice from two distinct angles. The ordering \( \leq \) is the “truth” ordering, whereas \( \sqsubseteq \) is the “information” ordering. For the truth ordering we denote the meet and join operations by \( \land \) and \( \lor \), respectively. For the information ordering we denote the meet and join operations by \( \sqcap \) and \( \sqcup \), respectively. The bilattice in Figure 2(d) is interlaced: Each meet and each join operation is monotone with respect to either ordering. The bilattice is also distributive in the strong sense that each meet and each join operation distributes over all the others.
An equivalent view of three- or four-valued interpretations is to consider an interpretation to be a pair of ground atom sets. That is, the set of interpretations \( \mathcal{I} = \mathcal{P}(\mathcal{G}) \times \mathcal{P}(\mathcal{G}) \). In this view an interpretation \( I = (T_I, F_I) \) is a set \( T_I \) of ground atoms deemed true together with a set \( F_I \) of ground atoms deemed false. A ground atom \( A \) that appears in neither is deemed undefined. Such a truth value gap may arise from the absence of any evidence that \( A \) should be true, or that \( A \) should be false. In a four-valued setting, para-consistency is a possibility: A ground atom \( A \) may belong to \( T_I \cap F_I \). Such a truth value glut may arise from the presence of conflicting evidence regarding \( A \)’s truth value.
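A small executable sketch of this pair-of-sets view (illustrative code, not taken from the paper): a truth value records whether there is evidence for truth and whether there is evidence for falsity, and both the truth operations and the information operations become simple Boolean combinations.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool can_be_t, can_be_f; } Val;   /* u=(0,0) t=(1,0) f=(0,1) i=(1,1) */

static const Val U = { false, false }, T = { true, false },
                 F = { false, true }, I = { true, true };

static Val v_not(Val a)         { return (Val){ a.can_be_f, a.can_be_t }; }   /* swaps t and f        */
static Val v_and(Val a, Val b)  { return (Val){ a.can_be_t && b.can_be_t,
                                                a.can_be_f || b.can_be_f }; } /* meet in truth order  */
static Val v_or (Val a, Val b)  { return (Val){ a.can_be_t || b.can_be_t,
                                                a.can_be_f && b.can_be_f }; } /* join in truth order  */
static Val k_meet(Val a, Val b) { return (Val){ a.can_be_t && b.can_be_t,
                                                a.can_be_f && b.can_be_f }; } /* consensus (info meet) */
static Val k_join(Val a, Val b) { return (Val){ a.can_be_t || b.can_be_t,
                                                a.can_be_f || b.can_be_f }; } /* gullibility (info join) */
static bool k_leq(Val a, Val b) { return (!a.can_be_t || b.can_be_t) &&
                                         (!a.can_be_f || b.can_be_f); }       /* information ordering */

static const char *show(Val a) {
    return a.can_be_t ? (a.can_be_f ? "i" : "t") : (a.can_be_f ? "f" : "u");
}

int main(void) {
    printf("t and u = %s\n", show(v_and(T, U)));   /* u */
    printf("u or  i = %s\n", show(v_or(U, I)));    /* t */
    printf("t (+) f = %s\n", show(k_join(T, F)));  /* i: agreeing with both */
    printf("not i   = %s\n", show(v_not(I)));      /* i */
    printf("u <= t  = %d\n", k_leq(U, T));         /* 1 */
    return 0;
}
```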
The concept of a model is central to many approaches to logic programming. A model is an interpretation which satisfies a particular relationship between the truth values of the head and body of each head instance. We now define how truth for atoms is lifted to truth for bodies of definitions.
**Definition 3** (Made true) Let \( I = (T_I, F_I) \) be an interpretation. Recall that ground equality atoms are in \( T_I \) or \( F_I \), depending on whether their arguments are the same term.
For a ground atom \( A \),
- \( I \) makes \( A \) true iff \( A \in T_I \)
- \( I \) makes \( A \) false iff \( A \in F_I \)
For a ground negated atom \( \neg A \),
- \( I \) makes \( \neg A \) true iff \( A \in F_I \)
- \( I \) makes \( \neg A \) false iff \( A \in T_I \)
For a ground conjunction \( C = L_1 \land \cdots \land L_n \),
- \( I \) makes \( C \) true iff \( \forall i \in \{1 \cdots n\} I \) makes \( L_i \) true
- \( I \) makes \( C \) false iff \( \exists i \in \{1 \cdots n\} I \) makes \( L_i \) false
For a ground disjunction \( D = C_1 \lor \cdots \lor C_n \),
- \( I \) makes \( D \) true iff \( \exists i \in \{1 \cdots n\} I \) makes \( C_i \) true
- \( I \) makes \( D \) false iff \( \forall i \in \{1 \cdots n\} I \) makes \( C_i \) false
For the existential closure of a disjunction \( \exists W[D] \),
- \( I \) makes \( \exists W[D] \) true iff \( I \) makes some ground instance of \( D \) true
- \( I \) makes \( \exists W[D] \) false iff \( I \) makes all ground instances of \( D \) false
We use this to extend interpretations naturally, so they map \( \mathcal{G} \) and existential closures of disjunctions to \( 2 \), \( 3 \) or \( 4 \). We freely switch between viewing an interpretation as a mapping and as a pair of sets. Thus, for any formula \( F \),
\[
I(F) = \begin{cases}
u & \text{if } I \text{ neither makes } F \text{ true nor false} \\
f & \text{if } I \text{ makes } F \text{ false and not true} \\
t & \text{if } I \text{ makes } F \text{ true and not false} \\
i & \text{if } I \text{ makes } F \text{ true and also false}
\end{cases}
\]
**Definition 4** (\( \mathcal{R} \)-Model) Let \( D \) be \( 2 \), \( 3 \) or \( 4 \) and \( \mathcal{R} \) be a binary relation on \( D \). An interpretation \( I \) is an \( \mathcal{R} \)-model of predicate definition \( (H, B) \) iff for each head instance \( (H \theta, B \theta) \), we have \( \mathcal{R}(I(H \theta), I(B \theta)) \). \( I \) is an \( \mathcal{R} \)-model of program \( P \) if it is an \( \mathcal{R} \)-model of every predicate definition in \( P \).
For example, a \( =^2 \)-model is a two-valued interpretation where the head and body of each head instance have the same truth value.
Another important concept used in logic programming semantics and analysis is the “immediate consequence operator”. The original version, \( T_P \), took a set of true atoms (representing a two-valued interpretation) and returned the set of atoms which could be proved from those atoms by using a clause for a single deduction step. Various definitions which generalise this to \( 3 \) and \( 4 \) have been given (see Apt & Bol (1994)). Here we give a definition based on how we define interpretations. We write \( \Phi_P \) for the immediate consequence operator, following Fitting (1985).
Definition 5 (\(\Phi_P\)) Given an interpretation \(I\) and program \(P\), \(\Phi_P(I)\) is the interpretation \(I'\) such that the truth value of an atom \(H\) in \(I'\) is the truth value of \(B\) in \(I\), where \((H, B)\) is a head instance of a definition in \(P\).
Proposition 1 An interpretation \(I\) is a fixed point of \(\Phi_P\) iff \(I\) is a \(=^d\)-model of \(P\), for \(d\) in \(\{2, 3, 4\}\).
Proof Straightforward from the definitions. \(\square\)
4 Logic program operational semantics
Approaches to the semantics of logic programs differ in several respects. One is the relationship between the truth values of the head and body of each definition — what set of truth values do we use, what constitutes a model or a fixed point, and so on. Another is whether we consider one particular model/fixed point (such as the least one according to some ordering) as the semantics, or a set of models, or the set of all models/fixed points. We first discuss some basic notions and how Clark’s two-valued approach to logic program semantics fits with what we have presented so far. Then we discuss the Fitting/Kunen three-valued approach and Fitting’s four-valued approach.
4.1 Two-valued semantics
There are three aspects to the semantics of logic programs: proof theory, model theory and fixed point theory (see Lloyd (1984), for example). The proof theory is generally based on resolution, often some variant of SLDNF resolution (Clark 1978). This gives a top-down operational semantics, which we don’t consider in detail here. The model theory gives a declarative view of programs and is particularly useful for high level reasoning about partial correctness. The fixed point semantics, based on \(\Phi_P\) or \(T_P\), gives an alternative “bottom up” operational semantics (which has been used in deductive databases) and which is also particularly useful for program analysis.
The simplest semantics for pure Prolog disallows negation and treats a Prolog program as a set of definite clauses. Prolog’s :- is treated as classical implication: an interpretation is a model when, for every head instance, the truth of the body implies the truth of the head. There is an important soundness result: if the programmer has an intended interpretation which is a model, any ground atom which succeeds is true in that model.
The \((\leq)\) least model is also the least \(=^2\)-model and the least fixed point of \(\Phi_P\), which is monotone in the truth ordering (so a least fixed point always exists). The atoms which are true in this least model are precisely those which have successful derivations using SLD resolution. For these reasons, this is the accepted semantics for Prolog programs without negation.
To support negation in the semantics, Clark (1978) combined all clauses defining a particular predicate into a single “if and only if” definition which uses the classical bi-implication \(\leftrightarrow\). This is called the Clark completion \(comp(P)\) of a program \(P\). Our definitions are essentially the same, but we avoid the \(\leftrightarrow\) symbol. In this paper’s terminology, Clark used \(=^2\)-models, which correspond to classical fixed points of \(\Phi_P\).
The soundness result above applies, and any finitely failed ground atom must also be false in the programmer’s intended interpretation, if it is a model. However, because \(\Phi_P\) is non-monotone in the truth ordering when negation is present, there may be multiple minimal fixed points/models, or there may be none. For example, using Clark’s semantics for the program in Figure 1, there is no model and no fixed point due to the clause for \(p(c)\), yet the query \(p(a)\) succeeds and \(p(d)\) finitely fails. Thus the Clark semantics does not align particularly well with the operational semantics.
4.2 Three-valued semantics
Even in the absence of negation, a two-valued semantics is lacking in its inability to distinguish failure and looping. Mycroft (1984) explored the use of many-valued logics, including \(\{0, 1, 2\}\) to remedy this. Mycroft discussed this for Horn clause programs, and others, including Fitting (1985) and Kunen (1987), subsequently adapted Clark’s work to a three-valued logic, addressing the problem of how to account properly for the use of explicit negation in programs.
In a two-valued setting the Clark completion may be inconsistent, witness the completion of the clause for \(p(c)\) in Figure 1. A \(=^3\)-model always exists for a Clark-completed program; for example, \(p(c)\) takes on the third truth value. Moreover, since \(\Phi_P\) is monotone with respect to the information ordering, a least fixed point always exists and coincides with the least \(=^3\)-model. Ground atoms which are \(t\) in this model (such as \(p(a)\) in Figure 1) are those which have successful derivations, while ground atoms which are \(f\) (such as \(p(d)\)) are those which have finitely failed SLDNF trees. Atoms with the third truth value (\(p(b)\) and \(p(c)\)) must loop. If we were to delete the clause for \(p(c)\) in Figure 1, the Clark semantics would map \(p(b)\) to \(t\), even though it does not finitely fail. Atoms which are \(t\) or \(f\) in the Fitting/Kunen semantics may also loop if the search strategy or computation rule are unfair (even without negation, \(t\) atoms may loop with an unfair search strategy). However, the Fitting/Kunen approach does align the model theoretic and fixed point semantics much more closely to the operational semantics than the approach of Clark.
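The bottom-up construction of this least fixed point can be spelled out concretely. The sketch below (illustrative code, assuming the clauses of Figure 1) applies \( \Phi_P \) repeatedly, starting from the everywhere-undefined interpretation, and stops at the least fixed point, reproducing \( p(a) = t \), \( p(d) = f \) and \( p(b) = p(c) = u \).

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { bool t, f; } Val;            /* u=(0,0)  t=(1,0)  f=(0,1) */

static Val v_not(Val a)        { return (Val){ a.f, a.t }; }
static Val v_or (Val a, Val b) { return (Val){ a.t || b.t, a.f && b.f }; }

enum { A, B, C, D, NATOMS };

/* One application of Phi_P for the program of Figure 1:
 * p(a).  p(a) :- p(b).  p(b) :- p(b).  p(c) :- not p(c).  p(d) :- not p(a). */
static void phi(const Val in[NATOMS], Val out[NATOMS]) {
    Val truth = { true, false };
    out[A] = v_or(truth, in[B]);   /* completed body of p(a): true or p(b) */
    out[B] = in[B];                /* completed body of p(b): p(b)         */
    out[C] = v_not(in[C]);         /* completed body of p(c): not p(c)     */
    out[D] = v_not(in[A]);         /* completed body of p(d): not p(a)     */
}

static bool same(const Val x[NATOMS], const Val y[NATOMS]) {
    for (int i = 0; i < NATOMS; i++)
        if (x[i].t != y[i].t || x[i].f != y[i].f)
            return false;
    return true;
}

int main(void) {
    const char *names = "abcd";
    Val cur[NATOMS] = { { false, false } };   /* bottom: every atom undefined (u) */
    for (;;) {                                /* Kleene iteration                 */
        Val next[NATOMS];
        phi(cur, next);
        if (same(cur, next))
            break;
        memcpy(cur, next, sizeof cur);
    }
    for (int i = 0; i < NATOMS; i++)
        printf("p(%c) = %s\n", names[i],
               cur[i].t ? "t" : (cur[i].f ? "f" : "u"));
    return 0;
}
```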
\(\Phi_P\) has a drawback, though: while monotone, it is not in general continuous. Blair (1982) shows that the smallest ordinal \(\beta\) for which \(\Phi_P^\beta(\perp)\) is the least fixed point of \(\Phi_P\) may not be recursive and Kunen (1987) shows that, with a semantics based on three-valued Herbrand models (all models or the least model), the set of ground atoms true in such models may not be recursively enumerable.
Kunen instead suggests a semantics based on any three-valued model and shows that truth (\(t\)) in all \(=^3\)-models is equivalent to being deemed true by \(\Phi_P^n(\perp)\) for some \(n \in \mathbb{N}\). Hence Kunen proposes \(\Phi_P^\omega(\perp)\) as the meaning of program \(P\). For a given \(P\), ground atom \(A\) and \(n \in \mathbb{N}\), it is decidable whether \(A\) is \(t\) in \(\Phi_P^n(\perp)\), so whether \(A\) is \(t\) in \(\Phi_P^\omega(\perp)\) is semi-decidable.
For simplicity, in this paper we take (the possibly non-computable) \( M = \operatorname{lfp}(\Phi_P) \) to be the meaning of a program. However, since we shall be concerned with over-approximations to \( M \), what we shall have to say will apply equally well if Kunen’s \( \Phi_P^\omega(\perp) \) is assumed.
4.3 Four-valued semantics
Subsequent to his three-valued proposal, Fitting recommended, in a series of papers including Fitting (1991, 2002), bilattices as suitable bases for logic program semantics. The bilattice \(4\) (Figure 2(d)) was just one of several studied for the purpose, and arguably the most important one.
Fitting’s motivation for employing four-valued logic was, apart from the elegance of the interlaced bilattices and their algebraic properties, the application in a logic programming language which supports a notion of (spatially) distributed programs. In this context there is a natural need for a fourth truth value, \(\top\) (our \(i\)), to denote conflicting information received from different nodes in a distributed computing network.
In this language, the traditional logical connectives used on the right-hand sides of predicate definitions are explained in terms of the truth ordering. Negation is reflection in the truth ordering: \(\neg t = f, \neg f = t, \neg u = u, \neg i = i\); conjunction is meet \((\land)\), disjunction is join \((\lor)\), and existential quantification is the least upper bound \((\lor)\) of all instances. The following tables give conjunction and disjunction in \(4\).
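These are the standard Belnap tables, i.e. meet and join in the truth ordering of Figure 2(d):

\[
\begin{array}{c|cccc}
\land & t & i & u & f \\ \hline
t & t & i & u & f \\
i & i & i & f & f \\
u & u & f & u & f \\
f & f & f & f & f
\end{array}
\qquad\qquad
\begin{array}{c|cccc}
\lor & t & i & u & f \\ \hline
t & t & t & t & t \\
i & t & i & t & i \\
u & t & t & u & u \\
f & t & i & u & f
\end{array}
\]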
The operations $\sqcap$ and $\sqcup$ are similarly given by Figure 2(d). Fitting refers to $\sqcap$ (he writes $\otimes$) as consensus, since $x \sqcap y$ represents what $x$ and $y$ agree about. The $\sqcup$ operation (which he writes as $\oplus$) he refers to as gullibility, since $x \sqcup y$ represents agreement with both $x$ and $y$, whatever they say, including cases where they disagree.
The idea of an information (or knowledge) ordering is familiar to anybody who has used domain theory and denotational semantics. To give meaning to recursively defined objects we refer to fixed points of functions defined on structures equipped with some ordering — the information ordering. This happens in Fitting’s three-valued semantics: it uses the same distinction between a truth ordering $\leq$ and an information ordering $\sqsubseteq$ but it does not expose it as radically as the bilattice. In Fitting’s words, the three-valued approach, “while abstracting away some of the details of [Kripke’s theory of truth] still hides the double ordering structure” (Fitting 2006).
The logic programming language of Fitting (1991) contains operators $\otimes$ and $\oplus$, reflecting the motivation in terms of distributed programs. We, on the other hand, deal with a language with traditional pure Prolog syntax. If the task was simply to model its operational semantics, having four truth values rather than three would offer little, if any, advantage. However, our motivation for using four-valued logic is very different to that of Fitting. We find compelling reasons for the use of four-valued logic to explain certain programming language features, as well as to embrace, semantically, such software engineering aspects as program correctness with respect to programmer intent or specification, declarative debugging, and program analysis. We next discuss one of these aspects.
5 Three-valued specification semantics
Naish (2006) proposed an alternative three-valued semantics. Unlike other approaches, the objective was not to align declarative and operational semantics. Instead, the aim was to provide a declarative semantics which can help programmers develop correct code in a natural way. Naish argued that intentions of programmers are not two-valued. It is generally intended that some ground atoms should succeed (be considered $t$) and some should definitely fail (be considered $f$) but some should never occur in practice; there is no particular intention for how they should behave and the programmer does not care and often does not know how they behave. An example is merging sorted lists, where it is assumed two sorted lists are given as input: it may be more appropriate to consider the value of $\text{merge}([3,2], [1,3,2])$ irrelevant than to give it a classical truth value, since a precondition is violated. Or consider this program:
\[
\begin{align*}
\text{or2}(t, _, t). \\
\text{or3}(f, t, t). \\
\text{or3}(B, f, B).
\end{align*}
\]
It gives two alternative definitions of $\text{or}$ (defined in Section 2), both designed with the assumption that the first two arguments will always be Booleans. If they are not, we consider the atom inadmissible (a term used in debugging (Pereira 1986, Naish 2000)) and give it the truth value $i$. Interpretations can be thought of as the programmer’s understanding of a specification, where $i$ is used for underspecification of behaviour. The same three-valued interpretation can be used with all three definitions of $\text{or}$, so a programmer can first fix the interpretation and then code any of these definitions and reason about their correctness. In contrast, both the Clark and Fitting/Kunen semantics assign different meanings to the three definitions, with atoms such as $\text{or3}(4, f, 4)$ and $\text{or2}(t, [1], t)$ considered $t$ and $\text{or3}(t, [1], t)$ considered $f$. In order for the programmer’s intended interpretation to be a =2-model or =3-model, unnatural distinctions such as these must be made. Naish (2006) argues that it is unrealistic for programmers to use such interpretations as a basis for reasoning about correctness of their programs.
Although Naish uses $i$ instead of $u$ as the third truth value, his approach is structurally the same as Fitting/Kunen’s with respect to the $\Phi$ operator and the meaning of connectives used in the body of definitions. The key difference is how Prolog’s $\leftarrow$ is interpreted. Fitting generalises Clark’s classical $\leftrightarrow$ to $\cong$ or “strong equivalence”, where heads and bodies of head instances must have the same truth values. Naish defined a different “arrow”, $\Rightarrow$, which is asymmetric. In addition to identical truth values for heads and bodies, Naish allows head instances of the form $(i, f)$ and $(i, t)$. The difference is captured by these tables (Fitting left, Naish right):
<table>
<thead>
<tr>
<th></th>
<th>$t$</th>
<th>$f$</th>
<th>$u$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$t$</td>
<td>$t$</td>
<td>$f$</td>
<td>$u$</td>
</tr>
<tr>
<td>$f$</td>
<td>$f$</td>
<td>$t$</td>
<td>$u$</td>
</tr>
<tr>
<td>$u$</td>
<td>$u$</td>
<td>$u$</td>
<td>$u$</td>
</tr>
</tbody>
</table>
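The corresponding relation for Naish’s arrow (the right-hand table referred to above) follows directly from the description in the text: besides identical head and body values, a head value of \( i \) tolerates a classical body value. Writing \( h \) for the head value and \( b \) for the body value,

\[
h \leftarrow b \ \text{ holds iff }\ h = b \ \lor\ \bigl(h = i \land b \in \{t, f\}\bigr),
\]

which is precisely the statement that \( b \) is below \( h \) in the information ordering on \( \{t, f, i\} \) with \( i \) topmost.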
Naish’s reasoning is that if a predicate is called in an inadmissible way, it does not matter if it succeeds or fails. The definition of a model uses this weaker “arrow”; we discuss it further in Section 6. Naish (2006) shows that for any model, only $t$ and $i$ atoms can succeed and only $f$ and $i$ atoms can fail. In models of the code in Figure 1, p(b) can be $t$ or $f$ or $i$ but $p(c)$ can only be $i$. For practical code, programmers can reason about partial correctness using intuitive models in which the behaviour of some atoms is unspecified.
6 Four-valued specification semantics
The Fitting/Kunen and Naish approaches both use three truth values, the Kleene strong three-valued logic for the connectives in the body of definitions, and the same immediate consequence operator. It is thus tempting to assume that the “third” truth value in these approaches is the same in some sense. This is implicitly assumed by Naish, when different approaches are compared (Table 1 of Naish (2006)). However, the third truth value is used for very different purposes in these approaches. Fitting uses it to make the semantics more precise than Clark — distinguishing success and finite failure from nontermination (neither success nor finite failure). Naish uses it to make the semantics less precise than Clark, allowing a truth value corresponding to success or finite failure. Thus we believe it is best to treat the third truth values of Fitting and Naish as duals instead of the same value. Because conjunction, disjunction and negation in 4 are symmetric in the information order, the third value in the Kleene strong three-valued logic can map to either the top or bottom element in 4. This is why the third truth values in Fitting/Kunen and Naish are treated in the same way, even though they are better viewed as semantically distinct.
The four values, \( t, f, i \) and \( u \) are associated with truth/success, falsehood/finite failure, inadmissibility (the Naish third value) and looping/error (the Fitting/Kunen third value). Inadmissibility can be seen as saying both success and failure are correct, so we can see it as the union of both. Atoms which are \( u \) in the Fitting semantics neither succeed nor fail. Thus the information ordering can also be seen as the set ordering, \( \subseteq \), if we interpret the truth values as sets of Boolean values. In Naish, \( i \) is implicitly considered the bottom element so the ordering used in that work is the inverse of the ordering considered here.
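Concretely, reading each truth value as the set of classical outcomes it allows gives

\[
u = \emptyset, \qquad t = \{\mathit{true}\}, \qquad f = \{\mathit{false}\}, \qquad i = \{\mathit{true}, \mathit{false}\},
\]

and the information ordering \( \sqsubseteq \) on \( 4 \) is then exactly set inclusion.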
We now show how Naish’s semantics can be generalised to \( 4 \). As discussed above, adding the truth value \( i \) to the Fitting semantics does not allow us to describe what is computed any more precisely, though it can be useful for approximating what is computed. However, adding the truth value \( u \) to the Naish semantics does allow us to describe more precisely what is intended. There are occasions when both the success and finite failure of an atom are considered incorrect behaviour and thus \( u \) is an appropriate value to use in the intended interpretation. We give three examples. The first is an interpreter for a Turing-complete language. If the interpreter is given (the representation of) a looping program it should not succeed and it should not fail. The second is an operating system. Ignoring the details of how interaction with the real world is modelled in the language, termination means the operating system crashes. The third is code which is only intended to be called in limited ways, but is expected to be robust and check its inputs are well formed. Exceptions or abnormal termination with an error message are best not considered success or finite failure. Treating them in the same way as infinite loops in the semantics may not be ideal but it is more expressive than using the other three truth values (indeed, “infinite” loops are never really infinite because resources are finite and hence some form of abnormal termination results).
Naish (2006) defines models in terms of the \( \leftarrow \) described earlier, and his Proposition 7 relates models to the information ordering on interpretations. This is actually a key observation (though the significance is not noted by Naish (2006)): the \( \leftarrow \) defines the information order on truth values! The classical arrow defines the truth ordering on interpretations. It is therefore clear how Naish’s arrow can be generalised to \( 4 \). The models of Naish (2006) are \( \sqsupseteq^3 \)-models, which can be generalised to \( \sqsupseteq^4 \)-models.
**Proposition 2** \( M \) is a \( \sqsupseteq^4 \)-model of \( P \) iff \( \Phi_P(M) \sqsubseteq M \).
**Proof** \( M \) is a \( \sqsupseteq^4 \)-model iff, for every head instance \((H, B)\) of \( P \), \( M(B) \sqsubseteq M(H) \). This is equivalent to stating that if \( M \) makes \( B \) true then \( M \) makes \( H \) true, and also, if \( M \) makes \( B \) false then \( M \) makes \( H \) false. But this is the case iff \( \Phi_P(M) \sqsubseteq M \), by the definition of \( \Phi_P \).
It is easy to see that if \( M \) is a \( \sqsupseteq^3 \)-model of \( P \) then \( M \) is a \( \sqsupseteq^4 \)-model of \( P \). However, the converse is not necessarily true, so the results of Naish (2006) cannot be used to show properties of four-valued models. Instead, such properties can be proved directly, using properties of the lattice of interpretations.
**Proposition 3** If \( M \) is a \( \sqsupseteq^4 \)-model of \( P \) then \( \operatorname{lfp}(\Phi_P) \sqsubseteq M \).
**Proof** The proof is by transfinite induction. Given program \( P \), define
\[
I^\beta = \begin{cases} \Phi_P(I^{\beta'}) & \text{if } \beta \text{ is a successor ordinal } \beta' + 1 \\ \bigsqcup_{\alpha < \beta} I^\alpha & \text{if } \beta \text{ is a limit ordinal}
\end{cases}
\]
Assume \( I^\alpha \subseteq M \) for all ordinals \( \alpha < \beta \). We show that \( I^\beta \subseteq M \).
First consider the case \( \beta = \beta' + 1 \). By the induction hypothesis, \( I^{\beta'} \subseteq M \). Since \( \Phi_P \) is monotone, \( I^\beta = \Phi_P(I^{\beta'}) \subseteq \Phi_P(M) \). By Proposition 2, \( \Phi_P(M) \subseteq M \). Hence \( I^\beta \subseteq M \).
Now consider the case where \( \beta \) is a limit ordinal.
By definition, \( I^\beta = \bigsqcup_{\alpha < \beta} I^\alpha \). By the induction hypothesis, \( I^\alpha \subseteq M \), for each \( \alpha < \beta \). But then by properties of the least upper bound operation, \( I^\beta = \bigsqcup_{\alpha < \beta} I^\alpha \subseteq M \). Since \( \operatorname{lfp}(\Phi_P) = I^\delta \) for some ordinal \( \delta \), it follows that \( \operatorname{lfp}(\Phi_P) \subseteq M \). □
**Proposition 4** The least \( \mathfrak{P}^4 \)-model of \( P \) is \( \operatorname{lfp}(\Phi_P) \).
**Proof** This follows from Proposition 3 and the fact that fixed points are \( \mathfrak{P}^4 \)-models.
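For a finite propositional program, the least \( \mathfrak{P}^4 \)-model can be computed by simply iterating \( \Phi_P \) from the everywhere-\( u \) interpretation. The sketch below is our own illustration under the set encoding introduced earlier; the Belnap-style connectives and the clause representation are assumptions made for the example, not the paper's definitions for general programs.

```python
# Sketch: lfp(Phi_P) for a propositional program, using the set encoding of
# the four truth values and Belnap-style connectives.
t, f, u, i = frozenset({True}), frozenset({False}), frozenset(), frozenset({True, False})

def NOT(x):
    return frozenset({not b for b in x})

def AND(x, y):
    return frozenset(({True} if (True in x and True in y) else set()) |
                     ({False} if (False in x or False in y) else set()))

def OR(x, y):
    return frozenset(({True} if (True in x or True in y) else set()) |
                     ({False} if (False in x and False in y) else set()))

def phi(program, interp):
    """One application of Phi_P: each atom gets the disjunction of its clause bodies."""
    new = {}
    for atom, bodies in program.items():
        val = f                                    # empty disjunction is false
        for body in bodies:
            bval = t                               # empty conjunction is true
            for lit, positive in body:
                v = interp[lit]
                bval = AND(bval, v if positive else NOT(v))
            val = OR(val, bval)
        new[atom] = val
    return new

def lfp(program):
    interp = {a: u for a in program}               # bottom: everything undefined
    while True:
        nxt = phi(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# p :- p.   q.   r :- not q.
prog = {'p': [[('p', True)]], 'q': [[]], 'r': [[('q', False)]]}
assert lfp(prog) == {'p': u, 'q': t, 'r': f}       # p loops, q succeeds, r fails
```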
For reasoning about partial correctness, the relationship between truth values in an interpretation and operational behaviour is crucial.
**Theorem 1** If \( M \) is a \( \mathfrak{P}^4 \)-model of \( P \) then no \( t \) atoms in \( M \) can finitely fail, no \( f \) atoms in \( M \) can succeed and no \( u \) atoms in \( M \) can finitely fail or succeed.
**Proof** Finitely failed atoms are \( f \) in \( \operatorname{lfp}(\Phi_P) \), successful atoms are \( t \) in \( \operatorname{lfp}(\Phi_P) \), and \( u \) atoms in \( \operatorname{lfp}(\Phi_P) \) must loop, from Kunen. From Proposition 3 and the \( \sqsubseteq \) ordering, \( f \) atoms in \( M \) can only be \( f \) or \( u \) in \( \operatorname{lfp}(\Phi_P) \), \( t \) atoms in \( M \) can only be \( t \) or \( u \) in \( \operatorname{lfp}(\Phi_P) \), and \( u \) atoms in \( M \) can only be \( u \) in \( \operatorname{lfp}(\Phi_P) \).
These results about the behaviour of \( t \) and \( f \) atoms are essentially the two soundness theorems, for finite failure and success, respectively, of Naish (2006). The result for \( u \) atoms is new. The relationship between the operational semantics and various forms of three-valued model-theoretic semantics was summarised by Table 1 of Naish (2006). However, it assumed the Fitting/Kunen third truth value was the same as Naish’s. We can now refine it using the four values, as follows (the last row summarises Theorem 1):
| operational | succeed | loop | fail |
|---|---|---|---|
| least \( \mathfrak{P} \)-model | \( t \) | \( u \) | \( f \) |
| any \( \mathfrak{P} \)-model | \( t \) | \( t/u/i/f \) | \( f \) |
| any \( \mathfrak{P}^4 \)-model | \( t/i \) | \( t/u/i/f \) | \( i/f \) |
Figure 3 gives a graphical representation of how the least model compares with a typical intended model. Weaker definitions of models allow more flexibility in how we think of our programs, yet still guarantee partial correctness.
7 A “model intersection” property
With the classical logic approach for definite clause programs, we have a useful model intersection property: if $M$ and $N$ are (the set of true atoms in) models then $M \cap N$ is (the set of true atoms in) a model. Proposition 1 of Naish (2006) generalises this result using the truth ordering for three-valued interpretations, and Proposition 2 of Naish (2006) gives a similar result which mixes the truth and information orderings. However, none of these results hold for logic programs with negation. Here we give a new analogous result, using the information ordering, which holds even when negation is present. This will be utilised in the next section, on modes.
**Proposition 5** If \( M \) and \( N \) are \( \mathfrak{P}^4 \)-models of program \( P \) then \( M \sqcap N \) is a \( \mathfrak{P}^4 \)-model of \( P \).
**Proof** By Proposition 2, \( \Phi_P(M) \subseteq M \) and \( \Phi_P(N) \subseteq N \), since \( M \) and \( N \) are models. Since \( M \sqcap N \subseteq M \) and \( M \sqcap N \subseteq N \), monotonicity gives \( \Phi_P(M \sqcap N) \subseteq \Phi_P(M) \subseteq M \) and \( \Phi_P(M \sqcap N) \subseteq \Phi_P(N) \subseteq N \), so \( \Phi_P(M \sqcap N) \subseteq M \sqcap N \). By Proposition 2, \( M \sqcap N \) is a model of \( P \). □
This result does not hold for \( \mathfrak{P}^3 \)-models. For example:
$$
p\ \text{:-}\ p. \qquad q\ \text{:-}\ q. \qquad r\ \text{:-}\ p\,;\,q\,;\,s. \qquad s\ \text{:-}\ p\,;\,q\,;\,\text{not } r.
$$
Let \( M \) be the interpretation which maps \( (p,q,r,s) \) to \( (t,f,t,t) \), respectively, and \( N \) be the interpretation \( (f,t,t,t) \). Both \( M \) and \( N \) are \( \mathfrak{P}^3 \)-models. The meet, \( M \sqcap N \), is \( (u,u,t,t) \) but \( \Phi_P \) applied to this interpretation is \( (u,u,t,u) \). So \( M \sqcap N \) is a \( \mathfrak{P}^4 \)-model but not a \( \mathfrak{P}^3 \)-model.
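The following self-contained check (an illustration only; the set encoding of the values and the connectives are assumptions made for the example) reproduces the numbers above: it computes the meet of \( M \) and \( N \) and one application of \( \Phi_P \) to it.

```python
# Check of the example above with the set encoding of the four values.
T, F, U = frozenset({True}), frozenset({False}), frozenset()

def NOT(x):
    return frozenset({not b for b in x})

def OR(x, y):
    return frozenset(({True} if (True in x or True in y) else set()) |
                     ({False} if (False in x and False in y) else set()))

def meet(a, b):                      # information-ordering meet = pointwise intersection
    return {k: a[k] & b[k] for k in a}

def phi(I):                          # Phi_P for: p:-p. q:-q. r:-p;q;s. s:-p;q;not r.
    p, q, r, s = I['p'], I['q'], I['r'], I['s']
    return {'p': p, 'q': q, 'r': OR(OR(p, q), s), 's': OR(OR(p, q), NOT(r))}

M = {'p': T, 'q': F, 'r': T, 's': T}
N = {'p': F, 'q': T, 'r': T, 's': T}
MN = meet(M, N)                      # (u, u, t, t)
assert phi(M) == M and phi(N) == N   # M and N are fixed points of Phi_P
assert phi(MN) == {'p': U, 'q': U, 'r': T, 's': U}
assert all(phi(MN)[k] <= MN[k] for k in MN)   # Phi_P(M ⊓ N) is below M ⊓ N
```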
8 Types and modes
We now discuss the motivation for type and mode systems in logic programming and show how $3^4$-models could have a role to play in mode systems. The lack of restrictions on what constitutes an acceptable Prolog program means that it is easy for programmers to make simple mistakes which are not immediately detected by the Prolog system. A typical symptom is the program unexpectedly fails, leading to rather tedious analysis of the complex execution in order to uncover the mistake. One approach to avoid some runtime error diagnosis is to impose additional discipline on the programmer, generally restricting programming style somewhat, in order to allow the system to statically classify certain programs as incorrect. Various systems of “types” and “modes” have been proposed for this. An added benefit of some systems is that they help make implementations more efficient. Here we discuss such systems at a very high level and argue that four-valued interpretations potentially have a role in this area, particularly in mode systems such as that of Mercury (Somogyi, Henderson & Conway 1995).
Type systems typically assign a type (say, Boolean, integer, list of integers) to each argument of each predicate. This allows each variable occurrence in a clause to also be assigned a type. One common error is that two occurrences of the same variable have different types. For example, consider a predicate head which is intended to return the head of a list of integers but is incorrectly defined as: head([L|Y],Y).
The first occurrence of $Y$ is associated with type list of integer and the other is associated with type integer. If head is called with both arguments instantiated to the expected types, it must fail. But head can succeed if it is called in different ways. For example, with only the first argument instantiated it will succeed, albeit with the wrong type for the second argument (and this in turn may cause a wrong result or failure of a computation which calls head).
Type systems can be refined by considering the “mode” in which predicates are called, or dependencies between the types of different arguments. This can allow additional classes of errors to be detected. For example, we can say the first argument of head is expected to be “input” and the second argument can be “output”. Alternatively (but with similar effect), we could say if the first argument is a list of integers, the second should be an integer. For a definition such as head([L|Y],X) there is a consistent assignment of types to variables but it does not satisfy this mode/type-dependency constraint. One high level constraint of several mode systems is that if input arguments are well typed then output arguments should be well typed for any successful call. In fact, we want the whole successful derivation to be well typed (otherwise we have a very dubious proof).
Typically, well typed inputs in a clause head imply well typed inputs in the body, which imply well typed outputs in the body, which in turn imply well typed outputs in the head. This idea is present in the directional types concept (Aiken & Lakshman 1994, Boye & Maluszynski 1995), the mode system of Mercury (Somogyi et al. 1995), and the view of modes proposed in Naish (1996). Here we show the relevance of four-valued interpretations to this idea, ignoring the details of what constitutes a type (which differs in the different proposals) and what additional constraints are imposed (neither Mercury nor directional types support cyclic dataflow and Mercury has additional interactions between types, modes and determinism).
Type and mode declarations document some aspects of how predicates are intended to be used and how they are intended to behave. We define a subset of possible interpretations which are consistent with these declarations. We assume there is a notion of well typedness for each argument of each predicate in program $P$.
**Definition 6 (Mode and mode interpretation)** A mode for predicate $p$ is an assignment of “input” or “output” to each of $p$’s argument positions. Each predicate has a set of modes. A mode interpretation of $P$ is a four-valued interpretation $M$ such that the truth value of an atom $A$ in $M$ is
1. $i$ if there is no mode of the predicate for which all input arguments are well typed, and
2. $f$, if there is a mode of the predicate for which all input arguments are well typed but some (output) argument is not well typed.
Other atoms may take any truth value.
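As a concrete reading of Definition 6, the sketch below (hypothetical names; the well-typedness test and the mode representation are stand-ins we introduce for illustration) classifies a ground atom as \( i \), \( f \), or unconstrained, given a set of mode declarations.

```python
# Hypothetical sketch of Definition 6. 'g' stands for a well-typed (ground)
# argument and 'n' for an ill-typed one, following the text.
def classify(args, modes, well_typed):
    """modes: list of modes, each a tuple of 'in'/'out', one per argument."""
    input_ok = [m for m in modes
                if all(well_typed(a) for a, d in zip(args, m) if d == 'in')]
    if not input_ok:
        return 'i'              # no mode has all input arguments well typed
    if any(not well_typed(a) for a in args):
        return 'f'              # some (output) argument is ill typed
    return 'unconstrained'      # Definition 6 allows any truth value here

well_typed = lambda a: a == 'g'
assert classify(('g', 'g'), [('in', 'in')], well_typed) == 'unconstrained'
assert classify(('n', 'g'), [('in', 'in')], well_typed) == 'i'
assert classify(('g', 'n'), [('in', 'out')], well_typed) == 'f'
```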
In typical automated mode analysis there is no additional information about other user-defined atoms and it can be assumed they are $t$. The (builtin) error atoms should be $u$ for a language like Mercury. In the mode interpretation corresponding to Figure 5, $p_1(g, g)$ is $t$ and $\{p_1(n, n), p_1(n, g), p_1(g, n)\}$ are all $i$ (we use $n$ as a representative ill-typed term; it also corresponds to non-ground computed answers). If the mode of $p_1$ was changed to (in, out), $p_1(g, n)$ would be $f$. Changing the modes of a predicate so it can be used in more flexible ways corresponds to changing the truth value of some atoms from $i$ to $f$. For $p_2$, with two mode declarations, only $p_2(n, n)$ is $i$. Mode interpretations which are $\mathfrak{P}^4$-models give us the high level properties of well modedness:
**Lemma 1** If a mode interpretation $M$ of a program $P$ is a $\mathfrak{P}^4$-model and $A$ is a successful atom which, for some mode of the predicate, has all input arguments well typed, then $A$ has all arguments well typed.
**Proof** By Theorem 1, since $M$ is a $\mathfrak{P}^4$-model and $A$ succeeds, $A$ must be $t$ or $i$ in $M$. By the definition of mode interpretations, since $A$ is not $f$ and all input arguments are well typed for some mode, all output arguments must be well typed as well. □
**Lemma 2** If a mode interpretation $M$ of a program $P$ is a $\mathfrak{P}^4$-model and $A$, with $M(A) = t$, succeeds, then $A$ is well typed and there is a ground clause instance $A :- B_1 ; \ldots ; B_n$ such that all atoms in some $B_i$ are well typed and assigned $t$.
**Proof** Since $M$ is a mode interpretation and $A$ is not $i$, $A$ has all input arguments well typed for some mode, so by Lemma 1 all arguments are well typed. Since $A$ is $t$ and $M$ is a $\mathfrak{P}^4$-model, the clause body must be $t$ or $u$. Because it succeeds it cannot be $u$, so it must be $t$. By the definition of disjunction and conjunction, all atoms in some $B_i$ must be $t$. Since $M$ is a mode interpretation, each of these atoms must have well typed inputs for some mode (otherwise they would be $i$). They all succeed, so by Lemma 1 they must be well typed. □
**Theorem 2** If a mode interpretation $M$ of a program $P$ is a $\mathfrak{P}^4$-model and $A$ is a $t$ atom which succeeds, then there is a proof in which all atoms are well typed.
**Proof** By induction on the depth of the proof and Lemma 2. □
The mode interpretation corresponding to the code in Figure 5 is a $\mathfrak{P}^4$-model of the program. The same holds when a mode declaration is replaced by any of those "commented out" variants that are labelled OK. Conversely, the interpretations corresponding to ill-moded variants are not $\mathfrak{P}^4$-models. For example, predicate $p_1$ with mode $\text{(in,out)}$ has a clause instance $p_1(g,n) :- \text{true}$, of the form $f :- t$ (whereas $p_5$ is well moded with this mode; its instance is of the form $f :- u$). For $p_3$ with $\text{(in,out)}$ as the only mode we have clause instance $p_3(g,n) :- p_3(n,g)$, of the form $t :- i$. For $p_4$ with mode $\text{(in,in)}$ we have the clause instance $p_4(g,g) :- p_4(g,n)$, of the same form.
Although mode interpretations do not capture all the complexities of the Mercury mode system, they do give us a high level view and some additional insights. For any predicate definition, we know there is a lattice of mode interpretations, some of which are typically $\mathfrak{P}^4$-models. Each one corresponds to a set of mode declarations. Models higher in the information order place more restrictions on how we use a predicate — more atoms are $i$ and more arguments must be input. Proposition 5 tells us the meet of two $\mathfrak{P}^4$-models is a $\mathfrak{P}^4$-model. This corresponds to taking the union of the sets of mode declarations (the set of $i$ atoms in the meet is the intersection of the $i$ atoms in the two $\mathfrak{P}^4$-models). One way the Mercury mode system could potentially be extended is by allowing the programmer to specify several sets of mode declarations for a predicate (corresponding to several $\mathfrak{P}^4$-models). A predicate such as $p_2$ could have two singleton sets of modes declared (with the union implicit due to Proposition 5), whereas $p_3$ would need a single set of two mode declarations. This potentially could allow more errors to be detected and perhaps greater efficiency (avoiding some modes of a predicate appearing in the object code if they are not required).
An understanding of the lattice of mode interpretations may also be helpful for mode inference.
9 Declarative debugging
The semantics of Naish (2006) is closely aligned with declarative debugging (Shapiro 1983) and the term "inadmissible" comes from this area (Pereira 1986). In particular, it gives a formal basis for the three-valued approach to declarative debugging of Naish (2000), as applied to Prolog. This debugging scheme represents the computation as a tree; sub-trees represent the sub-computations. Each node is classified as correct, erroneous or inadmissible. The debugger searches the tree for a buggy node, which is an erroneous node with no erroneous children. If all children are correct it is called an e-bug, otherwise (it has an inadmissible child) it is called an i-bug. Every finite tree with an erroneous root contains at least one buggy node and finding such a node is the job of a declarative debugger.
To diagnose wrong answers in Prolog a proof tree (see Lloyd (1984)) is used to represent the computation. Nodes containing $t$, $f$ and $i$ atoms are correct, erroneous and inadmissible, respectively. To diagnose computations that miss answers, a different form of tree is used, and nodes containing finitely failed $t$, $f$ and $i$ atoms are erroneous, correct, and inadmissible, respectively. There are some additional complexities, such as non-ground wrong answers and computations which return some but not all correct answers; we skip the details here. Four-valued interpretations could be used in place of three-valued interpretations in this scheme. For wrong answer diagnosis, $u$ should be treated the same as $f$, and for missing answer diagnosis $u$ should be treated the same as $t$. Naish (2000) also discusses diagnosis of abnormal termination and (suspected) non-termination, but assumes only $i$ atoms should loop or terminate abnormally. With four-valued interpretations this restriction can be lifted.
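A minimal sketch of the tree search described above (our own illustration; node labels are assumed to be supplied by comparison with the intended interpretation): a buggy node is an erroneous node with no erroneous children, and it is an e-bug or an i-bug depending on its children.

```python
# Sketch of declarative debugging search over a labelled computation tree.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    atom: str
    status: str                      # 'correct', 'erroneous' or 'inadmissible'
    children: List['Node'] = field(default_factory=list)

def find_buggy(node: Node) -> Optional[Node]:
    """Given an erroneous root, return an erroneous node with no erroneous children."""
    if node.status != 'erroneous':
        return None
    for child in node.children:
        b = find_buggy(child)
        if b is not None:
            return b                 # blame an erroneous descendant first
    return node                      # erroneous, and no erroneous child: buggy

def bug_kind(node: Node) -> str:
    return 'e-bug' if all(c.status == 'correct' for c in node.children) else 'i-bug'
```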
10 Computation and information ordering
The logic programming paradigm introduced the view of computation as deduction (Kowalski 1980). Classical logic was used and hence computation was identified with the truth ordering. Similarly, there was much early work discussing the relationship between specifications (written in classical higher-order logic) and programs (Hogger 1981, Kowalski 1985). This work generally overlooked what we call inadmissibility. For example, Kowalski (1985) gives a specification for the $\text{subset}(SS,S)$ predicate, $\forall E\, [\text{member}(E,SS) \rightarrow \text{member}(E,S)]$ (sets are represented as lists and $\text{member}$ is the Prolog list membership predicate), and shows that a common Prolog implementation of $\text{subset}$ is a logical consequence. However, $\text{subset}([],42)$ is true according to the specification and if the specification is modified to restrict both arguments to be lists, the program is no longer a logical consequence (it has $\text{subset}([],\_)$ as a base case).
Four-valued logic enables us to identify computation with the information ordering rather than the truth ordering. Specifications can be identified with intended interpretations, and inadmissibility with underspecification. There can be different logic programs, with different behaviours, which are correct according to a specification — they can be seen as refinements of the specification. The behaviour of a program is given by its least $\mathfrak{P}^4$-model, and it is (partially) correct if and only if the least model is less than or equal to the specification, in the information ordering. The specification being a $\mathfrak{P}^4$-model is a sufficient condition for correctness.
The same ordering applies to successive states of a computation using a correct program. Because \( B \sqsubseteq H \) for each head instance, replacing a subgoal by the body of its definition (a basic step in a logic programming computation) gives us a new goal which is lower (or equal) in the information ordering, in the following sense. Given a top-level Prolog goal, the intended interpretation gives a truth assignment for each ground instance. Subsequent resolvents can also be given a truth assignment for each ground instance of the variables in the top level goal (with local variables considered existentially quantified). As the computation progresses, the truth value assignment for each instance often remains the same, but can become lower in the information ordering. For example, consider the goal $\text{implies}(X,f)$. Our interpretation will map $\text{implies}(f,f)$ to $t$ and $\text{implies}(t,f)$ to $f$, but may map $\text{implies}(42,f)$ to $i$ if the first argument is expected to be input. After one step of the computation we have the conjunction $\text{neg}(X,U) \land \text{or}(U,f,t)$. If our intended interpretation allows any mode for neg, the instance where $X = 42$ is then mapped to $f$.
We believe that having a complete lattice using the information ordering provides an important and fundamental insight into the nature of computation. At the top of the lattice we have an element which corresponds to underspecification in the mind of a person. At the bottom of the lattice we have an element which corresponds to the inability of a machine or formal system to compute or define a value. The transitions
between the meanings we attach to specifications and correct programs, and successive execution states of a correct program, follow the information ordering, rather than the truth ordering.
11 Conclusion
We have been aware of the limitations of formal systems since well before the invention of electronic computers. Gödel showed the impossibility of a complete proof procedure for elementary number theory, hence important gaps between truth and provability, and in any Turing-complete programming language there are programs which fail to terminate — undefinedness is unavoidable. Our awareness of the limitations of humans in their interaction with computing systems goes back even further. Babbage (1864) claims to have been asked by members of the Parliament of the United Kingdom, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out”? The term “garbage in, garbage out” was coined in the early days of electronic computing and concepts such as “preconditions” have always been important in formal verification of software — underspecification is also unavoidable in practice.
Using a special value to denote undefinedness is the accepted practice in programming language semantics. Using a special value to denote underspecification is less well established, but has been shown to provide elegant and natural reasoning about partial correctness, at least in the logic programming context. In this paper we have proposed a domain for reasoning about Prolog programs which has values to denote both undefinedness and underspecification — they are the bottom and top elements of a bilattice. This gives an elegant picture which encompasses both humans not making sense of some things and computers being unable to produce definitive results sometimes. The logical connectives Prolog uses in the body of clauses operate within the truth order in the bilattice. However, the overall view of computation operates in the orthogonal “information” order: from underspecification to undefinedness.
References
Object-oriented Neural Programming (OONP) for Document Understanding
Zhengdong Lu\textsuperscript{1}, Xianggen Liu\textsuperscript{2,3,4,*}, Haotian Cui\textsuperscript{2,3,4,*}, Yukun Yan\textsuperscript{2,3,4,*} Daqi Zheng\textsuperscript{1}
luz@deeplycurious.ai,
\{liuxg16,cht15,yanyk13\}@mails.tsinghua.edu.cn, da@deeplycurious.ai
\textsuperscript{1} DeeplyCurious.ai
\textsuperscript{2} Department of Biomedical Engineering, School of Medicine, Tsinghua University
\textsuperscript{3} Beijing Innovation Center for Future Chip, Tsinghua University
\textsuperscript{4} Laboratory for Brain and Intelligence, Tsinghua University
Abstract
We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a predesigned object-oriented data structure that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural net-based Reader sequentially goes through the document, and builds and updates an intermediate ontology during the process to summarize its partial understanding of the text. OONP supports a big variety of forms (both symbolic and differentiable) for representing the state and the document, and a rich family of operations to compose the representation. An OONP parser can be trained with supervision of different forms and strength, including supervised learning (SL), reinforcement learning (RL) and hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontology with training data of modest sizes.
1 Introduction
Mapping a document into a structured “machine readable” form is a canonical and probably the most effective way for document understanding. There have been quite a few recent efforts on designing neural net-based learning machines for this purpose, which can be roughly categorized into two groups: 1) sequence-to-sequence models with the neural net as the black box (Dong and Lapata, 2016; Liang et al., 2017), and 2) the neural net as a component in a pre-designed statistical model (Zeng et al., 2014). We however argue that both approaches have their own serious problems and cannot be used on documents with relatively complicated structure. Towards solving this problem, we proposed Object-oriented Neural Programming (OONP), a framework for semantically parsing in-domain documents. OONP is neural net-based, but it also has a sophisticated architecture and mechanisms designed for taking and outputting discrete structures, hence nicely combining symbolism (for interpretability and formal reasoning) and connectionism (for flexibility and learnability). This ability, as we argue in this paper, is critical to document understanding.
* The work was done when these authors worked as interns at DeeplyCurious.ai.
OONP seeks to map a document to a graph structure with each node being an object, as illustrated in Figure 1. We borrow the name from Object-oriented Programming (Mitchell, 2003) to emphasize the central position of “objects” in our parsing model: indeed, the representation of objects in OONP allows neural and symbolic reasoning over complex structures and hence makes it possible to represent much richer semantics. Similar to Object-oriented Programming, OONP has the concept of “class” and “objects” with the following analogy: 1) each class defines the types and organization of information it contains, and we can define inheritance for classes with different abstraction levels as needed; 2) each object is an instance of a certain class, encapsulating a number of properties and operations; 3) objects can be connected with relations (called links) with pre-determined types. Based on objects, we can define ontology and operations that reflect the intrinsic structure of the parsing task.
For parsing, OONP reads a document and parses it into this object-oriented data structure through a series of discrete actions along reading the document sequentially. OONP supports a rich family of operations for composing the ontology, and flexible hybrid forms for knowledge representation. An OONP parser can be trained with supervised learning (SL), reinforcement learning (RL) and hybrid of the two. Our experiments on one synthetic dataset and two real-world datasets have shown the efficacy of OONP on document understanding tasks with a variety of characteristics.
Figure 1: Illustration of OONP on a parsing task.
2 Related Works
2.1 Semantic Parsing
Semantic parsing is concerned with translating language utterances into executable logical forms and plays a key role in building conversational interfaces (Berant and Liang, 2014). Different from common semantic parsing tasks, such as parsing a sentence into a dependency structure (Buys and Blunsom, 2017) or into executable commands (Herzig and Berant, 2017), OONP parses documents into a predesigned object-oriented data structure which is easily readable for both human and machine. It is related to the semantic web (Berners-Lee et al., 2001) as well as frame semantics (Fillmore, 1982) in the way semantics is represented, so in a sense, OONP can be viewed as a neural-symbolic implementation of semantic parsing with similar semantic representation.
2.2 State Tracking
OONP is inspired by Daumé III et al. (2009) on modeling parsing as a decision process, and by the work on state-tracking models in dialogue systems (Henderson et al., 2014) for the mixture of symbolic and probabilistic representations of dialogue state. For modeling a document with entities, Yang et al. (2017) use coreference links to recover entity clusters, though they only model entity mentions containing a single word, so entities whose names consist of multiple words are not considered. Entity Networks (Henaff et al., 2016) and EntityNLM (Ji et al., 2017) have addressed the above problem and are pioneering work on tracking entities, but they have not considered the properties of the entities. In fact, explicitly modeling the entities both with their properties and contents is important to understand a document, especially a complex document. For example, if there are two persons named ‘Avery’, it is vital to know their genders or last names to avoid confusion. Therefore, we propose OONP to sketch objects and their relationships by building a structured graph for document parsing.
3 Overview of OONP
An OONP parser (as illustrated through the diagram in Figure 2) consists of a Reader equipped with read/write heads, Inline Memory that represents the document, and Carry-on Memory that summarizes the current understanding of the document at each time step. For each document to parse, OONP first preprocesses it and puts it into the Inline Memory, and then Reader controls the read-heads to sequentially go through the Inline Memory (for possibly multiple times, see Section 8.3 for an example) and at the same time update the Carry-on Memory.
Figure 2: The overall diagram of OONP, where S stands for symbolic representation, D stands for distributed representation, and S+D stands for a hybrid representation with both symbolic and distributed parts.
The major components of OONP are described in the following:
- **Memory**: we have two types of memory, Carry-on Memory and Inline Memory. Carry-on Memory is designed to save the state in the decision process and summarize current understanding of the document based on the text that has been ‘read’. Carry-on Memory has three compartments:
(Strictly speaking, this is not entirely accurate: since the Inline Memory can be modified during the reading process, it also records some of the state information.)
– **Object Memory**: denoted as $M_{\text{obj}}$, the object-based ontology constructed during the parsing process, see Section 4.1 for details;
– **Matrix Memory**: denoted as $M_{\text{mat}}$, a matrix-type memory with fixed size, for differentiable read/write by the controlling neural net (Graves et al., 2014). In the simplest case, it could be just a vector as the hidden state of conventional Recurrent Neural Network (RNN);
– **Action History**: denoted as $M_{\text{act}}$, saving the entire history of actions made during the parsing process.
Intuitively, $M_{\text{obj}}$ stores the extracted knowledge with defined structure and strong evidence, while $M_{\text{mat}}$ keeps the knowledge that is fuzzy, uncertain or incomplete, waiting for future information to confirm, complete and clarify. **Inline Memory**, denoted $M_{\text{inl}}$, is designed to save location-specific information about the document. In a sense, the information in Inline Memory is low level and unstructured, waiting for Reader to fuse and integrate for more structured representation.
**• Reader**: Reader is the control center of OONP, coordinating and managing all the operations of OONP. More specifically, it takes the input of different forms (reading), processes it (thinking), and updates the memory (writing). As shown in Figure 3, Reader contains **Neural Net Controller** (NNC) and multiple symbolic processors, and Neural Net Controller also has **Policy-net** as its sub-component. Similar to the controller in Neural Turing Machine (Graves et al., 2014), Neural Net Controller is equipped with multiple read-heads and write-heads for differentiable read/write over Matrix Memory and (the distributed part of) Inline Memory, with possibly a variety of addressing strategies (Graves et al., 2014). Policy-net however issues discrete outputs (i.e., actions), which gradually build and update the **Object Memory** in time (see Section 4.1 for more details). The actions can also update the symbolic part of Inline Memory if needed. The symbolic processors are designed to handle information in symbolic form from Object Memory, Inline Memory, Action History, and Policy-net, while that from Inline Memory and Action History is eventually generated by Policy-net.

**Figure 3**: The overall diagram of OONP
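To keep the pieces straight, here is a structural sketch of the Carry-on Memory compartments just described; the class and field names, sizes and types are our own illustrative assumptions, not the authors' code.

```python
# The three compartments of Carry-on Memory, sketched as a plain data container.
from dataclasses import dataclass, field
from typing import Any, Dict, List
import numpy as np

@dataclass
class CarryOnMemory:
    object_memory: Dict[str, Any] = field(default_factory=dict)   # M_obj: the ontology built so far
    matrix_memory: np.ndarray = field(default_factory=lambda: np.zeros((8, 64)))  # M_mat: differentiable state
    action_history: List[Any] = field(default_factory=list)       # M_act: all actions taken so far
```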
We can show how the major components of OONP collaborate to make it work through the following sketchy example. In reading the following text:
“Tom stole a white Audi A6 and a black BMW 3. He tried to sell John both cars, but he only took the BMW for 5k.”
OONP has reached the underlined word “BMW” in Inline Memory. At this moment, OONP has two objects (I01 and I02) for the Audi A6 and the BMW respectively in Object Memory. Reader
determines that the information it is currently holding is about I02 (after comparing it with both objects) and updates its status property to sold, along with other updates on both Matrix Memory and Action History.
**OONP in a nutshell:** The key properties of OONP can be summarized as follows
1. OONP models parsing as a decision process: as the “reading and comprehension” agent goes through the text it gradually forms the ontology as the representation of the text through its action;
2. OONP uses a symbolic memory with graph structure as part of the state of the parsing process. This memory will be created and updated through the sequential actions of the decision process, and will be used as the semantic representation of the text at the end;
3. OONP can blend supervised learning (SL) and reinforcement learning (RL) in tuning its parameters to suit the supervision signal in different forms and strength;
4. OONP allows different ways to add symbolic knowledge into the raw representation of the text (Inline Memory) and its policy net in forming the final structured representation of the text.
4 OONP: Components
In this section we will discuss the major components in OONP, namely Object Memory, Inline Memory and Reader. We omit the discussion on Matrix Memory and Action History since they are straightforward given the description in Section 3.
4.1 Object Memory
Object Memory stores an object-oriented representation of the document, as illustrated in Figure 4. Each object is an instance of a particular class, which specifies the internal structure of the object, including internal properties, operations, and how this object can be connected with others. The internal properties can be of different types, for example string or category, which usually correspond to different actions in composing them: a string-type property is usually “copied” from the original text in Inline Memory, while category properties usually need to be rendered by a classifier. The links are by nature bi-directional, meaning that they can be added from either end (e.g., in the experiment in Section 8.1), but for modeling convenience, we might choose to make them one-directional (e.g., in the experiments in Section 8.2 and 8.3). In Figure 4 there are six “linked” objects of three classes (namely, Person, Event, and Item). Taking Item-object I02 for example, it has five internal properties (Type, Model, Color, Value, Status), and is linked with two Event-objects through stolen and disposed links respectively.
In addition to the symbolic part, each object also has its own distributed representation (named object-embedding), which serves as its interface with other distributed representations in Reader (e.g., those from the Matrix Memory or the distributed part of Inline Memory).
---
1In this paper, we limit ourselves to a flat structure of classes, but it is possible and even beneficial to have a hierarchy of classes. In other words, we can have classes with different levels of abstractness, and allow an object to go from an abstract class to its child class during the parsing process, as more and more information is obtained.
For description simplicity, we will refer to the symbolic part of this hybrid representation of objects as **ontology**, with some slight abuse of this word. Object-embedding serves as a dual representation to the symbolic part of an object, recording all the relevant information associated with it but not represented in the ontology, e.g., the context of text when the object is created.
The representations in **Object Memory**, including the ontology and object embeddings, will be updated in time by the operations defined for the corresponding classes. Usually, the actions are the driving force in those operations, which not only initiate and grow the ontology, but also coordinate other differentiable operations. For example, object-embedding associated with a certain object changes with any non-trivial action concerning this object, e.g., any update on the internal properties or the external links, or even a mention (corresponding to an **Assign** action described in Section 5) without any update.
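The sketch below (assumed names, not the authors' implementation) shows what one entry of Object Memory might look like for the Item-object I02 from the example: symbolic properties and links plus a distributed object-embedding.

```python
# One Object Memory entry in the spirit of Figure 4; all names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class OONPObject:
    obj_id: str                                   # e.g. "I02"
    cls: str                                      # e.g. "Item"
    properties: Dict[str, str] = field(default_factory=dict)     # internal properties
    links: List[Tuple[str, str]] = field(default_factory=list)   # (link type, target object id)
    embedding: np.ndarray = field(default_factory=lambda: np.zeros(64))  # object-embedding

i02 = OONPObject("I02", "Item",
                 properties={"Type": "car", "Model": "BMW", "Status": "sold"},
                 links=[("stolen", "E01"), ("disposed", "E02")])
```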
According to the way the ontology evolves with time, the parsing task can be roughly classified into two categories
- **Stationary**: there is a final ground truth that does not change with time. So with any partial history of the text, the corresponding ontology is always part of the final one, while the missing part is due to the lack of information. See task in Section 8.2 and 8.3 for example.
- **Dynamical**: the truth changes with time, so the ontology corresponding to partial history of text may be different from that of the final state. See task in Section 8.1 for example.
It is important to notice that this categorization depends not only on the text but also heavily on the definition of ontology. Taking the text in Figure 1 for example: if we define ownership relation between a PERSON-object and ITEM-object, the ontology becomes dynamical, since ownership of the BMW changed from Tom to John.
4.2 Inline Memory
Inline Memory stores the relatively raw representation of the document that follows the temporal structure of the text, as illustrated through Figure 2. Basically, Inline Memory is an array of memory cells, each corresponding to a pre-defined language unit (e.g., word) in the same order as they are in the original text. Each cell can have distributed part and symbolic part, designed to save 1) the result of preprocessing of text from different models, and 2) certain output from Reader, for example from previous reading rounds. Following are a few examples for preprocessing:
- **Word embedding:** context-independent vectorial representation of words
- **Hidden states of NNs:** we can put the context in local representation of words through gated RNN like LSTM (Greff et al. 2015) or GRU (Cho et al. 2014), or particular design of convolutional neural nets (CNN) (Yu and Koltun 2015).
- **Symbolic preprocessing:** this refers to a big family of methods that yield symbolic results, including various sequential labeling models and rule-based methods. As a result we may have tags on words, extracted sub-sequences, or even relations between two pieces of text.
During the parsing process, Reader can write to Inline Memory with its discrete or continuous outputs, a process we named “notes-taking”. When the output is continuous, the notes-taking process is similar to the interactive attention in machine translation (Meng et al., 2016), which comes from an NTM-style write-head (Graves et al., 2014) on Neural Net Controller. When the output is discrete, the notes-taking is essentially an action issued by Policy-net.
Inline Memory provides a way to represent locally encoded “low level” knowledge of the text, which will be read, evaluated and combined with the global semantic representation in Carry-on Memory by Reader. One particular advantage of this setting is that it allows us to incorporate the local decisions of some other models, including “higher order” ones like local relations across two language units, as illustrated in the left panel of Figure 5. We can also have a rather “nonlinear” representation of the document in Inline Memory. As a particular example (Yan et al., 2017), at each location, we can have the representation of the current word, the representation of the rest of the sentence, and the representation of the rest of the current paragraph, which enables Reader to see information of history and future at different scales, as illustrated in the right panel of Figure 5.
Figure 5: Left panel: Inline Memory with symbolic knowledge; Right panel: one choice of nonlinear representation of the distributed part of Inline Memory used in (Yan et al., 2017).
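A small sketch (our own illustration, with assumed names) of Inline Memory as an array of cells with a distributed and a symbolic part, and of discrete notes-taking as writing a symbolic note into a cell:

```python
# Inline Memory cells and a discrete "notes-taking" write; all names assumed.
from dataclasses import dataclass, field
from typing import Any, Dict, List
import numpy as np

@dataclass
class InlineCell:
    token: str                       # the language unit (e.g. a word)
    distributed: np.ndarray          # e.g. word embedding or contextual hidden state
    symbolic: Dict[str, Any] = field(default_factory=dict)   # tags from preprocessing, notes, ...

def take_note(memory: List[InlineCell], position: int, key: str, value: Any) -> None:
    """Discrete notes-taking: record a symbolic note at a given cell."""
    memory[position].symbolic[key] = value

cells = [InlineCell(w, np.random.randn(8)) for w in "Tom stole a white Audi A6".split()]
take_note(cells, 4, "entity_tag", "CAR")   # e.g. mark "Audi" as part of a car mention
```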
4.3 Reader
Reader is the control center of OONP, which manages all the (continuous and discrete) operations in the OONP parsing process. Reader has three symbolic processors (namely, Symbolic Matching, Symbolic Reasoner, Symbolic Analyzer) and a Neural Net Controller (with Policy-net as the sub-component). All the components in Reader are coupled through intensive exchange of information as shown in Figure 6. Below is a snapshot of the information processing at time $t$ in Reader.
- **STEP-1**: let the processor Symbolic Analyzer to check the Action History ($M_{act}^t$) to construct some symbolic features for the trajectory of actions;
- **STEP-2**: access Matrix Memory ($M_{mat}^t$) to get a vectorial representation for time $t$, denoted as $s_t$;
- **STEP-3**: access Inline Memory ($M_{inl}^t$) to get the symbolic representation $x_i^{(s)}$ (through location-based addressing) and distributed representation $x_i^{(d)}$ (through location-based addressing and/or content-based addressing);
- **STEP-4**: feed $x_i^{(d)}$ and the embedding of $x_i^{(s)}$ to Neural Net Controller to fuse with $s_t$;
- **STEP-5**: get the candidate objects (some may have been eliminated by $x_i^{(s)}$) and let them meet $x_i^{(d)}$ through the processor Symbolic Matching for the matching of them on symbolic aspect;
- **STEP-6**: get the candidate objects (some may have been eliminated by $x_i^{(s)}$) and let them meet the result of STEP-4 in Neural Net Controller;
- **STEP-7**: Policy-net combines the result of STEP-6 and STEP-5, to issue actions;
- **STEP-8**: update $M_{obj}^t$, $M_{mat}^t$ and $M_{inl}^t$ with actions on both symbolic and distributed representations;
- **STEP-9**: put $M_{obj}^t$ through the processor Symbolic Reasoner for some high-level reasoning and logic consistency.
Note that we consider only single action for simplicity, while in practice it is common to have multiple actions at one time step, which requires a slightly more complicated design of the policy as well as the processing pipeline.
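Putting STEP-1 to STEP-9 together, one pass of parsing can be sketched as the loop below. The Reader interface used here (analyze_history, fuse, candidate_objects, policy, apply, reason) is an assumption made to keep the sketch short, not the authors' API.

```python
# Highly simplified OONP parsing loop mirroring STEP-1..9 for a single pass.
def parse(document_cells, reader, memory):
    for t, cell in enumerate(document_cells):
        history_feats = reader.analyze_history(memory.action_history)      # STEP-1: Symbolic Analyzer
        state = reader.fuse(memory.matrix_memory, cell, history_feats)     # STEP-2,3,4: read and fuse
        candidates = reader.candidate_objects(memory.object_memory, cell)  # STEP-5,6: Symbolic Matching
        action = reader.policy(state, candidates)                          # STEP-7: Policy-net
        memory = reader.apply(action, memory, cell, t)                     # STEP-8: update M_obj, M_mat, M_inl
        memory = reader.reason(memory)                                     # STEP-9: Symbolic Reasoner
    return memory.object_memory
```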
5 OONP: Actions
The actions issued by Policy-net can be generally categorized as the following
- **New-Assign**: determine whether to create a new object (a “New” operation) for the information at hand or assign it to a certain existing object;
- **Update.X**: determine which internal property or external link of the selected object to update;
- **Update2what**: determine the content of the updating, which could be about string, category or links.
The typical order of actions is New-Assign $\rightarrow$ Update.X $\rightarrow$ Update2what, but it is very common to have a New-Assign action followed by nothing, when, for example, an object is mentioned but no substantial information is provided.
Figure 6: A particular implementation of Reader in a closer look, which reveals some details about the entanglement of neural and symbolic components. Dashed lines stand for continuous signal and the solid lines for discrete signal.
5.1 New-Assign
With any information at hand (denoted as $S_t$) at time $t$, the choices of New-Assign typically include the following three categories of actions: 1) creating (New) an object of a certain type, 2) assigning $S_t$ to an existing object, and 3) doing nothing for $S_t$ and moving on. For Policy-net, the stochastic policy is to determine the following probabilities:
$$
\begin{align*}
&\text{prob}(c, \text{new}|S_t), \quad c = 1, 2, \cdots, |C| \\
&\text{prob}(c, k|S_t), \quad \text{for } O_t^{c,k} \in M_t^{obj} \\
&\text{prob}(\text{none}|S_t)
\end{align*}
$$
where $|C|$ stands for the number of classes, and $O_t^{c,k}$ stands for the $k^{th}$ object of class $c$ at time $t$. Determining whether to New an object always relies on the following two signals:
1. The information at hand cannot be contained by any existed objects;
2. Linguistic hints that suggests whether a new object is introduced.
Based on those intuitions, we take a score-based approach to determine the above-mentioned probabilities. More specifically, for a given $S_t$, Reader forms a “temporary” object with its own structure (denoted $\hat{O}_t$), including symbolic and distributed sections. In addition, we also have a virtual object for the New action for each class $c$, denoted $O_t^{c,\text{new}}$, which is typically a time-dependent vector formed by Reader based on information in $M_t^{\text{mat}}$. For a given $\hat{O}_t$, we
can then define the following $|C| + |M_{obj}| + 1$ types of score functions, namely
- New an object of class $c$: $\text{score}^{(c)}_{\text{new}}(O_t^{c,\text{new}}, \hat{O}_t; \theta^{(c)}_{\text{new}})$, $c = 1, 2, \cdots, |C|$
- Assign to existing objects: $\text{score}^{(c)}_{\text{assign}}(O_t^{c,k}, \hat{O}_t; \theta^{(c)}_{\text{assign}})$, for $O_t^{c,k} \in M_t^{obj}$
- Do nothing: $\text{score}^{\text{none}}(\hat{O}_t; \theta^{\text{none}})$
to measure the level of matching between the information at hand and existed objects, as well as the likeliness for creating an object or doing nothing. This process is pictorially illustrated in Figure 7. We therefore can define the following probability for the stochastic policy
$$
\text{prob}(c, \text{new}|S_t) = \frac{e^{\text{score}^{(c)}_{\text{new}}(O_t^{c,\text{new}}, \hat{O}_t; \theta^{(c)}_{\text{new}})}}{Z(t)}
$$
$$
\text{prob}(c, k|S_t) = \frac{e^{\text{score}^{(c)}_{\text{assign}}(O_t^{c,k}, \hat{O}_t; \theta^{(c)}_{\text{assign}})}}{Z(t)}
$$
$$
\text{prob}(\text{none}|S_t) = \frac{e^{\text{score}^{\text{none}}(\hat{O}_t; \theta^{\text{none}})}}{Z(t)}
$$
where $Z(t) = \sum_{c' \in C} e^{\text{score}^{(c')}_{\text{new}}(O_t^{c',\text{new}}, \hat{O}_t; \theta^{(c')}_{\text{new}})} + \sum_{(c'', k') \in \text{idx}(M_t^{obj})} e^{\text{score}^{(c'')}_{\text{assign}}(O_t^{c'',k'}, \hat{O}_t; \theta^{(c'')}_{\text{assign}})} + e^{\text{score}^{\text{none}}(\hat{O}_t; \theta^{\text{none}})}$ is the normalizing factor.
Figure 7: A pictorial illustration of what the Reader sees in determining whether to New an object and the relevant object when the read-head on Inline Memory reaches the last word in the sentence in Figure 2. The color of the arrow line stands for different matching functions for object classes, where the dashed lines is for the new object.
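Numerically, the New-Assign distribution above is just a softmax over the three groups of scores. The sketch below uses dot products as stand-in score functions (an assumption; the real score functions are learned):

```python
# Softmax over "New object of class c", "assign to existing object k", "none".
import numpy as np

def new_assign_probs(o_hat, new_vectors, object_vectors, none_score=0.0):
    """o_hat: features of the temporary object; the vectors play the role of
    O_t^{c,new} and O_t^{c,k}; dot products stand in for the score functions."""
    scores = np.array([v @ o_hat for v in new_vectors] +       # score_new, one per class
                      [v @ o_hat for v in object_vectors] +    # score_assign, one per object
                      [none_score])                            # score_none
    z = np.exp(scores - scores.max())
    return z / z.sum()          # ordering: classes..., existing objects..., none

probs = new_assign_probs(np.ones(4), [np.ones(4), np.zeros(4)], [np.full(4, 0.5)])
assert abs(probs.sum() - 1.0) < 1e-9
```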
Many actions are essentially trivial on the symbolic part, for example, when Policy-net chooses none in New-Assign, or assigns the information at hand to an existing object but chooses to update nothing in Update.X; such an action will still affect the distributed operations in Reader, which in turn affect the representation in Matrix Memory or the object-embedding in Object Memory.
5.2 Updating objects: Update.X and Update2what
In the Update.X step, Policy-net needs to choose the property or external link (or none) to update for the selected object determined by the New-Assign step. If Update.X chooses to update an external link, Policy-net needs to further determine which object it links to. After that Update2what updates the chosen property or links. In tasks with static ontology, most internal properties and links will be “locked” after they are updated for the first time, with some exceptions for a few semi-structured properties (e.g., the Description property in the experiment in Section 8.2). For dynamical ontology, on the contrary, many important properties and links are always subject to changes. A link can often be determined from both ends, e.g., the link that states the fact that “Tina (a PERSON-object) carries apple (an ITEM-object)” can be either specified from Tina (through adding the link “carry” to apple) or from apple (through adding the link “iscarriedby” to Tina), as in the experiment in Section 8.1. In practice, it is often more convenient to make it asymmetrical to reduce the size of the action space.
In practice, for a particular type of ontology, both Update.X and Update2what can often be greatly simplified: for example,
- when the selected object (in the New-Assign step) has only one property “unlocked”, the Update.X step will be trivial;
- in $S_t$, there is often information from Inline Memory that tells us the basic type of the current information, which can often automatically decide the property or link.
5.3 An example
In Figure 8, we give an example of the entire episode of OONP parsing on the short text given in the example in Figure 1. Note that, differently from the treatment of actions described above, we let some selection actions (e.g., the Assign) be absorbed into the updating actions to simplify the illustration.
6 OONP: Neural-Symbolism
OONP offers a way to parse a document that imitates the cognitive process of humans when reading and comprehending a document: OONP maintains a partial understanding of the document as a mixture of symbolic representations (clearly inferred structural knowledge) and distributed representations (knowledge without complete structure or with great uncertainty). As shown in Figure 4, Reader takes and issues both symbolic and continuous signals, which are entangled through the Neural Net Controller.
OONP has plenty space for symbolic processing: in the implementation in Figure 6, it is carried out by the three symbolic processors. For each of the symbolic processors, the input symbolic representation could be rendered partially by neural models, therefore providing an intriguing way to entangle neural and symbolic components. Here are three examples we implemented for two different tasks
1. Symbolic analysis in Action History: There are many symbolic summaries of the history that can be extracted or constructed from the sequence of actions, e.g., “The system just New an object with PERSON-class five words ago” or “The system just put a paragraph starting with ‘(2)’ into event-3”. In the implementation of Reader shown in Figure 6, this
analysis is carried out with the component called Symbolic Analyzer. Based on those more structured representation of history, Reader might be able to make an informed guess like “If the coming paragraph starts with ‘(3)’, we might want to put it to event-2” based on symbolic reasoning. This kind of guess can be directly translated into feature to assist Reader’s decisions, resembling what we do with high-order features in CRF (Lafferty et al., 2001), but the sequential decision makes it possible to construct a much richer class of features from symbolic reasoning, including those with recursive structure. One example of this can be found in (Yan et al., 2017), as a special case of OONP on event identification.
2. Symbolic reasoning on Object Memory: we can use an extra Symbolic Reasoner to take care of the high-order logic reasoning after each update of the Object Memory
caused by the actions. This can be illustrated through the following example. Tina (a Person-object) carries an apple (an Item-object), and Tina moves from kitchen (a Location-object) to garden (Location-object) at time $t$. Supposing we have both Tina-carry-apple and Tina-islocatedat-kitchen relation kept in Object Memory at time $t$, and OONP updates the Tina-islocatedat-kitchen to Tina-islocatedat-garden at time $t+1$, the Symbolic Reasoner can help to update the relation apple-islocatedat-kitchen to apple-islocatedat-garden. This is feasible since the Object Memory is supposed to be logically consistent. This external logic-based update is often necessary since it is hard to let the Neural Net Controller see the entire Object Memory due to the difficulty to find a distributed representation of the dynamic structure there. Please see Section 8.1 for experiments.
3. Symbolic prior in New-Assign: When Reader determines an New-Assign action, it needs to match the information about the information at hand ($S_t$) and existed objects. There is a rich set of symbolic prior that can be added to this matching process in Symbolic Matching component. For example, if $S_t$ contains a string labeled as entity name (in preprocessing), we can use some simple rules (part of the Symbolic Matching component) to determine whether it is compatible with an object with the internal property Name.
7 Learning
The parameters of OONP models (denoted $\Theta$) include those for all operations and those for composing the distributed sections in Inline Memory. They can be trained with different learning paradigms: OONP takes both supervised learning (SL) and reinforcement learning (RL) while allowing different ways to mix the two. Basically, with supervised learning, the oracle gives the ground truth about the “right action” at each time step during the entire decision process, with which the parameters can be tuned to maximize the likelihood of the truth. In a sense, SL represents rather strong supervision which is related to imitation learning (Schaal, 1999) and often requires the labeler (expert) to give not only the final truth but also when and where a decision is made. For supervised learning, the objective function is given as
$$J_{SL}(\Theta) = \frac{-1}{N} \sum_{i} \sum_{t=1}^{T_i} \log(\pi_t^{(i)}[a_t^*])$$
(1)
where $N$ stands for the number of instances, $T_i$ stands for the number of steps in decision process for the $i^{th}$ instance, $\pi_t^{(i)}[\cdot]$ stands for the probabilities of the feasible actions at $t$ from the stochastic policy, and $a_t^*$ stands for the ground truth action in step $t$.
With reinforcement learning, the supervision is given as rewards during the decision process, for which an extreme case is to give the final reward at the end of the decision process by comparing the generated ontology and the ground truth, e.g.,
$$r^{(i)}_t = \begin{cases}
0, & \text{if } t \neq T_i \\
\text{match}(M_{obj}^{T_i}, G^*), & \text{if } t = T_i
\end{cases}$$
(2)
where match($M_{obj}^{T_i}, G^*$) measures the consistency between the ontology in $M_{obj}^{T_i}$ and the ground truth $G^*$. We can use any policy search algorithm to maximize the expected total reward. With the commonly used REINFORCE (Williams, 1992) for training, the gradient is given by
\[ \nabla_{\Theta} J_{RL}(\Theta) = -\mathbb{E}_{\pi_{\Theta}} \left[ \nabla_{\Theta} \log \pi_{\Theta}(a_t^{(i)}|s_t^{(i)})\, r_{t:T}^{(i)} \right] \approx -\frac{1}{NT} \sum_{i=1}^{N} \sum_{t=1}^{T} \nabla_{\Theta} \log \pi_{\Theta}(a_t^{(i)}|s_t^{(i)})\, r_{t:T}^{(i)}. \]
When OONP is applied to real-world tasks, there are often quite natural sources of both SL and RL supervision. More specifically, for “static ontology” one can often infer some of the right actions at certain time steps by observing the final ontology, based on some basic assumptions, e.g.,
- the system should New an object the first time it is mentioned,
- the system should put an extracted string (say, that for Name) into the right property of the right object at the end of the string.
For those decisions that cannot be fully reverse-engineered, say the categorical properties of an object (e.g., Type for event objects), we have to resort to RL\(^\dagger\) to determine the time of the decision, while we also need SL to train Policy-net on the content of the decision. Fortunately it is quite straightforward to combine the two learning paradigms in optimization. More specifically, we maximize this combined objective
\[ J(\Theta) = J_{SL}(\Theta) + \lambda J_{RL}(\Theta), \]
where \( J_{SL} \) and \( J_{RL} \) are over the parameters within their own supervision modes, and \( \lambda \) coordinates the weight of the two learning modes on the parameters they share (a minimal sketch of this combination is given below). This combined objective actually indicates a deep coupling of supervised learning and reinforcement learning, since for any episode the sampled actions related to RL might affect the inputs to the models under supervised learning.
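As an illustration, here is a minimal sketch, assuming PyTorch, of how the two objectives can be combined into one loss. The batching, the absence of a reward baseline, and the use of means in place of the exact normalisations are simplifications for illustration, not details taken from the paper.

```python
import torch

def combined_loss(sl_logps, rl_logps, rl_returns, lam=1.0):
    """sl_logps:   log pi_t[a_t*] for the steps that have step-level labels (Eq. 1)
    rl_logps:   log pi_t[a_t] for the sampled steps trained with REINFORCE
    rl_returns: the return r_{t:T} paired with each sampled step (floats)"""
    j_sl = -torch.stack(sl_logps).mean() if sl_logps else torch.zeros(())
    if rl_logps:
        returns = torch.tensor(rl_returns)
        # REINFORCE surrogate: its gradient matches the policy-gradient expression above
        j_rl = -(torch.stack(rl_logps) * returns).mean()
    else:
        j_rl = torch.zeros(())
    return j_sl + lam * j_rl   # J = J_SL + lambda * J_RL
```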
For a dynamical ontology (see Section 8.1 for an example), it is impossible to derive most of the decisions from the final ontology, since they may change over time. For those, we have to rely mostly on supervision at the time step to train the action (supervised mode), or count on the model to learn the dynamics of the ontology evolution by fitting the final ground truth. Both scenarios are discussed in Section 8.1 on a synthetic task.

\(^\dagger\) A more detailed exposition of this idea can be found in (Liu et al., 2018), where RL is used for training a multi-label classifier of text.
8 Experiments
We applied OONP on three document parsing tasks, to verify its efficacy on parsing documents with different characteristics and investigate different components of OONP.
8.1 Task-I: bAbI Task
8.1.1 Data and task
We implemented OONP on an enriched version of the bAbI tasks (Johnson, 2017) with intermediate representations for histories of arbitrary length. In this experiment, we considered only the original bAbI task-2 (Weston et al., 2015), with an instance shown in the left panel of Figure 9. The ontology has three types of objects: PERSON-object, ITEM-object, and LOCATION-object, and three types of links:
1. is-located-at\(_A\): between a PERSON-object and a LOCATION-object;
2. is-located-at\(_B\): between an ITEM-object and a LOCATION-object;
3. **carry**: between a **PERSON-object** and an **ITEM-object**;

which could be rendered by descriptions in different ways. All three types of objects have **Name** as the only internal property.
[Figure 9: One instance of bAbI (6-sentence episode) and the ontology of two snapshots.]
The task for **OONP** is to read an episode of the story and recover the trajectory of the evolving ontology. We choose this synthetic dataset because it has a dynamical ontology that evolves with time, and ground truth is given for each snapshot, as illustrated in Figure 9. Compared with the real-world tasks we will present later, bAbI has almost trivial internal properties but relatively rich opportunities for links, considering that any two objects of different types could potentially have a link.
### 8.1.2 Implementation details
For preprocessing, we have a trivial NER to find the names of people, items and locations (saved in the symbolic part of **Inline Memory**) and a word-level bi-directional GRU for the distributed representations of **Inline Memory**. In the parsing process, **Reader** goes through the Inline Memory word by word in the temporal order of the original text, makes a **New-Assign** action at every word, and leaves **Update.X** and **Update2what** actions to the time steps when the read-head on **Inline Memory** reaches a punctuation mark (see more details of the actions in Table 1). For this simple task, we use an almost fully neural **Reader** (with MLPs for Policy-net) and a vector for **Matrix Memory**, with however a **Symbolic Reasoner** for some logic reasoning after each update of the links, as illustrated through the following example. Suppose at time $t$ the ontology in $M^t_{obj}$ contains the following three facts (among others):
- **fact-1**: John (a **PERSON-object**) is in kitchen (a **LOCATION-object**);
- **fact-2**: John carries apple (an **ITEM-object**);
- **fact-3**: John drops apple;
where fact-3 is just established by Policy-net at $t$. **Symbolic Reasoner** will add a new **is-located-at** link between apple and kitchen based on domain logic.
---
§ The logic says, an item is not “in” a location if it is held by a person.
### Table 1: Actions for bAbI.
<table>
<thead>
<tr>
<th>Action</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>NewObject(c)</td>
<td>New an object of class-(c).</td>
</tr>
<tr>
<td>AssignObject(c,k)</td>
<td>Assign the current information to existing object (c,k).</td>
</tr>
<tr>
<td>Update(c,k).AddLink(c',k',ℓ)</td>
<td>Add a link of type-ℓ from object-(c,k) to object-(c',k').</td>
</tr>
<tr>
<td>Update(c,k).DelLink(c',k',ℓ)</td>
<td>Delete the link of type-ℓ from object-(c,k) to object-(c',k').</td>
</tr>
</tbody>
</table>
#### 8.1.3 Results and Analysis
For training, we use 1,000 episodes with lengths evenly distributed from one to six. We use just REINFORCE with only the final reward, defined as the overlap between the generated ontology and the ground truth, while step-by-step supervision on actions yields almost perfect results (omitted). For evaluation, we use the following two metrics:
- the Rand index (Rand, 1971) between the generated set of objects and the ground truth, which counts both the duplicate objects and missing ones, averaged over all snapshots of all test instances;
- the F1 (Rijsbergen, 1979) between the generated links and the ground truth, averaged over all snapshots of all test instances, since the links are typically sparse compared with all the possible pairwise relations between objects (a small sketch of both metrics is given below).
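A minimal Python sketch of the two metrics, assuming mentions are labeled with object ids and links are stored as (object, relation, object) triples; both functions are applied per snapshot and then averaged. The exact bookkeeping of duplicate and missing objects in the paper may differ.

```python
from itertools import combinations

def rand_index(pred_labels, gold_labels):
    """Rand index between two assignments of the same mentions to objects.
    pred_labels / gold_labels: dicts mapping mention-id -> object-id."""
    mentions = list(gold_labels)
    pairs = list(combinations(mentions, 2))
    agree = 0
    for m1, m2 in pairs:
        same_pred = pred_labels.get(m1) == pred_labels.get(m2)
        same_gold = gold_labels[m1] == gold_labels[m2]
        agree += (same_pred == same_gold)
    return agree / len(pairs) if pairs else 1.0

def link_f1(pred_links, gold_links):
    """F1 between two sets of (object, relation, object) link triples."""
    pred, gold = set(pred_links), set(gold_links)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```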
Results are summarized in Table 2. OONP can learn fairly well to recover the evolving ontology with such a small training set and weak supervision (RL with only the final reward), which clearly shows that credit assignment to earlier snapshots does not cause much difficulty in the learning of OONP, even with a generic policy search algorithm. It is not so surprising to observe that the Symbolic Reasoner helps to improve the results on discovering the links, while it does not improve the performance on identifying the objects, although it is included during learning. It is quite interesting to observe that OONP achieves rather high accuracy on discovering the links while it performs relatively poorly on specifying the objects, probably because the reward does not directly penalize mistakes on the objects.
<table>
<thead>
<tr>
<th>model</th>
<th>F1 (for links) (%)</th>
<th>RandIndex (for objects) (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>OONP (without S.R.)</td>
<td>94.80</td>
<td>87.48</td>
</tr>
<tr>
<td>OONP (with S.R.)</td>
<td>95.30</td>
<td>87.48</td>
</tr>
</tbody>
</table>
Table 2: The performance of an implementation of OONP on bAbI task-2.
#### 8.2 Task-II: Parsing Police Report
##### 8.2.1 Data & task
We implement OONP for parsing Chinese police reports (brief descriptions of criminal cases written by police officers), as illustrated in the left panel of Figure 10. We consider a corpus of 5,500 cases with a variety of crime categories, including theft, robbery, drug dealing and others. The ontology we designed for this task mainly consists of a number of PERSON-objects and ITEM-objects connected through an EVENT-object with several types of relations, as illustrated in the right panel of Figure 10. A PERSON-object has three internal properties: Name (string), Gender (categorical) and Age (number), and two types of external links (suspect and victim) to an EVENT-object. An ITEM-object has three internal properties: Name (string), Quantity (string) and Value (string), and six types of external links (stolen, drug, robbed, swindled, damaged, and other) to an EVENT-object. Compared with bAbI in Section 8.1, the police report ontology has fewer pairwise links but much richer internal properties for objects of all three types. Although the language in this dataset is reasonably formal, the corpus covers a wide variety of topics and language styles, and has a high proportion of typos. On average, a sample has 95.24 Chinese words and the ontology has 3.35 objects, 3.47 mentions and 5.02 relationships. The average length of a document is 95 Chinese characters, with a digit string (say, an ID number) counted as one character.
Figure 10: An example of a police report and its ontology.
8.2.2 Implementation details
The OONP model is designed to generate the ontology illustrated in Figure 10 through a decision process with the actions in Table 3. As pre-processing, we performed regular NER with a third-party algorithm (therefore not part of the learning) and simple rule-based extraction to yield the symbolic part of Inline Memory, as shown in Figure 11. For the distributed part of Inline Memory, we used dilated CNNs with different choices of depth and kernel size (Yu and Koltun, 2015), all of which are jointly learned during training (a minimal sketch of such an encoder is given below). In making the New-Assign decision, Reader considers the matching between two structured objects, as well as the hints from the symbolic part of Inline Memory as features, as pictorially illustrated in Figure 7. In updating objects with string-type properties (e.g., Name for a Person-object), we use a Copy-Paste strategy for the extracted string (whose NER tag already specifies which property of an object it goes to) as Reader sees it. For undetermined category properties of existing objects, Policy-net will determine the object to update (a New-Assign action without the New option), its property to update (an Update.X action), and the updating operation (an Update2what action) at milestones of the decision process, e.g., when reaching a punctuation mark. For this task, since all the relations are between the single by-default Event-object and other objects, the relations can in practice be reduced to category-type properties of the corresponding objects. For category-type properties, we cannot recover the New-Assign and Update.X actions from the label (the final ontology), so we resort to RL to learn to determine that part, which is mixed with the supervised learning of Update2what and other actions for string-type properties.
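A minimal sketch, assuming PyTorch, of the kind of dilated 1-D convolutional encoder described above; the depths and kernel size are placeholders rather than the paper's actual hyper-parameters.

```python
import torch.nn as nn

class DilatedEncoder(nn.Module):
    """Stack of dilated 1-D convolutions producing one vector per token."""
    def __init__(self, emb_dim, hidden_dim, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        layers, in_dim = [], emb_dim
        for d in dilations:                    # exponentially growing dilation
            pad = (kernel_size - 1) * d // 2   # keep sequence length unchanged
            layers += [nn.Conv1d(in_dim, hidden_dim, kernel_size,
                                 padding=pad, dilation=d), nn.ReLU()]
            in_dim = hidden_dim
        self.net = nn.Sequential(*layers)

    def forward(self, token_embeddings):       # (batch, seq_len, emb_dim)
        x = token_embeddings.transpose(1, 2)   # Conv1d expects (batch, dim, len)
        return self.net(x).transpose(1, 2)     # (batch, seq_len, hidden_dim)
```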
8.2.3 Results & discussion
We use 4,250 cases for training, 750 for validation, and a held-out 750 for test. We consider the following four metrics in comparing the performance of different models:
<table>
<thead>
<tr>
<th>Action</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>NewObject(c)</td>
<td>New an object of class-c.</td>
</tr>
<tr>
<td>AssignObject(c,k)</td>
<td>Assign the current information to existing object (c,k).</td>
</tr>
<tr>
<td>UpdateObject(c,k).Name</td>
<td>Set the name of object-(c,k) with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(PERSON,k).Gender</td>
<td>Set the gender of a PERSON-object indexed k with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Item,k).Quantity</td>
<td>Set the quantity of an ITEM-object indexed k with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Item,k).Value</td>
<td>Set the value of an ITEM-object indexed k with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Event,1).Items.x</td>
<td>Set the link between the EVENT-object and an ITEM-object, where x ∈{stolen, drug, robbed, swindled, damaged, other}.</td>
</tr>
<tr>
<td>UpdateObject(Event,1).Persons.x</td>
<td>Set the link between the EVENT-object and a PERSON-object, and x ∈{victim, suspect}.</td>
</tr>
</tbody>
</table>
Table 3: Actions for parsing police report.
- **Assignment Accuracy**: the accuracy on New-Assign actions made by the model;
- **Category Accuracy**: the accuracy of predicting the category properties of all the objects;
- **Ontology Accuracy**: the proportion of instances for which the generated ontology is exactly the same as the ground truth;
- **Ontology Accuracy-95**: the proportion of instances for which the generated ontology achieves 95% consistency with the ground truth;
which measure the accuracy of the model in making discrete decisions as well as in generating the final ontology. We empirically examined several OONP implementations and compared them with two baselines, Bi-LSTM and EntityNLM (Ji et al., 2017), with results given in Table 4.
<table>
<thead>
<tr>
<th>Model</th>
<th>Assign Acc. (%)</th>
<th>Type Acc. (%)</th>
<th>Ont. Acc. (%)</th>
<th>Ont. Acc-95 (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bi-LSTM (baseline)</td>
<td>73.2 ± 0.58</td>
<td>-</td>
<td>36.4 ± 1.56</td>
<td>59.8 ± 0.83</td>
</tr>
<tr>
<td>ENTITYNLM (baseline)</td>
<td>87.6 ± 0.50</td>
<td>84.3 ± 0.80</td>
<td>59.6 ± 0.85</td>
<td>72.3 ± 1.37</td>
</tr>
<tr>
<td>OONP (neural)</td>
<td>88.5 ± 0.44</td>
<td>84.3 ± 0.58</td>
<td>61.4 ± 1.26</td>
<td>75.2 ± 1.35</td>
</tr>
<tr>
<td>OONP (structured)</td>
<td>91.2 ± 0.62</td>
<td>87.0 ± 0.40</td>
<td>65.4 ± 1.42</td>
<td>79.9 ± 1.28</td>
</tr>
<tr>
<td>OONP (RL)</td>
<td>91.4 ± 0.38</td>
<td>87.8 ± 0.75</td>
<td>66.7 ± 0.95</td>
<td>80.7 ± 0.82</td>
</tr>
</tbody>
</table>
Table 4: OONP on parsing police reports.
Bi-LSTM and EntityNLM are essentially simplified versions of OONP without a structured Carry-on Memory and the designed operations (in particular the sophisticated matching function in New-Assign). Basically, the Bi-LSTM baseline consists of a Bi-LSTM Inline Memory encoder and a two-layer MLP on top of it acting as a simple Policy-net for predicting actions. Since this baseline does not have an explicit object representation, it does not support prediction of category-type properties. We hence only train this baseline model to perform New-Assign actions, and evaluate it with the Assignment Accuracy (first metric) and a modified version of Ontology Accuracy (third and fourth metrics) that counts only the properties it can predict, hence in favor of Bi-LSTM. EntityNLM, another strong baseline, can model an arbitrary number of entities in context while generating each entity mention at an arbitrary length, and performs well in coreference resolution and entity prediction (Ji et al., 2017). Adapted to this scenario, it is re-implemented to predict object indices and object properties, with the minor change that the name prediction task is replaced by the identical third-party algorithm for fairness. We consider three OONP variants:
- **OONP (neural)**: simple version of OONP with only distributed representation in Reader in determining all actions;
- **OONP (structured)**: OONP that considers the matching between two structured objects in New-Assign actions, with symbolic priors encoded in Symbolic Matching and other features for Policy-net;
- **OONP (RL)**: a version of OONP (structured) that uses RL to determine the time for predicting the category properties, while OONP (neural) and OONP (structured) use a rule-based approach to determine the time.
As shown in Table 4, the Bi-LSTM baseline achieves only around 73% Assignment Accuracy on the test set, while OONP (neural) boosts the performance to 88.5%. Arguably, this difference in performance is due to the fact that Bi-LSTM lacks Object Memory, so all relevant information has to be stored in the Bi-LSTM hidden states along the reading process. When we start putting symbolic representations and operations into Reader, as shown in the result of OONP (structured), the performance is again significantly improved on all four metrics. More specifically, we have the following two observations (not shown in the table):
- Adding inline symbolic features as in Figure 11 improves New-Assign action prediction by around 0.5% and category property prediction by around 2%. The features we use include the type of the candidate strings and the relative distance to the marker character we chose.

- Using a matching function that can take advantage of the structure in objects helps generalization, since the objects in this task have multiple property slots such as Name, Gender, Quantity, and Value. We tried adding both the original text string of a property slot and its embedding as additional features, e.g., the length of the longest common string between the candidate string and a relevant property of the object (a small sketch of such features follows this list).
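A minimal Python sketch of hand-crafted matching features of this kind for scoring a New-Assign candidate against one existing object; the property names and the exact feature set are assumptions for illustration, not the paper's implementation.

```python
def matching_features(candidate, obj):
    """Features comparing the candidate string with an object's property slots.
    `obj` is a dict mapping property names to their current string values."""
    feats = {}
    for prop in ("Name", "Quantity", "Value"):
        value = obj.get(prop, "") or ""
        feats[f"lcs_{prop}"] = _longest_common_substring(candidate, value)
        feats[f"exact_{prop}"] = float(candidate == value)
    return feats

def _longest_common_substring(a, b):
    """Length of the longest common substring, O(len(a)*len(b)) dynamic program."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ch_a in a:
        cur = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best
```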
When using REINFORCE to determine when to make the prediction for a category property, as shown in the result of OONP (RL), the prediction accuracy for category properties and the overall ontology accuracy are improved. It is quite interesting that this also has some positive impact on the supervised learning task (i.e., learning the New-Assign actions) through shared parameters. The entanglement of the two learning paradigms in OONP is one topic for future research, e.g., the effect of predicting the right category property on the New-Assign actions when the predicted category property is among the features of the matching function for New-Assign actions.
8.3 Task-III: Parsing court judgment documents
8.3.1 Data and task
We also implement OONP for parsing court judgements on theft. Unlike the two previous tasks, court judgements are typically much longer, containing multiple events of different types as well as bulks of irrelevant text, as illustrated in the left panel of Figure 12. The dataset contains 1,961 Chinese judgement documents, divided into training/dev/testing sets with 1,561/200/200 texts respectively. The ontology we designed for this task mainly consists of a number of Person-objects and Item-objects connected through a number of Event-objects with several types of links. An Event-object has three internal properties: Time (string), Location (string), and Type (category, ∈ {theft, restitution, disposal}), four types of external links to Person-objects (namely, principal, companion, buyer, victim), and four types of external links to Item-objects (stolen, damaged, restituted, disposed). In addition to the external links to Event-objects, a Person-object has only the Name (string) as its internal property. An Item-object has three internal properties: Description (array of strings), Value (string) and Returned (binary), in addition to its external links to Event-objects, where Description consists of the words describing the corresponding item, which could come from multiple segments across the document. A Person-object or an Item-object could be linked to more than one Event-object; for example, a person could be the principal suspect in event A and also a companion in event B. An illustration of the judgement document and the corresponding ontology can be found in Figure 12.
Figure 12: Left panel: the judgement document, with the highlighted part being the description of the facts of the crime; right panel: the corresponding ontology.
8.3.2 Implementation details
We use a model configuration similar to that in Section 8.2, with however the following important difference. In this experiment, OONP performs a 2-round reading of the text. In the first round, OONP identifies the relevant events, creates empty Event-objects, and does Notes-Taking on Inline Memory to save the information about event segmentation (see (Yan et al., 2017) for more details). In the second round, OONP reads the updated Inline Memory, fills the Event-objects, creates and fills Person-objects and Item-objects, and specifies the links between them. When an object is created during a certain event, it is given an extra feature (not an internal property) indicating this connection, which is used in deciding links between this object and Event-objects, as well as in determining future New-Assign actions. The actions of the two-round reading are summarized in Table 5.
Table 5: Actions for parsing court judgements.
<table>
<thead>
<tr>
<th>Action for 1st-round</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>NewObject(c)</td>
<td>New an EVENT-object, with c = Event.</td>
</tr>
<tr>
<td>NotesTaking(Event, k).word</td>
<td>Put indicator of event-k on the current word.</td>
</tr>
<tr>
<td>NotesTaking(Event, k).sentence</td>
<td>Put indicator of event-k on the rest of sentence, and move the read-head to the first word of next sentence.</td>
</tr>
<tr>
<td>NotesTaking(Event, k).paragraph</td>
<td>Put indicator of event-k on the rest of the paragraph, and move the read-head to the first word of next paragraph.</td>
</tr>
<tr>
<td>Skip.word</td>
<td>Move the read-head to the next word.</td>
</tr>
<tr>
<td>Skip.sentence</td>
<td>Move the read-head to the first word of the next sentence.</td>
</tr>
<tr>
<td>Skip.paragraph</td>
<td>Move the read-head to the first word of the next paragraph.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Action for 2nd-round</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>NewObject(c)</td>
<td>New an object of class-c.</td>
</tr>
<tr>
<td>AssignObject(c, k)</td>
<td>Assign the current information to existed object (c, k)</td>
</tr>
<tr>
<td>UpdateObject(PERSON, k).Name</td>
<td>Set the name of the kth PERSON-object with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(ITEM, k).Description</td>
<td>Add to the description of the kth ITEM-object with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(ITEM, k).Value</td>
<td>Set the value of the kth ITEM-object with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Event, k).Time</td>
<td>Set the time of the kth EVENT-object with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Event, k).Location</td>
<td>Set the location of the kth EVENT-object with the extracted string.</td>
</tr>
<tr>
<td>UpdateObject(Event, k).Type</td>
<td>Set the type of the kth EVENT-object among {theft, disposal, restitution}</td>
</tr>
<tr>
<td>UpdateObject(Event, k).Items.x</td>
<td>Set the link between the kth EVENT-object and an ITEM-object, where x ∈ {stolen, damaged, restitution, disposed}</td>
</tr>
<tr>
<td>UpdateObject(Event, k).Persons.x</td>
<td>Set the link between the kth EVENT-object and a PERSON-object, and x ∈ {principal, companion, buyer, victim}</td>
</tr>
</tbody>
</table>
8.3.3 Results and Analysis
We use the same metrics as in Section 8.2 and compare two OONP variants, OONP (neural) and OONP (structured), with two baselines, EntityNLM and Bi-LSTM. The two baselines are tested only on the second-round reading, while both OONP variants are tested on the two-round reading. The results are shown in Table 6. The OONP parsers attain accuracy significantly higher than the Bi-LSTM baseline. Among them, OONP (structured) achieves over 71% accuracy on getting the entire ontology right and over 77% accuracy on achieving 95% consistency with the ground truth.
<table>
<thead>
<tr>
<th>Model</th>
<th>Assign Acc. (%)</th>
<th>Type Acc. (%)</th>
<th>Ont. Acc. (%)</th>
<th>Ont. Acc-95 (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bi-LSTM (baseline)</td>
<td>84.66 ± 0.20</td>
<td>-</td>
<td>18.20 ± 0.74</td>
<td>36.88 ± 1.01</td>
</tr>
<tr>
<td>ENTITYNLM (baseline)</td>
<td>90.50 ± 0.21</td>
<td>96.33 ± 0.39</td>
<td>39.85 ± 0.20</td>
<td>48.29 ± 1.96</td>
</tr>
<tr>
<td>OONP (neural)</td>
<td>94.50 ± 0.24</td>
<td>97.73 ± 0.12</td>
<td>53.29 ± 0.26</td>
<td>72.22 ± 1.01</td>
</tr>
<tr>
<td>OONP (structured)</td>
<td>96.90 ± 0.22</td>
<td>98.80 ± 0.08</td>
<td>71.11 ± 0.54</td>
<td>77.27 ± 1.05</td>
</tr>
</tbody>
</table>
Table 6: OONP on judgement documents.
9 Conclusion
We proposed Object-oriented Neural Programming (OONP), a framework for semantically parsing in-domain documents. OONP is neural net-based, but is equipped with a sophisticated architecture and mechanisms for document understanding, thereby nicely combining interpretability and learnability. Experiments on both synthetic and real-world datasets have shown that OONP outperforms several strong baselines by a large margin on parsing fairly complicated ontologies.
Acknowledgments
We thank Fandong Meng and Hao Xiong for their insightful discussion. We also thank Classic Law Institute for providing the raw data.
References
|
“Concurrency is the most extreme form of programming. It’s like white-water rafting without the raft or sky diving without the parachute.”
— Peter Buhr
1 Concurrent Programming
We’ve developed two core calculi and looked at four exemplar programming languages, but we’re as yet missing one of the most important characteristics of modern computing: concurrency. First, let’s clarify some terms, because the concurrent programming community are quite picky about these terms:
- **Parallelism** is the *phenomenon* of two or more things—presumably, two computations—actually happening at the same time.
- **Concurrency** is the *experience* of two or more things—such as applications, tasks, or threads—appearing to happen at the same time, whether or not they actually are.
For instance, an average computer in the 1990’s had only one CPU and one core, and so had no parallelism, but could still run multiple applications “at the same time”, and so still had concurrency. In this case, concurrency is achieved by quickly switching which task the CPU is concerned with between the various running programs. It is also possible for a system to be parallel without being concurrent: for instance, a compiler may optimize an imperative loop into a specialized parallel operation on a particular CPU, but this is only visible to the programmer as a speed boost, so the programmer’s experience still lacks concurrency. And, of course, a system can have both: on a modern, multi-core CPU, one uses concurrency to take advantage of the parallelism, by running multiple programs or threads on their multiple cores. The concurrent tasks appear to happen at the same time because they *do* happen at the same time.
The domain of concurrent programming has changed dramatically over time, in response to growing parallelism. Early exploration into concurrency was theoretical, and then became practical with networks. When multi-core CPU’s started becoming the norm, concurrency rose in importance again, because it is often impossible to take advantage of real parallelism without concurrency.
The core idea of a concurrent system is that there are multiple tasks which can communicate with each other, and computation can proceed in any of them at any rate. This is contrary to the behavior of the \( \lambda \)-calculus or the Simple Imperative Language, and contrary to all of our exemplar languages (at least, as far as we’ve investigated them), so we will need both a new formal language and a new exemplar language for concurrency. In practice, of course, in the same way that many languages which are not fundamentally object oriented have picked up object-oriented features, most languages which are not fundamentally concurrent have picked up at least some kind of concurrent features.
In a formal model of concurrency, we need a way of expressing multiple simultaneous tasks, and, as usual, a way of taking a step. Unlike our previous formal semantics, we *want* this semantics to be non-deterministic: the fact that any of multiple tasks may proceed is the essence of concurrency. However, we will not define our semantics as truly parallel: a step will involve only one task taking a step.
Another issue to be addressed is models of concurrency, i.e., how a concurrent language presents multiple tasks. Models of concurrency are defined across two dimensions: how one specifies multiple tasks, and how those multiple tasks communicate. Specification comes down to what structures a language provides for a program to create tasks, and often, to specify real parallelism as well. Options include threads of various sorts, actors, processes, and many others, but all of these terms are imprecise and ambiguous. Except for demonstrating the mechanisms for specifying concurrency in our formal model and exemplar language, we will put little focus on specification.
The other dimension is communication, which falls largely into two forms: shared-memory concurrency and message-passing concurrency. In shared-memory concurrency, either the entire heap or some portion of the heap is shared between multiple tasks. In our terms, all tasks have the same Σ, and if they share any labels, they can see changes made by other tasks. However, because multiple tasks can compute at the same time, there may not be any guarantee that one task’s write to a location in Σ occurs before another task’s read. Other mechanisms, such as locks, are required to guarantee ordering.
In message-passing concurrency, two new primitive operations are introduced: sending a message and waiting for a message. A task which is waiting for a message cannot proceed until another task sends it a message. A task may send a message at any time, but to do so, must have a way of communicating with the target task. Thus, message-passing concurrency requires some form of message channels, by which two tasks can arrange to exchange messages. Ordering is guaranteed by the directionality of message passing; a waiting task will not proceed until a sending task sends a message.
Aside: Shared-memory and message-passing concurrency are equally powerful, and in fact, either can be rewritten in terms of the other. However, this rewriting is fairly unsatisfying: shared memory can be rewritten in terms of message passing by imagining Σ itself as a “task” that expects messages instructing it to read and write certain memory locations. Equivalently, we can consider a modern CPU as sending messages to the memory bus, rather than simply reading and writing memory. As a practical matter, message-passing implementations tend to be slower than shared-memory implementations of the same algorithm.
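As a small illustration of the two communication styles (in Python rather than the languages used later in these notes, purely for brevity), the sketch below pairs a message-passing producer/consumer built on a queue with a shared counter protected by a lock; the details are illustrative, not part of the course material.

```python
import threading
import queue

# Message passing: the consumer blocks until the producer sends something.
channel = queue.Queue()

def producer():
    channel.put("hello")        # send a message on the channel

def consumer():
    msg = channel.get()         # wait for a message; ordering is guaranteed
    print("received", msg)

# Shared memory: both tasks see the same variable, so the read-modify-write
# must be ordered explicitly, here with a lock.
counter = 0
lock = threading.Lock()

def increment():
    global counter
    with lock:
        counter += 1

threads = [threading.Thread(target=f)
           for f in (consumer, producer, increment, increment)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter =", counter)     # always 2, regardless of scheduling
```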
This course is not intended to compete with CS343 (Concurrent and Parallel Programming), so will take a very language-focused view of concurrency. That course uses concurrency to solve problems; in this course, concurrency is only a cause of problems.
2 π-Calculus
In λ-calculus, we built a surprising amount of functionality around abstractions: with only three constructs in the language (abstractions, applications, and variables), we could represent numbers, conditionals, and ultimately, anything computable. π-calculus (The Pi Calculus) takes a similar approach to message-passing concurrency. The only structures are concurrency, communication, replication, and names, but these will be sufficient to build any computation. Notably lacking are functions (or abstractions). π-calculus itself was developed by Robin Milner\(^1\), Joachim Parrow, and David Walker in *A Calculus of Mobile Processes*, but it was the culmination of a long line of development of calculi of communicating processes, to which Uffe Engberg and Mogens Nielsen also made significant contributions.
We will look at the behavior of π-calculus, but will not discuss how to encode complex computation into π-calculus concurrency. π-calculus is Turing-complete, but like λ-calculus, it’s more common to layer other semantics on top of it than to take advantage of its own computational power.
A π-calculus program consists of any number of concurrent tasks, called processes, separated by pipes (|). Those tasks can be grouped. Each process can create a channel, receive a message on a channel, send a message on a channel, replicate processes, or terminate. The only values in π-calculus are channels, so any encoding of useful information also needs to be done with channels, and the only value you can send over a channel is another channel.
1 The same Milner of Hindley-Milner type inference.
Channels are also called “names”, because they are simply named by variables; two processes must agree on a name in order to communicate (as well as another restriction which we will discuss soon).
The syntax of \( \pi \)-calculus is as follows, presented in BNF, with \( \langle \text{program} \rangle \) as the starting non-terminal:
\[
\langle \text{program} \rangle ::= \langle \text{program} \rangle \;\text{“|”}\; \langle \text{program} \rangle \mid \langle \text{receive} \rangle \mid \langle \text{send} \rangle \mid \langle \text{restrict} \rangle \mid \langle \text{replicate} \rangle \mid \langle \text{terminate} \rangle \\
\langle \text{receive} \rangle ::= \langle \text{var} \rangle \,\text{“(”}\, \langle \text{var} \rangle \,\text{“)”}\, .\, \langle \text{program} \rangle \\
\langle \text{send} \rangle ::= \overline{\langle \text{var} \rangle} \,\text{“⟨”}\, \langle \text{var} \rangle \,\text{“⟩”}\, .\, \langle \text{program} \rangle \\
\langle \text{restrict} \rangle ::= \text{“(”}\, \nu \,\langle \text{var} \rangle \,\text{“)”}\, \langle \text{program} \rangle \\
\langle \text{replicate} \rangle ::= \,!\, \langle \text{program} \rangle \\
\langle \text{terminate} \rangle ::= 0 \\
\langle \text{var} \rangle ::= a \mid b \mid c \mid \cdots
\]
Note that \( \nu \) is the Greek letter nu, not the Latin/English letter ‘v’, because somebody decided that using confusing, ambiguous Greek letters was acceptable; we will avoid using \( v \) as a variable for this reason. Like in \( \lambda \)-calculus, we will actually be more lax in our use of variable names than this BNF suggests, for clarity. Like in the Simple Imperative Language, this is assumed to be an abstract syntax, and we will add parentheses to disambiguate as necessary. The pipe (\( | \)) in \( \langle \text{program} \rangle \) has the lowest precedence, so, for instance, \( x(y).0|z(a).0 \) is read as \( (x(y).0)|(z(a).0) \), not \( x(y).(0|z(a).0) \).
Unfortunately, \( \pi \)-calculus uses several of the symbols which we use in BNF as well, which we’ve surrounded in quotes to separate them from BNF, as well as an overline. Here are two small examples to clarify the syntax. The following snippet receives a message on the channel \( x \), into the variable \( y \), before proceeding with the process \( P \):
\[ x(y).P \]
The following snippet sends \( y \) on the channel \( x \), before proceeding with the process \( P \):
\[ \overline{x} \langle y \rangle .P \]
As discussed, a program consists of a number of processes, separated by pipes. Each process is itself a program, so the distinction is just usage. We will use the term “process” to refer to any construction other than the composition of multiple programs with a pipe, so that a program can be read as a list of processes.
A program proceeds through its processes sending and receiving messages on channels until they terminate. For instance, this program consists of two processes, of which the first sends the message \( h \) to the second, and the second then attempts to pass that \( h \) along on another channel:
\[ \overline{x} \langle h \rangle .0 \;|\; x(y).\overline{z} \langle y \rangle .0 \]
The first process is \( \overline{x} \langle h \rangle .0 \), which consists of a send of \( h \) over the channel \( x \), and then termination of the process (0). The second process is \( x(y).\overline{z} \langle y \rangle .0 \), which consists of a receive of \( y \) from the channel \( x \), then a send of \( y \) over the channel \( z \), then termination. Programs in \( \pi \)-calculus proceed by sending and receiving messages; in this case, the program can proceed, because a process is trying to send on \( x \), and another process is trying to receive on \( x \). After sending that message, the program looks like this:
\[ 0 \;|\; \overline{z} \langle h \rangle .0 \]
Like in \( \lambda \)-calculus applications, message receipt works by substitution, so the \( y \) was substituted for the received message, \( h \). We will not formally define substitution for \( \pi \)-calculus; it is fairly straightforward. A terminating process can simply be removed, so the next step is as follows:
\[ \overline{z} \langle h \rangle .0 \]
This program cannot proceed, because no process is prepared to receive a message on the channel \( z \).
Consider this similar program:
\[
\overline{x} \langle h \rangle . z(a).0 \;|\; x(y).\overline{z} \langle y \rangle .0 \;|\; x(y).0
\]
This time, we have three processes. The first sends the message \( h \) over the channel \( x \), then receives a message on the channel \( z \), then terminates. The second is identical to our original second process: it receives a message on \( x \), then sends it back on \( z \). The third process receives a message on \( x \), then immediately terminates. This program can proceed, because a process is sending on \( x \) and a process is prepared to receive on \( x \). But, which process receives the message? \( \pi \)-calculus is non-deterministic, so the answer is that either process may receive the message. Both ways for the program to proceed are valid. In this case, these two reductions are both valid:
\[
\begin{aligned}
&\Rightarrow\; z(a).0 \;|\; \overline{z}\langle h \rangle .0 \;|\; x(y).0 \\
&\Rightarrow\; 0 \;|\; 0 \;|\; x(y).0 \\
&\Rightarrow\; 0 \;|\; x(y).0 \\
&\Rightarrow\; x(y).0 \\[6pt]
\text{or}\qquad &\Rightarrow\; z(a).0 \;|\; x(y).\overline{z}\langle y \rangle .0 \;|\; 0
\end{aligned}
\]
The first sequence of reductions occurs if the message is received by the second process, and the second occurs if the message is received by the third process. The first sequence may seem more complete, or valid, since two of the three processes terminated, but both are valid. We are using \( \Rightarrow \) informally for “takes a step”, because we haven’t yet formally defined \( \Rightarrow \), and we will discover that there’s an extra complication to the definition of steps in \( \pi \)-calculus in Section 2.2.
Because name uniqueness is so important to communication, \( \pi \)-calculus also has a mechanism to restrict names. For instance, consider this rewrite of the above program:
\[
(\nu x)\big(\overline{x} \langle h \rangle . z(a).0 \;|\; x(y).\overline{z} \langle y \rangle .0\big) \;|\; x(y).0
\]
It is identical to the previous program, except that the first two processes are nested inside of a restriction: \( (\nu x) \). A restriction is like a variable binding, in that it scopes the variable to its context: the \( x \) inside of the restriction is not the same as the \( x \) outside of the restriction. Unlike variable binding, however, it doesn’t bind it to any particular value—remember, names are all there are in \( \pi \)-calculus, so there’s nothing else it could be bound to—it merely makes for two distinct interpretations of the name. That’s why it’s called a restriction; it restricts the meaning of, in this case, \( x \), within the restriction expression. Now, this program can only proceed like so:
\[
\Rightarrow (\nu x)\big(z(a).0 \;|\; \overline{z} \langle h \rangle .0\big) \;|\; x(y).0 \\
\Rightarrow (\nu x)(0|0) \;|\; x(y).0 \\
\Rightarrow (\nu x)(0) \;|\; x(y).0 \\
\Rightarrow x(y).0
\]
In the last step, we can remove a restriction when it is no longer restricting anything (i.e., when its contained process terminates). When a restriction is at the top level like this, it can always be rewritten by renaming variables to new, fresh names, so the above program is equivalent to the following program:
\[
\overline{x'} \langle h \rangle . z(a).0 \;|\; x'(y).\overline{z} \langle y \rangle .0 \;|\; x(y).0
\]
Aside: If your eyes are glazing over from \( \pi \)-calculus syntax, don’t worry, you’re not the only one. Something about \( \pi \)-calculus’s use of overlines and \( \nu \) and pipes makes it semantically dense and difficult to read. I have no advice to alleviate this; just be careful of the pipes and parentheses.
A restriction only applies to the variable it names, so processes within restrictions are still allowed to communicate with processes outside of restrictions:
\[ (\nu x)\overline{z}\langle y \rangle.0 \;|\; z(a).\overline{x}\langle a \rangle.0 \]
\[ \Rightarrow (\nu x)0 \;|\; \overline{x}\langle y \rangle.0 \]
\[ \Rightarrow \overline{x}\langle y \rangle.0 \]
Substitution must be aware of restriction, because a restricted variable is distinct from the same name in the surrounding code. For instance:
\[ (x(y).\nu x.x(y).0)[z/x] = (z(y).\nu x.x(y).0) \]
This exception is the same as is introduced by \(\lambda\)-abstractions in substitution for \(\lambda\)-calculus.
The only messages that processes can send are names. Names are also the channels by which processes send messages. As a consequence, processes can send channels over channels. For instance, consider this program:
\[ \overline{x}\langle z \rangle.0 \;|\; x(y).\overline{y}\langle h \rangle.0 \;|\; z(a).0 \]
The first process will send the name \(z\) over the channel \(x\). The second process is waiting to receive a message on the channel \(x\). The third process is waiting to receive a message on the channel \(z\), but there is no send on the channel \(z\) in the entire program. The third process cannot possibly proceed, but the first and second can, like so:
\[ \Rightarrow 0 \;|\; \overline{z}\langle h \rangle.0 \;|\; z(a).0 \;\Rightarrow\; \overline{z}\langle h \rangle.0 \;|\; z(a).0 \]
The second process received a \(z\) on the channel \(x\), as the variable \(y\). But, it then proceeds to send on the channel \(y\). Because message receipt works by substitution, that \(y\) has been substituted for \(z\). This program can now proceed, by sending \(h\) to the third process (at which point both the second and third process terminate).
Restriction has an unusual interaction with sending channels. For instance, consider this program:
\[ (\nu x)\overline{z}\langle x \rangle.x(y).0 \;|\; z(a).\overline{a}\langle x \rangle.0 \]
The first process is under a restriction for \(x\), and the second process is not. But, the behavior of the first process is to send \(x\) over the channel \(z\), and the second process is waiting to receive a message on the channel \(z\). It’s then going to send \(x\) back, but that \(x\) isn’t the same \(x\) as the first process’s \(x\), because of the restriction. So, what happens if we send a restricted channel outside of its own restriction? How do we deal with these two conflicting \(x\)’s? The answer is made clear by our statement that a restriction can always be rewritten by simply using a new name. In this case, this program can be rewritten like so:
\[ \overline{z}\langle x' \rangle.x'(y).0 \;|\; z(a).\overline{a}\langle x \rangle.0 \]
From this state, the steps are clear:
\[ \Rightarrow x'(y).0 \;|\; \overline{x'}\langle x \rangle.0 \]
\[ \Rightarrow 0|0 \]
\[ \Rightarrow 0 \]
Finally, \(\pi\)-calculus supports process creation: a process may create more processes. There are actually two mechanisms of process creation. First, processes may simply be nested. For instance, consider this program:
\[ x(y).\big(\overline{y}\langle h \rangle.0 \;|\; \overline{y}\langle k \rangle.0\big) \;|\; f(a).\overline{z}\langle a \rangle.0 \;|\; f(b).\overline{z}\langle b \rangle.0 \;|\; \overline{x}\langle f \rangle.0 \]
Note in particular the position of the parentheses: this program has four processes, not five! The first process is \( x(y).(\overline{y}\langle h \rangle.0 \;|\; \overline{y}\langle k \rangle.0) \). Although this process has the pipe which separates multiple processes within it, those are not two independent processes until this process has received a message on the channel \( x \). Essentially, as soon as this process receives a message, it will split into two processes. This program can proceed as follows (this is not the only possible sequence):
Exercise 1. Give another possible sequence for this example.
The other mechanism of process creation is replication. The ! operator creates an endless sequence of identical processes. For instance, consider the following program:

\[ !x(a).0 \;|\; \overline{x}\langle b \rangle.0 \;|\; \overline{x}\langle c \rangle.0 \;|\; \overline{x}\langle d \rangle.0 \]

There are three processes trying to send on the channel \( x \), but only one process with a receive on the channel \( x \). However, the ! creates any number of copies of the same process, so all three of the sending processes can proceed, in any order. This is one possible sequence:

\[
\begin{aligned}
&\Rightarrow\; 0 \;|\; !x(a).0 \;|\; \overline{x}\langle c \rangle.0 \;|\; \overline{x}\langle d \rangle.0 \\
&\Rightarrow\; 0 \;|\; 0 \;|\; !x(a).0 \;|\; \overline{x}\langle d \rangle.0 \\
&\Rightarrow\; 0 \;|\; 0 \;|\; 0 \;|\; !x(a).0
\end{aligned}
\]
2.1 Concurrency vs. Parallelism
The astute reader may have noticed that we have described sequences of steps, with no true parallelism. For instance, consider the following program:

There are two possible steps this program can take—it can send a message on \( x \) or \( y \)—but we describe it as taking one or the other, not both at the same time. The concurrency comes from the lack of prioritization, and non-determinism: each of these two options is equally valid, and to consider how this program proceeds, we need to consider both possibilities. But, the concurrency is restricted by the nature of messages: only pairs of matching sends and receives can actually proceed.
Because concurrency models the appearance of multiple tasks happening simultaneously, in most cases, it is not necessary to model true parallelism. The most complex formal models in the domain of concurrency and parallelism are models of parallel shared-memory architectures, and even they are formally descriptions of concurrency rather than parallelism, in that they model parallel action as a non-deterministic ordering.
---
^2 In this author’s opinion.
2.2 Structural Congruence
Because processes may proceed in any order in π-calculus, \(P|Q\) is not meaningfully distinct from \(Q|P\). Similarly, \((\nu x)P|Q\) is not meaningfully distinct from \(P[y/x]|Q\), where \(y\) is a new name, and α-equivalent programs are also indistinct.
In λ-calculus, these equivalences mostly gave us a baseline for comparing things. In π-calculus, it would be difficult or impossible to define reduction without this equivalence, because of the non-deterministic ordering of steps.
This equivalence is defined formally as structural congruence, written as \(\equiv\). That is, \(P \equiv Q\) means that \(P\) is structurally congruent to \(Q\). Structural congruence is reflexive, symmetric, and transitive.
The formal rules for structural congruence follow. Note that different presentations of π-calculus present slightly different but equivalent rules of structural congruence, so this may not exactly match other materials on the same topic.
Definition 1. (Structural congruence)
Let the metavariables \(P\), \(Q\), and \(R\) range over programs, and \(x\) and \(y\) range over names. Then the following rules describe structural congruence of π-calculus programs:
\[
\begin{gathered}
\text{C\_Alpha}\;\; \frac{P \equiv_{\alpha} Q}{P \equiv Q} \qquad
\text{C\_ORDER}\;\; \frac{}{P \,|\, Q \;\equiv\; Q \,|\, P} \qquad
\text{C\_Nest}\;\; \frac{P \equiv P'}{P \,|\, Q \;\equiv\; P' \,|\, Q} \\[10pt]
\text{C\_Paren}\;\; \frac{}{(P \,|\, Q) \,|\, R \;\equiv\; P \,|\, (Q \,|\, R)} \qquad
\text{C\_Termination}\;\; \frac{}{0 \,|\, P \;\equiv\; P} \\[10pt]
\text{C\_Restriction}\;\; \frac{y \text{ is a fresh variable}}{(\nu x)P \;\equiv\; P[y/x]} \qquad
\text{C\_Replication}\;\; \frac{}{!P \;\equiv\; P \,|\, !P}
\end{gathered}
\]
The \text{C\_Alpha} rule specifies that α-equivalence implies structural congruence, i.e., two α-equivalent programs are also structurally congruent. The \text{C\_ORDER} and \text{C\_Nest} rules allow us to reorder programs and to apply structural congruence to subprograms. The \text{C\_Paren} rule specifies that parallel composition is associative: different placements of parentheses do not affect the composition of concurrent processes, so we can drop parentheses at the top level of a program. The \text{C\_Termination} rule describes termination in terms of equivalence: rather than termination being a step, we can describe a program with a terminated process as equivalent to a program without that process. The \text{C\_Restriction} rule makes explicit our description of restriction as creating a fresh variable. Finally, the \text{C\_Replication} rule makes replication a property of structural congruence, rather than a step: a replicating process is simply equivalent to a version with a replica, and thus, by the transitive property, equivalent to a version with any number of replicas.
Because of \text{C\_Termination}, \text{C\_Restriction}, and \text{C\_Replication}, only sending and receiving messages is described as an actual step of computation. Everything else is structural congruence.
Note that \text{C\_Restriction} does not allow us to remove all restrictions from a program, because restrictions may be nested inside of other constructs, and structural congruence does not allow us to enter any other constructs. For instance, the program \(x(y).(\nu z)0\) has no structural equivalent (except for α-renaming), because the restriction of \(z\) is nested inside of a receipt on the channel \(x\).
2.3 Formal Semantics
With structural congruence, we may now describe the formal semantics of $\pi$-calculus.
**Definition 2. (Formal semantics of $\pi$-calculus)**
Let the metavariables $P$, $Q$, and $R$ range over programs, and $x$, $y$, and $z$ range over names. Then the following rules describe the formal semantics of $\pi$-calculus:
\[
\begin{gathered}
\text{CONGRUENCE}\;\; \frac{P \equiv Q \qquad Q \rightarrow Q' \qquad Q' \equiv R}{P \rightarrow R} \qquad
\text{MESSAGE}\;\; \frac{}{\;\overline{x}\langle z \rangle.P \;|\; x(y).Q \;\rightarrow\; P \;|\; Q[z/y]\;}
\end{gathered}
\]
Because of structural congruence, these two rules are all that is needed to define the semantics of $\pi$-calculus. By CONGRUENCE, $P$ can reduce to $R$ if $P$ is equivalent to some $Q$ which can reduce to some $Q'$, and $Q'$ is equivalent to $R$. That is, reduction is ambivalent to structural congruence in either its “from” or “to” state. MESSAGE describes the only actual reduction step in our semantics: if there is a process to send a message, and a process to receive a message on the same channel, then we may take a step in both, by removing the send from the sending process, removing the receipt from the receiving process, and substituting the variable in the receiving process with the value sent by the sending process.
MESSAGE itself is deterministic. The non-determinism in $\pi$-calculus is introduced by CONGRUENCE. Every program $P$ has infinitely many structurally congruent equivalents. Some number of those structurally congruent programs are able to take steps with MESSAGE. Each of those is equivalently correct, and none has priority; all are valid reduction steps.
Consider our previous example:
\[ !x(a).0 \;|\; \overline{x}\langle b \rangle.0 \;|\; \overline{x}\langle c \rangle.0 \;|\; \overline{x}\langle d \rangle.0 \]
We may now define one possible sequence formally, with structural congruence and reduction:
\[
\begin{align*}
!x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle c \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & \equiv x(a).0 \,|\, !x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle c \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_REPLICATION}) \\
\equiv & \; \overline{x}\langle c \rangle.0 \,|\, x(a).0 \,|\, !x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_ORDER}) \\
\rightarrow & \; 0 \,|\, 0 \,|\, !x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & (\text{MESSAGE}) \\
\equiv & \; !x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_TERMINATION}) \\
\equiv & \; x(a).0 \,|\, !x(a).0 \,|\, \overline{x}\langle b \rangle.0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_REPLICATION}) \\
\equiv & \; \overline{x}\langle b \rangle.0 \,|\, x(a).0 \,|\, !x(a).0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_ORDER}) \\
\rightarrow & \; 0 \,|\, 0 \,|\, !x(a).0 \,|\, \overline{x}\langle d \rangle.0 & (\text{MESSAGE}) \\
\equiv & \; !x(a).0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_TERMINATION}) \\
\equiv & \; x(a).0 \,|\, !x(a).0 \,|\, \overline{x}\langle d \rangle.0 & (\text{C\_REPLICATION}) \\
\equiv & \; \overline{x}\langle d \rangle.0 \,|\, x(a).0 \,|\, !x(a).0 & (\text{C\_ORDER}) \\
\rightarrow & \; 0 \,|\, 0 \,|\, !x(a).0 & (\text{MESSAGE}) \\
\equiv & \; !x(a).0 & (\text{C\_TERMINATION})
\end{align*}
\]
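To make the mechanics concrete, here is a minimal Python sketch of the MESSAGE step applied to a flattened top-level parallel composition. The term encoding is an illustrative assumption, and restriction and replication are deliberately omitted (they would be handled by renaming and copying, in the spirit of structural congruence).

```python
import itertools

# Terms (illustrative encoding):
#   ("send", chan, msg, P)   -- x̄⟨msg⟩.P
#   ("recv", chan, var, P)   -- x(var).P
#   ("nil",)                 -- 0
# A program is a list of such processes: the top-level parallel composition.

def substitute(proc, var, name):
    """Replace free occurrences of `var` in `proc` by `name`."""
    tag = proc[0]
    if tag == "nil":
        return proc
    if tag == "send":
        _, chan, msg, cont = proc
        return ("send",
                name if chan == var else chan,
                name if msg == var else msg,
                substitute(cont, var, name))
    if tag == "recv":
        _, chan, bound, cont = proc
        chan = name if chan == var else chan
        if bound == var:              # the inner binder shadows `var`
            return ("recv", chan, bound, cont)
        return ("recv", chan, bound, substitute(cont, var, name))
    raise ValueError(tag)

def step(program):
    """Perform one MESSAGE step if a matching send/receive pair exists.
    The choice of pair is arbitrary, mirroring the calculus' non-determinism."""
    for i, j in itertools.permutations(range(len(program)), 2):
        s, r = program[i], program[j]
        if s[0] == "send" and r[0] == "recv" and s[1] == r[1]:
            rest = [p for k, p in enumerate(program) if k not in (i, j)]
            new = [s[3], substitute(r[3], r[2], s[2])] + rest
            return [p for p in new if p != ("nil",)]   # drop 0s (C_Termination)
    return None

# x̄⟨h⟩.0 | x(y).z̄⟨y⟩.0   reduces to   z̄⟨h⟩.0
prog = [("send", "x", "h", ("nil",)),
        ("recv", "x", "y", ("send", "z", "y", ("nil",)))]
print(step(prog))   # [('send', 'z', 'h', ('nil',))]
```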
2.4 The Use of $\pi$-Calculus
In concurrent programming, most problems can be simplified to happens-before relationships. That is, with multiple processes able to perform tasks concurrently, you want to guarantee that some task happens before some other task. Concurrent systems are modeled in terms of $\pi$-calculus to prove these kinds of happens-before relationships.
For instance, let’s say we want to verify that a given program always sends a message on channel \( x \) before sending a message on channel \( y \). Here is a program that fails to guarantee such a relationship:
\[
\overline{a}\langle x \rangle.\overline{b}\langle y \rangle.0 \;|\; a(m).\overline{m}\langle h \rangle.0 \;|\; b(n).\overline{n}\langle h \rangle.0 \;|\; x(q).0 \;|\; y(q).0
\]
We can demonstrate this by showing a reduction that sends on \( y \) before sending on \( x \):
\[
\begin{align*}
\rightarrow & \quad \overline{b}\langle y \rangle.0 \,|\, \overline{x}\langle h \rangle.0 \,|\, b(n).\overline{n}\langle h \rangle.0 \,|\, x(q).0 \,|\, y(q).0 \\
\equiv & \quad \overline{b}\langle y \rangle.0 \,|\, b(n).\overline{n}\langle h \rangle.0 \,|\, \overline{x}\langle h \rangle.0 \,|\, x(q).0 \,|\, y(q).0 \\
\rightarrow & \quad 0 \,|\, \overline{y}\langle h \rangle.0 \,|\, \overline{x}\langle h \rangle.0 \,|\, x(q).0 \,|\, y(q).0 \\
\equiv & \quad 0 \,|\, \overline{y}\langle h \rangle.0 \,|\, y(q).0 \,|\, \overline{x}\langle h \rangle.0 \,|\, x(q).0 \\
\rightarrow & \quad 0 \,|\, 0 \,|\, \overline{x}\langle h \rangle.0 \,|\, x(q).0 \\
& \quad \text{(Premise violated)}
\end{align*}
\]
Proving that happens-before relationships hold is, of course, far more complicated, since it is impossible to enumerate the infinitely many possible structurally congruent programs. Luckily, \( C_{\text{Replication}} \) is the only case that can introduce infinite reducible programs, and the difference between them is uninteresting (only how many times the replicated subprogram has been expanded). So, in many cases, it is possible to enumerate all interesting reductions. If the program can reduce forever, then it is instead necessary to use inductive proofs for most interesting properties.
Generally, \( \pi \)-calculus is extended with other features to represent the actual computation that each process performs, rather than performing all computation through message passing. For instance, \( \lambda \)-calculus and \( \pi \)-calculus can be combined directly by allowing processes which contain \( \lambda \)-applications to proceed as in the \( \lambda \)-calculus, while process pairs containing a matching send and receive proceed as in \( \pi \)-calculus. Such combinations are often used for proving type soundness of concurrent languages.
## 3 Exemplar: Erlang
In \( \pi \)-calculus, we have found a formal semantics for message-passing concurrency. Although there are many programming languages with support for concurrency, and even many programming languages with support for message-passing concurrency, there is a stand-out example which is to message-passing concurrency as Smalltalk is to object orientation: Erlang.
Erlang\(^3\) is a language built on the principle that “everything is a process”. It was created in the late 1980’s at Ericsson by Joe Armstrong, Robert Virding, and Mike Williams, to manage telecom systems. There were three primary goals in that context:
- that the system scale from single systems (where many processes would run on one computer) to distributed systems (where processes could be distributed across many computers) with little or no rewriting,
- that processes would be sufficiently isolated that faults in one process would not (necessarily) affect the rest of the system, and
- that individual processes could be replaced live in a running system, allowing for smooth upgrades without any downtime.
These goals led Erlang to a quite extreme design, whereby Erlang programs use processes in the same way as Smalltalk programs use objects. Nearly all compound data is bound in processes, and one interacts with processes by sending and receiving messages. Just like in \( \pi \)-calculus, one can create processes and send channels in messages, allowing sophisticated interactions.
In fact, unlike Smalltalk’s objects, Erlang does support some primitive data types which are not processes. Integers, floating point numbers, tuples, lists, key-value maps, and Prolog-like atoms are all supported, and ports—Erlang’s name for one end of a communication channel—are not themselves processes (how would one ever send
a message if message channels were themselves processes which were controlled by messages?). So, it’s not quite true that everything is a process, but nearly everything is a process. In fact, it’s perfectly possible to treat Erlang as a mostly-pure functional language and write totally non-concurrent code. However, we won’t focus on that aspect of Erlang. Instead we’ll look only at its concurrency features.

\(^3\)Pronounced roughly like “air-lang” by people who know how to pronounce words, or “ur-lang” by the kind of troglodytes who pronounce “wiki” as “wick-y”.
Although Erlang has its own unique syntax, by this point, you should be able to guess how most of it works. Like Prolog, variables in Erlang are named with capital letters, and atoms and functions are named with lower-case letters, but its behavior is otherwise more similar to a functional language than a logic language.
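As a small illustrative sketch (ours, not from the original notes), the following function mixes both: `List` and `Sum` are variables because they are capitalized, while `ok`, `error`, and `empty` are atoms:

```erlang
%% Sum a list, tagging the result with an atom in the usual Erlang style.
sum([]) -> {error, empty};
sum(List) ->
    Sum = lists:sum(List),
    {ok, Sum}.
```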
We will take only a very cursory glance at Erlang, to discuss how processes and concurrency can be used to build more familiar data structures.
### 3.1 Modules
Erlang divides code into modules. We’ve avoided discussing modules for most of this course, but creating a process in Erlang requires modules, so we will briefly discuss them. An Erlang file is a module, and must start with a declaration of the module’s name. For instance, a module named `sorter` begins as follows:
```erlang
-module(sorter).
```
Most modules define some public functions and some private functions. Any functions which should be usable from other modules must be exported. For instance, if we define a function `merge` taking two arguments, we make that function visible like so:
```erlang
-export([merge/2]).
```
Note that functions in this context are named with their arity, in this case 2, in the same fashion as Prolog, so `merge/2` is a function named `merge` which takes two arguments.
Functions in the same module can be called with only their name:
```erlang
merge([1, 2, 3], [2, 2, 4])
```
Functions in other modules need to be prefixed with the target module:
```erlang
sorter:merge([1, 2, 3], [2, 2, 4])
```
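Putting these pieces together, a complete `sorter` module might look like the following sketch; the `merge` implementation here is our own illustration, not something prescribed by the notes:

```erlang
-module(sorter).
-export([merge/2]).

%% Merge two already-sorted lists into a single sorted list.
merge([], Ys) -> Ys;
merge(Xs, []) -> Xs;
merge([X | Xs], [Y | Ys]) when X =< Y -> [X | merge(Xs, [Y | Ys])];
merge(Xs, [Y | Ys]) -> [Y | merge(Xs, Ys)].
```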
### 3.2 Processes
A process is created in Erlang with the built-in `spawn` function. `spawn` is called with a module and function name, and the arguments for that function, and the newly created process starts running that function. `spawn` returns a process reference, which can then be used to communicate with the process.
For instance, the following function spawns two processes, passing the process reference of the first to the second. The first process runs the `pong` function in the `pingpong` module with no arguments, and the second runs the `ping` function in the `pingpong` module with the arguments 5 and the reference to the `pong` process:
```erlang
start() ->
    Pong = spawn(pingpong, pong, []),
    spawn(pingpong, ping, [5, Pong]).
```
Generally, functions perform a list of comma-separated actions like this.
Now, let’s write the `ping` and `pong` functions. `ping` will send the given number of “ping” messages to the given process, and expect an equal number of “pong” messages in response. `pong` will expect a sequence of “ping” messages, and send a “pong” to each. For this to work, we need to know Erlang’s syntax for sending and receiving messages.
Messages are sent in Erlang with the `!` operator, as `Target ! Message`. The message can be any Erlang value, but in practice, it is either an atom or a tuple in which the first element is an atom. The atom specifies the kind of message, and any arguments that the message has fill the rest of the tuple. In our case, the “pong” process does not have a reference to the “ping” process, but the “ping” process does have a reference to the “pong” process, so the “ping” message will need to send a reference along in order for the “pong” process to know how to reply. In π-calculus terms, “ping” must send the channel on which “pong” is to reply.
Messages are received in Erlang with a `receive` expression, which resembles a pattern match, in that it matches the shape of the message received. For a message to be successfully sent, the target process must be running a `receive` with a matching pattern, in the same way that for a π-calculus program to make progress, a sending process must have a matching receiving process. Because `receive` matches particular shapes of messages, a process can receive messages in any order, but process them in the order it chooses, simply by performing `receives` in sequence that match only the kinds of messages it wishes to process.
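As a small sketch of this (ours, not from the notes), the following process insists on handling a `hello` message before a `goodbye` message, even if they arrive in the opposite order; an unmatched `goodbye` simply waits in the mailbox until the second `receive` runs:

```erlang
%% Process hello before goodbye, regardless of arrival order.
ordered() ->
    receive hello -> io:format("got hello~n", []) end,
    receive goodbye -> io:format("got goodbye~n", []) end.
```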
Knowing this, we will write `ping` first:
```erlang
ping(0, _) -> io:format("Ping finished~n", []);
ping(N, Pong) ->
    Pong ! {ping, self()},
    receive
        pong -> io:format("Pong received~n", [])
    end,
    ping(N - 1, Pong).
```
Like in Haskell, functions can be declared in multiple parts with implicit patterns. In this case, the `ping` function simply outputs “Ping finished” to standard out and terminates if the first argument (the number of times to ping) is 0. If `N` is not 0, it sends a message to the `Pong` process, and then awaits a `pong` message back. The sent message is a tuple containing the atom `ping`, to indicate that this is a `ping` message, and a reference to the current process, obtained with the built-in `self` function. Once a `ping` has been sent and a `pong` received, it prints “Pong received”, and then recurses with one fewer `ping` left to send.
Now, let’s write `pong`:
```erlang
pong() ->
    receive
        {ping, Ping} ->
            Ping ! pong,
            io:format("Ping received~n", [])
    end,
    pong().
```
Where `ping` starts with a send, `pong` instead starts with a receive. Once `pong` has received a message matching the pattern `{ping, Ping}` (remember, `Ping` is a variable because it starts with a capital letter), it sends a `pong` message back (`pong` is an atom in this case, not the function), and then prints “Ping received”. We’ve intentionally written this with the print after the send, to demonstrate concurrency.
If the `start` function we wrote above is run, one possible output is:
```
Pong received
Ping received
Pong received
Ping received
Pong received
Ping received
Pong received
Ping received
Pong received
Ping received
Ping finished
```
However, because the `pong` process sends its `pong` before printing that the `ping` was received, other orders are possible, such as this one:
```
Pong received
Ping received
Pong received
Ping received
Pong received
Ping received
Pong received
Ping received
Pong received
Ping finished
Ping received
```
**Exercise 2.** What orders are *not* possible?
In this example, the pong process never actually terminates: only ping knew how many times to ping, so pong is left waiting endlessly for another ping that will never arrive. For cleanliness, we could instead add a terminate message like so:
```erlang
ping(0, Pong) ->
    io:format("Ping finished~n", []),
    Pong ! terminate;
ping(N, Pong) ->
    Pong ! {ping, self()},
    receive
        pong -> io:format("Pong received~n", [])
    end,
    ping(N - 1, Pong).

pong() ->
    receive
        terminate ->
            io:format("Pong finished~n", []);
        {ping, Ping} ->
            Ping ! pong,
            io:format("Ping received~n", []),
            pong()
    end.
```
Since pong does not recurse if terminate is received, it instead simply ends, terminating the process.
## 4 Processes as References
Erlang does not have mutable variables. But, surprisingly, they can be built with nothing but processes!
When representing mutable data in functional languages, we needed a way to put that data aside, separate from the program, in the heap (\(\Sigma\)). But, concurrent processes are already “aside” and separate from one another, so all we actually need is a way for a process to store a piece of data like a single mapping in the heap, similarly to Haskell’s monads.
To achieve this, we will make a module which exports three functions: ref/1, get/1, and put/2. The ref function will generate a reference, like OCaml’s ref. The get function will retrieve the value stored in a reference, like OCaml’s !. The put function will store a value in a reference, like OCaml’s :=. The actual value used for the reference will be a process reference, to a process carefully designed to work this way.
The complete solution follows:
```erlang
-module(refs).
-export([ref/1, refproc/1, get/1, put/2]).
ref(V) ->
    spawn(refs, refproc, [V]).

refproc(V) ->
    receive
        {get, Return} ->
            Return ! {refval, V},
            refproc(V);
        {put, Return, NV} ->
            Return ! {refval, NV},
            refproc(NV)
    end.

get(Ref) ->
    Ref ! {get, self()},
    receive
        {refval, V} -> V
    end.

put(Ref, V) ->
    Ref ! {put, self(), V},
    receive
        {refval, _} -> Ref
    end.
```
The ref function spawns a new process, returning the process reference. The new process represents the reference, and runs the function refproc, with the initial value as its argument. The get function sends a get message to the given process (which must be a reference process for this to work), and expects a refval message in response with the value stored in the reference. The put function sends a put message to the given process, containing the value to put in the reference, and also waits for a refval message. put doesn’t actually care about the value returned by refval; it’s only used to make sure that the message has been received and acted on before the current process continues.
The refproc function contains all of the interesting behavior, as it is the function used by the actual reference process. refproc must be exported because of how spawn works—there are ways to get around “polluting” the exported names in this way, but they’re not important for our purposes. refproc’s behavior is quite similar regardless of whether it receives a get message or a put message: it returns a value and then recursively calls refproc again. The difference is in which value. The value stored in the reference is in the (immutable) V variable. With get, it returns that value, and then recurses with the same value. With put, it instead expects a new value, NV, and returns and recurses with NV instead of V. In this way, although no variables are mutable, the reference itself is, since if it receives a put message, then it will respond to future get messages with the new value, until another put is received.
In shared-memory concurrency, two tasks may access the same mutable memory, and each may mutate it. With these references, we have in fact implemented shared-memory concurrency on top of message-passing concurrency: if two processes each have a reference to the reference process, then either may mutate its value, and both can see the other’s mutations. The fact that it is then extremely difficult to guarantee that the processes mutate things in the correct order is the usual argument for using message-passing concurrency instead of shared-memory concurrency in the first place, but this form of references demonstrates that neither is more powerful than the other.
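As a quick hypothetical demonstration of that point, two spawned processes can share a single reference process; the `demo` function below is our own sketch, built only from the `refs` module above:

```erlang
%% Two processes share one reference; a put in one is visible to a get in the other.
demo() ->
    R = refs:ref(0),
    spawn(fun() -> refs:put(R, 42) end),
    spawn(fun() -> io:format("observed ~p~n", [refs:get(R)]) end),
    R.
```

Whether the second process observes 0 or 42 depends on which message reaches the reference process first, which is exactly the ordering problem described above.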
## 5 Processes as Objects
If you’re accustomed to object-oriented programming, you’ve probably noticed that references from the reference module above act a lot like objects with a get and put method. Indeed, we can extend this metaphor to implement objects with only processes with immutable variables. For instance, this module implements a reverse polish notation calculator object very similar to the one we wrote for the Smalltalk segment of Module 1:
```erlang
-module(rpncalc).
-export([newrpn/0, rpn/1, push/2, binary/2, add/1, sub/1, mul/1, divide/1]).

newrpn() ->
    spawn(rpncalc, rpn, [[]]).

rpn(Stack) ->
    receive
        {push, Return, V} ->
            Return ! {rpnval, V},
            rpn([V | Stack]);
        {op, Return, F} ->
            rpnop(Stack, F, Return)
    end.

rpnop([R, L | Rest], F, Return) ->
    V = F(L, R),
    Return ! {rpnval, V},
    rpn([V | Rest]).

push(RPN, V) ->
    RPN ! {push, self(), V},
    receive
        {rpnval, _} -> V
    end.

binary(RPN, F) ->
    RPN ! {op, self(), F},
    receive
        {rpnval, V} -> V
    end.

add(RPN) ->
    binary(RPN, fun(L, R) -> L + R end).

sub(RPN) ->
    binary(RPN, fun(L, R) -> L - R end).

mul(RPN) ->
    binary(RPN, fun(L, R) -> L * R end).

divide(RPN) ->
    binary(RPN, fun(L, R) -> L / R end).
```
An RPN’s sole field, the stack, is represented by the `Stack` variable of the `rpn` function, which is the function run by an RPN process. The `push` and `binary` functions send an RPN process messages corresponding to one of its two supported “methods”: `push` and `op`. The `op` message carries a function, representing the binary operation to perform, so like in the Smalltalk version, specific operations can be implemented in terms of it.
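As a hypothetical usage sketch (not part of the module above), computing 3 4 + with the calculator looks like this:

```erlang
%% Push 3 and 4, then add them; add/1 returns the new top of the stack, 7.
example() ->
    RPN = rpncalc:newrpn(),
    rpncalc:push(RPN, 3),
    rpncalc:push(RPN, 4),
    rpncalc:add(RPN).
```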
Again, there are only immutable variables and processes, but the fact that sending and receiving messages establishes a sequence allows us to emulate more sophisticated features. Erlang has several libraries implementing more elegant object orientation, but still using processes as objects. Erlang is designed for programs to have thousands of processes, so it’s common to mix these styles as well; for instance, fields can be implemented as references which are in turn implemented as processes as in the previous section.
## 6 Implementation
Operating systems implement processes and threads: in OS terms, two processes do not share memory, but two threads within the same process do share memory. Erlang uses the term “process” because Erlang’s processes do not share memory. However, using operating system processes to implement Erlang processes would result in catastrophically poor performance! Indeed, even using operating system threads to implement Erlang processes would be similarly fraught. The reason is simply that switching between threads or processes in an operating system—that is, context switching—is an expensive operation.
Instead, highly-concurrent languages such as Erlang use so-called *green threads*. Basically, the Erlang interpreter must implement its own form of thread switching, and maintain the stack for each thread as a native data structure in the host language. When an Erlang process executes a `receive` and it does not immediately have a matching message available to act upon, Erlang instead sets aside the thread for that process and loads a stack for another process; in effect, it performs the same kind of context switch that an operating system performs, but with much less context to switch. It runs as many operating system threads as there are CPU cores available, but each one can switch between many green threads. It is not uncommon for an Erlang program to have tens of thousands of processes, so keeping green threads light is extremely important.
To know which processes are available to run, the Erlang implementation must also be able to pattern match very quickly. Typically, a process that is waiting for a message has its pattern stored, and when another process *sends* it a message, a pattern match is performed immediately, to determine if the waiting process can be awoken.
The other major implementation roadblock to message-passing concurrency is the actual message passing. In languages with mutable values, it is necessary to copy the message being passed, so that two processes cannot see the same mutable memory (which would be shared-memory concurrency, not message-passing concurrency). Erlang largely sidesteps this issue by being fundamentally immutable, so that it’s harmless to pass around values in any form. Two processes can share a pointer to a value if neither will actually mutate that value.
## 7 Miscellany
Erlang does actually support some (very limited) mutable data structures, but they may not be sent in a message.
The message-reply style we used in all of our examples was fairly brittle, in that we used a specific atom as the expected reply, but there’s nothing to stop another process from sending the same atom. Erlang supports creating unique values, called “references” to create needless confusion, with the built-in `make_ref` function. Usually, two processes which communicate would exchange such unique references to make sure that they’re receiving messages from the process they thought they were speaking to.
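A minimal sketch of that pattern (ours; the `{Request, Ref, self()}` message shape is an assumption, not something Erlang fixes) looks like this:

```erlang
%% Tag a request with a fresh reference so the reply cannot be confused
%% with an unrelated message that happens to have the same shape.
call(Server, Request) ->
    Ref = make_ref(),
    Server ! {Request, Ref, self()},
    receive
        {Ref, Reply} -> Reply
    end.
```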
Most Erlang programs are written in a so-called “let it crash” style. That is, instead of trying to anticipate all forms of errors, code is written to simply re-spawn processes that fail unexpectedly. Since processes are mostly independent of each other, large systems can operate even with major bugs. In the Erlang shell, you can re-compile modules, and swap out processes using the old module for processes using the new version, and it is thus often possible to fix bugs in a running system with no downtime. Many Erlang proponents cite this style as the major advantage of Erlang.
## 8 Fin
In the next (and final) module, we will very briefly look at how our mathematical model of programming languages interfaces with the real world, through models of systems programming.
## Rights
Copyright © 2020, 2021 Gregor Richards.
This module is intended for CS442 at University of Waterloo.
Any other use requires permission from the above named copyright holder(s).
INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
(19) World Intellectual Property Organization
International Bureau
(43) International Publication Date
16 April 2009 (16.04.2009)
(21) International Application Number:
PCT/US2008/079349
(22) International Filing Date:
9 October 2008 (09.10.2008)
(25) Filing Language:
English
(26) Publication Language:
English
(30) Priority Data:
60/978,643 9 October 2007 (09.10.2007) US
60/978,628 9 October 2007 (09.10.2007) US
61/030,739 22 February 2008 (22.02.2008) US
61/059,134 5 June 2008 (05.06.2008) US
(51) International Patent Classification:
G06F 17/28 (2006.01)
(54) Title: METHOD AND SYSTEM FOR ADAPTIVE TRANSLITERATION
(57) Abstract: A system and method for transliteration between two different character-based languages is provided. In some embodiments, the system and method provide transliteration from the Arabic language into Roman-based languages such as English. In some embodiments this system and method allows a user to more easily produce Arabic text on English or Roman-based computer hardware and software.
(74) Agent: HALLAJ, Ibrahim; Pepper Hamilton LLP, 50th Floor, 500 Grant Street, Pittsburgh, Pennsylvania 15219-2502 (US).
(72) Inventors and
(75) Inventors/Applicants (for US only): HADDAD, Habib [LB/US]; 6 Quirk Street, Watertown, Massachusetts 02472 (US). JUREIDINI, Imad [US/US]; 60 Maple Avenue, #3, Cambridge, Massachusetts 02139 (US).
(71) Designated States (unless otherwise indicated, for every kind of regional protection available): ARIPO (BW, GH, GM, KE, LS, MW, MZ, NA, SD, SL, SZ, TZ, UG, ZM, ZW), Eurasian (AM, AZ, BY, KG, KZ, MD, RU, TJ, TM), European (AT, BE, BG, CH, CY, CZ, DE, DK, EE, ES, FI, FR, GB, GR, HR, HU, IE, IS, IT, LT, LU, LV, MC, MT, NL, NO, PL, PT, RO, SE, SI, SK, TR), OAPI (BF, BJ, CF, CG, CI, CM, GA, GN, GQ, GW, ML, MR, NE, SN, TD, TG).
METHOD AND SYSTEM FOR ADAPTIVE TRANSLITERATION
I. TECHNICAL FIELD
[0001] The present disclosure relates to systems and methods for transliterating text, and in particular to transliteration between non-Roman character-based languages, such as Arabic, and Roman character-based languages, such as English.
II. RELATED APPLICATIONS
III. BACKGROUND
[0003] Now that computer use has become global, it is a technical challenge to provide speakers and readers of various languages with hardware and software adapted for use in their native languages and written character sets. Modern (e.g., personal) computing systems and other electronic information and communication devices typically include a processor, storage apparatus, and input/output apparatus through which the user of a device interacts to input or enter information into the device, and with which the device displays or outputs information back to the user.
[0004] One input apparatus is a keyboard, which generally includes a plurality of keys or buttons corresponding to the letters of an alphabet and other common numerals or characters. In the United States and most other countries, computer systems including an English-based keyboard with the letters of the English alphabet and the decimal numbers and other punctuation characters are available, and many major manufacturers of computing equipment produce products in English or Roman-based character sets only. Furthermore, most computer software and system and application programs are also created today with English speakers primarily or only in mind.
[0005] However, for users in locations where the local language is not based on the same Roman character set as English, this requires adaptation of the keyboard, altering the user operation of the keyboard, customizing the system and application software, or all of the foregoing, to allow entry of information into the computer in the local native language. In many areas of the world, native keyboards and system and application software do not exist, are cumbersome to learn and use, or are inadequate to provide natural and easy means for input and output of information to the computer in the native format or character set.
[0006] One way non-Roman character users have adapted to the use of Roman based computing infrastructures is by way of transliteration. Transliteration is a process used to transcribe text written in a character set into another character set. Transliteration allows users of computers or other electronic devices to express themselves in a language that is difficult to input into Roman-based computing systems for a number of reasons, including for example: the keyboard may not include the characters of the language; even if the keyboard includes the native language's characters, a user may not be familiar with the keyboard layout; and a user may not be fluent in the language, but knows the transliteration of certain names or phrases.
[0007] There still exists a need for better systems and techniques for transliteration between languages with different character sets. This includes in the apparatus for transliteration and underlying methods, as well as improvements in all ways of interacting with the transliteration system, including its user interface and architecture and design.
IV. SUMMARY
[0008] Transliteration relates to conversion of text or character sets from one set to another, for example, from Arabic to Roman character-based languages such as English, and the reverse process (e.g., Roman to Arabic). Some transliteration systems use a one-to-one mapping from one character set to the other. However, most people are not trained in these rigid transliteration systems, and the systems and processes for using them remain inadequate and difficult to implement and use. Nonetheless, ad-hoc transliteration systems are commonly used, relying on loose mappings. These mappings are generally based on phonetic similarities between Arabic and a Roman language the user is familiar with (for example English or French). Where phonetic mappings don't exist (for example, certain Arabic sounds have no equivalents in English), users tend to fall back on one or more commonly understood mappings, which can use alphabetic characters, numbers and/or punctuation (for example, "3" is commonly understood to be a transliteration for the Arabic "ع").
[0009] In the absence of an input mechanism for entering a language in its native character set, users sometimes type equivalent or known code characters in the Roman character set to represent the non-Roman characters. This process is referred to herein as "romanization." Note that the present disclosure is described in the context of an example of transliteration between the Arabic and Roman character-based (e.g., English) languages, but the present concepts can be extended to other schemes and character sets. For example, aspects of the present disclosure can be extended to Arabic-French, Farsi-Spanish, or other transliteration pairs.
[0010] Romanization can be performed in a number of ways. Converting Romanized text back into its original character set is difficult because multiple solutions may be available. In some embodiments, the system includes a flexible transliteration system that, when given a Romanized word (non-Roman text written using Roman characters), produces a list of ranked transliteration candidates.
[0011] The present disclosure, in a preferred embodiment, provides a system and method for transliterating a Romanized Arabic word or phrase into its Arabic form in the Arabic character set. The present discussion should be understood to be extendable to transliteration between other character set pairs as well. In some embodiments, the system's input includes a Roman character string. In some embodiments, the system's output includes a list of Arabic word candidates ranked according to a score, ranking, or other quantitative metric.
[0012] As discussed in greater detail elsewhere in this disclosure, a "score" or quantitative measure of confidence can be assigned to one or more of a list of
output candidate words or phrases. The score describes a level of confidence that a given output word is the Arabic word that the user meant to express using the Romanized input.
[0013] Different users might use a range of inputs to express the same desired output. The present system can be "fuzzy" because it is able to produce the same best guesses for a variety of reasonable inputs.
[0014] In other aspects, the present system and method allow a user to input Arabic text without learning specific transliteration rules, and allow the system to improve its accuracy and efficiency for the same user in future uses as well as for other users if such information is used in more than one session or between sessions of multiple users.
[0015] Some embodiments of the present system offer users a choice of Arabic word candidates. The user selects which output word they wish to use. The system can use statistical information about these selections to refine the scoring system. This produces output rankings more in line with the users' expectations. The system is therefore adaptive.
[0016] In one or more embodiments, a user interface is provided which has the following properties: it gives immediate feedback by showing transliteration candidates as the user types; it can display the meaning of the Arabic transliteration candidates; it allows the user to correct mistakes by modifying their transliteration selections at any time; it can automatically provide a best-guess transliteration if the user doesn't actively choose a transliteration; it remembers the user's previous transliteration selections; the transliteration rankings can be customized to a particular user, and; it can provide user selection feedback that can be used to improve the rankings of the transliteration system. Some or all of these features can be implemented in a computing system including a processor, memory, input/output structures, and executing programmed instructions.
[0017] One or more embodiments hereof can also be used to input non-Roman text in a number of applications, such as, but not limited to: inputting non-Roman text in a desktop computer application; inputting non-Roman text in a web-based application; and inputting non-Roman text on a mobile device, such as a cell phone.
V. BRIEF DESCRIPTION OF THE DRAWINGS
For a fuller understanding of the nature and advantages of the present invention, reference is made to the following detailed description of preferred embodiments in connection with the accompanying drawings, in which:
- Fig. 1 illustrates an exemplary flowchart of acts in a process for transliteration;
- Fig. 2 illustrates an exemplary weighted scoring method;
- Fig. 3 illustrates an exemplary selection post-processing method;
- Fig. 4 illustrates an exemplary selected Roman word and the candidate transliterations returned by the transliteration system;
- Fig. 5 illustrates an exemplary selected Roman word and the candidate transliterations returned by the transliteration system, including word meanings;
- Fig. 6 illustrates an exemplary selected Arabic word and the candidate transliterations obtained from the transliteration cache; and
- Fig. 7 illustrates an exemplary sequence of acts from a method implemented in a system for transliteration input and output.
VI. DETAILED DESCRIPTION
The present invention should not be considered limited to the particular embodiments described above, but rather should be understood to cover all aspects of the invention as fairly set out in the attached claims. Various modifications, equivalent processes, as well as numerous structures to which the present invention may be applicable, will be readily apparent to those skilled in the art upon review of the present disclosure. The claims are therefore intended to cover such modifications.
As mentioned earlier, the present system and method provide for transliteration between two character systems, for example a Roman-based character system and the Arabic language. Some terms used herein are presented below, not by way of limitation or exhaustion, but rather as exemplary of the use of the terms, several of which are known to those of skill in the art, and are intended to be taken as such where consistent therewith and where such differing usage is not required by the present illustrative examples.
String: a sequence of characters.
[0029] Roman word: a string consisting of a combination of: lower case Roman alphabet characters; upper case Roman alphabet characters; Roman numbers; and certain punctuation characters, such as the apostrophe.
[0030] Transliteration: The process of converting a Roman string into an Arabic string.
Examples include:
[0031] “marhaba” → “مرحبًا”;
[0032] “ph” → "ف"; and
[0033] “ma3” → “مع”.
[0034] Harakat: The Arabic language uses diacritic marks called harakat. Other non-Roman languages have similar annotations to differentiate sounds and meanings of similar tokens or strings. The harakat are often omitted from written Arabic, but are spoken. As a result, they play a role in the Romanization of Arabic words. Table I illustrates some common harakat from the example of the Arabic language.
<table>
<thead>
<tr>
<th>Harakat</th>
<th>Phonetic equivalent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Fatha:</td>
<td>“a” vowel</td>
</tr>
<tr>
<td>Damma:</td>
<td>“u” vowel</td>
</tr>
<tr>
<td>Kasra:</td>
<td>“i” vowel</td>
</tr>
<tr>
<td>Shadda:</td>
<td>Stressed consonant</td>
</tr>
<tr>
<td>Madda:</td>
<td>Glottal stop followed by long “a”</td>
</tr>
</tbody>
</table>
Table I
[0035] Token: A data structure comprising one or more of: a Roman string, e.g., "ph"; a transliteration, e.g., an Arabic string which is a plausible transliteration of the Roman string, including harakat, e.g., "ف"; or a position flag expressing how the token’s Roman string can be positioned in a Romanized string, for example: Start; Middle; End; Alone; Prefix; or Suffix.
[0036] A "start" token in some embodiments is followed by a "middle" or "end" token. A "prefix" token is similar, but can also be followed by another "prefix" or "beginning" token. A "prefix" token is typically not part of the stem of the word. A "suffix" token is similar to an "end" token, but it indicates that the token is typically not part of the stem of the word.
[0037] It is noted that multiple tokens may exist for a given Roman string, since multiple plausible transliterations are possible, and since the positioning flag may vary for a given Roman string and transliteration pair. For example:
[0038] "th" [as in 'three'] → "ث";
[0039] "th" [as in 'thus'] → "ذ"; and
[0040] "th" [as in 'thus'] → "ظ".
[0041] In some embodiments, metadata can be associated with a token or with each token, including:
[0042] a quality score, used to distinguish between tokens that are based on the same Romanized string. For example: “ئ” → “ة” has a higher quality score than “ئ” → “ه”; and “إ” → “إ” has a higher quality score than “إ” → “ة”;
[0043] a token length score. Longer tokens tend to more specifically capture the intent of the end user, and are therefore assigned a larger score. For example:
"kh" → "خ" scores better than "k" + "h" → "ك" + "ه"; and
[0044] a popularity score. Comprises a score that expresses how often a particular token is used. The value can be derived from collecting statistics from the user input patterns, or from analyzing Romanized Arabic text, in print or electronic form.
[0045] In addition, embodiments hereof employ some or all of the following elements.
[0046] A token database. This comprises a collection of tokens. Token databases are typically optimized for a particular usage scenario. A database can contain tokens optimized for the user's Roman language of choice. In one example, Spanish speakers may use different mappings from Roman to Arabic, compared to English speakers, because of phonetic differences between their two languages. For Spanish, a "j" [kh] → "خ" token makes sense, but not for English.
[0047] The token database may also be optimized for differences within Arabic itself (for example Lebanese colloquial versus classical Arabic), or for differences according to a particular user's preferences.
[0048] Intermediate transliteration quality database comprises a database of transliterations and their respective quality scores. These scores can be positive or negative. In some embodiments, the score is derived from the input language
rules or Arabic spelling rules. Many alternative scoring techniques can be employed, and the present discussion is meant to comprehend such other variations and possible implementations. Note that in the case of the Arabic spelling rules, database entries may be generated according to a set of rules. For example:
[0049] "ph" → "به" has a negative quality score, because the English pronunciation would never match the transliteration;
[0050] "ca" → "كا" has a high quality score because its English pronunciation and transliteration are unambiguous; and
[0051] a transliteration that follows a common Arabic spelling rule has a high quality score.
[0052] Arabic word database: comprises a database of Arabic words, which can optionally include harakats. In some instances the database includes a large number of (or even all) words of the language. Each word can optionally be assigned a popularity score as described above.
[0053] In some embodiments, the popularity score may be determined by a combination of factors, such as: the frequency with which a word occurs in a set of Arabic publications, in print or electronic form; the frequency with which the English word is input by the users; dialect variations (for example, Lebanese colloquial may have words with differing popularity scores than classical Arabic); and application contextual information (for example, a popularity database may be compiled that is geared towards technical users; in this case, technical words, uncommon in everyday usage, would be given a higher popularity score than otherwise).
[0054] Transliteration popularity database: this database can associate a popularity score with a Romanized string/Arabic transliteration pair. The score captures how often a Roman string is input to produce the Arabic output. The transliteration popularity score can be compiled from a number of sources, such as: a statistical analysis of users' input and selected output transliterations; or a statistical analysis of Romanized Arabic text, derived in paper or electronic form.
[0055] An interface between the present system and external components is made possible in some embodiments to receive or otherwise exchange such information between the present system for transliteration and the outside world. In some embodiments, the Internet or similar local or remote networks are coupled to the present transliteration system to send and/or receive information to and/or from the network.
[0056] The present system and method can include some portions based on or including a scheme or algorithm, but are not necessarily so restricted, and the present disclosure is not directed strictly to algorithms as such, but may employ algorithms in various forms and embodiments embodied by the totality of the present systems and methods. In some embodiments, hardware executing programmed instructions is implemented as part of the system and to carry out the present method.
[0057] Specific embodiments employ the following token database. In addition, the algorithm can optionally use an intermediate transliteration quality database and/or an Arabic word database, which may be generalized to languages other than Arabic, of course. In addition, a transliteration popularity database can be employed.
[0058] In some embodiments, the input to the algorithm comprises a Romanized Arabic word which the user wants to convert to native Arabic characters. The present system includes a processor that can execute stored instructions on data available to the system or stored thereon. In specific examples, the system includes a computer processor or similar apparatus such as those found on a personal computer (PC) or a handheld device like a personal digital assistant (PDA), smartphone, or other embedded system. The system may include hardware, firmware, and software in any combination. Various parts of the system can be included within one unit, in a box, or provided separately or obtained from different sources.
[0059] Referring to Fig. 1, a process is disclosed where an input is received at 100 (in a first character set). The input may be received from a human user or a machine (computer, software). The input is tokenized at 110 and the resultant tokens are scored at 120 according to any useful method, including the exemplary ones illustrated above. A sorting is done at 130 to arrange (sort) the possible outputs according to some criteria, including the exemplary ones illustrated above. An output is provided at 140 so that a human user or a machine (computer, software) obtains the sorted output. In one example, the highest scored result is presented first or as a default in a list of possible outputs. The user (again, can be a human user or a machine) provides some feedback at 150 by
indicating which output choice was selected. This feedback is used to update a database at 160 so that a database containing information can be more useful in future scoring acts. Also, the feedback can provide other new information to build up the database and expand it.
[0060] This feedback feature provides an adaptive aspect to the present system and method. It should be understood that the present schemes can be adapted for use with a number of front-end programming interfaces or user interfaces for inputting and outputting information therefrom.
[0061] The present system employs a method that can be programmed into a computing device to accomplish the present transliteration. In some embodiments, an exemplary method includes the following steps, which are not necessarily performed in the order presented for all instances:
[0062] (1) Tokenization. The present exemplary method finds the possible tokenizations of the input Roman word. Each tokenization in some instances fulfills the following conditions: the concatenation of the tokens' Roman strings must match the input word; and a token can only be used if its positioning requirements are met. Again, these steps are presented for an exemplary embodiment or more of the present system and method and are not exhaustive or limiting of other possible examples. Nonetheless, in the exemplary embodiment here, Table II shows one possible tokenization for the input word "khawf":
<table>
<thead>
<tr>
<th>Tokens</th>
<th>Token Roman string</th>
<th>Token position flag</th>
<th>Token transliteration</th>
</tr>
</thead>
<tbody>
<tr>
<td>Token #1</td>
<td>kh</td>
<td>start</td>
<td>خ</td>
</tr>
<tr>
<td>Token #2</td>
<td>a</td>
<td>middle</td>
<td>fatha ( َ )</td>
</tr>
<tr>
<td>Token #3</td>
<td>w</td>
<td>middle</td>
<td>و</td>
</tr>
<tr>
<td>Token #4</td>
<td>f</td>
<td>end</td>
<td>ف</td>
</tr>
</tbody>
</table>
Table II
[0063] This tokenization yields the transliteration (khawf) "خَوف".
[0064] (2) Tokenization scoring. An aggregate score is determined for each tokenization produced. This score is the weighted sum of multiple sub-scores. Sub-scores can be generated at three levels: a) token level, b) intermediate level, and c) word level. Each of these is discussed in more detail below for the present exemplary embodiment:
Token level sub-scores are computed by examining the properties of the tokenization's constituent tokens. A token quality sub-score may be obtained by combining the individual tokens' quality scores; a token length sub-score may be obtained by combining the individual tokens' length scores; and a token popularity sub-score may be obtained by combining the individual tokens' popularity scores. Note that any or all of the foregoing can be used in any combination, depending on the performance and implementation desired.
Intermediate level scoring is computed by examining groups of tokens in the tokenization. Each group corresponds to a Roman substring and an Arabic transliteration substring. The pair's score is obtained from the intermediate transliteration quality database. Sub-scores for all possible subgroups are combined to form an overall intermediate sub-score.
Word level sub-scores are computed by examining the tokenization's transliteration as a whole, ignoring the specific tokenization, with the exception of prefixes. If the tokenization includes one or more prefix tokens, they may be stripped when calculating word level sub-scores. Similarly, a suffix token may be stripped from the tokenization when computing word-level sub-score.
In word-level scoring, a (positive) dictionary match sub-score may be assigned if the tokenization's transliteration has a strict match in the Arabic word database. When looking for a strict match, prefixes and suffixes can be stripped from the transliteration. A smaller (positive) sub-score may be assigned if the transliteration has a loose match in the Arabic word database.
Fig. 2 illustrates an exemplary outline of a scoring process employing token level scoring 200 that includes token quality score 202, token length score 204, and token popularity (or frequency) score 206. Also, an intermediate level scoring 210 includes scoring an intermediate transliteration quality 212. A word level scoring 220 includes a dictionary match score 222, an Arabic word popularity (or frequency) score 224, and a transliteration popularity score 226. Weighting of the scores can be applied at any stage that is useful, for example at 230. An aggregate score is obtained after weighting at 240. The weighted score can be used to determine an ordering or sorting of possible matches and outputs to be presented to a user.
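Expressed as a formula (our paraphrase of the above, not language from the application), the aggregate score of a tokenization \( t \) is a weighted sum of its sub-scores:
\[
S(t) = \sum_{i} w_i \, s_i(t),
\]
where the \( s_i(t) \) range over the token-level, intermediate-level, and word-level sub-scores described above, and the \( w_i \) are the corresponding weights.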
In some embodiments, a loose dictionary match can be found by removing one or more short vowels (fatha, kasra, damma). For example, if the user enters "kabada" the system will produce (among others) the transliteration (kabada) "كَبَدَ". This transliteration may not be found in the word database with the short vowels. Stripping them gives (k-b-d) "كبد", which may exist in the word database. This is a loose match.
In other embodiments, a loose dictionary match may also be obtained by removing one or more shaddas (stress characters), e.g., (berri) "برّي" may not be in the word database, but (beri) "بري" may be.
In yet other embodiments, a loose dictionary match may further be obtained by removing one or more maddas (extenders), or replacing one or more alef-maddas (آ) with alef-hamzas (أ), e.g., (aameen) "آمين" may not be in the word database, but (ameen) "أمين" or (ameen) "امين" may be.
In still other embodiments, a loose dictionary match may be obtained by replacing one or more alef-hamzas (أ) with plain alefs (ا), e.g., (akala) "أكل" may not be in the word database, but (akala) "اكل" may be.
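The loose-matching variants above amount to trying progressively more aggressive normalizations of the candidate transliteration. The Python sketch below illustrates one possible way to generate those looser forms; the Unicode codepoints are the standard Arabic diacritics and alef variants, but the lookup policy itself is an assumption rather than the patent's exact algorithm.

```python
# Illustrative sketch of the loose-match normalizations described above.

HARAKAT = {"\u064B", "\u064C", "\u064D",            # tanween forms
           "\u064E", "\u064F", "\u0650", "\u0652"}  # fatha, damma, kasra, sukun
SHADDA = "\u0651"
ALEF_MADDA = "\u0622"   # آ
ALEF_HAMZA = "\u0623"   # أ
ALEF = "\u0627"         # ا

def loose_forms(word):
    """Yield progressively looser forms of an Arabic word for dictionary lookup."""
    yield word                                        # strict form first
    no_vowels = "".join(c for c in word if c not in HARAKAT)
    yield no_vowels                                   # short vowels stripped
    yield no_vowels.replace(SHADDA, "")               # shaddas stripped
    yield no_vowels.replace(ALEF_MADDA, ALEF_HAMZA)   # alef-madda -> alef-hamza
    yield no_vowels.replace(ALEF_HAMZA, ALEF)         # alef-hamza -> plain alef

def loose_match(word, dictionary):
    """True if any normalized form of `word` appears in the word database."""
    return any(form in dictionary for form in loose_forms(word))
```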
According to some exemplary implementations, a score of 0 (zero) may be assigned if no matches for the transliteration can be found in the Arabic word database.
A non-Roman, e.g., Arabic, word popularity sub-score may be obtained from the popularity score of the transliteration in the Arabic word database. The score is higher with higher popularity, although the relation between the popularity and the score need not be linear (it can be logarithmic, have steps, etc.). As in the case of the existence sub-score, the popularity score is higher if the match is strict. This score uses the same strict/loose matching rules as the existence score. For example, when the user enters "ana" the system will produce (among others) the transliterations (ana-1) "آنا" and (ana-2) "أنا". However, the use of (ana-2) "أنا" is much more frequent than (ana-1) "آنا" according to the Arabic word database and is therefore given a higher sub-score.
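A small sketch of how a raw frequency count might be turned into a non-linear (here, logarithmic) popularity sub-score that also rewards strict matches over loose ones; the log scaling and the loose-match discount factor are illustrative assumptions.

```python
# Hypothetical popularity sub-score: logarithmic in the raw count, discounted
# for loose matches. Both choices are assumptions made for illustration.
import math

def popularity_subscore(count, strict=True):
    if count <= 0:
        return 0.0
    score = math.log1p(count)        # diminishing returns for very common words
    return score if strict else 0.5 * score
```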
A transliteration popularity sub-score may be obtained by looking up the Roman input/Arabic transliteration pair in the transliteration popularity database. The sub-score is 0 (zero) if the pair is not found in the database according to some embodiments. For example, "marhaba" → "مرحبا" has a high
sub-score, because it is very commonly used, whereas "marrhaba" → "مَرَحَبَة" has a low sub-score, because it is not commonly used.
[0077] The sub-score weights are preferably chosen to optimize or maximize the system’s ability to produce a best-guess transliteration from a predefined database of frequently used Romanized Arabic words and their transliterations. Again, similar notions can be applied for other non-Roman alphabets and vocabularies.
[0078] (3) Tokenization sorting. The list of tokenizations is sorted according to each tokenization’s aggregate score.
[0079] (4) Output. A new list is generated from the sorted tokenization list. This list contains the transliteration of each tokenization, as well as its associated score. Typically the transliteration is stripped of its harakats, since they are not typically used in written Arabic. However, the transliteration can be provided with harakats, or with different combinations of harakats. If multiple harakat combinations are output, they can be ranked by order of popularity in the Arabic word database. The output is provided to the user in the form of a choice of transliterations.
[0080] (5) User action analysis. The user of the present system makes a selection of which candidate transliteration he or she wants to use. These steps can be carried out by a human, a machine in a system, or a combination of the two. A statistical analysis of the user’s selections can be used to refine data.
[0081] Examples of the data that can be refined in this process include: the token popularity scores in the token database, reflecting e.g., how popular is the "th" (as in 'this') → "ذ" token versus the "th" (as in 'three') → "ث" token; the Arabic word popularity scores in the Arabic word database, reflecting e.g., how popular is the use of the word (keefik) "كيفك"; the transliteration popularity score in the transliteration database, reflecting e.g., how popular is it to input "allah" to mean "الله".
[0082] If the Arabic word database does not include harakat information, the user selections can be used to infer the proper harakat form of a word. For example, if users frequently use "kabada" → "كبد", one can infer that the word (k-b-d) "كبد" is written (kabada) "كَبَدَ" with harakats. The Arabic word database can subsequently be updated to include this information. Additionally, frequently occurring input/output selection pairs can be added to the transliteration database used for optimizing the weights used in the scoring process.
[0083] Fig. 3 illustrates an exemplary process for the post-processing of user selections in the present system and method. At 300, the user (man or machine) makes a selection from a list of presented transliteration candidates output to the user. The user selection is post-processed at 310 in any of several ways that result in enhancements to the system, database, algorithms, and future performance thereof. For example, the post-processing 310 can result in additions, modifications, deletions, or improvements to a token database 320, an Arabic word database 330, a transliteration popularity (or frequency) database 340, and a transliteration database for scoring weight optimization 350.
[0084] As mentioned earlier, the present system and method provide for transliteration between two character systems, for example a Roman-based character system and the Arabic language. In one or more exemplary embodiments, an input method can be used in any user interface element that allows textual input and selection. For example, this can be an HTML INPUT element or an HTML TEXTAREA element. We will refer to the input user interface element as a textbox, but other user interface elements can be employed as well.
[0085] In some aspects, the present system and method provides a transliteration system that, given a Romanized word, produces a list of ranked transliteration candidates. In some embodiments this includes translations of selected words or phrases. The transliteration system can optionally provide the meaning of each transliteration candidate. Some embodiments of the system use a transliteration cache to locally store the transliteration candidates that are returned from the transliteration system for quick access.
[0086] Fig. 4 illustrates an exemplary user interface 400 for presenting transliteration candidate outputs to a user. An output window 405, which can be displayed on a computer or hand-held device display monitor, holds visible information to convey the outputs to the user. Note that audible alternatives can be used instead of or in conjunction with the presently described visual output interface 400 to accommodate those with disabilities or other needs.
[0087] An input "marhaba" is shown at 410. The interface includes a highlighting element, such as a text highlighter 420 to show the presently-selected
option 430. Other options 432, 434, and 436 are available and shown, but not presently selected. The user can select the other options 432, 434, or 436 by using an input interface (e.g., touch screen, scroll wheel, mouse, keyboard, voice input, etc.) to move the highlighting 420 to the user's desired selection. Note that the output options can be sorted as described herein by ordering them for example.
[0088] Fig. 5 illustrates an exemplary user interface 500 that provides a transliteration feature. For a given input 510, the system provides a list of sorted possible outputs. In this case, each possible output is given in its Arabic form (520 - 526) and also along with its corresponding English translation (530 - 536) respectively.
[0089] Fig. 6 illustrates another exemplary user interface 600 that provides a root input word in a first character set and associates it with a group of outputs beneath a corresponding word in a second character set. The outputs are obtained from a cache as described above.
[0090] Now referring to Fig. 7, an exemplary system's logic flow is described below. It should be appreciated that the interface presents an exemplary and illustrative embodiment of the steps in a method, not intended to be limiting or exhaustive of other embodiments, where additional steps can be performed, or some of the indicated steps removed as appropriate.
[0091] At 700, the user enters, or selects text in the textbox. This can include typing with the keyboard into the textbox, copying and pasting text into the textbox, or selecting existing text in the textbox, or other ways of entering information into a place adapted to receive user input.
[0092] At 702, the selected word is identified. When typing, the currently typed word is considered to be selected. If multiple words are selected, the system does nothing further; if a single word is selected, the system proceeds.
[0093] At 704, the system determines whether the selected word is comprised of exclusively Arabic characters, exclusively Roman characters, or neither of those two cases. If the selected word is neither, the system does nothing further. If the selected word is purely Roman, it is looked up in the transliteration cache; if transliteration candidates are found in the cache, the system proceeds as indicated. If the selected word is purely Arabic, the system proceeds as indicated below.
[0094] At 706, the system requests the transliteration candidates from the transliteration system. Once the transliteration candidates are received, they are stored in the transliteration cache at 708, along with their meanings if available. If the selected word is Arabic, the system looks it up in the transliteration cache; if it is not found in the cache, the system does nothing further; if it is found, the system retrieves the transliteration candidates from the cache. Output messages and signals may be delivered from the system to another component or module or to a user (e.g., through a readable display) to indicate the progress at each step of the process.
[0095] At 710, the system displays a user interface element listing the Romanized word and transliteration candidates. Each entry is selectable by the user. If available, the Arabic word's meaning can be displayed next to it.
[0096] When the user makes a selection the original word is replaced by the new selection; the selection is stored, either locally or remotely. This will allow the selection to be remembered if the same word is input at a later time.
[0097] Feedback can be provided to the transliteration system regarding the user selection. This feedback can help improve the accuracy of the transliteration system.
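To make the flow at steps 702 through 710, and the selection memory just described, more concrete, the following Python sketch shows one possible local implementation. The function names, regular expressions, and cache policy are illustrative assumptions rather than the patented implementation.

```python
# Hedged sketch of the textbox flow: classify the selected word's script,
# consult a local transliteration cache, fall back to the (remote)
# transliteration system, and remember the user's choice.
import re

ARABIC = re.compile(r"^[\u0600-\u06FF]+$")
ROMAN = re.compile(r"^[A-Za-z]+$")

cache = {}   # word -> ranked list of transliteration candidates

def handle_selection(word, request_candidates):
    """Return candidates to display, or None if the system should do nothing."""
    if ARABIC.match(word):
        return cache.get(word)        # Arabic word: only a cache hit is shown
    if not ROMAN.match(word):
        return None                   # mixed or other scripts: do nothing
    if word not in cache:             # Roman word: ask the transliteration system
        cache[word] = request_candidates(word)
    return cache[word]

def record_choice(word, chosen):
    """Remember the user's pick so it can be ranked first next time."""
    candidates = [c for c in cache.get(word, []) if c != chosen]
    cache[word] = [chosen] + candidates
```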
[0098] In some embodiments, the user may input special characters or key combinations into the textbox. One of these can be used to prevent the automatic transliteration of the preceding word. For example, if a user were to input CTRL-SPACE, the system could input a space character without automatically transliterating the preceding word.
[0099] The system can also provide the user with a method to disable transliterations altogether. This can be in the form of a user interface element, such as a button, and/or a special character or key combination input into the textbox. A method can also be provided to re-enable the transliteration functionality.
[0100] Textboxes in software applications can have a text direction setting. The text direction is either left-to-right or right-to-left. An application can either impose a text direction on a user, or it may provide a mechanism to switch the text direction, such as a button. When a textbox has a left-to-right text direction, one may infer that either the application developer or the user intends to use the textbox for text written mostly in a left-to-right language, and vice-versa.
The system can therefore detect the text-direction of the textbox, and automatically enable or disable the automatic transliteration features of the user interface.
[00101] In some specific embodiments, if the textbox direction is left-to-right, the system can infer that the user intends to input mostly Roman text. It can therefore disable the automatic transliteration which would normally take place when punctuation is input. In other specific embodiments, if the textbox direction is right-to-left, the system can infer that the user intends to input mostly Arabic text, and can enable the automatic transliteration.
[00102] One or more embodiments of the present system and method can further accomplish ranking of transliteration options. The transliteration system can take into account one or more of the following factors when determining the transliteration candidates and their ranking, for example, in one or more embodiments:
[00103] The user's native language. This can be determined by the user actively choosing what language he or she prefers; the user's locale can be determined by examining the "Accept-Language" header in an HTTP request made by the user's browser; and/or the browser language setting, obtained for example using javascript.
[00104] The transliteration language, including dialect variations. This can be based on the user's previous selections and/or the selections of a population of users. For example: users from the same geographic location, users using the same input language; and/or users of a certain age group, or sharing other demographic attributes.
[00105] In addition, the present system and method can provide communication functions. The transliteration system can run either locally or remotely. A remote transliteration system can be hosted on one or more servers. The request and response can be encoded in many different forms.
[00106] In some embodiments, the transliteration system can be hosted on a web server. A hyper text transfer protocol (HTTP) request can ask the web server for transliterations of a given word. This word, and other relevant information, such as a user id, language, or other preferences, can be encoded in the uniform resource locator (URL), a cookie, HTTP POST header or body, SOAP transaction, XML document, etc. The list of transliteration candidates can
be returned in the HTTP response's body, encoded in any number of formats, such as plain text, JSON, XML, SOAP, etc.
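For illustration, a client request to such a remote transliteration service might look like the following Python sketch. The endpoint URL, query parameters, and JSON response shape are hypothetical and stand in for whichever encoding (URL, cookie, POST body, SOAP, XML, etc.) an implementation actually chooses.

```python
# Hypothetical HTTP client for a remote transliteration service; the URL,
# parameter names, and response format below are assumptions for illustration.
import json
import urllib.parse
import urllib.request

def fetch_candidates(word, base_url="https://example.com/transliterate"):
    query = urllib.parse.urlencode({"q": word, "lang": "ar"})
    with urllib.request.urlopen(f"{base_url}?{query}") as response:
        payload = json.load(response)     # e.g. {"candidates": ["مرحبا", ...]}
    return payload.get("candidates", [])
```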
[00107] Some embodiments hereof include the use of a textbox to accomplish a data input function. The textbox can be any user interface element that receives text as input. For example: an HTML or XHTML input element; an HTML or XHTML text area element; or a textbox or rich textbox control; and a custom text input element that allows text input and selection, such as a word processor or email editor.
[00108] It can be seen that the present system and method can be applied to provide input and/or output to/from a transliterator. It should be also seen that the actual and precise nature of the transliterator or transliteration system associated with the present input and output system is not limiting of the present input and output system. That is, a number of transliteration engines, programs, machines, and algorithms are potentially suited for use herewith.
VII. CLAIMS
[00109] We claim:
1. A method for adaptive transliteration between a first and a second character set, comprising:
receiving an input comprising a set of input tokens in a first character set;
processing a subset of said input tokens substantially in real-time as said input tokens are received by comparing said subset of the input tokens against a database of known tokens in a second character set;
determining a set of output tokens in said second character set; and
providing an output representative of said output tokens.
2. The method of claim 1, said input tokens comprising a plurality of characters from said first character set.
3. The method of claim 1, said output tokens comprising a plurality of characters from said second character set.
4. The method of claim 1, said comparing step comprising comparing the input tokens with a corresponding group of possible entries in a database associating said input and said output tokens.
5. The method of claim 1, said receiving comprising receiving typed characters entered by a user into an input element of a user interface.
6. The method of claim 5, said input element comprising a text box adapted for receiving said input token.
7. The method of claim 1, said providing an output comprising providing an output to an output element of a user interface.
8. The method of claim 7, further comprising accepting a user selection from a plurality of possible options available to said user from said output.
9. The method of claim 8, further comprising processing said user selection and providing a result of said processing to affect future transliteration operations.
10. A system for transliteration between a first character set and a second character set, comprising:
an input element for receiving input token entries in a first character set;
a processor for processing at least some of said input tokens substantially in real-time;
an output element for providing an output comprising at least one output token in said second character set, corresponding to said input tokens, substantially in real-time.
11. The system of claim 10, further comprising a database, coupled to said processor, for storing tokens in said first and second character sets.
12. The system of claim 10, said input element comprising a text box for receiving said input tokens from a user.
13. The system of claim 10, said output element comprising an area for display of said output tokens to a user.
14. The system of claim 10, further comprising a transliteration score engine that receives a pair of associated tokens and provides an output indicative of a quantitative measure of the correlation between said pair of associated tokens.
15. The system of claim 10, further comprising a transliteration popularity database that receives a possible transliteration input and provides an output dependent on a metric indicative of the popularity of said possible transliteration.
16. A method for transliterating information, comprising:
converting an input set of characters in a first character set into a set of input tokens in said first character set;
determining at least one match to said input tokens from a possible set of output tokens in a second character set;
scoring said at least one match to determine a best match between said input tokens and said output tokens; and
presenting said output tokens based on said scoring such that a best suggested output token is preferentially presented.
17. The method of claim 16, said preferentially presenting of said best suggested output token further comprising sorting said output tokens so that said best suggested output token takes a primary place in said sorting.
18. The method of claim 16, further comprising analyzing a user selection of one of a plurality of presented output tokens.
19. The method of claim 18, further comprising storing a result of said user selection in a database for improving future transliteration operations.
Fig. 2 (drawing sheet)
Fig. 7A (drawing sheet): flowchart of the input-handling logic. The user types, pastes, or selects text in the textbox; the system checks whether the input is punctuation or the special "space without transliteration" character, identifies the word before the punctuation or the selected word, and determines whether that word is written in Roman or Arabic characters. Only purely Roman single words are passed on for transliteration; Arabic, mixed-script, or multi-word selections are left untouched.
INTERNATIONAL SEARCH REPORT
International application No.
PCT/US 08/79349
A CLASSIFICATION OF SUBJECT MATTER
IPC(8) - G06F 17/28 (2008.04)
USPC - 704/2
According to International Patent Classification (IPC) or to both national classification and IPC
B. FIELDS SEARCHED
Minimum documentation searched (classification system followed by classification symbols)
USPC - 704/2
Documentation searched other than minimum documentation to the extent that such documents are included in the fields searched
USPC - 704/8; 704/9; 704/EI 5.003; 715/264; 382/185
Electronic data base consulted during the international search (name of data base and, where practicable, search terms used)
DialogWEB; Google
Search Terms Used: transliteration, token, database, output, sort, compar, data, base, data-base, display, prompt, popular, correlat, real-time, real, time, realtime, automatic, synch, simultaneous, text, string, character
C. DOCUMENTS CONSIDERED TO BE RELEVANT
<table>
<thead>
<tr>
<th>Category*</th>
<th>Citation of document, with indication, where appropriate, of the relevant passages</th>
<th>Relevant to claim No</th>
</tr>
</thead>
<tbody>
<tr>
<td>X</td>
<td>US 5,432,948 A (DAVIS et al.) 11 July 1995 (11.07.1995), Abs. Col 2 Ins 18-22, Ins 61-62, Col 3 Ins 19-22, Col 6 Ins 25-31, Col 7 Ins 4-5, Col 8 Ins 7-8, Ins 43-45, Col 9 Ins 41-42, Col 10 Ins 5-7 and Ins 15-17</td>
<td>1-7, 10-14, and 16-17</td>
</tr>
<tr>
<td>Y</td>
<td>US 6,546,388 B1 (EDLUND et al.) 08 April 2003 (08.04.2003), Col 6 Ins 65-66</td>
<td>15</td>
</tr>
</tbody>
</table>
Further documents are listed in the continuation of Box C.
* Special categories of cited documents
"A" document defining the general state of the art which is not considered to be of particular relevance
"E" earlier application or patent but published on or after the international filing date
"L" document which may throw doubts on priority claim(s) or which is cited to establish the publication date of another citation or other special reason (as specified)
"O" document referring to an oral disclosure, use, exhibition or other means
"P" document published prior to the international filing date but later than the priority date claimed
"T" later document published after the international filing date or priority date and not in conflict with the application but cited to understand the principle or theory underlying the invention
"X" document of particular relevance; the claimed invention cannot be considered novel or cannot be considered to involve an inventive step when the document is taken alone
"Y" document of particular relevance; the claimed invention cannot be considered to involve an inventive step when the document is combined with one or more other such documents, such combination being obvious to a person skilled in the art
& member of the same patent family
Date of the actual completion of the international search
02 December 2008 (02.12.2008)
Date of mailing of the international search report
18 DEC 2008
Name and mailing address of the ISA/US
Mail Stop PCT, Attn: ISA/US, Commissioner for Patents
P.O. Box 1450, Alexandria, Virginia 22313-1450
Facsimile No. 571-273-3201
Authorized officer: Lee W. Young
PCT Helpdesk 571-272-4300
PCT OSP 571-272-7774
Form PCT/ISA/210 (second sheet) (April 2007)
Data Structures for Disjoint Sets
In this lecture, we describe some methods for maintaining a collection of disjoint sets. Each set is represented as a pointer-based data structure, with one node per element. We will refer to the elements as either ‘objects’ or ‘nodes’, depending on whether we want to emphasize the set abstraction or the actual data structure. Each set has a unique ‘leader’ element, which identifies the set. (Since the sets are always disjoint, the same object cannot be the leader of more than one set.) We want to support the following operations.
- **MAKESET**(*x*): Create a new set \{*x*\} containing the single element *x*. The object *x* must not appear in any other set in our collection. The leader of the new set is obviously *x*.
- **FIND**(*x*): Find (the leader of) the set containing *x*.
- **UNION**(*A*, *B*): Replace two sets *A* and *B* in our collection with their union *A* ∪ *B*. For example, **UNION**(*A*, **MAKESET**(*x*)) adds a new element *x* to an existing set *A*. The sets *A* and *B* are specified by arbitrary elements, so **UNION**(*x*, *y*) has exactly the same behavior as **UNION**(**FIND**(*x*), **FIND**(*y*)).
Disjoint set data structures have lots of applications. For instance, Kruskal’s minimum spanning tree algorithm relies on such a data structure to maintain the components of the intermediate spanning forest. Another application is maintaining the connected components of a graph as new vertices and edges are added. In both these applications, we can use a disjoint-set data structure, where we maintain a set for each connected component, containing that component’s vertices.
### 11.1 Reversed Trees
One of the easiest ways to store sets is using trees, in which each node represents a single element of the set. Each node points to another node, called its **parent**, except for the leader of each set, which points to itself and thus is the root of the tree. **MAKESET** is trivial. **FIND** traverses
parent pointers up to the leader. \textsc{Union} just redirects the parent pointer of one leader to the other. Unlike most tree data structures, nodes do not have pointers down to their children.
\begin{algorithmic}
\State \textbf{MakeSet}(x):
\State \hspace{1em} $\textbf{parent}(x) \leftarrow x$
\State \hspace{1em} $\textbf{depth}(x) \leftarrow 0$
\State \hspace{1em} return $x$
\end{algorithmic}
\begin{algorithmic}
\State \textbf{Find}(x):
\State \hspace{1em} while $x \neq \textbf{parent}(x)$
\State \hspace{2em} $x \leftarrow \textbf{parent}(x)$
\State \hspace{1em} return $x$
\end{algorithmic}
\begin{algorithmic}
\State \textbf{Union}(x, y):
\State \hspace{1em} $\overline{x} \leftarrow \textbf{Find}(x)$
\State \hspace{1em} $\overline{y} \leftarrow \textbf{Find}(y)$
\State \hspace{1em} if $\text{depth}(\overline{x}) > \text{depth}(\overline{y})$
\State \hspace{2em} $\text{parent}(\overline{y}) \leftarrow \overline{x}$
\State \hspace{1em} else
\State \hspace{2em} $\text{parent}(\overline{x}) \leftarrow \overline{y}$
\State \hspace{2em} if $\text{depth}(\overline{x}) = \text{depth}(\overline{y})$
\State \hspace{3em} $\text{depth}(\overline{y}) \leftarrow \text{depth}(\overline{y}) + 1$
\end{algorithmic}
Merging two sets stored as trees. Arrows point to parents. The shaded node has a new parent.
\textsc{MakeSet} clearly takes $\Theta(1)$ time, and \textsc{Union} requires only $O(1)$ time in addition to the two \textsc{Finds}. The running time of \textsc{Find}(x) is proportional to the depth of $x$ in the tree. It is not hard to come up with a sequence of operations that results in a tree that is a long chain of nodes, so that \textsc{Find} takes $\Theta(n)$ time in the worst case.
However, there is an easy change we can make to our \textsc{Union} algorithm, called \textit{union by depth}, so that the trees always have logarithmic depth. Whenever we need to merge two trees, we always make the root of the shallower tree a child of the deeper one. This requires us to also maintain the depth of each tree, but this is quite easy.
With this new rule in place, it’s not hard to prove by induction that for any set leader $\overline{x}$, the size of $\overline{x}$’s set is at least $2^{\text{depth}(\overline{x})}$, as follows. If $\text{depth}(\overline{x}) = 0$, then $\overline{x}$ is the leader of a singleton set. For any $d > 0$, when $\text{depth}(\overline{x})$ becomes $d$ for the first time, $\overline{x}$ is becoming the leader of the union of two sets, both of whose leaders had depth $d - 1$. By the inductive hypothesis, both component sets had at least $2^{d-1}$ elements, so the new set has at least $2^d$ elements. Later \textsc{Union} operations might add elements to $\overline{x}$’s set without changing its depth, but that only helps us.
Since there are only $n$ elements altogether, the maximum depth of any set is $\log n$. We conclude that if we use union by depth, both \textsc{Find} and \textsc{Union} run in $\Theta(\log n)$ time in the worst case.
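A direct Python transcription of the reversed-tree structure with union by depth may make the pseudocode above concrete. The dictionary-based representation (and the early return when both arguments already share a leader) is an implementation choice, not part of the notes.

```python
# Reversed trees with union by depth, mirroring the pseudocode above.
parent = {}
depth = {}

def make_set(x):
    parent[x] = x
    depth[x] = 0
    return x

def find(x):
    while x != parent[x]:      # climb parent pointers up to the leader
        x = parent[x]
    return x

def union(x, y):
    x_root, y_root = find(x), find(y)
    if x_root == y_root:
        return
    if depth[x_root] > depth[y_root]:
        parent[y_root] = x_root          # shallower tree becomes a child
    else:
        parent[x_root] = y_root
        if depth[x_root] == depth[y_root]:
            depth[y_root] += 1           # equal depths: merged tree grows by one

# Example: 1, 2, and 3 end up with a common leader.
for i in (1, 2, 3):
    make_set(i)
union(1, 2)
union(2, 3)
assert find(1) == find(3)
```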
### 11.2 Shallow Threaded Trees
Alternately, we could just have every object keep a pointer to the leader of its set. Thus, each set is represented by a shallow tree, where the leader is the root and all the other elements are its
children. With this representation, **MakeSet** and **Find** are completely trivial. Both operations clearly run in constant time. **Union** is a little more difficult, but not much. Our algorithm sets all the leader pointers in one set to point to the leader of the other set. To do this, we need a method to visit every element in a set; we will ‘thread’ a linked list through each set, starting at the set’s leader. The two threads are merged in the **Union** algorithm in constant time.

*Bold arrows point to leaders; lighter arrows form the threads. Shaded nodes have a new leader.*
**Code**
**MakeSet** \((x)\):
- `leader(x) ← x`
- `next(x) ← x`
**Find** \((x)\):
- return `leader(x)`
**Union** \((x, y)\):
1. \(x' ← \text{Find}(x)\)
2. \(y' ← \text{Find}(y)\)
3. \(y ← y'\)
4. `leader(y) ← x'`
5. while `next(y) ≠ y'`: set \(y ←\) `next(y)` and `leader(y) ← x'`
6. `next(y) ← next(x')`
7. `next(x') ← y'`
8. return \(x'\)
The worst-case running time of **Union** is a constant times the size of the *larger* set. Thus, if we merge a one-element set with another \(n\)-element set, the running time can be \(Θ(n)\). Generalizing this idea, it is quite easy to come up with a sequence of \(n\) **MakeSet** and \(n − 1\) **Union** operations that requires \(Θ(n^2)\) time to create the set \(\{1, 2, \ldots, n\}\) from scratch.
**WorstCaseSequence** \((n)\):
1. **MakeSet** \((1)\)
2. for \(i ← 2\) to \(n\)
- **MakeSet** \((i)\)
- **Union** \((1, i)\)
We are being stupid in two different ways here. One is the order of operations in **WorstCaseSequence**. Obviously, it would be more efficient to merge the sets in the other order, or to use some sort of divide and conquer approach. Unfortunately, we can’t fix this; we don’t get to decide how our data structures are used! The other is that we always update the leader pointers in the larger set. To fix this, we add a comparison inside the **Union** algorithm to determine which set is smaller. This requires us to maintain the size of each set, but that’s easy.
**Code**
**MakeWeightedSet** \((x)\):
1. `leader(x) ← x`
2. `next(x) ← x`
3. `size(x) ← 1`
**WeightedUnion** \((x, y)\):
1. \(x' ← \text{Find}(x)\)
2. \(y' ← \text{Find}(y)\)
3. if \(\text{size}(x') > \text{size}(y')\)
- **Union** \((x', y')\)
- \(\text{size}(x') ← \text{size}(x') + \text{size}(y')\)
4. else
- **Union** \((y', x')\)
- \(\text{size}(y') ← \text{size}(x') + \text{size}(y')\)
The new **WeightedUnion** algorithm still takes \( \Theta(n) \) time to merge two \( n \)-element sets. However, in an amortized sense, this algorithm is much more efficient. Intuitively, before we can merge two large sets, we have to perform a large number of **MakeWeightedSet** operations.
**Theorem 1.** A sequence of \( m \) **MakeWeightedSet** operations and \( n \) **WeightedUnion** operations takes \( O(m + n \log n) \) time in the worst case.
**Proof:** Whenever the leader of an object \( x \) is changed by a **WeightedUnion**, the size of the set containing \( x \) increases by at least a factor of two. By induction, if the leader of \( x \) has changed \( k \) times, the set containing \( x \) has at least \( 2^k \) members. After the sequence ends, the largest set contains at most \( n \) members. (Why?) Thus, the leader of any object \( x \) has changed at most \( \lfloor \log n \rfloor \) times.
Since each **WeightedUnion** reduces the number of sets by one, there are \( m - n \) sets at the end of the sequence, and at most \( 2n \) objects are not in singleton sets. Since each of the non-singleton objects had \( O(\log n) \) leader changes, the total amount of work done in updating the leader pointers is \( O(n \log n) \). \( \square \)
The aggregate method now implies that each **WeightedUnion** has **amortized cost** \( O(\log n) \).
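The threaded-tree structure with weighted union can be sketched in Python as follows. Here each set's thread is kept as a circular linked list through the `nxt` pointers, which is one reasonable reading of the splicing step above; the dictionaries are an implementation choice.

```python
# Shallow threaded trees with weighted union (union by size).
leader, nxt, size = {}, {}, {}

def make_weighted_set(x):
    leader[x], nxt[x], size[x] = x, x, 1   # singleton: a one-node circular thread

def find(x):
    return leader[x]                       # constant time

def weighted_union(x, y):
    x_l, y_l = find(x), find(y)
    if x_l == y_l:
        return
    if size[x_l] < size[y_l]:
        x_l, y_l = y_l, x_l                # always relabel the smaller set
    z = y_l
    leader[z] = x_l
    while nxt[z] != y_l:                   # walk the smaller set's thread
        z = nxt[z]
        leader[z] = x_l
    nxt[z], nxt[x_l] = nxt[x_l], y_l       # splice the two circular threads in O(1)
    size[x_l] += size[y_l]
```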
### 11.3 Path Compression
Using unthreaded trees, **Find** takes logarithmic time and everything else is constant; using threaded trees, **Union** takes logarithmic amortized time and everything else is constant. A third method allows us to get both of these operations to have *almost* constant running time.
We start with the original unthreaded tree representation, where every object points to a parent. The key observation is that in any **Find** operation, once we determine the leader of an object \( x \), we can speed up future **Find**s by redirecting \( x \)'s parent pointer directly to that leader. In fact, we can change the parent pointers of all the ancestors of \( x \) all the way up to the root; this is easiest if we use recursion for the initial traversal up the tree. This modification to **Find** is called **path compression**.

```plaintext
Find(x):
  if x ≠ parent(x)
    parent(x) ← Find(parent(x))
  return parent(x)
```
If we use path compression, the ‘depth’ field we used earlier to keep the trees shallow is no longer correct, and correcting it would take way too long. But this information still ensures that **Find** runs in \( \Theta(\log n) \) time in the worst case, so we’ll just give it another name: **rank**. The following algorithm is usually called **union by rank**:
**MAKESET**(x):
- **parent**(x) ← x
- **rank**(x) ← 0
**UNION**(x, y):
- \( \overline{x} \leftarrow \text{FIND}(x) \)
- \( \overline{y} \leftarrow \text{FIND}(y) \)
- if \( \text{rank}(\overline{x}) > \text{rank}(\overline{y}) \)
- **parent**(\( \overline{y} \)) ← \( \overline{x} \)
- else
- **parent**(\( \overline{x} \)) ← \( \overline{y} \)
- if \( \text{rank}(\overline{x}) = \text{rank}(\overline{y}) \)
- \( \text{rank}(\overline{y}) \leftarrow \text{rank}(\overline{y}) + 1 \)
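Putting path compression and union by rank together gives the following Python sketch, a straightforward transcription of the pseudocode above; the dictionary representation is an implementation choice.

```python
# Union-find with path compression and union by rank.
parent, rank = {}, {}

def make_set(x):
    parent[x] = x
    rank[x] = 0

def find(x):
    if parent[x] != x:
        parent[x] = find(parent[x])   # path compression: point x at its leader
    return parent[x]                  # recursion depth is O(log n) with union by rank

def union(x, y):
    x_root, y_root = find(x), find(y)
    if x_root == y_root:
        return
    if rank[x_root] > rank[y_root]:
        parent[y_root] = x_root
    else:
        parent[x_root] = y_root
        if rank[x_root] == rank[y_root]:
            rank[y_root] += 1
```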
**Find** still runs in \( O(\log n) \) time in the worst case; path compression increases the cost by at most a constant factor. But we have good reason to suspect that this upper bound is no longer tight. Our new algorithm memoizes the results of each **Find**, so if we are asked to **Find** the same item twice in a row, the second call returns in constant time. Splay trees used a similar strategy to achieve their optimal amortized cost, but our up-trees have fewer constraints on their structure than binary search trees, so we should get even better performance.
This intuition is exactly correct, but it takes a bit of work to define precisely how much better the performance is. As a first approximation, we will prove below that the amortized cost of a **Find** operation is bounded by the *iterated logarithm* of \( n \), denoted \( \log^* n \), which is the number of times one must take the logarithm of \( n \) before the value is less than 1:
\[
\log^* n = \begin{cases}
1 & \text{if } n \leq 2, \\
1 + \log^*(\log n) & \text{otherwise}.
\end{cases}
\]
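A quick numeric sanity check of this definition (base-2 logarithms assumed) shows just how slowly the iterated logarithm grows:

```python
# Iterated logarithm, transcribed directly from the definition above.
from math import log2

def log_star(n):
    if n <= 2:
        return 1
    return 1 + log_star(log2(n))

# log_star(2**65536) would be 5; even 2^256 only reaches 5.
print(log_star(16), log_star(65536), log_star(2.0**256))   # 3 4 5
```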
Our proof relies on several useful properties of ranks, which follow directly from the **UNION** and **FIND** algorithms.
- If a node \( x \) is not a set leader, then the rank of \( x \) is smaller than the rank of its parent.
- Whenever **parent**(\( x \)) changes, the new parent has larger rank than the old parent.
- Whenever the leader of \( x \)'s set changes, the new leader has larger rank than the old leader.
- The size of any set is exponential in the rank of its leader: \( \text{size}(\overline{x}) \geq 2^{\text{rank}(\overline{x})} \). (This is easy to prove by induction, hint, hint.)
- In particular, since there are only \( n \) objects, the highest possible rank is \( \lfloor \log n \rfloor \).
- For any integer \( r \), there are at most \( n/2^r \) objects of rank \( r \).
Only the last property requires a clever argument to prove. Fix your favorite integer \( r \). Observe that only set leaders can change their rank. Whenever the rank of any set leader \( \overline{x} \) changes from \( r-1 \) to \( r \), mark all the objects in \( \overline{x} \)'s set. Since leader ranks can only increase over time, each object is marked at most once. There are \( n \) objects altogether, and any object with rank \( r \) marks at least \( 2^r \) objects. It follows that there are at most \( n/2^r \) objects with rank \( r \), as claimed.
### 11.4 \( O(\log^* n) \) Amortized Time
The following analysis of path compression was discovered just a few years ago by Raimund Seidel and Micha Sharir.\(^1\) Previous proofs\(^2\) relied on complicated charging schemes or potential-function arguments; Seidel and Sharir’s analysis relies on a comparatively simple recursive decomposition. (Of course, simple is in the eye of the beholder.)
Seidel and Sharir phrase their analysis in terms of two more general operations on set forests. Their more general \textsc{Compress} operation compresses any directed path, not just paths that lead to the root. The new \textsc{Shatter} operation makes every node on a root-to-leaf path into its own parent.
```plaintext
Compress(x, y):          ⟨⟨y must be an ancestor of x⟩⟩
  if x ≠ y
    Compress(parent(x), y)
    parent(x) ← parent(y)

Shatter(x):
  if parent(x) ≠ x
    Shatter(parent(x))
    parent(x) ← x
```
Clearly, the running time of \textsc{Find}(x) operation is dominated by the running time of \textsc{Compress}(x, y), where \( y \) is the leader of the set containing \( x \). Thus, we can prove the upper bound by analyzing an arbitrary sequence of \textsc{Union} and \textsc{Compress} operations. Moreover, we can assume that the arguments of every \textsc{Union} operation are set leaders, so that each \textsc{Union} takes only constant worst-case time.
Finally, since each call to \textsc{Compress} specifies the top node in the path to be compressed, we can reorder the sequence of operations, so that every \textsc{Union} occurs before any \textsc{Compress}, without changing the number of pointer assignments.
Each \textsc{Union} requires only constant time, so we only need to analyze the amortized cost of \textsc{Compress}. The running time of \textsc{Compress} is proportional to the number of parent pointer assignments, plus \( O(1) \) overhead, so we will phrase our analysis in terms of pointer assignments. Let \( T(m, n, r) \) denote the worst case number of pointer assignments in any sequence of at most \( m \) \textsc{Compress} operations, executed on a forest of at most \( n \) nodes, in which each node has rank at most \( r \).
The following trivial upper bound will be the base case for our recursive argument.
Theorem 2. \( T(m, n, r) \leq nr \)
**Proof:** Each node can change parents at most \( r \) times, because each new parent has higher rank than the previous parent.
Fix a forest \( F \) of \( n \) nodes with maximum rank \( r \), and a sequence \( C \) of \( m \) COMPRESS operations on \( F \), and let \( T(F, C) \) denote the total number of pointer assignments executed by this sequence.
Let \( s \) be an arbitrary positive rank. Partition \( F \) into two sub-forests: a 'low' forest \( F_- \) containing all nodes with rank at most \( s \), and a 'high' forest \( F_+ \) containing all nodes with rank greater than \( s \). Since ranks increase as we follow parent pointers, every ancestor of a high node is another high node. Let \( n_- \) and \( n_+ \) denote the number of nodes in \( F_- \) and \( F_+ \), respectively. Finally, let \( m_+ \) denote the number of COMPRESS operations that involve any node in \( F_+ \), and let \( m_- = m - m_+ \).
Any sequence of COMPRESS operations on \( F \) can be decomposed into a sequence of COMPRESS operations on \( F_+ \), plus a sequence of COMPRESS and SHATTER operations on \( F_- \), with the same total cost. This requires only one small modification to the code: We forbid any low node from having a high parent. Specifically, if \( x \) is a low node and \( y \) is a high node, we replace any assignment \( \text{parent}(x) \leftarrow y \) with \( \text{parent}(x) \leftarrow x \).
This modification is equivalent to the following reduction:
```plaintext
Compress(x, y, F):                  ⟨⟨y is an ancestor of x⟩⟩
  if rank(x) > s
    Compress(x, y, F+)              ⟨⟨in C+⟩⟩
  else if rank(y) ≤ s
    Compress(x, y, F−)              ⟨⟨in C−⟩⟩
  else
    z ← x
    while rank(parent_F(z)) ≤ s
      z ← parent_F(z)
    Compress(parent_F(z), y, F+)    ⟨⟨in C+⟩⟩
    Shatter(x, z, F−)
    parent(z) ← y                   (†)
```
The pointer assignment in the last line (†) looks redundant, but it is actually necessary for the analysis. Each execution of that line mirrors an assignment of the form \( \text{parent}(z) \leftarrow w \), where \( z \) is a low node, \( w \) is a high node, and the previous parent of \( z \) was also a high node. Each of these ‘redundant’ assignments happens immediately after a \text{Compress} in the top forest, so we perform at most \( m_+ \) redundant assignments.
Each node \( x \) is touched by at most one \text{Shatter} operation, so the total number of pointer reassignments in all the \text{Shatter} operations is at most \( n \).
Thus, by partitioning the forest \( F \) into \( F_+ \) and \( F_- \), we have also partitioned the sequence \( C \) of \text{Compress} operations into subsequences \( C_+ \) and \( C_- \), with respective lengths \( m_+ \) and \( m_- \), such that the following inequality holds:
\[
T(F, C) \leq T(F_+, C_+) + T(F_-, C_-) + m_+ + n
\]
Since there are only \( n/2^i \) nodes of any rank \( i \), we have \( n_+ \leq \sum_{i > s} n/2^i = n/2^s \). The number of different ranks in \( F_+ \) is \( r - s < r \). Thus, Theorem 2 implies the upper bound
\[
T(F_+, C_+) < rn/2^s.
\]
Let us fix \( s = \lfloor \lg r \rfloor \), so that \( T(F_+, C_+) \leq n \). We can now simplify our earlier recurrence to
\[
T(F, C) \leq T(F_-, C_-) + m_+ + 2n,
\]
or equivalently,
\[
T(F, C) - m \leq T(F_-, C_-) - m_- + 2n.
\]
Since this argument applies to any forest \( F \) and any sequence \( C \), we have just proved that
\[
T'(m, n, r) \leq T'(m, n, \lfloor \lg r \rfloor) + 2n,
\]
where \( T'(m, n, r) = T(m, n, r) - m \). The solution to this recurrence is \( T'(m, n, r) \leq 2n \lg^* r \).
\textbf{Voilà!}
\textbf{Theorem 3.} \( T(m, n, r) \leq m + 2n \lg^* r \)
### *11.5 Turning the Crank
There is one place in the preceding analysis where we have significant room for improvement. Recall that we bounded the total cost of the operations on \( F_+ \) using the trivial upper bound from Theorem 2. But we just proved a better upper bound in Theorem 3! We can apply precisely the same strategy, using Theorem 3 recursively instead of Theorem 2, to improve the bound even more.
Suppose we fix \( s = \lg^* r \), so that \( n_+ \leq n/2^{\lg^* r} \). Theorem 3 implies that
\[
T(F_+, C_+) \leq m_+ + 2n \frac{\lg^* r}{2^\lfloor \lg^* r \rfloor} \leq m_+ + 2n.
\]
This implies the recurrence
\[
T(F, C) \leq T(F_-, C_-) + 2m_+ + 3n,
\]
which in turn implies that
\[
T''(m, n, r) \leq T''(m, n, \lg^* r) + 3n,
\]
where $T''(m, n, r) = T(m, n, r) - 2m$. The solution to this equation is $T(m, n, r) \leq 2m + 3n \lg^{**} r$, where $\lg^{**} r$ is the iterated iterated logarithm of $r$:
$$
\lg^{**} r = \begin{cases}
1 & \text{if } r \leq 2, \\
1 + \lg^{**}(\lg^* r) & \text{otherwise}.
\end{cases}
$$
Naturally we can apply the same improvement strategy again, and again, as many times as we like, each time producing a tighter upper bound. Applying the reduction $c$ times, for any positive integer $c$, gives us \( T(m, n, r) \leq cm + (c + 1)n \lg^{*^c} r \), where
$$
\lg^{*^c} r = \begin{cases}
\lg r & \text{if } c = 0, \\
1 & \text{if } r \leq 2, \\
1 + \lg^{*^c}(\lg^{*^{c-1}} r) & \text{otherwise}.
\end{cases}
$$
Each time we ‘turn the crank’, the dependence on $m$ increases, while the dependence on $n$ and $r$ decreases. For sufficiently large values of $c$, the $cm$ term dominates the time bound, and further iterations only make things worse. The point of diminishing returns can be estimated by the minimum number of stars such that \( \lg^{*^c} r \) is smaller than a constant:
$$
\alpha(r) = \min \left\{ c \geq 1 \;\middle|\; \lg^{*^c} r \leq 3 \right\}.
$$
(The threshold value 3 is used here because \( \lg^{*^c} 5 \geq 2 \) for all \( c \).) By setting $c = \alpha(r)$, we obtain our final upper bound.
**Theorem 4.** \( T(m, n, r) \leq m\alpha(r) + 3n(\alpha(r) + 1) \)
We can assume without loss of generality that $m \geq n$ by ignoring any singleton sets, so this upper bound can be further simplified to \( T(m, n, r) = O(m\alpha(r)) = O(m\alpha(n)) \). It follows that if we use union by rank, **Find** with path compression runs in \( O(\alpha(n)) \) amortized time.
Even this upper bound is somewhat conservative if $m$ is larger than $n$. A closer estimate is given by the function
$$
\alpha(m, n) = \min \left\{ c \geq 1 \;\middle|\; \lg^{*^c} (\lg n) \leq m/n \right\}.
$$
It’s not hard to prove that if $m = \Theta(n)$, then $\alpha(m, n) = \Theta(\alpha(n))$. On the other hand, if \( m \geq n \lg^{*^c} n \) for any constant number of stars $c$, then $\alpha(m, n) = O(1)$. So even if the number of Find operations is only slightly larger than the number of nodes, the amortized cost of each Find is constant.
$O(\alpha(m, n))$ is actually a tight upper bound for the amortized cost of path compression; there are no more tricks that will improve the analysis further. More surprisingly, this is the best amortized bound we obtain for any pointer-based data structure for maintaining disjoint sets; the amortized cost of every Find algorithm is at least $\Omega(\alpha(m, n))$. The proof of the matching lower bound is, unfortunately, far beyond the scope of this class.\(^3\)
### 11.6 The Ackermann Function and its Inverse
The iterated logarithms that fell out of our analysis of path compression are the inverses of a hierarchy of recursive functions defined by Wilhelm Ackermann in 1928.\footnote{Ackermann didn’t define his functions this way—I’m actually describing a slightly cleaner hierarchy defined 35 years later by R. Creighton Buck—but the exact details of the definition are surprisingly irrelevant! The mnemonic up-arrow notation for these functions was introduced by Don Knuth in the 1970s.}
\[ 2 \uparrow^c n := \begin{cases} 2 & \text{if } n = 1 \\ 2n & \text{if } c = 0 \\ 2 \uparrow^{c-1} (2 \uparrow^c (n-1)) & \text{otherwise} \end{cases} \]
For each fixed integer \( c \), the function \( 2 \uparrow^c n \) is monotonically increasing in \( n \), and these functions grow incredibly quickly as the index \( c \) increases. \( 2 \uparrow n \) is the familiar power function \( 2^n \). \( 2 \uparrow \uparrow n \) is the tower function:
\[ 2 \uparrow \uparrow n = \underbrace{2 \uparrow 2 \uparrow \ldots \uparrow 2}_{n} = 2^{2^{2^{\ldots^{2}}}} \]
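The recurrence for \( 2 \uparrow^c n \) translates directly into code, though it is only usable for tiny arguments: even \( 2 \uparrow\uparrow 5 = 2^{65536} \) already has 19,729 decimal digits.

```python
# Direct transcription of the 2 ↑^c n recurrence above.
def up(c, n):
    if n == 1:
        return 2
    if c == 0:
        return 2 * n
    return up(c - 1, up(c, n - 1))

print(up(1, 5), up(2, 4))   # 2^5 = 32, 2↑↑4 = 65536
```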
John Conway named \( 2 \uparrow \uparrow \uparrow n \) the wower function:
\[ 2 \uparrow \uparrow \uparrow n = \underbrace{2 \uparrow\uparrow 2 \uparrow\uparrow \ldots \uparrow\uparrow 2}_{n} \]
And so on, et cetera, ad infinitum.
For any fixed \( c \), the function \( \lg^{*^c} n \) is the inverse of the function \( 2 \uparrow^{c+1} n \), the \((c+1)\)th row in the Ackermann hierarchy. Thus, for any remotely reasonable values of \( n \), say \( n \leq 2^{256} \), we have \( \lg^* n \leq 5 \), \( \lg^{**} n \leq 4 \), and \( \lg^{*^c} n \leq 3 \) for any \( c \geq 3 \).
The function \( \alpha(n) \) is usually called the inverse Ackermann function.\footnote{Strictly speaking, the name ‘inverse Ackermann function’ is inaccurate. One good formal definition of the true inverse Ackermann function is \( \bar{\alpha}(n) = \min \{ c \geq 1 \mid 2 \uparrow^c c \geq n \} \). However, it’s not hard to prove that \( \bar{\alpha}(n) \leq \alpha(n) + 1 \) for all sufficiently large \( n \), so the inaccuracy is completely forgivable. As I said in the previous footnote, the exact details of the definition are surprisingly irrelevant!} Our earlier definition is equivalent to \( \alpha(n) = \min \{ c \geq 1 \mid 2 \uparrow^{c+2} 3 \geq n \} \); in other words, \( \alpha(n) + 2 \) is the inverse of the third column in the Ackermann hierarchy. The function \( \alpha(n) \) grows much more slowly than \( \lg^{*^c} n \) for any fixed \( c \); we have \( \alpha(n) \leq 3 \) for all even remotely imaginable values of \( n \). Nevertheless, the function \( \alpha(n) \) is eventually larger than any constant, so it is not \( O(1) \).
| \( 2 \uparrow^c n \) | \( n = 1 \) | \( n = 2 \) | \( n = 3 \) | \( n = 4 \) | \( n = 5 \) |
|---|---|---|---|---|---|
| \( 2n \) | 2 | 4 | 6 | 8 | 10 |
| \( 2 \uparrow n \) | 2 | 4 | 8 | 16 | 32 |
| \( 2 \uparrow\uparrow n \) | 2 | 4 | 16 | 65536 | \( 2^{65536} \) |
| \( 2 \uparrow\uparrow\uparrow n \) | 2 | 4 | 65536 | \( 2 \uparrow\uparrow 65536 \) | ⟨⟨Yeah, right.⟩⟩ |
| \( 2 \uparrow\uparrow\uparrow\uparrow n \) | 2 | 4 | \( 2 \uparrow\uparrow 65536 \) | ⟨⟨Very funny.⟩⟩ | ⟨⟨Argh! My eyes!⟩⟩ |
Small (!!!) values of Ackermann’s functions.
### 11.7 To infinity... and beyond!
Of course, one can generalize the inverse Ackermann function to functions that grow arbitrarily more slowly, starting with the \textit{iterated} inverse Ackermann function
\[
\alpha^*(n) = \begin{cases}
1 & \text{if } n \leq 4, \\
1 + \alpha^*(\alpha(n)) & \text{otherwise},
\end{cases}
\]
then, for any positive integer \(c\), the \(c\)-fold iterated inverse Ackermann function
\[
\alpha^{*^c}(n) = \begin{cases}
\alpha(n) & \text{if } c = 0, \\
1 & \text{if } n \leq 4, \\
1 + \alpha^{*^c}(\alpha^{*^{c-1}}(n)) & \text{otherwise},
\end{cases}
\]
and then the diagonalized inverse Ackermann function
\[
\text{Head-asplode}(n) = \min\{c \geq 1 \mid \alpha^{*^c}(n) \leq 4\},
\]
and so on ad nauseam. Fortunately(?), such functions appear extremely rarely in algorithm analysis. Perhaps the only naturally-occurring example of a super-constant sub-inverse-Ackermann function is a recent result of Seth Pettie, who proved that if a splay tree is used as a double-ended queue — insertions and deletions of only smallest or largest elements — then the amortized cost of any operation is $O(\alpha^*(n))$!
### Exercises
1. Consider the following solution for the union-find problem, called \textit{union-by-weight}. Each set leader $\overline{x}$ stores the number of elements of its set in the field $\text{weight} (\overline{x})$. Whenever we \textsc{Union} two sets, the leader of the \textit{smaller} set becomes a new child of the leader of the \textit{larger} set (breaking ties arbitrarily).
\begin{itemize}
\item \textbf{MAKESET}(x):
\begin{align*}
\text{parent}(x) & \leftarrow x \\
\text{weight}(x) & \leftarrow 1
\end{align*}
\item \textbf{FIND}(x):
\begin{align*}
\text{while } x \neq \text{parent}(x) & \\
& x \leftarrow \text{parent}(x) \\
\text{return } x
\end{align*}
\item \textbf{UNION}(x, y):
\begin{align*}
\overline{x} & \leftarrow \text{FIND}(x) \\
\overline{y} & \leftarrow \text{FIND}(y) \\
\text{if } \text{weight}(\overline{x}) > \text{weight}(\overline{y}) & \\
\text{parent}(\overline{y}) & \leftarrow \overline{x} \\
\text{weight}(\overline{x}) & \leftarrow \text{weight}(\overline{x}) + \text{weight}(\overline{y}) \\
\text{else} & \\
\text{parent}(\overline{x}) & \leftarrow \overline{y} \\
\text{weight}(\overline{y}) & \leftarrow \text{weight}(\overline{x}) + \text{weight}(\overline{y})
\end{align*}
\end{itemize}
Prove that if we use union-by-weight, the \textit{worst-case} running time of $\text{FIND}(x)$ is $O(\log n)$, where $n$ is the cardinality of the set containing $x$.
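For readers who want something executable, here is a direct Python transcription of the union-by-weight pseudocode above (a sketch for experimentation, not part of the exercise); the parent and weight fields become dictionaries, and a guard against uniting a set with itself is added, which the pseudocode implicitly assumes.

```python
class UnionByWeight:
    """Disjoint-set forest with union-by-weight and *no* path compression,
    mirroring the MakeSet / Find / Union pseudocode above."""

    def __init__(self):
        self.parent = {}
        self.weight = {}

    def make_set(self, x):
        self.parent[x] = x
        self.weight[x] = 1

    def find(self, x):
        while x != self.parent[x]:
            x = self.parent[x]
        return x

    def union(self, x, y):
        xbar, ybar = self.find(x), self.find(y)
        if xbar == ybar:                      # guard added: already in the same set
            return
        if self.weight[xbar] > self.weight[ybar]:
            self.parent[ybar] = xbar          # smaller set's leader becomes a child
            self.weight[xbar] += self.weight[ybar]
        else:
            self.parent[xbar] = ybar
            self.weight[ybar] += self.weight[xbar]

uf = UnionByWeight()
for v in "abcdefgh":
    uf.make_set(v)
uf.union("a", "b"); uf.union("c", "d"); uf.union("a", "c")
assert uf.find("d") == uf.find("b")           # tree depth stays O(log n)
```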
2. Consider a union-find data structure that uses union by depth (or equivalently union by rank) \textit{without} path compression. For all integers $m$ and $n$ such that $m \geq 2n$, prove that there is a sequence of $n$ MakeSet operations, followed by $m$ Union and Find operations, that requires $\Omega(m \log n)$ time to execute.
3. Suppose you are given a collection of up-trees representing a partition of the set \( \{1, 2, \ldots, n\} \) into disjoint subsets. **You have no idea how these trees were constructed.** You are also given an array \( \text{node}[1..n] \), where \( \text{node}[i] \) is a pointer to the up-tree node containing element \( i \). Your task is to create a new array \( \text{label}[1..n] \) using the following algorithm:
```plaintext
LABEL EVERYTHING:
for \( i \leftarrow 1 \) to \( n \)
\( \text{label}[i] \leftarrow \text{Find}(\text{node}[i]) \)
```
(a) What is the worst-case running time of \( \text{LABEL EVERYTHING} \) if we implement \( \text{Find} \) without path compression?
(b) **Prove** that if we implement \( \text{Find} \) using path compression, \( \text{LABEL EVERYTHING} \) runs in \( O(n) \) time in the worst case.
4. Consider an arbitrary sequence of \( m \) \( \text{MakeSet} \) operations, followed by \( u \) \( \text{Union} \) operations, followed by \( f \) \( \text{Find} \) operations, and let \( n = m + u + f \). Prove that if we use union by rank and \( \text{Find} \) with path compression, all \( n \) operations are executed in \( O(n) \) time.
5. Suppose we want to maintain an array \( X[1..n] \) of bits, which are all initially zero, subject to the following operations.
- **LOOKUP**\((i)\): Given an index \( i \), return \( X[i] \).
- **BLACKEN**\((i)\): Given an index \( i < n \), set \( X[i] \leftarrow 1 \).
- **NEXTWHITE**\((i)\): Given an index \( i \), return the smallest index \( j \geq i \) such that \( X[j] = 0 \).
(Because we never change \( X[n] \), such an index always exists.)
If we use the array \( X[1..n] \) itself as the only data structure, it is trivial to implement **LOOKUP** and **BLACKEN** in \( O(1) \) time and **NEXTWHITE** in \( O(n) \) time. But you can do better! Describe data structures that support **LOOKUP** in \( O(1) \) worst-case time and the other two operations in the following time bounds. (We want a different data structure for each set of time bounds, not one data structure that satisfies all bounds simultaneously!)
(a) The worst-case time for both **BLACKEN** and **NEXTWHITE** is \( O(\log n) \).
(b) The amortized time for both **BLACKEN** and **NEXTWHITE** is \( O(\log n) \). In addition, the worst-case time for **BLACKEN** is \( O(1) \).
(c) The amortized time for **BLACKEN** is \( O(\log n) \), and the worst-case time for **NEXTWHITE** is \( O(1) \).
(d) The worst-case time for **BLACKEN** is \( O(1) \), and the amortized time for **NEXTWHITE** is \( O(\alpha(n)) \). [Hint: There is no **WHITEN**.]
6. Suppose we want to maintain a collection of strings (sequences of characters) under the following operations:
- **NEWSTRING**\((a)\) creates a new string of length 1 containing only the character \( a \) and returns a pointer to that string.
- **CONCAT**(S, T) removes the strings S and T (given by pointers) from the data structure, adds the concatenated string ST to the data structure, and returns a pointer to the new string.
- **REVERSE**(S) removes the string S (given by a pointer) from the data structure, adds the reversal of S to the data structure, and returns a pointer to the new string.
- **LOOKUP**(S, k) returns the kth character in string S (given by a pointer), or NULL if the length of S is less than k.
Describe and analyze a simple data structure that supports **CONCAT** in \(O(\log n)\) amortized time, supports every other operation in \(O(1)\) worst-case time, and uses \(O(n)\) space, where \(n\) is the sum of the current string lengths. Unlike the similar problem in the previous lecture note, there is no **SPLIT** operation. [Hint: Why is this problem here?]
7. (a) Describe and analyze an algorithm to compute the size of the largest connected component of black pixels in an \(n \times n\) bitmap \(B[1..n, 1..n]\).
For example, given the bitmap below as input, your algorithm should return the number 9, because the largest connected black component (marked with white dots on the right) contains nine pixels.
(b) Design and analyze an algorithm **BLACKEN**(i, j) that colors the pixel \(B[i, j]\) black and returns the size of the largest black component in the bitmap. For full credit, the amortized running time of your algorithm (starting with an all-white bitmap) must be as small as possible.
For example, at each step in the sequence below, we blacken the pixel marked with an X. The largest black component is marked with white dots; the number underneath shows the correct output of the **BLACKEN** algorithm.
(c) What is the worst-case running time of your **BLACKEN** algorithm?
*8. Consider the following game. I choose a positive integer \(n\) and keep it secret; your goal is to discover this integer. We play the game in rounds. In each round, you write a list of at most \(n\) integers on the blackboard. If you write more than \(n\) numbers in a single round, you lose. (Thus, in the first round, you must write only the number 1; do you see why?) If \(n\) is one of the numbers you wrote, you win the game; otherwise, I announce which of the numbers you wrote are smaller than $n$ and which are larger, and we proceed to the next round. For example:
<table>
<thead>
<tr>
<th>You</th>
<th>Me</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>It’s bigger than 1.</td>
</tr>
<tr>
<td>4, 42</td>
<td>It’s between 4 and 42.</td>
</tr>
<tr>
<td>8, 15, 16, 23, 30</td>
<td>It’s between 8 and 15.</td>
</tr>
<tr>
<td>9, 10, 11, 12, 13, 14</td>
<td>It’s 11; you win!</td>
</tr>
</tbody>
</table>
Describe a strategy that allows you to win in $O(\alpha(n))$ rounds!
PROTECTION ANALYSIS:
Final Report
Richard Bisbey
Dennis Hollingworth
<table>
<tbody>
<tr><td>1. REPORT NUMBER</td><td>ISI/SR-78-13</td></tr>
<tr><td>2. GOVT ACCESSION NO.</td><td></td></tr>
<tr><td>3. RECIPIENT'S CATALOG NUMBER</td><td></td></tr>
<tr><td>4. TITLE (and Subtitle)</td><td>Protection Analysis: Final Report</td></tr>
<tr><td>5. TYPE OF REPORT & PERIOD COVERED</td><td>Research</td></tr>
<tr><td>6. PERFORMING ORG. REPORT NUMBER</td><td></td></tr>
<tr><td>7. AUTHOR(s)</td><td>Richard Bisbey II, Dennis Hollingworth</td></tr>
<tr><td>8. CONTRACT OR GRANT NUMBER(s)</td><td>DAHC 15 72 C 0308</td></tr>
<tr><td>9. PERFORMING ORGANIZATION NAME AND ADDRESS</td><td>USC/Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90291</td></tr>
<tr><td>10. PROGRAM ELEMENT, PROJECT, TASK AREA & WORK UNIT NUMBERS</td><td>ARPA Order #2223</td></tr>
<tr><td>11. CONTROLLING OFFICE NAME AND ADDRESS</td><td>Defense Advanced Research Projects Agency, 1400 Wilson Blvd., Arlington, VA 22209</td></tr>
<tr><td>12. REPORT DATE</td><td>May 1978</td></tr>
<tr><td>13. NUMBER OF PAGES</td><td>30</td></tr>
<tr><td>14. MONITORING AGENCY NAME & ADDRESS (if different from Controlling Office)</td><td></td></tr>
<tr><td>15. SECURITY CLASS. (of this report)</td><td>Unclassified</td></tr>
<tr><td>15a. DECLASSIFICATION/DOWNGRADING SCHEDULE</td><td></td></tr>
<tr><td>16. DISTRIBUTION STATEMENT (of this Report)</td><td>This document is approved for public release and sale; distribution is unlimited.</td></tr>
<tr><td>17. DISTRIBUTION STATEMENT (of the abstract entered in Block 20, if different from Report)</td><td></td></tr>
<tr><td>18. SUPPLEMENTARY NOTES</td><td></td></tr>
<tr><td>19. KEY WORDS (Continue on reverse side if necessary and identify by block number)</td><td>access control, computer security, error analysis, error-driven evaluation, error types, operating system security, protection evaluation, protection policy, software security</td></tr>
<tr><td>20. ABSTRACT (Continue on reverse side if necessary and identify by block number)</td><td>(OVER)</td></tr>
</tbody>
</table>
20. ABSTRACT
The Protection Analysis project was initiated at ISI by ARPA IPTO to further understand operating system security vulnerabilities and, where possible, identify automatable techniques for detecting such vulnerabilities in existing system software. The primary goal of the project was to make protection evaluation both more effective and more economical by decomposing it into more manageable and methodical subtasks so as to drastically reduce the requirement for protection expertise and make it as independent as possible of the skills and motivation of the actual individuals involved. The project focused on near-term solutions to the problem of improving the security of existing and future operating systems in an attempt to have some impact on the security of the systems which would be in use over the next ten years.
A general strategy was identified, referred to as "pattern-directed protection evaluation" and tailored to the problem of evaluating existing systems. The approach provided a basis for categorizing protection errors according to their security-relevant properties; it was successfully applied for one such category to the MULTICS operating system, resulting in the detection of previously unknown security vulnerabilities.
CONTENTS
Abstract
1. Project Background and Context
2. Project Description
   Collection of Raw Error Data
   Development of Raw Error Patterns
   Development of Generalized Patterns
   Feature Extraction
   Comparison Process
3. Redirection of Research
   Error Categorization
   Analysis of Individual Categories
4. Conclusions and Future Research Directions
References
Appendix A
Appendix B
The Protection Analysis project was initiated at ISI by ARPA IPTO to further understand operating system security vulnerabilities and, where possible, identify automatable techniques for detecting such vulnerabilities in existing system software. The primary goal of the project was to make protection evaluation both more effective and more economical by decomposing it into more manageable and methodical subtasks so as to drastically reduce the requirement for protection expertise and make it as independent as possible of the skills and motivation of the actual individuals involved. The project focused on near-term solutions to the problem of improving the security of existing and future operating systems in an attempt to have some impact on the security of the systems which would be in use over the next ten years.
A general strategy was identified, referred to as "pattern-directed protection evaluation" and tailored to the problem of evaluating existing systems. The approach provided a basis for categorizing protection errors according to their security-relevant properties; it was successfully applied for one such category to the MULTICS operating system, resulting in the detection of previously unknown security vulnerabilities.
1. PROJECT BACKGROUND AND CONTEXT
When general purpose resource-sharing operating systems became available, system customers (both governmental agencies and private firms) naturally wished to exploit fully the economies such systems offered in processing sensitive together with nonsensitive information. Responding to customers' pressure, the systems' manufacturers at first claimed that the hardware and software mechanisms supporting resource sharing would also (with perhaps minor alterations) provide sufficient protection and isolation to permit multiprogramming of sensitive and nonsensitive programs and data. A skeptical technical community challenged this claim and proved it false. Relatively cursory inspection of selected operating systems by "tiger teams" (individuals brought together specifically to attempt to penetrate a target operating system) established that the protection offered fell far short of that required if multiprogramming of sensitive and nonsensitive programs and information were to be permitted [And+71, Bran73]. The protection mechanisms functioned adequately when users exercised prescribed system functions in approximately the prescribed way, but could not resist the system penetrator who looked for unusual or extraordinary means to avoid access checking.
Lacking some of today's insight and knowledge, various manufacturers attempted to retrofit their existing operating systems for security by simply correcting the individual implementation errors and obvious design oversights that contributed to their system's security deficiencies. Critical analysis of these systems, however, established that piecemeal efforts to secure an existing general-purpose operating system were unlikely to succeed [Abb+76, Att+76, BelW74, HolG74, Mcph74].
Out of this early floundering came an appreciation that the security problem was much more difficult to deal with than expected. Furthermore, a number of disturbing issues surfaced:
1. The question of what constituted an appropriate degree of security and how this is determined for a computer system had not been adequately addressed. Indeed, the notion of security was itself difficult to formalize in the context of computer systems, i.e., it was a research issue in its own right. Intuitive statements such as "the system should not allow an unauthorized user to access information he had no right to access" somehow had to be translated into specific assertions about specific operating system objects.
2. No methodology existed for insuring that a given system's design was complete with respect to a particular security policy which might be chosen, i.e., that there were not substantial or significant areas where the desired protection policy could simply be circumvented or ignored.
3. Existing operating systems were poorly structured when it came to security and integrity, usually having grown from early releases to patched, error-ridden monoliths of interconnected code and tables.
4. Efforts to correct known errors were as likely as not to introduce an equal number of new errors, merely manifested in other ways. This became painfully evident during the system penetration activities conducted in conjunction with security retrofit efforts.
5. Program verification techniques would ultimately have to be applied to insure that operating system code functioned correctly and according to specification. However, existing techniques could handle only relatively small pieces of code, limited data types, and relatively simple data structures and data accessing schemes—nothing within an order of magnitude of the size and complexity of an operating system as then structured and implemented.
While these and other issues were troublesome enough with regard to future systems, they were particularly troublesome in light of the large inventory of systems in the DoD and private sector. It had been suggested that an existing operating system would have to be restructured if any substantial improvement in the security afforded was to be effected or if program verification techniques were to be successfully applied. However, restructuring of an existing system (in many cases tantamount to redesign of the system) meant committing substantial resources and rewriting a considerable amount of code. It was also apparent that this could be considered only for a few special systems such as MULTICS and VM/370, which were already well-structured with the access control mechanisms at the innermost level of control.
It became obvious that additional insight into the design and implementation deficiencies responsible for operating system security vulnerabilities was necessary. A much more comprehensive view was required of the number and form taken by such vulnerabilities. The system penetration work performed in the past did little to provide any such collective insight, however; the expertise resulting from such studies consisted of the individual insights of a few individuals rather than communicable ideas and knowledge.
In September of 1973, the Protection Analysis project was initiated at ISI by ARPA IPTO to enhance our understanding of operating system vulnerabilities, expand the rather sparse knowledge base on this subject, and, if possible, identify automatable techniques for detecting vulnerabilities in existing system software. Near-term solutions to the problem of improving the security of existing and future systems were important if operating systems security research was to have much impact on the systems which would be in use over the next ten years. It was hoped that the effort would yield a more formalized knowledge base on operating system security, making it possible to decouple security and operating system expertise to some degree, i.e., to allow individuals having limited expertise in operating system security to effectively detect system vulnerabilities.
The approach adopted was a significant departure from the protection evaluation projects going on elsewhere at that time, such as those at Project RISOS and at System Development Corporation. These efforts to systematize penetration activities dealt primarily with the organization of the project staff itself rather than the discipline applied [Weis73]. They addressed the organizational and training aspects of teams of individuals tasked to analyze operating systems for security vulnerabilities—individuals who themselves would make good "penetrators" of a given target system, who had not only an intimate knowledge of that system but also a good understanding of and feel for protection error possibilities.
It was evident that the success of such groups would depend heavily on individual motivation as well as skill in finding protection errors—an apparent shortcoming when it came to making definitive statements about the validity of the evaluation effort in which such an approach was adopted. The primary goal of the ISI project was to make protection evaluation both more effective and more economical by decomposing it into more manageable and methodical subtasks so as to drastically reduce the requirement for protection expertise and make it as independent as possible of the skills and motivation of the actual individuals involved.
A general strategy was identified which promised to meet these objectives. It included the following five steps:
1. Collection of "raw" error descriptions.
2. Representation of raw error descriptions in a more formalized notation (producing "raw error patterns").
3. Elimination of superfluous features and abstraction of specific system elements into system-independent elements to develop generalized error patterns.
4. "Normalization" of the target system by extracting the information relevant to the evaluation and representing it in the form required by a "comparison" procedure.
5. Execution of the comparison procedure.
The specific approach adopted—subsequently referred to as "pattern-directed protection evaluation" [Car+75]—was tailored to the problem of evaluating existing systems. It differed from the more general approach principally in that specific features of interest were "extracted" from the operating system source code rather than the entire operating system being rerepresented in a "normalized" format (Figure 1). Thus, steps 4 and 5 changed as follows:
4. "Feature extraction": instantiation of generalized features and searches for instances of these features in the target operating system, and the description of their relevant contexts.
5. Comparison of combinations of feature instances and their contexts with the features and relations expressed in the appropriate error patterns.
A major expectation was that adopting this approach would make it easier to identify previously undiagnosed errors in given operating systems. As superfluous
Figure 1. Error-driven evaluation process. (Diagram: on the development side, collected errors undergo error analysis to yield patterns; on the production side, the operating system undergoes feature extraction to yield features; patterns and features then feed pattern matching, which reports errors.)
features and qualifying details were eliminated and specific system features replaced by more generic or abstract features, a more generalized error representation would evolve. The process could conceivably result in a hierarchy of error patterns, with the most general and abstractly defined patterns at the upper levels and the most specialized and concrete ones at the lower levels. Subsequent instantiation of the generalized patterns by replacing the more general features with their more specific counterparts in particular classes of operating systems or particular functional areas might be expected to reveal previously undiscovered operating system errors (Figure 2).
A second expectation was that this approach might result in an empirically sound taxonomy of operating system vulnerabilities and their causes, which would be particularly useful for system designers and implementers. The derivation of raw patterns, their generalization, and the instantiation of generalized patterns toward other systems and functional areas would all add new elements to the lattice of patterns formed by the relation "generalization of" and its converse, "instance of," with the more abstract patterns at the top and the more concrete ones at the bottom. As this structure was enriched with additional patterns, major substructures might emerge, at least below some level of abstractness. If, as was also expected, the search techniques determined to be appropriate for the patterns of each such substructure were also similar, then a reasonable basis would be provided to define major "error types."
The approach was tested with regard to a particular error type frequently found in operating systems, and it proved successful at uncovering previously undiagnosed errors in the MULTICS operating system [Bis+75, Bis+76]. The specific details of the approach and the results and problems which ensued are discussed in the sections which follow.
COLLECTION OF RAW ERROR DATA
Prior to this project, little data on known protection error vulnerabilities had actually been assembled as such in one place. Thus, the first phase of the project involved developing a sufficiently rich collection of data on operating system errors from as many operating systems as possible to provide a good sampling of the types of errors which existed.
Ultimately more than 100 errors that could be employed directly to penetrate existing operating systems were recorded in an error data base; numerous minor variations on these errors were also possible. These errors came from six systems: TENEX, MULTICS, EXEC-8, GCOS, UNIX, and OS/360.
The project staff itself was familiar in varying degrees with five of the six operating systems. They had been directly involved in penetration work on only three of these operating systems, however, and then in projects which examined the systems at widely differing levels of detail. Consequently, the project had to rely to some extent upon information it could gather from outside sources, namely other individuals involved in operating system penetration studies.
Unfortunately, it was difficult to acquire useful data on errors for systems which had not been directly reviewed by the staff. Perhaps the major difficulty was the unavailability of any overall information about operating system vulnerabilities, principally because most installations were reluctant to air weaknesses that might subsequently be exploited by individuals inside as well as outside their organizations. Another significant difficulty also arose whose principal impact was felt in the development of raw error patterns; it is discussed in the following section.
DEVELOPMENT OF RAW ERROR PATTERNS
Given a raw error description, the next step was to formulate an appropriate raw error pattern, a redescription of the error in terms specific to its source operating system but in the form of predicates that express "conditions," properties of or relations among distinct objects or features of that system. During this process those aspects of the initial description superfluous to the actual error itself were eliminated. The "condition set" of a raw pattern was a minimal set of conditions in the sense that if any were removed the raw pattern would no longer represent a potential error.
However, from a particular raw error description, it was often extremely difficult to write down a pattern that satisfactorily captured the essence of the error. First, of course, the error description had to be thoroughly comprehended, e.g., in terms of how the error could be exploited by a knowledgeable penetrator. This required substantial familiarity with and sufficient information on the operating system context in which it occurred. Unfortunately, even where such information was available, the errors were sometimes described in a rather incomplete fashion or in a fashion which presumed substantial knowledge about specific low-level details of the system implementation. This was further complicated by the lack of a common vocabulary for describing both functional elements of the system as well as the particulars of a given security deficiency, requiring some conjecture on the part of the staff as to the exact circumstances of the problem.
Despite these complications, the staff generally was fairly successful in ascertaining what appeared to be the significant characteristics of the error from the available documentation. Even with that, however, it was not always clear precisely what policy was being violated and thus what conditions should constitute the pattern. In some cases, in which equally valid policies could be postulated, the same raw error appeared to lead to more than one pattern.
This process did not appear to be inordinately difficult in the case of the first pattern processed, "Inconsistency of a Single Data Value over Time." The relevant characteristics of such errors were readily apparent, as manifested in the various examples in the error data base. Thus, the textual description of a given instance of the error type was successfully rerepresented in a raw pattern for which superfluous details had been eliminated. This is illustrated by the following raw error description and derived raw error pattern taken from an early version of MULTICS [Bis+75].
Raw Error Description: STOP-PROCESS-ERROR
STOP-PROCESS is a supervisor procedure for halting processes. The user can call the procedure with the process-id of the process to be stopped. The user entry to this procedure checks that the ID is that of the caller, then calls the traffic controller termination routine. The user can modify the value of the process-id between the time it is checked and the time it is passed to the traffic controller.
Raw Error Pattern:
1. Procedure "STOP-PROCESS" is invoked by a user process to halt a specified process as indicated by a user-supplied parameter.
2. The "STOP-PROCESS" interface checks that the user-supplied process-id parameter is valid.
3. The traffic-controller termination routine uses the process-id to identify the appropriate process.
4. The user process may modify the checked parameter between the times of (2) and (3).
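In modern terminology this is a time-of-check-to-time-of-use race. The fragment below is a hypothetical Python illustration of the same shape (it is not the MULTICS code, and all names are invented): a parameter held in caller-writable shared storage is validated and then fetched a second time, leaving a window in which the caller can substitute a different value.

```python
import threading
import time

shared_param = {"process_id": 42}   # hypothetical storage the caller can still write
CALLER_ID = 42

def stop_process_unsafe():
    # (2) Check: the supplied process-id must be the caller's own.
    if shared_param["process_id"] != CALLER_ID:
        raise PermissionError("may only stop your own process")
    time.sleep(0.01)                            # window between check and use
    # (3) Use: the parameter is fetched *again* from caller-writable storage.
    victim = shared_param["process_id"]
    return f"terminating process {victim}"

def malicious_caller():
    time.sleep(0.005)
    shared_param["process_id"] = 1              # swap in someone else's process-id

t = threading.Thread(target=malicious_caller)
t.start()
print(stop_process_unsafe())                    # may print "terminating process 1"
t.join()
```

One obvious repair is to copy the parameter once into storage the caller cannot modify, validate the copy, and pass only that copy onward, so that the value checked is the value used.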
DEVELOPMENT OF GENERALIZED PATTERNS
As an error search criterion, a raw pattern is directly applicable only to operating systems that share the policy violated by that error and in which the features of that pattern are known by the same names. Even then, it may apply only to a particular functional area such as input/output control, and miss similar errors in other areas such as interprocess communication. To broaden the applicability of a pattern, its expression must be generalized by substituting more generic names or more abstract features for more specific ones or by deleting qualifying details without affecting the essence of the conditions themselves. The same concept, such as the call on a privileged system procedure by an unprivileged user procedure, may be known by different names (such as "MME," "JSYS," and "SVC") in different systems. Classes of similar objects, such as bytes or blocks of physical storage, pages, segments, variables, structured variables,
and files (to give an extreme example), can be regarded as instances of a more abstract object, in this case the "abstract cell," something that has a name and holds information (its value). The benefit of generalizing is that the generalized pattern applies to a correspondingly wider class of errors in a wider class of systems.
Generalization of the raw pattern for the inconsistency error examples yielded the following error pattern and corresponding security policy statement:
Generalized Error Pattern:
B:M(X) and for some operation L occurring before M,
[for operation L which does not modify Value(X),
Value(X) before L NOT = Value(X) before M], and
Value(X) after L NOT = Value(X) before M.
Informally stated, process B performs operation M on variable X and the value of X at the time operation M is performed is not equal to the value of X either before or after some operation L which occurs before M.
Corresponding Operating System Security Policy Statement:
(B,M,X) => for some operation L occurring before M, either
[for operation L which does not modify Value(X),
Value(X) before L = Value(X) before M], or
Value(X) after L = Value(X) before M.
Intuitively stated, process B (which presumably performs some critical function) can perform operation M on variable X only if the value of X at the time operation M is performed is equal to the value of X either before or after some operation L which occurs before M.
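As a rough present-day illustration (a reader's sketch, not the report's), the policy statement can be read as a predicate over an execution trace: every critical operation M on X must see a value of X equal to the value either just before or just after some earlier operation L.

```python
# Each trace entry records (operation name, Value(X) before, Value(X) after, critical?).
def violates_policy(trace):
    """Return the index of a critical operation M whose observed value of X matches
    neither the before- nor the after-value of any earlier operation L, or None."""
    for m, (_, before_m, _, critical) in enumerate(trace):
        if not critical:
            continue
        if not any(before_m in (before_l, after_l)
                   for (_, before_l, after_l, _) in trace[:m]):
            return m
    return None

# STOP-PROCESS as seen by the supervisor: the id is validated (L), the user process
# then changes it out of band, and the critical use (M) sees a different value.
trace = [
    ("validate id", 42, 42, False),   # L: the check fetches 42 and does not modify X
    ("terminate",    1,  1, True),    # M: the critical fetch sees 1 instead
]
print(violates_policy(trace))         # -> 1: the inconsistency pattern matches
```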
FEATURE EXTRACTION
Detecting errors in a set of target information implies some kind of comparison process between the target and the correctness or error criteria. The comparison need not be direct; various transformations may be applied, as practical, to either the criteria or the target to bring them into a suitable form, as long as essential properties are preserved. In the case of pattern-directed protection evaluation, the target is a set of operating system source programs and specifications; the criteria are the error patterns; and the comparison process is essentially one of "pattern recognition," in the sense of an ability to detect instances of errors embedded or camouflaged in a system.
Conceptually, the ideal tool is a general-purpose "protection evaluator," a computer program that not only could be applied to a wide class of operating systems but could also reliably detect a wide class of errors. The inputs to such a program would be representations of the patterns for the error types covered, together with a representation of the target operating system. The program would compare the target representation with the given patterns by searching it for all combinations of features related in one of the ways specified in some pattern, and would report every such combination found. In this concept, protection evaluation would seem to consist of two subtasks:
1. "Normalizing" the target system by extracting the information relevant to the evaluation and representing it in the form required by a comparison procedure.
2. Executing the comparison procedure.
Such an ideal is clearly out of reach, however. There exists no model into which the protection-relevant features of an existing system can be mapped and in which they can be related for comparison with given patterns, general enough to apply to wide classes of errors and systems. It is even difficult to determine with precision which elements of existing systems are relevant to protection and which are not.
Nevertheless, the goal of developing pattern-directed techniques and tools to systematize and automate protection evaluation might be achieved with a somewhat altered approach. This becomes evident when one investigates what the two major requirements for protection evaluation techniques imply about their form, application, and development.
The first requirement, that of general-purposeness with respect to operating systems, carries an obvious implication: there must exist some generalized set of terminology—a "comparison language"—in which the techniques are specified and in which the error patterns are expressed. To apply these techniques to a given system, it is necessary that a correspondence be established between the objects and terminology of the comparison language, i.e., between the features of the given patterns and their instantiations in the target system. Either the features of the patterns must be instantiated to the concepts, objects, and terminology of the target system or the target system must be represented in terms of the comparison language, or an intermediate comparison framework must be established and transformations performed in both directions. If no error possibilities are to be overlooked, then all the instances of a given pattern feature in the target system must be identified.
If one uses the term "features" to refer to objects that have concrete and typically localized representations in the target system description (e.g., variables, procedure calls, critical parameters), then identifying the relevant features in the target system is only part of the problem. The other part is to determine whether any of the relations among these features are those indicated by the conditions of an error pattern. The requirement that evaluators need not have a talent for recognizing protection errors and that difficult pattern-recognition processes must not be involved, makes it essential that the search for an error be decomposed. The search through the target system code (or some representation of it) for a single dispersed collection of instances of features in some given relation must be replaced. Instead we must require only independent searches for individual instances of features in the target system. This implies, of course, that the output of these searches must include simple specifications of the contexts in which the feature instances were found. The needed feature context is determined from the relations expressed in the patterns and is used to determine whether the features found actually satisfy these relations. Thus, the single integrated search step is replaced by a two-step procedure, the first of which is more amenable to automation, while the second is probably best performed manually. While the analysis of the relations among features is not avoided, it is deferred to a more convenient point in the process where the feature-set to be considered is greatly reduced in size.
In the case of the inconsistency error, the feature extraction process was applied to a particular instantiation of the error type involving the consistency of user-supplied parameters in the MULTICS operating system. To find instances of the error in code, a pattern was formed using the Error Statement above, which was then instantiated for identifying inconsistent parameter usage. The Error Statement requires the existence of two operations, both of which refer to a common variable X. The first operation, L, either fetches the value of the variable or generates a new value. The second operation, M, fetches the value of the variable. Other information contained in the Error Statement includes the fact that L occurs before M and that M performs some critical function. These statements give rise to the following pattern elements:
1. An operation L which either fetches or stores into a cell X.
2. An operation M which fetches cell X.
3. Operation M is critical.
4. Operation L occurs before operation M.
For this particular error, X is instantiated to a parameter, and thus the following additional pattern element is derived:
5. A procedure B which is interdomain-callable by user procedures and which accepts a parameter X.
This pattern ultimately resulted in the following search procedure intended to recognize, for each parameter, executable sequences of store or fetch operations followed by a fetch operation:
1. Filter out everything except procedures which are interdomain-callable by users.
2. Of these, identify those with parameters.
3. For each parameter, identify and output all instructions or statements which involve store or fetch operations on the parameter.
4. Identify and output all instructions or statements which contain flow of control operators.
This procedure was subsequently automated and applied to MULTICS with significant success, resulting in the detection of a number of candidate errors [Bis+76].
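A very rough modern analogue of steps 2 and 3 of this search procedure (my sketch, not the project's tool) can be written over Python source with the standard ast module: for every top-level function (standing in for an interdomain-callable procedure), report each statement that fetches one of its parameters, leaving the judgment of whether a later fetch is critical to a human reviewer, as in the comparison step described next.

```python
import ast

SOURCE = '''
def stop_process(process_id):
    if not owns(process_id):      # first fetch: validation
        raise PermissionError()
    terminate(process_id)         # second fetch: the possibly critical use
'''

def parameter_fetches(source):
    """For each top-level function, list (parameter, line number) for every
    expression that reads that parameter."""
    report = {}
    for node in ast.parse(source).body:
        if not isinstance(node, ast.FunctionDef):
            continue
        params = {a.arg for a in node.args.args}
        fetches = sorted(
            ((n.id, n.lineno) for n in ast.walk(node)
             if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)
             and n.id in params),
            key=lambda fetch: fetch[1])
        report[node.name] = fetches
    return report

print(parameter_fetches(SOURCE))
# {'stop_process': [('process_id', 3), ('process_id', 5)]} -- two fetches of the
# same parameter, i.e. a candidate match for the inconsistency pattern.
```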
**COMPARISON PROCESS**
The search output constitutes the input to a separate, methodical comparison process in which the properties of the feature instances found are examined to determine whether actual error conditions exist. Obviously, the comparison is still not direct, since a translation must be made between the generalized relations expressed in the patterns and the descriptions of feature instances provided as input. Again, in general, the choice must be made between expressing the search results in the comparison language and instantiating the reference properties. The former is required for a system-independent comparison algorithm.
In the case of the inconsistency error, that comparison was handled manually. The feature matches were examined manually to determine if the second operation was in fact critical. Forty-seven procedures were examined in the MULTICS system. Of these, seven were observed to have one or more errors; five other procedures had matches for which "criticality" of the second fetch could not be determined due to lack of system documentation.
3. REDIRECTION OF RESEARCH
In September 1975 the research direction was significantly modified to conform to revised schedule and resource considerations. The major problem with the pattern-directed approach (detailed analysis and relating of error characteristics from the bottom up) was that the process was both time-consuming and extremely tedious; it consumed a substantial amount of the project's resources while yielding few demonstrable results. The sponsor questioned whether or not the protection analysis process was bounded—i.e., whether the number of error categories was both finite and small enough to warrant the expenditure of the resources required. The project was asked to postulate the highest level error categories directly from the existing error data base—to categorize the entries in the error data base in some appropriate fashion based upon the analysis performed to date. We were to subsequently work from the postulated error categories to develop automatable search strategies rather than pursue the pattern-directed approach of gradually building up a set of empirically based categories. It was thought that we might short-circuit some of the more time-consuming elements of the pattern-directed approach, directly identifying an appropriate set of error types without having to devote much effort to analyzing individual errors. The process was expected to be iterative, possibly leading to a set of nonoverlapping error categories which could be precisely defined and which covered the known protection vulnerabilities in existing operating systems and ultimately to viable search techniques for identifying instances of the error categories in target operating systems. Thus, the earlier approach as characterized by Figure 2 was supplanted by that represented in Figure 3 below.

Various difficulties were encountered along the way—unexpected problems which further altered our approach and perspective as to the most appropriate strategy for achieving the original goals. They are mentioned below in the discussion of the specific steps in the revised process.
ERROR CATEGORIZATION
As a consequence of the error-pattern activities the errors collected in the error data-base had already been redescribed in a self-consistent fashion. Thus an attempt was made to directly identify a set of categories which covered the recorded set of protection errors. These categories were to serve the purpose of grouping like error types for in-depth study and analysis. The expectation was that the categories would be refined as the analysis process proceeded until a final set of highly representative, nonintersecting categories was identified.
Ten categories were identified which seemed to cover all the errors which were documented and which did not exclude any known error types. Unfortunately, the ten categories seemed to manifest themselves at differing levels of abstraction; thus, it was assumed that this would not be the final set of categories, that some would be absorbed by more abstract categories or possibly be a basis for new categories when additional analysis had been completed. The categories are briefly described in Appendix A.
ANALYSIS OF INDIVIDUAL CATEGORIES
After an initial set of categories had been identified, attention was directed toward analyzing individual categories to gain additional understanding into the associated operating system security vulnerabilities, allow refinement of the categories, and accommodate the identification of search techniques for given error types. The categories which first received attention were those which appeared to be the most tractable and manifested themselves at the less abstract levels of system object representation. The error type "Inconsistency of a Single Data Value over Time," pursued under the pattern-directed work, had been particularly tractable and facilitated identification and implementation of specific tools for identifying errors of this type in existing operating systems. The results of our efforts on that error type suggested that a quite comprehensive semi-automated search could be conducted for such errors in a given operating system. It was hoped that the same would hold true for other error types.
Analysis of the second error category led to a somewhat different result, however. In studying the error category "Validation of Operands" it became apparent that the objects under consideration were much less tangible than those dealt with in the "Inconsistency..." document. The definition of an operator or operand depended primarily on the level of abstraction on which the operating system was being represented, and the necessary validation was generally at a comparable level [Carl76].
A general strategy was devised for reviewing an operating system for errors of this type, and the requisite tools were identified. However, the analysis of this error type brought into sharp focus the requirement for research in the area of program verification, since the objectives of program verification and the requisite effort in diagnosing errors of this type were quite similar. With this error type it became apparent that the formalization and abstractions that were part and parcel of verifying an operating system were also important in identifying points where validation of critical conditions had not taken place or had been implemented improperly. Determination and analysis of the cumulative effect of conditions and results along relevant control paths as is addressed in the area of program verification is also required in identifying points where incomplete validation has occurred.
The third error type analyzed was that of residuals, i.e., information left over in an object when the object is deallocated from one process and allocated to another. Residuals represented the first error type which had a particularly concrete manifestation in terms of operating system objects (data left undestroyed in a deallocated cell) as well as being a highly intuitive error type. However, it was evident from the outset that the causes of residual errors might well result from other types of errors and that this category might eventually be absorbed by one or more categories handled later on [Hol876]. A strategy for identifying sources of residual errors amenable to partial automation was identified but once again it became apparent that successful identification of the causes of residual errors in operating systems would require sophisticated tools involving symbolic program execution and control flow analysis as well as possibly application of program verification techniques in order to determine the paths and condition sets that might result in bypassing of code intended to clear data cells on deallocation.
The fourth and final error type undertaken was that of serialization. Treatment of this error type launched the project into consideration of the fundamental notions of program structure, operator synchronization, principles of programming practice, etc., and it became quite difficult to identify a viable search strategy. As a side effect, it became immediately evident that the error type "Interrupted Atomic Operations" was a special manifestation of this error category and should be treated in the same context.
A major consequence of work on the aforementioned error types was that it became apparent that the original ten error categories might be reformulated in a more meaningful way in terms of the following four global error categories:
1. Domain Errors
2. Validation Errors
3. Naming Errors
4. Serialization Errors
The remainder of the ten error types (with the exception of the operator selection errors) presented earlier seem either to fall into or split across the four types shown in Table 1.
Of these four categories, two (serialization and validation) were addressed explicitly as a result of the work on the ten originally hypothesized error types; the other two (naming and domain errors) were partially covered through the analysis of one of the remaining error types (allocation/deallocation residual errors). However, the bulk of the examples associated with the latter two categories have not been addressed at any greater detail than was required to group them into their respective categories. Thus, while we believe that the four general categories and their respective subcategories identified represent a useful and representative grouping of example errors and a basis for more directed analysis, it is possible that further study and analysis would result in an even more insightful error classification set.
Appendix B summarizes the four documents produced by the project which address the aforementioned error types.
# TABLE 1
<table>
<thead>
<tr>
<th>Naming Errors</th>
<th>Validation Errors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Access Residual Errors</td>
<td>Queue Management/Boundary Errors</td>
</tr>
<tr>
<td>Originally Catalogued Naming Errors</td>
<td>Originally Catalogued Validation Errors</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Serialization Errors</th>
<th>Domain Errors</th>
</tr>
</thead>
<tbody>
<tr>
<td>Multiple Reference Errors</td>
<td>Exposed Representation Errors</td>
</tr>
<tr>
<td>Interrupted Atomic Operator Errors</td>
<td>Attribute Residual Errors</td>
</tr>
<tr>
<td>Originally Catalogued Serialization Errors</td>
<td>Composition Residual Errors</td>
</tr>
<tr>
<td></td>
<td>Originally Catalogued Domain Errors</td>
</tr>
</tbody>
</table>
4. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
In general, the technical community has continually underestimated the difficulty of the security problem; we feel that the PA effort was no exception. It has proved surprisingly difficult to diagnose protection error vulnerabilities, much less design techniques for detecting them. However, while the PA project is terminating at ISI we feel that work might be profitably continued in the original area of pattern-directed protection evaluation despite the inherent difficulties. This approach proved quite successful for the case in which it was taken to completion and we feel that it should prove equally successful in others. Progress occurs at its own rate, however; research of this type is painfully slow. Much thrashing about and some false starts must be allowed for if real progress is to be made in this difficult research area; the desire to produce useful results quickly can be counterproductive to the total effort.
The PA project has had its principal impact in extending the knowledge base and general understanding of operating system protection vulnerabilities, relating apparently unrelated example errors in terms of those common characteristics which result in a security vulnerability. In addition, it has identified some general procedures which will be valuable in detecting future security system vulnerabilities. Finally, the PA project has, along with other efforts, made the user community increasingly aware of the amount of effort and the extensive cost involved in producing a system which has even a remote chance of providing a reasonable degree of security in an open environment. Unfortunately, it has also become apparent that the commercial sector is unwilling to bear this cost at the present time - that there is no apparent commercial market for systems with the development costs, reduced performance and usage and environmental constraints that must be accepted if secure processing is to take place. Consequently, the procedures developed by this project will probably be of little benefit to the commercial sector and of only marginal benefit to the military sector at this time. They will find application only when we decide that the value of data security and personal privacy are greater than the price we must pay for secure data processing.
The analysis of identified error types was particularly useful in identifying some appropriate research and development activities in the area of data security, particularly with respect to the types of tools required if protection evaluation is to become automatable. Tools of the sort described in the "Data Dependency Analysis" document will be needed in much of the evaluation activity, but might be constructed so as to be generalizable across systems and programming languages.
During the research effort one thing that became evident was the role of program verification techniques in detecting operating system security vulnerabilities. It is hard to see how truly definitive statements about the security afforded by an operating system can ever be made until PV techniques have been applied. However, certain unsettled issues about the appropriate application of PV techniques to O.S. security analysis suggest that research in protection evaluation might be profitably continued in parallel with research in PV, principally to insure that PV is applied at appropriate levels of operating system representation, that mapping between levels is handled properly, and that the operating system is represented in sufficient detail to insure that security vulnerabilities do not go undetected.
As a final footnote to this research effort we offer the following comment for those who are optimistic about near-term improvement of the data security problem. Our insight into and awareness of security vulnerabilities has tended to vastly exceed our progress in detecting and correcting them. There are still difficult research problems to be attacked in the area of PE in particular and data security research in general. In the course of addressing these research problems there will undoubtedly be much floundering and some abortive starts. Progress can be expected to be painful and slow in final disposition of the security problem, particularly since such work seems to involve delving into the basic premises of programming theory and practice.
REFERENCES
Bis+76 Bisbey, Richard, II et al., Data Dependency Analysis, Information Sciences Institute, ISI/RR-76-45, February 1976.
APPENDIX A
1. Consistency of data over time
Operating systems continuously make protection-related decisions based on data values contained within the system data base as well as on values which have been submitted to and validated by the system.
In order for a correct protection decision to be made (in the absence of other types of protection errors), the data must be in a consistent state, and remain in a specific relationship with other data items during the interval in which the protection decision is made and the corresponding action taken.
2. Validation of operands
Within an operating system, numerous operators are responsible for maintaining the system's data base and for changing the protection state of processes or objects known to the system. Many of these operators are critical in the sense that if invalid or unconstrained data are presented to them, a protection error results.
3. Residuals
A generally accepted error type is that of the "residual," i.e., information which is "left over" in an object when the object is deallocated from one process and allocated to another. Several types of residual errors exist, including the following:
1. Access residuals: Incomplete revocation or deallocation of the access capabilities to the object or cell.
2. Composition residuals: Incomplete destruction of the cell's context with other cells or objects.
3. Data residuals: Incomplete destruction of old values within the cell.
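A toy illustration of the third kind (a data residual), not taken from the report: an allocator that recycles storage without clearing it hands the previous owner's data to the next.

```python
free_list = []                       # hypothetical pool of recycled buffers

def allocate(size):
    return free_list.pop() if free_list else bytearray(size)   # old contents survive

def deallocate(buf):
    free_list.append(buf)            # no zeroing on deallocation

secret = allocate(8)
secret[:] = b"hunter2\x00"
deallocate(secret)
leaked = allocate(8)                 # a "new" buffer for a different owner
print(bytes(leaked))                 # b'hunter2\x00' -- the data residual leaks
```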
4. Naming
Names are used within operating systems to distinguish objects from one another. There are many ways in which name binding errors can lead to protection errors. For example, often the naming scheme does not have enough resolution (or does not use that resolution) to distinguish properly between named objects. This results in those errors typified by a user creating an ambiguity by giving an object the same name as a previously named (or about-to-be-named) object, with the system, as a result, referencing the wrong object.
5. Domain
A domain is an authority specification over an object or set of objects (usually thought of in terms of an address space). Enforcement of domains is typically limited to the resolution of the hardware protection mechanism provided by the computer. Many of the errors in operating systems are the direct result of one of two types of domain-related errors:
1. Information associated with the wrong domain.
2. Incorrect enforcement at domain crossing.
6. Serialization
Within any operating system, there are resources to which the operating system must not only control access, but also prevent concurrent use or otherwise enforce orderly use. This problem, known as "serialization," is of particular importance in multiprogramming systems where serialization errors often result in protection errors.
7. Interrupted Atomic Operations
Several protection errors have appeared in which the enforcement of a protection policy was based on the assumed uninterruptability of an operation. In each of the cases, the operation was in fact interruptable, resulting in a protection error.
8. Exposed Representations
To each user, an operating system presents an abstract machine consisting of the hardware user instruction set plus the pseudo-instructions provided through the supervisor call/invocation mechanism. The pseudo-instructions, in general, allow the user to manipulate abstract objects for which representations and operations are not provided in the basic hardware instruction set. Inadvertent exposure by the system of the representation of the abstract object, the primitive instructions which implement the pseudo-instructions or the data structures involved in the manipulation of the abstract object can sometimes result in protected information being made accessible to the user, thereby resulting in a protection error.
9. Queue Management Dependencies
This error type broadly includes those errors characterized by improper or incomplete handling of boundary conditions in manipulating data structures such as system queues or tables. The consequence is generally a system crash or lockup resulting in gross denial of service. We distinguish this from legitimate denial of service conditions when the system is merely overloaded, but still functioning according to the scheduling algorithm design specifications.
10. Critical Operator Selection Errors
This error type includes those errors in which the implementer invoked the wrong function, statement, or instruction resulting in the program performing the wrong function. In a sense, this is a catch-all category, since every programming error can ultimately be so classified.
APPENDIX B
The purpose of this appendix is to provide a context for reading the respective error detection papers.
Inconsistency of a single data value
A common error in contemporary operating systems is the assumed consistency of operands between multiple uses. If an operand can be modified between two uses by a program and the second use relies on an attribute referenced in or set by the first usage, an error results. Multiple usage of a single operand often occurs during validation/use sequences where an operand is first validated and subsequently used in a computation. Numerous variations exist that make locating instances of the error difficult. For example, the operand can be referred to by different names, or the uses may be contained in textually disjoint routines.
Two patterns for finding inconsistency errors are as follows:
1a. Find any sequence of REFERENCE ... REFERENCE to a common operand, or
1b. Find any sequence of STORE ... REFERENCE to a common operand,
whenever
2. the operand can be modified between the pair of operators.
Detection of Inconsistency Errors. Outlined below is a set of search strategies for finding consistency errors based on detecting possible instances of condition 1a or 1b. Large portions can be automated.
Consider the possible storage classes that operand A can take with respect to the routine containing the two references. They are limited to one of the following three:
1. A local
2. A parameter
3. A global
Case 1: Local Operand
If the operand is local (in the sense that no other routine can access it), then the error cannot occur and, thus, no search technique is needed.
Case 2: Parameter Operand
If the operand is a value parameter, then, since it is copied at invocation time into a local variable within the routine in question, it can be treated as a local operand as in Case 1. If the operand is a name or reference parameter, the following search strategy applies:
1. For each parameter within a routine, find all reference and store instructions to the parameter.
2. For the routine, find all control flow operators.
3. For any REFERENCE ... REFERENCE or STORE ... REFERENCE on a control path (determined by the control flow operators found in 2), examine the pair to determine if the second reference operation relies on an attribute referenced or stored by the first operator.
4. For any control path that allows a single REFERENCE to be executed iteratively, determine if the second execution of the REFERENCE relies on an attribute referenced by the first execution.
The above procedure finds all possible occurrences of the error for parameter operands. Steps 1 and 2 can easily be implemented by computer program.
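Steps 1 and 2 lend themselves to automation. As a rough illustration only (none of this code comes from the report), the following Python sketch assumes a toy representation in which a routine is a list of labeled instructions plus a successor map describing control flow; it lists candidate REFERENCE ... REFERENCE and STORE ... REFERENCE pairs, leaving the judgment called for in steps 3 and 4 to the human evaluator.

```python
# Illustrative sketch, not the report's tool: instructions are (label, op, operand)
# tuples and 'successors' maps a label to the labels that can follow it.
def find_candidate_pairs(instructions, successors, parameter):
    """Return (first, second) label pairs that may form inconsistency windows."""
    # Step 1: collect every REFERENCE and STORE instruction touching the parameter.
    touches = {label: op for (label, op, operand) in instructions
               if operand == parameter and op in ("REFERENCE", "STORE")}

    # Step 2: walk control paths; report any later REFERENCE reachable from a
    # REFERENCE or STORE to the same operand.
    pairs = set()
    for start in touches:
        seen, stack = {start}, list(successors.get(start, ()))
        while stack:
            label = stack.pop()
            if label in seen:
                continue
            seen.add(label)
            if touches.get(label) == "REFERENCE":
                pairs.add((start, label))
            stack.extend(successors.get(label, ()))
    return sorted(pairs)

# Hypothetical routine: validate an operand, branch, then use it again.
instructions = [("L1", "REFERENCE", "buf_len"),   # validation check
                ("L2", "BRANCH", None),
                ("L3", "REFERENCE", "buf_len")]   # later use of the same operand
successors = {"L1": ["L2"], "L2": ["L3"]}
print(find_candidate_pairs(instructions, successors, "buf_len"))
# -> [('L1', 'L3')]: a candidate window in which the operand could be modified
```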
Case 3: Global Operand
If the operand is a global, then it can be accessed by multiple routines. The following search strategy applies:
1. For each global, find all reference and store instructions to the global.
2. Find all the control flow operators.
3. For any REFERENCE ... REFERENCE or STORE ... REFERENCE on a control path examine the pair to determine if the second reference operation relies on an attribute referenced or stored by the first.
4. For any control path that allows a single REFERENCE to be executed iteratively or recursively, determine if the second execution of the REFERENCE relies on an attribute referenced by the first execution.
Note that, with one exception, this is the same search strategy used for parameters. The difference is that, for globals, multiple execution of a single instruction can also result from recursion. Otherwise, the procedure is identical, and in fact the same code used to detect potential inconsistency errors for parameters can also be used to detect potential inconsistency errors for globals.
The above search strategies find all possible consistency errors. A more detailed description of Inconsistency Errors can be found in Bis+75.
Validation
Validation of operands is one of the more basic functions performed in operating systems; it constitutes one of the more basic error types. Validation can take a variety of forms, from checking that an integer subscript is within the bounds before allowing an array access operator to proceed, to checking that a set of properties such as the time-of-day and the caller's access rights hold for an operation to be performed. No single evaluation approach seems adequate to deal with the wide variety of validation found in contemporary systems and the information a protection evaluator may have available for performing the evaluation task. As such, two approaches for finding validation errors have been identified. The protection evaluator may choose either or a combination of both.
The first requires the protection evaluator to be able to recognize an invalid condition for an operand. It begins with the sources of data needing validation, finds the operators which use such data (i.e., those which are potential candidates for validation errors), and computes the validation condition holding for a given operator/operand. A protection evaluator must then judge the adequacy of the validity condition for the given operator. The second approach begins with operators and validation conditions which must hold and determines if the conditions are actually enforced by the code. It requires the evaluator to be able to identify all critical operators and specify their associated validation conditions before proceeding with the evaluation.
Outside-to-Inside Approach. A purpose of validation is to prevent privileged system operators from operating on incorrect/unvalidated operands. Externally-supplied user data constitutes such a source. They enter the system in a variety of ways. Direct or indirect parameters to supervisor subroutines constitute one large source. Others include mutually agreed upon mail boxes, communications areas, or files. The operating system is responsible for insuring that this data is properly checked before a system operator uses it.
One approach for determining the adequacy of validation is to begin at the user/system interface and calculate the validity conditions for all user-supplied data at various operators within the system. This can be done as follows:
1. Identify all data entry points into the system. (At all such points, data can enter the system that needs to be validated.)
2. For each data entry point, calculate data flow paths through the system. All operating system variables to which the entering data is directly or indirectly assigned must be recorded.
3. Examine all operators referencing a variable identified in (2) above. Verify that the validity condition enforced on each data path leading to that operator/operand is sufficient.
Step 2 can be automated using data dependency analysis or a modified form of symbolic execution. Steps 1 and 3 must be done manually. It is important to note that without detailed semantic information describing operations being performed, any procedure, such as the above, can only tell an evaluator where to look for errors, but not what to look for.
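As a purely illustrative sketch of how step 2 might be mechanized (this is not the analysis tool the report refers to), data dependency analysis can be approximated as reachability over assignment edges; the variable names below are hypothetical.

```python
# Propagate "user-supplied" status along direct and indirect assignments.
from collections import deque

def tainted_variables(entry_point_vars, assignments):
    """assignments maps a variable to the variables its value flows into."""
    tainted = set(entry_point_vars)
    worklist = deque(entry_point_vars)
    while worklist:
        var = worklist.popleft()
        for target in assignments.get(var, ()):
            if target not in tainted:
                tainted.add(target)
                worklist.append(target)
    return tainted

# Hypothetical data paths: a supervisor-call parameter is copied into a system
# table entry, which is later copied into an I/O control block field.
assignments = {"svc_param": ["table_entry"], "table_entry": ["iocb_addr"]}
print(tainted_variables(["svc_param"], assignments))
# -> {'svc_param', 'table_entry', 'iocb_addr'}; every operator referencing one
#    of these variables must be examined in step 3.
```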
Inside-to-Outside Approach. Suppose a protection evaluator can identify all critical operators in the system and can specify for each operator the validity condition that must hold for the successful completion of that operator. The problem of finding validation errors then amounts to determining the sufficiency of validation code on all paths leading to that operator. A procedure for checking sufficiency would be as follows:
1. Identify the critical operations within the operating system and the necessary conditions associated with those operations. Record the condition with the associated operand.
2. If an operand is a local or a parameter, follow all possible control paths leading from the operation to determine the data paths leading to the critical operation. In passing in a reverse direction through code that enforces portions of the validation condition, discard the enforced condition. Eventually, one of the following will occur:
a. All conditions are enforced for that control path.
b. Not all conditions are enforced upon reaching a user/system interface, i.e., a validation error can be caused by supplying a value outside the range of the remaining unenforced condition.
c. The control path terminates at a global variable/parameter interface within the system. Go to 3.
3. If the operand is a global or formal parameter from 2c, all operators modifying the global/parameter must contain as an output condition the validity condition associated with the respective variables. They become critical operators to be evaluated by this same algorithm.
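The following is a minimal sketch of the backward check for a single control path, assuming the path has already been flattened into a reverse-order list of validation steps and interface crossings; case 2c (globals) is not modeled, and the condition strings are hypothetical.

```python
# Walk a control path in reverse, discarding each condition as the code that
# enforces it is encountered; whatever remains at the user/system interface is
# a candidate validation error.
def check_path(required_conditions, reverse_path):
    remaining = set(required_conditions)
    for kind, condition in reverse_path:
        if kind == "enforce":
            remaining.discard(condition)     # outcome (a) once the set is empty
        elif kind == "interface" and remaining:
            return remaining                 # outcome (b): potential validation error
    return remaining

path = [("enforce", "index >= 0"), ("interface", None)]
print(check_path({"index >= 0", "index < table_size"}, path))
# -> {'index < table_size'}: unenforced at the interface, so a user could
#    supply a value outside that range.
```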
A more detailed description of validation errors can be found in Carf76.
Residuals
A common security problem is the residual—data or access capability left after the completion of a process and not intended for use outside the context of that process. If a residual becomes accessible to another process, a security error may result. A major source of such residuals is improper or incomplete allocation/deallocation processing.
Probably the most widely recognized type of residual is the data residual in which some property of the data associated with a cell is not disposed of upon reallocation. One typically thinks of content residuals, i.e., residuals where the cell content is retained after reallocation. Data residuals can, however, involve other cell attributes. Such attributes can include cell size, cell location, and the physical relationship of the cell to other cells. While not representing as high a communications bandwidth as the content residuals, these latter forms of data residual can also represent significant security errors.
The following procedure for finding data residuals is based on identifying the cell allocation/deallocation routine in which residual prevention code should be contained. It consists of four basic steps:
1. Identify all cell types found in the system. This can be done by manually listing various storage media and cells on that media and by examining system data declarations.
2. For each cell, identify its particular freepool, i.e., the buffers for cell resources between deallocation and allocation.
3. For each freepool, identify allocation/deallocation code by finding all symbolic references to the freepool.
4. For each allocation/deallocation routine, determine if a data residual can occur.
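Step 3 is essentially a cross-reference search. A small, purely illustrative Python helper (the routine and symbol names are hypothetical) could group source routines by the freepool symbols they mention, giving the evaluator the allocation/deallocation code to inspect in step 4.

```python
import re

def find_freepool_references(sources, freepool_symbols):
    """sources: {routine_name: source_text}; return {symbol: [routines using it]}."""
    hits = {symbol: [] for symbol in freepool_symbols}
    for routine, text in sources.items():
        for symbol in freepool_symbols:
            if re.search(r"\b" + re.escape(symbol) + r"\b", text):
                hits[symbol].append(routine)
    return hits

sources = {"GETBUF":  "CALL REMOVE(BUFPOOL) ...",
           "FREEBUF": "CALL INSERT(BUFPOOL) ...",
           "READER":  "CALL GETBUF ..."}
print(find_freepool_references(sources, ["BUFPOOL"]))
# -> {'BUFPOOL': ['GETBUF', 'FREEBUF']}: the routines in which residual
#    prevention code should be found.
```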
A second major type of residual is the access management residual, sometimes known as a "dangling reference." Unlike data residuals that deal with the various attributes of a cell, access management residuals deal with the access paths used to reference a cell, their creation and destruction.
Access paths are, at some level of representation, simply data stored in special cells (e.g., bounds registers, PSW's, segment/page tables, capability cells, etc.). Thus, techniques similar to those described above for finding content residuals will also find certain types of access residuals, i.e., those caused by incomplete deallocation of an access path created by an allocation routine. Access management residuals differ from content residuals in an important aspect. There may be multiple access paths to a given cell, all of which must be deallocated. Furthermore, access paths can be created by other than the formal allocation routines. For example, code that copies an existing access path produces an access path which must also be accounted for at deallocation. Similarly, special instructions may exist (e.g., the IBM 370 "LOAD-REAL-ADDRESS") that produce access paths as a result of invocation, or that can be interrupted causing an access path to be stored for use when the instruction is reinvoked. Thus, in addition to the above procedure, one must examine the system for these latter three sources of access paths and account for the paths at deallocation.
A more detailed description of Residual errors can be found in HolB76.
Serialization
Serialization errors represent one of the broader categories investigated. As such, the error has numerous manifestations and can be described in a variety of ways including ordering specifications; interoperation communication and insuring the proper use of communication channels; mutual exclusion for preserving object integrity; and mutual exclusion for the noninterference of non-atomic operations.
Three distinct approaches for detecting serialization errors are:
1. Analyze the target system macroscopically and informally for the adequacy of each of a list of serialization provisions. The problem with this approach is that no actual algorithm is suggested by the serialization provisions for deciding when serialization errors do or do not exist.
2. Determine potential concurrences, and, given these, determine whether any of them (taken pairwise) represent access conflicts.
3. Assume all access sequences to sharable objects are critical and represent potentially conflicting concurrences unless these are made impossible either by explicit invocations of serialization mechanisms or by other serializing program logic. The problem with this approach is that it detects a great many access intervals that are not serialized in an obvious manner, and one must then resort to deeper analysis such as that in (2).
Each approach is discussed in greater detail along with suggested ways for alleviating deficiencies in Carl 78.
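To make approach (2) concrete, here is a minimal Python sketch, assuming the potential concurrences have already been determined and each code section has been summarized by the objects it reads and writes; a pair conflicts if the two sections touch a common object and at least one of them writes it. The section and object names are hypothetical.

```python
from itertools import combinations

def access_conflicts(sections):
    """sections: {name: {"reads": set, "writes": set}} -> list of conflicting pairs."""
    conflicts = []
    for (a, acc_a), (b, acc_b) in combinations(sections.items(), 2):
        shared = (acc_a["writes"] & (acc_b["reads"] | acc_b["writes"])) | \
                 (acc_b["writes"] & (acc_a["reads"] | acc_a["writes"]))
        if shared:
            conflicts.append((a, b, shared))
    return conflicts

sections = {"open_file":  {"reads": {"file_table"}, "writes": {"file_table"}},
            "close_file": {"reads": {"file_table"}, "writes": {"file_table"}},
            "get_time":   {"reads": {"clock"},      "writes": set()}}
print(access_conflicts(sections))
# -> [('open_file', 'close_file', {'file_table'})]: a pair that must be
#    serialized or shown to be non-concurrent.
```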
Abstract
SETL2 is evolving rapidly as our work on it continues. This document describes many changes made since the original SETL2 report ([Sny89]) was written.
The most significant change is the incorporation of features of object-oriented programming languages, including abstract data types, encapsulation, multiple inheritance, and operator overloading. The other changes we regard as temporary solutions to pressing problems. We have added several new I/O procedures and a link to lower level programming languages, but both of these areas are in need of considerably more work and are likely to change in the future.
## Contents
1. Introduction
2. Operators
3. Statements
4. Objects and Classes
   4.1 Basic Concepts
   4.2 Overall Structure of a Class
   4.3 Instance Variables And Class Variables
   4.4 Methods
       4.4.1 Methods As First Class Objects
   4.5 Creating An Object
   4.6 Inheritance
       4.6.1 An Application Of Inheritance
   4.7 SETL2 Operator Overloading
       4.7.1 Binary Operator Methods
       4.7.2 Unary Operator Methods
       4.7.3 Relational Operator Methods
       4.7.4 Map, Tuple, And String Component Methods
       4.7.5 Deletion Operations
       4.7.6 Iteration Over An Object
       4.7.7 Printing An Object
   4.8 Testing An Object’s Type
   4.9 Storing Objects In Files
5. An Interface With C Functions
   5.1 Call-Out
   5.2 Call-Back
   5.3 Avoiding String Conversions
   5.4 Putting It All Together
1 Introduction
As our work on SETL2 progresses, improvements in the language are being implemented at a rapid pace. Since the original SETL2 report [Sny89] was written we have added a number of new I/O procedures, a temporary call-out facility, and most significantly support for object-oriented programming. This document describes those new facilities.
The goal in adopting features from object-oriented programming languages is to provide SETL2 with abstract data types and the ability to overload operators. An important feature of SETL which is missing in SETL2 is user-defined operators. We feel that the concept is good, but what one really wants is the ability to redefine SETL2’s built-in operators on user-defined types, i.e. more operator overloading. The concepts of object-oriented programming seem well-suited as a model for these features.
The I/O system in SETL2 is very much like that in lower level programming languages, which is a shame. There seems to be no compelling reason why that need be the case. Most of the high-level dictionary used on strings and maps can be easily extended to apply to files as well. Although this is our eventual goal, we haven’t accomplished it yet. For the time being we have extended the I/O system somewhat, but not changed its nature significantly. What we have done is provide the low-level primitives which, combined with the object features, can be used to implement a superior I/O system in SETL2 itself. Our next task in this area is to experiment with various new I/O systems using classes.
An often-requested feature is the ability to call procedures written in lower level languages. For various reasons this is not at the top of our priority list, although we do recognize its importance. As a stop-gap measure, we have implemented a temporary call-out facility, which we intend to replace as time permits. If the use of the provided features is encapsulated in a single package, it should be possible to use functions written in a lower level language now, without requiring drastic changes when we implement a more ambitious interface.
This is not a comprehensive description of SETL2, and as this is being written no such document exists. Plans are being made to produce such a reference but until that is available we suggest you refer to [SDDS86] for a general description on programming with sets, [Sny89] for a description of the core features of SETL2, and this document for those features not described in that report.
2 Operators
We have not added any new operators to SETL2, but the function of some of the relational operators has been enhanced. We have extended <, <=, >, and >= to apply to sets and maps in the obvious way:
\[
\begin{align*}
\text{s < ss} & \quad \text{Yields true if } s \subset ss, \text{false otherwise.} \\
\text{s <= ss} & \quad \text{Yields true if } s \subseteq ss, \text{false otherwise.} \\
\text{s > ss} & \quad \text{Yields true if } s \supset ss, \text{false otherwise.} \\
\text{s >= ss} & \quad \text{Yields true if } s \supseteq ss, \text{false otherwise.}
\end{align*}
\]
This addition makes the subset and incs operators redundant. They have not been removed from the current implementation yet, but we consider them obsolete and expect to remove them eventually. For this reason we did not provide the ability to overload them (see 4.7), although we do provide the ability to overload the less than operator.
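As an aside for readers who want to check the intended truth table, Python's built-in sets happen to define the same four comparisons, so the semantics above can be mirrored in a few lines (this is an analogy only, not SETL2 code):

```python
# Python set comparisons match the subset semantics described above.
s, ss = {1, 2}, {1, 2, 3}
print(s < ss)    # True : s is a proper subset of ss
print(s <= ss)   # True : s is a subset of ss
print(ss > s)    # True : ss is a proper superset of s
print(ss >= ss)  # True : a set is a (non-proper) superset of itself
```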
3 Statements
Statements are as described in the SETL2 report, with one minor addition. The loop statements described there included for, while, and until loops, but nothing to support a do forever loop. It is obviously easy to code this as while true loop, but that is a bit uglier than we like. We now allow a loop header without a for, while, or until clause, so a do forever loop can be written with the following syntax:
```
loop-forever -> loop
                    statement-list
                end loop ;
```
Clearly there should be an explicit exit statement somewhere in the statement-list, or the loop will never terminate.
4 Objects and Classes
The most significant extension to SETL2 is the incorporation of features found in object-oriented programming languages. The features provided include encapsulation, multiple inheritance, user-defined abstract data types, and operator overloading on user-defined types.
The principal goal of this project is to provide a mechanism for the SETL2 programmer to create his own types and to define the normal SETL2 operations (+, *, etc.) on those types. The concepts of object-oriented programming languages seem to be the best model for this feature. The overloading of methods provided in other languages is easily extended to include many of SETL2’s built-in operations.
Although we have provided multiple inheritance, we do not provide the many mechanisms used to control exactly what is inherited and how it is named. We have provided an ‘all or nothing’ inheritance, in which a subclass inherits all of the names in its superclasses without modification. There are rules for handling duplicate names, but these control what is hidden and what is visible; they do not allow selective inheritance or the ability to rename an inherited method.
We have gone to considerable effort to be consistent with the existing features of SETL2. We have preserved the concepts of value-semantics and passing parameters by copying even when it seemed more efficient to abandon these ideas. This is somewhat unusual among object-oriented programming languages. Pointer semantics are much more common.
4.1 Basic Concepts
Before getting into details on the SETL2 object system, we will present some of the general concepts and terminology of object oriented programming. Much more detail on these topics can be found in [Mey88] or [GR89].
An object is a set of values and a set of operations which can be performed on that object. It will normally be used to represent something not easily represented by one of SETL2’s built-in types, such as a display item, file, heap, etc. The operations defined on an object are called methods, and are conceptually similar to procedures.
The fundamental difference between a method and a procedure is that a method specifies an operation to be performed, but not the specific procedure which should be used in performing the operation. To illustrate the difference between these ideas, suppose we have two objects, one a stack and the other a queue. Each of these objects has a method called top, which returns the top element. We have a variable, x, which can be either a stack or a queue. Then the expression x.top() will call either the method defined on stacks or the method defined on queues, depending on the value of x at the time the expression is evaluated. A method is therefore very much like an overloaded procedure.
A method is invoked by passing it the containing object and any other operands, or arguments, required by the method. The process is much like a procedure call except the object determines the procedure called and becomes an implicit argument to the call. In the terminology of object-oriented programming this is called passing a message to the object.
A class describes the set of data elements and methods defined on a set of similar objects. The set of objects described by a class are called the instances of the class. Both data elements and methods defined on a class can be either public or private. We also allow public or private data elements common to all instances of a class, but this is somewhat unusual.
The data elements and methods of a class can be merged into another class by a process called inheritance. The source class in such an operation is called the superclass and the target is called the subclass. The subclass is able to use data elements or methods in the superclass as if they were declared internally. It is possible for the subclass to override methods in the superclass with its own, simply by locally defining a method with the same name.
4.2 Overall Structure of a Class
The syntax of a class definition closely resembles a package definition. A class is treated as a compilation unit, at the same level as a package or program. It is not possible to embed a class within a package, program or other class. Like packages it consists of two parts, a class specification and a class body. It is not necessary for the specification and body to be in the same source file, but unlike packages there is no benefit in separating them. The class specification contains a list of superclasses and names of methods and data elements visible to units which use the class. The complete syntax of a class specification is:
```
class-definition -> class class-name ;
                        inherit-clause
                        data-declarations
                        method-declarations
                    end
```
Each of the components of a class specification will be elaborated in following sections.
A class body contains the bodies of methods declared in the class specification, along with methods and data elements visible only within the class. The syntax of a class body is:
```
class-body -> class body class-name ;
```
We have adopted syntax similar to the package specification / package body syntax for consistency, but there are a few differences we would like to point out. First, any procedure in a class specification is treated as a method and must be called as such. Second, the inherit clause is placed within a class specification, not a class body. Inheriting a superclass is quite different from using a package. The distinction will be explained in detail in 4.6.
If this is a little cryptic, please be patient. Each of these concepts will be explained in following sections, but before we get into details it is important to see how a class is declared at the outermost level.
### 4.3 Instance Variables And Class Variables
Data elements in classes are divided into four categories, based upon where they are stored and where they are visible. A data element can be stored either in an object, in which case there is a distinct element owned by each class instance, or it can be global to all instances of the class. We will refer to an element stored with an object as an *instance variable* and a variable global to all instances as a *class variable*.
An instance variable is declared with the same syntax as a variable declaration:
```
variable-declaration -> var identifier ;
```
Note that all instance variables *must* be explicitly declared. Instance variables in the *current instance* (see 4.4) may be referenced by name only and instance variables in other instances are referenced by *instance.name*.
The initialization clauses on an instance variable are assignments executed when an instance is created, but before any `create` method (see 4.5) is called.
Class variables are declared in a similar manner to instance variables, but we require an extra keyword:
```
class-variable-declaration -> class var identifier := expression ;
```
The initialization clauses on class variables are executed when a class is loaded by the interpreter. This can happen at one of two times: If a class is explicitly used by a program, the class is loaded at the start of execution. Otherwise, the class might be loaded implicitly by the `unbinstr` procedure (see 6.2) or the `getb` procedure.
At this point we will start on an example, to illustrate the concepts presented so far. Suppose we are building a system for a corral, and we would like a class to store information about horses, donkeys, etc. We will start out with the following outline of a class. At this point we are only declaring instance and class variables, not methods.
```
class beast_of_burden;

    class var total_beasts;          -- count of instances of beast_of_burden
    var kind_of_beast;               -- "horse", "donkey", etc.

end beast_of_burden;

class body beast_of_burden;

    class var beasts_in_use := {};   -- set of beasts currently assigned
    var assigned_to;                 -- who has a particular animal

end beast_of_burden;
```
In this example the variable `total_beasts` is available to any unit using the class, and there will be only one such variable. Each instance of `beast_of_burden` will contain a variable `kind_of_beast` and any unit using the class will be able to use or change that variable. The variable `beasts_in_use` will be visible only within the class body and a single copy is shared by all instances. Each instance of `beast_of_burden` will contain a variable `assigned_to` but it will be visible only within the class body.
### 4.4 Methods
A method is similar to a procedure, except that it is owned by a particular instance of a class, called the `current instance`. The syntax of a method definition is identical to a procedure definition, except that read-write and write-only parameters are not allowed:
Methods are called with an expression of the form *object.method-name(arguments)*. Here *object* is an instance of a class in which *method-name* is a method. The method will be invoked and *object* will become the current instance. Any references to instance variables which are not preceded by an explicit object will refer to variables in the current instance. A method may also refer to objects other than the current instance, using the expression *instance.instance-variable*.
Methods may be called within a class body without the instance prefix. In that case the current instance will remain current. There is also a new nullary operator, `self`, which will produce the value of the current instance.
Expanding on our corral example, suppose we now wish to provide a method to allow clients to check out beasts. With this method added, our `beast_of_burden` class is as shown in figure 1. With that modified class we can assign a beast to a client with the following method call.
beast.check_out(client)
The current instance in a method call is analogous to a read-write parameter in a procedure. Internally, the SETL2 interpreter stores objects as tuples, where the first element of the tuple is a key indicating the class of the object and the remainder of the tuple stores the values of instance variables. When a method is called, the values in the tuple will be copied into the instance variables. When the method returns, those values will be copied back into the tuple. It is important to note that the current instance in a method call can change as a result of the call, but *not* until the method returns. This protocol is exactly the same as passing parameters by copying, not by reference.
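The copy-in/copy-out protocol described above can be modeled in a few lines of Python. This is only an illustrative model of the semantics, not the interpreter's actual code; the instance variable names are taken from the corral example.

```python
# Model an object as a list [class_key, values...]; values are copied out
# before the method body runs and copied back only when it returns.
def call_method(obj, body, *args):
    class_key, *values = obj
    names = ("kind_of_beast", "assigned_to")
    workspace = dict(zip(names, values))        # copy in: the body sees copies
    result = body(workspace, *args)
    updated = [class_key] + [workspace[n] for n in names]
    return updated, result                      # copy out: visible only on return

def check_out(ws, client):                      # stand-in for the check_out method
    ws["assigned_to"] = client

beast = ["beast_of_burden", "horse", None]
beast, _ = call_method(beast, check_out, "George")
print(beast)   # ['beast_of_burden', 'horse', 'George'] -- updated after return
```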
4.4.1 Methods As First Class Objects
A method may be used as a first class object just as a procedure can, but only if there is an implicit instance variable included. Consider the method `check_out` in our corral example. The way to use its value is as follows:
\[
\text{result} := \text{beast.check\_out};
\]
The value of `result` is similar to a procedure. If used in a procedure context, i.e. \( y := \text{result}(x) \), then the method `check_out` will be invoked with `beast` as the current instance. It is a little like absorbing `beast` into the method.
This system was chosen for consistency with procedure values. When a procedure value is used, SETL2 saves the environment of that procedure, or the current activations of all enclosing procedures. Within a class body the current instance is part of a method’s environment as well. A method value is denoted by the method name alone within a class body, so we bind the current instance to the method and save the combination as a procedure. We do not allow methods to be used as first-class objects without an associated object.
4.5 Creating An Object
An object is created with a call to a class, so continuing with the corral example the statement to create a new beast of burden is:
\[
\text{new\_beast} := \text{beast\_of\_burden}();
\]
As part of the creation process, each of the initialization clauses on the instance variable declarations are executed.
Although it isn’t necessary, it is possible to provide a `create` method which accepts parameters and uses them to initialize the created instance. Such a method must appear in both the class specification and class
body, since it must be visible outside the class body. Any parameters on the creation call will be passed to `create`. Thus if `beast_of_burden` contained the following method:
```plaintext
procedure create (a,b);
    kind_of_beast := a;
    assigned_to := b;
end create;
```
then the previous creation call might look like this:
```plaintext
new_beast := beast_of_burden ("horse","George");
```
The number of actual parameters on the creation call must agree with the formal parameters in `create`. Notice that `create` does not return anything. The `create` procedure does not actually create the new object. When `create` receives control, the object has been created by the interpreter and its instance variables have been set according to the initialization clauses on the instance variable declarations. The `create` procedure is invoked after this preliminary initialization, and the new object will be installed as the current instance. When `create` terminates that current instance will be returned. Any value returned by `create` will be ignored.
We would like to point out that `create` is not a reserved word. It is only special in that if a method named `create` is present in a class specification it will be called implicitly at the time an object is created.
### 4.6 Inheritance
The concept of inheritance among classes is related to the concept of using a package or class, but considerably stronger. The fundamental difference is this: When a class is used by a program, package, or another class the methods and instance variables are only available as components of objects of that class. When a class is inherited, its instance variables and methods become a part of the subclass, as if the superclass were textually inserted in the subclass. It is possible and reasonable for a class to both use and inherit the same class. This probably won’t become clear without some examples, but we have to present the syntax first. The syntax of an inherit clause is:
```plaintext
inherit-clause -> inherit identifier, ..., identifier ;
```
The inherit clause is placed in a class specification, after the header and before any other declarations. Each of the `identifier`’s above must be classes available in the library. All variables and methods in the superclass will be brought into the current class, unless there are name conflicts. Here are the rules for resolving those:
- No variable names may be redefined.
- Method names follow similar rules to packages. A local definition overrides an inherited one. If there are conflicts among inherited names, all are hidden. Hidden names are accessible with the syntax: `superclass.method-name`. Hidden names are accessible only within class bodies, and then only for names in superclasses.
To illustrate the distinction between use and inherit, suppose we have a class `animal` in our library which looks like this:
```pascal
class animal;

    var species;

    procedure create(kind);

end animal;

class body animal;

    var owner;

    procedure create(kind);
        species := kind;
    end create;

end animal;
```
Now a `beast_of_burden` is also an animal, so we might want to make use of some of the facilities in `animal` within `beast_of_burden`. What happens if we use the class `animal`? Then within `beast_of_burden` we can create animals and store and retrieve their species. We will not be able to access the `owner` instance variable because it is not public. Essentially, we can use an animal as a component of a beast of burden, but that's all.
Now suppose `beast_of_burden` inherits `animal`. In this case all of the instance variables in `animal` are available as part of a `beast_of_burden`. So if `silver` is a `beast_of_burden` we can get its species with `silver.species`. If `beast_of_burden` does not declare its own `create` method, then the one in `animal` will be called when a `beast_of_burden` is created. If it does contain a `create` method, then that method will be called. If we want to call the method in `animal`, it can be called with an expression like `animal.create("horse")`.
The only difference between inheriting and textual inclusion is that redefined methods are hidden rather than being an error, and those hidden methods can still be accessed by qualifying the method name with the name of the class that defines it.
Another consequence of the differences in strength between inheriting and using a class, is the distance of the name transfer. Suppose `a`, `b`, and `c` are classes. If `a` uses `b`, and `b` uses `c`, then nothing in `c` is visible in `a` (unless of course `a` also uses `c`). That is not the case with inheritance. Remember it is similar to textual insertion and is recursive. If `a` inherits `b` and `b` inherits `c` then everything in `c` not blocked by a redefinition in `a` or `b` is visible in `a`. Even hidden methods in `c` are accessible with the expression `c.method-name`.
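The name-resolution rules of this section can be summarized in a small model. The following Python sketch is purely illustrative and only models methods, not variables: a local definition always wins, a uniquely inherited name is visible, and conflicting inherited names are hidden (though still reachable through their defining class).

```python
from collections import Counter

def visible_methods(local, superclasses):
    """local: {name: definition}; superclasses: list of {name: definition} dicts."""
    counts = Counter(name for sc in superclasses for name in sc)
    visible = {}
    for sc in superclasses:
        for name, definition in sc.items():
            if counts[name] == 1 and name not in local:
                visible[name] = definition      # uniquely inherited, not overridden
    visible.update(local)                       # local definitions always win
    return visible

animal  = {"create": "animal.create", "feed": "animal.feed"}
machine = {"create": "machine.create"}
local   = {"create": "beast.create"}
print(visible_methods(local, [animal, machine]))
# -> {'feed': 'animal.feed', 'create': 'beast.create'}; the two inherited
#    'create's are hidden but remain reachable as animal.create / machine.create.
```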
### 4.6.1 An Application Of Inheritance
At this point it seems worthwhile to illustrate inheritance with a somewhat more realistic example. Suppose we have a class in our library for ordered trees (an ordered tree is just a tree in which the order of child nodes is significant). A reasonable representation of ordered trees is a set of nodes, a distinguished root, and a map from nodes to children, where children are represented by tuples. That is, the first child of a node will be `child(node)(1)`, the second child will be `child(node)(2)`, etc. An outline of this class with a depth first search method might be as shown in figure 2.
Now suppose we are building a system which must manipulate expression trees. An expression tree is a kind of ordered tree, but each node represents an operator or a literal. The children of an operator node are its associated operands. To evaluate an expression, we can perform a depth first traversal of the tree, evaluating each node after we have evaluated its children. For example, the expression tree for \((6 \cdot 8) + ((5 + 4) / 2)\) would be as follows:
```
class ordered_tree;

    procedure dfs(p);

end ordered_tree;

class body ordered_tree;

    var nodes := {}, root, child := {};

    procedure dfs(p);                    -- expect procedure to execute on each node

        recursive_dfs(root);

        procedure recursive_dfs(current);
            if current = om then
                return;
            end if;
            for i in [1 .. #(child(current))] loop
                recursive_dfs(child(current)(i));
            end loop;
            p(current);                  -- visit the node
        end recursive_dfs;

    end dfs;

end ordered_tree;
```
Figure 2: Ordered tree class
Expression tree for \((6 \times 8) + ((5 + 4) / 2)\)
Clearly, any operation defined on ordered trees might also be useful on expression trees, so this seems a good application for inheritance. We want to inherit ordered trees, but we have to add a couple of things. We would certainly want a label associated with each node, to hold either an operator or a literal. We would also like a method to return the value of an expression. A class for expression trees might look as shown in figure 3.
From this example we can get some notion of the situations in which inheritance is useful. Usually the subclass should be a specialization of the superclass. In our example above expression trees are clearly a specialization of ordered trees. Furthermore, it must be a large enough specialization that the underlying data structures of the superclass are still useful. For example, an expression tree is also a tree, but it is unlikely that a tree would be implemented in such a way that it would be a useful superclass unless we insist that it be ordered. It would probably be stored as a set of edges, or at best the children of a node would be a set rather than a tuple. Either of these representations is awkward for expression trees. A good indicator of applications of inheritance is to find situations in which variant records would be used in languages which provide them.
4.7 SETL2 Operator Overloading
It is possible to overload SETL2's built-in operators with methods defined on classes, and it is necessary if you want to use those operations on objects. Clearly the SETL2 interpreter will not know how to add two objects unless an addition method is explicitly defined. To define an addition method for the class *beast_of_burden*, for example, the following could be placed in the class body:
```setl2
procedure self + right;

    var result;

    result := beast_of_burden();

    if kind_of_beast = right.kind_of_beast then
        result.kind_of_beast := kind_of_beast;
    elseif {kind_of_beast, right.kind_of_beast} = {"horse","donkey"} then
        result.kind_of_beast := "mule";
    else
        abort("Unknown cross-breed: ", kind_of_beast, " and ", right.kind_of_beast);
    end if;

    return result;

end;
```
With this method defined, two beasts can be mated with the normal SETL2 addition operator, i.e. with an expression of the form beast1 + beast2.
There are a few things which are important to notice here. First, it is not necessary to declare this method in the class specification, only to define it in the class body. Since this is essentially a hook into the SETL2 syntax it is always available. In fact, if an addition is attempted without this method being defined the interpreter will abort the program.
The second thing to notice is that it is the responsibility of the method to create a result instance and to explicitly return it. The current instance is not usually returned, although it could be. Note that you will not normally want to modify the current instance in one of these operations, although you can do so. If this is done you will just have to understand that the expression \( a + b \) might modify \( a \) as well as yielding a value.
Finally, the headers of operator methods are quite unusual. We include the operator itself in the header rather than an identifier. This has the advantage of being easy to remember, but the disadvantage that the method can not be explicitly invoked. The only way to call an operator method is with ordinary expression syntax. The lack of a name means that these methods do not have first class values and that once hidden they are completely inaccessible.
For a good example of the power of operator overloading, see the multiset example in Appendix A.
### 4.7.1 Binary Operator Methods
Most of SETL2's binary operators have two associated methods. We always use the operator itself in the header, so they are fairly easy to remember. The headers have the following forms:
```plaintext
procedure self binary-op id1 ;
procedure id1 binary-op self ;
```
The reason for the two forms is that an object might appear either as the left or the right hand operand in an expression. Most SETL2 operations require both operands to be of the same type, but there are exceptions. The \( * \) operator, for instance, will operate on integers and strings or integers and tuples, in which case the operands can appear in either order. We allow the two forms of binary operators to enable the same sort of thing here. The first form above is used if the left operand determines the method used, in which case the left operand will become the current instance. Otherwise the second method will be used and the right operand will become the current instance.
In deciding how to process a binary operation the SETL2 interpreter gives precedence to the left operand. That is, it goes through the following steps before performing the operation:
1. If the left operand is an object and that object has a corresponding method then that method is used and the left operand will become the current instance.
2. If the left operand is not an object or it doesn’t have a left operand method but the right operand is an object with a right operand method, then the right operand method is used and the right operand will become the current instance.
3. If neither operand has an appropriate method then the program is aborted.
Each of these methods should return a value, although that is not enforced by the system. If no value is returned, then \( \Omega \) will be used, which is probably not what is desired. The method should create a new instance, build its values, and return it.
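A toy model of this dispatch rule, written in Python purely for illustration (the real interpreter is of course not structured this way), may make the precedence clearer.

```python
class Abort(Exception):
    pass

def dispatch_binary(op, left, right):
    """Left operand is consulted first, then the right; otherwise abort."""
    left_method = getattr(left, "left_methods", {}).get(op)
    if left_method:
        return left_method(left, right)     # left operand becomes the current instance
    right_method = getattr(right, "right_methods", {}).get(op)
    if right_method:
        return right_method(right, left)    # right operand becomes the current instance
    raise Abort("no method for operator " + op)

class Thing:
    left_methods = {"+": lambda self, other: "left-dispatched + with " + repr(other)}
    right_methods = {}

print(dispatch_binary("+", Thing(), 5))     # the left operand's method is used
```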
Here is a complete list of the binary operator methods.
4.7.2 Unary Operator Methods
All of SETL2’s unary operators also have associated methods. The headers for these methods have the following form:
```plaintext
procedure unary-op self ;
```
The particular method used will always be determined by the class of the value of the operand, which will become the current instance. Each of these methods should return a value, although that is not enforced by the system. If no value is returned, then \( \Omega \) will be used, which is probably not what is desired. The method should create a new instance, build its values, and return it. Here is a complete list of the unary operator methods.
- # arb domain range pow
4.7.3 Relational Operator Methods
Relational operators are somewhat of a problem, particularly the equality and inequality operators. For one thing, in order to reduce the number of branch instructions the SETL2 code generator assumes that \( a < b \iff b > a \). Therefore, we allow \( a < \) method but do not allow \( a > \) method. For each operation we invoke the \( < \) method, but in the expression \( a < b \), \( a \) will become the current instance and in \( a > b \), \( b \) will become the current instance.
The difficulty with equality and inequality is that these primitive operations are used in determining set and map membership, as well as being explicitly used in SETL2 programs. To further complicate matters, SETL2 uses hash tables to implement sets, so we must absolutely guarantee that two equal values produce the same hash code. Since we can see no way to enforce this, we do not allow equality and inequality to be overridden. Two objects are considered equal if they are instances of the same class, and if all their corresponding instance variables are equal.
Because of these restrictions, only \( < \) and \( \in \) have associated methods. The \( < \) method has two forms (left and right) just like other binary operators and follows the same rules. The \( \in \) operator also has two forms but the precedence is reversed. We give precedence to the right operand in \( \in \) expressions.
Each of these methods must return either true or false.
The \( < \) method will be called for any of the expressions \( a < b \), \( a \leq b \), \( a > b \), or \( a \geq b \). The expression \( a \leq b \) is true if the \( < \) method returns true or if the objects are equal in the sense described above.
4.7.4 Map, Tuple, And String Component Methods
SETL2 has four expressions used to refer to portions of maps, tuples or strings: \( f(x) \), \( f\{x\} \), \( f(i..j) \) and \( f(i..) \). Each of these expressions can appear in both left and right hand side contexts. The ability to overload these syntactic constructs is particularly valuable, since it enables us to define our own aggregates organized any way we like, while retaining the ability to access components of those aggregates quite elegantly.
Each of these expressions has two associated methods, one for left hand contexts and one for right hand contexts. The syntax of the headers for each of these methods is as follows:
```plaintext
procedure self (id1);
procedure self (id1) := id2;
procedure self {id1};
procedure self {id1} := id2;
procedure self (id1 .. id2);
procedure self (id1 .. id2) := id3;
procedure self (id1 ..);
procedure self (id1 ..) := id2;
```
Each of the methods for right hand side contexts (those without an := symbol) should return a value. Otherwise Ω will be returned. The methods for left hand contexts should not return anything, but the rightmost argument should be used to modify the current instance.
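As a hedged sketch (the grid class and its cells instance variable are invented for illustration), a class could use the f(x) pair of methods to expose its components:

```plaintext
-- hypothetical methods inside the body of a "grid" class,
-- which stores its components in an instance variable "cells" (a map)
procedure self(p);                      -- right hand side context: g(p)
   return cells(p);
end;

procedure self(p) := v;                 -- left hand side context: g(p) := v
   cells(p) := v;
end;
```

The right hand side method returns the component; the left hand side method returns nothing and simply updates the current instance, as required above.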
### 4.7.5 Deletion Operations
**SETL2** has three deletion operations, from, fromb, and frome, and methods for all of these may be defined on objects. The syntax of the method headers is:
```plaintext
procedure from self;
procedure fromb self;
procedure frome self;
```
These must each return a value, or Ω will be used. Notice that there is no left operand in these headers, even though each is a binary operator. The left operand in a deletion operation is written but not read. Whatever value these methods return will be assigned to the left operand as well as the target operand.
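For illustration (the stack class and its items instance variable are assumptions made here, not part of the system), a from method might look like this:

```plaintext
-- hypothetical "from" method inside the body of a "stack" class,
-- which keeps its elements in an instance variable "items" (a tuple)
procedure from self;
   if #items = 0 then
      return om;                        -- nothing left to remove
   end if;
   top := items(#items);
   items := items(1 .. #items - 1);     -- shrink the current instance
   return top;                          -- the interpreter assigns this to the operands of from
end;
```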
### 4.7.6 Iteration Over An Object
**SETL2** has several syntactic constructs which call for iteration over an aggregate. For instance, in the expression:
{x in S | x < 5}
the interpreter will iterate over S screening each element with the condition x < 5 and inserting into the result any value which satisfies that condition. Iterators are used in set and tuple forming expressions, for loops and quantified expressions. There are two general forms of iterators:
```plaintext
expression_1 in expression_2
expression_1 = expression_2 { expression_3 }
```
Note that the expression $y = f(x)$ is equivalent to $[x, y] \text{ in } f$, and so is included in the first form above.
We have two pairs of built-in methods, corresponding to these two syntactic constructs:
```plaintext
procedure iterator_start;
procedure iterator_next;
procedure set_iterator_start;
procedure set_iterator_next;
```
When the interpreter encounters code requiring an iteration over an object, it calls the iterator_start or set_iterator_start method depending on whether the iterator was of the first or second form above. Then it repeatedly calls iterator_next or set_iterator_next to get successive elements of the object.
The iterator_start and set_iterator_start methods need not return a value. They are only there to let the object initialize an iteration. The iterator_next and set_iterator_next methods should return the next element in the object *within a tuple* if such an element can be found, or $\Omega$ if there is no such element. The tuple enclosing the result value is used by the interpreter to determine if the iterator method was able to produce a value.
If the iterator expression is $y = f(x)$, then the first pair of iterator methods will be used, but each return value must be a pair, so each return will look something like this:
```plaintext
return [[x,y]];
```
Notice the double brackets. The outer tuple indicates that a value was found, and the inner tuple is the pair of values required by this iteration form.
If the iterator expression is $y = f\{x\}$ then the second pair of iterator methods will be used. The return values must be the same as for $y = f(x)$ iterators.
None of the names of methods described in this section are reserved words. If not used as iterator methods, they can have any number of parameters and return anything you like. If they are to be used for iteration, they must conform to the rules above, or the program will be aborted.
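To make the rules concrete, here is a hedged sketch of the first pair of iterator methods for an invented class upto, which yields the integers from 1 up to an instance variable limit; next_value is also an assumed instance variable:

```plaintext
procedure iterator_start;
   next_value := 1;                     -- initialize the iteration
end iterator_start;

procedure iterator_next;
   if next_value > limit then
      return om;                        -- no more elements
   end if;
   next_value +:= 1;
   return [next_value - 1];             -- the element, wrapped in a tuple
end iterator_next;
```

With these two methods defined, a loop such as `for i in upto_object loop ... end loop;` would visit 1 through limit in order.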
### 4.7.7 Printing An Object
Objects are printed by first calling the built-in procedure `str`, then printing the string. The default value produced by `str` is useful mainly for debugging. It prints all the instance variables, but in an ugly format. This string can be overridden with a method having the name `selfstr`, declared with the following header:
```plaintext
procedure selfstr;
```
If this method is provided, it will be called by `str` for objects of the relevant class. It can return any value, but ideally it should return a printable string representation of the object.
4.8 Testing An Object’s Type
The type of an object can be determined with the built-in type procedure. The value returned will be the name of the object’s class as an upper case character string. SETL2 is not case sensitive, and always keeps names as upper case.
4.9 Storing Objects In Files
Objects may be stored in either text or binary files, but may only be re-read from binary files. If an object is read from a binary file by a program which does not explicitly use the object’s class, then the class will be loaded at the time the object is read. A similar load takes place if an object is converted from a string value with the unbinstr built-in procedure. After the class is implicitly loaded, all the methods corresponding to built-in operations will be available on objects of that class.
5 An Interface With C Functions
One extremely useful feature in very high level languages, previously lacking in SETL2, is the ability to call procedures in lower level languages. There are two compelling reasons why this is desirable: to make use of the vast amount of existing software written in other languages, and to code small portions of programs in a lower level language for efficiency. Unfortunately, providing such an interface raises many small technical problems, takes quite a bit of coding, and holds no research interest. For these reasons it is not a very high priority in spite of its importance.
As an interim solution to this problem, a crude call-out and call-back facility has been implemented. It is a kludge, but it does work and provides the ability to experiment with interfaces between SETL2 and other languages before a more ambitious implementation is available. It requires re-linking the SETL2 interpreter with the functions to be provided along with a substantial amount of glue code. There is also glue code required on the SETL2 side. The following compromises were made in order to provide some level of call-out, with a minimum of effort:
1. The only language supported is C. Interfaces to other languages have to go through the same glue code, so have to go through C.
2. There is no name-binding between C and SETL2. Neither C nor SETL2 have access to the other’s identifiers, or even the ability to directly call the other’s functions or procedures. SETL2 is able to call a single pre-set C function, which may then re-route the call to the function desired based on the arguments passed from SETL2. There is no linking directly from SETL2 to arbitrary C functions.
3. All parameters are passed as C character strings. This greatly increases the cost of call-out, but means we don’t have to deal with the numerous type-conversion issues at this time.
4. There is the ability for C functions to call back into SETL2, but with similar restrictions.
So there is a crude, clumsy call-out capability which we expect to replace when time permits. We hope most readers are scared off by now. If so, jump to section 6. For the intrepid, trudge on.
5.1 Call-Out
Call-out from SETL2 to C is accomplished through two procedures. On the SETL2 side, the user should call a new built-in procedure, callout. When callout is invoked, the interpreter will do some transformation of the arguments, then call the C procedure setl2_callout, which must be provided by the user and linked with the SETL2 interpreter. The procedure callout expects the following three arguments:
1. The first argument should be an integer service code. Remember that SETL2 knows nothing of the C names, so will always call the same C function. The integer service code is convenient for the C function to use as a selector in a switch statement, to in turn call the function really desired.
2. The second argument should be a SETL2 procedure, which will be called if the C function calls back into SETL2. The C functions can no more see the SETL2 names than SETL2 can see C names, so we must let the interpreter know how to handle callbacks. We’ll go into much more detail on this below. If it is not necessary for the C functions to call SETL2, this argument can be anything, but Ω seems most appropriate.
3. A tuple of strings, comprising the data to be sent to the C function.
When callout is called, it will save the call-back handler (C will always call the same function, which will in turn call the call-back handler), convert the SETL2 data into corresponding C forms, and call setl2_callout, which must have the following prototype:
char *setl2_callout(int service, int argc, char **argv);
The first argument is obviously the service code from SETL2. The second is the length of the tuple passed to callout, and the third is an array of pointers to the actual strings in that tuple. The function setl2_callout should convert those strings into C internal form and call some other C function based on the service code.
To illustrate, let’s go through an example. Suppose we have a SETL2 program which operates on matrices. We need a determinant function, and happen to have one available in C. We would like to have this available in SETL2 since it seems to be a generally useful procedure. The first step is to decide how we would like to call the procedure in SETL2. This seems to be a reasonable setup:
```plaintext
result := determinant(m);
```
where \( m \) is a tuple of rows, and each row is a tuple of reals. We would expect \( \text{result} \) to be a real. The glue code on the SETL2 side consists of converting the arguments into strings and passing the strings to callout, along with an integer code indicating the service we would like performed. We will assume the matrix is square. We want to pass the size of the matrix as the first element of the tuple followed by each cell, listed row by row. The glue code to perform this transformation might look like this:
```plaintext
procedure determinant(m);
   return unstr(callout(1, om, [str(#m)] + [str(x) : x in +/m]));
end determinant;
```
Now we move to the C side. The user must provide a function prepared to accept the arguments from the SETL2 side. Here is a skeleton, assuming we have a function \( \text{c_determinant} \) which will calculate determinants.
```c
char *setl2_callout(
   int service,          /* an integer service code   */
   unsigned argc,        /* length of argument vector */
   char **argv)          /* argument vector           */
{
   static char return_string[100];

   switch (service) {

      case 1 :  /* this is our determinant service */

         /* You get the idea. The first string in argv is the size  */
         /* of the matrix, the others are matrix data. You have to  */
         /* take these strings and set up to call c_determinant().  */

         sprintf(return_string, "%f", c_determinant(/* args */));
         return return_string;

   }
}
```
There are a couple of things to notice here. First, the service code is intended to be used as a selector in a switch statement, but you can do anything you want with it, including ignore it. It is also reasonable to use the first character string as the selector, so that you get something readable when you print it, but then you would have to use if..then..else's to pick out the C function you wish to call.
The second thing to notice is that setl2_callout must return a character pointer. That pointer can be NULL, but it cannot be some random value. If it is, you will get a segmentation error if you are lucky, and you will destroy random data in your program if you are unlucky.
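To make the determinant example a little more concrete, here is one hedged way the strings in argv might be unpacked. The function c_determinant and the size-then-cells layout are the assumptions made earlier in the example; none of this is part of the distributed skeleton.

```c
#include <stdio.h>
#include <stdlib.h>

extern double c_determinant(double *matrix, int n);   /* assumed to exist */

char *setl2_callout(int service, int argc, char **argv)
{
   static char return_string[100];
   double *matrix;
   int n, i;

   switch (service) {
   case 1:                                /* the determinant service */
      n = atoi(argv[0]);                  /* first string is the matrix size */
      if (argc != n * n + 1)
         return NULL;                     /* malformed argument vector */
      matrix = malloc(n * n * sizeof(double));
      for (i = 0; i < n * n; i++)         /* remaining strings are the cells */
         matrix[i] = atof(argv[i + 1]);
      sprintf(return_string, "%f", c_determinant(matrix, n));
      free(matrix);
      return return_string;
   }
   return NULL;
}
```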
5.2 Call-Back
Call-back is the ability of a C function to access services in SETL2. This is even uglier than call-out, so skip
to section 6 if you don’t really need this capability.
From the C side, you will call a function in the SETL2 interpreter passing it character strings as
arguments, and accepting a character string as return value. The function called is setl2_callback, and
has the following prototype:
char *setl2_callback(char *firstarg,...);
The SETL2 interpreter uses the ANSI C convention for functions accepting a variable number of arguments, so you will have to include stdarg.h in any C source files containing a call to setl2_callback. Any number of character pointers may be passed as arguments, but the final one must be a NULL. If you forget the NULL, setl2_callback will keep reading arguments until it happens to run into a NULL, or tries to access something which upsets the operating system. Again, if you're lucky you'll get a segmentation error, if not you'll destroy random data and continue.
The function setl2_callback in the SETL2 interpreter will gather up all arguments except the first
one into a tuple and pass them to the procedure passed as a call-back handler in callout. That procedure
should have been declared with two arguments. We assume the first will be a service code, and the rest are
data used to perform that service.
As an example, let’s suppose there are some global variables in the SETL2 side which we want to make
available in C. We only want to allow C to reference them, not set them. We will provide a procedure
get_value, which will return the value of those variables. The C statement to get the value of the variable
user in the SETL2 program is as follows:
```c
#include <stdarg.h>   /* required in files that call setl2_callback */
#include <string.h>

extern char *setl2_callback(char *firstarg, ...);  /* prototype given above */

void some_c_function(void)
{
   char user_name[100];
   strcpy(user_name, setl2_callback("get_value", "user", NULL));
}
```
The return value will be a pointer into the space of setl2_callback, so you have to do something with the return value before you call that function again. In this case we immediately copied the return value to local storage. Remember the NULL at the end of the argument list, without it you’ll have big problems.
Now we move back to the SETL2 side. We must call out to C passing a procedure which can handle call-backs. The call-back handler should look at the first argument to decide what service to provide, then use the remaining data to provide it. That should generally be done by calling some lower level provider. Here’s an outline of a procedure which calls out to C and accepts call-backs.
```plaintext
procedure use_c_for_something;

   callout(some_service, call_back_handler, some_args);
   return;

   procedure call_back_handler(service, args);
      const callback_map := { ["get_value", get_value] };
      return callback_map(service)(args);
   end call_back_handler;

   procedure get_value(variable);
      [variable] := variable;        -- disassemble the tuple of strings
      case variable                  -- return one value
         when "user" =>              -- should be more of these
            return user;
      end case;
   end get_value;

end use_c_for_something;
```
WARNING: This is just an outline. You’ll have to fill in many details, particularly for error checking!
### 5.3 Avoiding String Conversions
If you are transferring large arrays of numeric data to and from a C function, the cost of converting that data to string form and back might become expensive. There are two new built-in procedures in SETL2 which can help avoid that, binstr and unbinstr. These procedures convert SETL2 values into a character string form and back, conceptually like str and unstr, but do not produce easily readable character strings. Their purpose is to provide a quick and reversible conversion, so unbinstr(binstr(x)) is always equal to x, but unstr(str(x)) is not necessarily x. **Caution:** The binary string of an atom or procedure can not be stored on disk and converted back in another program. These values are pointers, and only have a life of a single program execution.
The format of a binary string is dependent on the internal representation of SETL2 values, and is therefore subject to change. We’re only going to describe the format of integers, reals, and tuples here, but even that should not be taken as gospel.
Each binary string consists of a number of binary values concatenated. There are no alignment characters included, so even though we will use C structures to show the sequence of fields, you must understand that
these are not really structures. If your C compiler must add extra characters for alignment, you’ll have to pick apart these structures as character strings.
An integer consists of the following fields:
```c
struct {
int form_code;
long int_value;
};
```
We assume type checking is done on the SETL2 side, so the form code can be ignored. The integer value is a C `long`, but only the low-order half less one bit is used. If you pass integers longer than that you will get a long integer structure, which we prefer not to describe. This is only for short integers.
A real consists of the following fields:
```c
struct {
int form_code;
double real_value;
};
```
Again, you can ignore the form code. The format of the `double` varies with machine implementation. We prefer IEEE long format, and that is how the system is compiled if IEEE long is available.
A tuple is a little more complex. The first thing you will find is a header, which has the following fields:
```c
struct {
int form_code;
long tuple_length;
};
```
Here `tuple_length` is the number of components in the tuple. Following this header will be each of the components in sequence. They will be concatenated with no intervening spaces or nulls for alignment. This will be a little difficult to handle unless you verify on the SETL2 side that the tuple is homogeneous.
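As a small illustrative sketch of picking these headers apart without relying on structure padding, the fields can be copied out with memcpy; only the layout described above (an int form code immediately followed by a long) is assumed.

```c
#include <string.h>

/* Read the header of a tuple binary string: an int form code followed
   immediately by a long component count, with no padding in between. */
static void read_tuple_header(const char *buf, int *form_code, long *length)
{
   memcpy(form_code, buf, sizeof(int));
   memcpy(length, buf + sizeof(int), sizeof(long));
}
```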
### 5.4 Putting It All Together
Now that we have all the pieces, how do we assemble them? First notice that you should be thinking in terms of extensions to SETL2, rather than linking a function into SETL2 for one specific program. It’s a bit too much work if you aren’t adding a generally useful feature. It’s very important, though not required, to hide all this in a package rather than coding call-outs in more general SETL2 programs. Remember that this is a temporary facility. We do intend to replace it with something easier to use and requiring fewer data conversions, so minimize the number of calls to `callout` appearing in your code. If you restrict it to a single package, conversion will be easier when a better facility is available.
With this document, you should have received the SETL2 interpreter in library form, along with a sample `Makefile` and skeletons of both a C call-out handler and a SETL2 package for call-out. To assemble all this do the following:
1. Modify the Makefile, C source file, and SETL2 source file to use the C procedures you want to provide in the SETL2 interpreter.
2. Create your customized version of the SETL2 interpreter by running make.
3. Compile the SETL2 package.
4. Insert `use callout_package` at the beginning of any programs or packages which use the procedures in the SETL2 package.
The SETL2 interpreter contains many calls to library routines, so you’ll have to have available the libraries we use. Here are the libraries (and possibly compilers) you will need:
- Unix: Gnu C and gnulib.
- VMS: VMS C.
- Macintosh: MPW C 3.0.
### 6 New Built-In Procedures
We have added a number of new built-in procedures, primarily for input and output. There are also a few new string handling procedures, but these are really designed to support I/O as well.
#### 6.1 Input-Output
The most drastic change in I/O so far is the addition of a random file type. Random files don’t really fit nicely in SETL2, since there is no convenient fixed-length type. We implemented a kludgy random file system by assuming all values read or written will be character strings, and relying on the SETL2 programmer to convert to internal types. We have provided procedures to do a default conversion, but these yield value strings of wildly varying length for similar types, so will be tricky to handle in random files.
A strong word of caution: We are not very satisfied with SETL2 I/O, and are considering a radical redesign. The new procedures we have provided were chosen to support an I/O system based on map and string syntax, which will be implemented using objects.
Here are the new or changed procedures:
\( h := \text{open}(f,m) \)
The function of \( \text{open} \) has not changed, but there are two new file modes. Here is the complete list:
- **"text-in"** File will be opened for input in text, or formatted, mode. It may then be accessed with \( \text{reada} \) or \( \text{geta} \).
- **"text-out"** File will be opened for output in text, or formatted, mode. It may then be accessed with \( \text{printa} \).
"text-append" File will be opened for output in text, or formatted, mode. The file is positioned at EOF. It may then be accessed with printa.
"binary-in" File will be opened for input in binary mode. It may then be accessed with getb.
"binary-out" File will be opened for output in binary mode. It may then be accessed with putb.
"random" File will be opened for random access. All records read from or written to the file must be character strings. The file may be accessed with gets or puts.
**gets(h,s,l,wr v)** h must be a file handle created with a call to open. s and l must be integers. A string of length l starting from file position s will be read into the variable v.

**puts(h,s,v)** h must be a file handle created with a call to open. s must be an integer, and v must be a string. v will be written to the file starting at position s.

**fsize(h)** h must be a file handle created with a call to open, and the file must have been opened in random mode. This procedure returns the length of the file, in characters.

**nprint(v1,v2...)** Each of the values v1,v2... will be printed on the standard output device (usually the terminal). There will be no spaces or newlines between the values. The only difference between print and nprint is that nprint does not print a newline at the end.

**nprinta(h,v1,v2...)** h must be a file handle created with a call to open. Each of the values v1,v2... will be written to that file. There will be no spaces or newlines between the values. The only difference between printa and nprinta is that nprinta does not print a newline at the end.
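As a short usage sketch (the file name and the fixed 20-character record layout are made up, and file positions are assumed to start at 1), the random-access procedures fit together like this:

```plaintext
handle := open("scores.dat", "random");       -- hypothetical file name
puts(handle, 1, "first record........");       -- write a 20 character record
puts(handle, 21, "second record.......");      -- and another right after it
gets(handle, 21, 20, record);                  -- read the second record back
print(fsize(handle));                          -- prints 40 under these assumptions
```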
6.2 String Handling
**unstr(s)** This procedure operates like reads, but has no write parameters. It converts a character string to internal form, following the same scanning procedure, and returns the internal value.

**binstr(v)** v can be any SETL2 value. binstr converts v to a character string which is not readable, but can be converted back into its original form. Combined with unbinstr, this has the property that unbinstr(binstr(v)) = v, which is not true of str and unstr. It is valuable in writing data to random files or passing binary data in callout (see section 5).

**unbinstr(s)** s must be a value created by binstr. unbinstr converts the string back into internal form.
**Caution:** Atoms and procedures may be converted to strings and back only within a single program execution. Their string representations may not be written to a file, read back in another execution, and converted to internal form. They are comparable to memory pointers in lower level languages.
### 6.3 System Access Procedures
**callout(n,p,t)** Used to call functions written in other languages. See section 5 for a detailed explanation.
**abort(s)** $s$ must be a character string. **abort** calls the interpreter’s abnormal end handler passing it $s$ as an error message. The program is aborted and the message, current source location, and stack are printed. This is most useful in writing packages and classes meant to be used by other programmers or in program traps, since it is most appropriate in program error situations, not user error situations.
## A Multisets: An Example Class
The ability to overload SETL2’s built-in operations on objects allows us to add new types to the language fairly easily. To illustrate this, we will construct a class for multisets, or bags. A bag is similar to a set, but we don’t require all elements of a bag to be unique. We are not aware of any well-established mathematical properties of bags so we’ll define those we need ourselves.
The first step in adding a new type is to decide which operations we want to support. In the case of bags, we would like to allow any of the set and map operations if that is possible. An ambitious set of operations is defined in figure 4. In addition, we want to be able to iterate over a bag and print it in some reasonable form.
Having established the operations we would like to allow, we now choose our data structures. At first glance, a good candidate seems to be a map from distinct elements to counts. The problem with this data structure is that the map-oriented operations (domain, range, lessf, and $f\{x\}$) will all be very slow, so let’s consider alternatives.
Another option is to use the data structure just considered, but change the representation to some other format for maps as needed. The SETL2 interpreter uses a scheme like this for maps. Essentially, we hope that the application program will use either map operations or set operations in long sequences, so we don’t have to change representations very often. We have a problem with this data structure as well. Suppose we have an equality check on two bags, in which one happens to be in map format and the other is not. The equality test will yield false, even if the two bags contain the same elements, only the representation is different. Within the interpreter, the values would be converted to a common format before the equality test, but since we can’t create an equality method, we can’t do the same thing.
The solution is to use two maps, one for pairs and one for everything else. The map of pairs is organized so that the image of the first element of the pair is also a map, and that map is from the second element to the number of occurrences of the pair in the bag. We get the number of occurrences of [x,y] with the expression pairs{x}(y). The map of non-pairs is as described in our first try at a solution, namely it is a map from
- **#s** Yields the number of elements in s.
- **arb s** Yields an arbitrary element of s.
- **pow s** Yields the power set of s. The value will be a set of all bags containing only elements in s, and no more of any element than s has.
- **domain s** domain and range seem to be naturally sets, not bags. We can still use the notation, although it’s not clear that it is a useful concept. For domain to work properly, we require each element of the bag to be a pair. We will then yield the set of all the left elements of those pairs.
- **range s** For range to work properly, we require each element of the bag to be a pair. We will then yield the set of all the right elements of those pairs.
- **s + ss** Analogous to set union. We create a bag containing all the elements of s and ss. Notice that we don’t flush out duplicates here, so for example if s contains one "a" and ss contains two "a"'s then s + ss will contain three "a"'s.
- **s - ss** Analogous to set difference. We copy s into the result, then remove each element of ss.
- **s * ss** Analogous to set intersection. We create a bag, then for each distinct element of s we place the minimum of the number of occurrences of that element in s or ss in the result bag.
- **s mod ss** Defined as (s + ss) - (s * ss).
- **s npow i** Yields a subset of the power set, in which each bag has exactly i elements.
- **s with x** Yields a bag with all elements of s and {x}.
- **s less x** Yields a bag with all elements of s less {x}.
- **s lessf x** We require each element of s to be a pair. We remove all pairs with x as left element.
- **x from s** We select an arbitrary element from s, remove it, and assign the value to x.
- **x in s** Yields true if x is an element of s, false otherwise.
- **s = ss** Yields true if s and ss are the same, false otherwise.
- **s /= ss** Yields true if s and ss are different, false otherwise.
- **s < ss** Yields true if s ⊊ ss, false otherwise. We define s ⊊ ss when applied to bags to mean that for each distinct element θ of s, there are at least as many occurrences of θ in ss as there are in s.
- **s <= ss** Yields true if s ⊆ ss, false otherwise.
- **s > ss** Yields true if s ⊋ ss, false otherwise.
- **s >= ss** Yields true if s ⊇ ss, false otherwise.
- **f(x)** This generally yields the image of x in f. It’s not quite clear what this means for bags, since we allow duplicates, so let’s make an unusual interpretation. Let f(x) yield the number of occurrences of x within f, when used on the right. When used on the left, we expect the right hand side to be an integer, and we will set the number of occurrences of x in f.
- **f{x}** Yields the image set of x in f. This makes a little more sense than the previous operation. For this expression to work at all, each element of f must be a pair. We look for all the pairs in f with x as left hand element and gather the right hand elements into a bag. If this expression is used on the left, we set the image set of x in f. Note that we must have a bag on the right in this case.
Figure 4: Operations on multisets
distinct elements to counts. We also keep the cardinality explicitly, since that would be fairly expensive to compute.
Having chosen a reasonable data structure, we’re ready to start building the class. We’re not going to show all the required methods here, since that would take too much space. We’ll pick out those which are essential or illustrate an interesting point.
A.1 Class Specification
We don’t have any methods referred to by name; all we are providing is **SETL2** operations. Therefore our class specification is almost empty. We do have to provide a creation function and make that globally visible.
```plaintext
class bag;
procedure create(source);
end bag;
```
A.2 Class Body
All our data is private, to insure integrity. We start with declarations of instance variables.
```plaintext
class body bag;
var pairs := {}, -- pair values
others := {}, -- non-pair values
cardinality := 0; -- cardinality
end body;
```
A.2.1 Create
We want to allow the programmer to provide an initial set of values on the creation call. We’re just going to iterate over those values, so we don’t really require a set, we just need something we can iterate over. It might be a set, a map, a tuple, a string, or some object with iteration methods defined.
```plaintext
procedure create(source);
   cardinality := #source;
   for x in source loop
      if is_tuple(x) and #x = 2 then
         [left, right] := x;
         pairs(left)(right) := (pairs(left)(right) ? 0) + 1;
      else
         others(x) := (others(x) ? 0) + 1;
      end if;
   end loop;
end create;
```
A.2.2 Number Of Elements
The cardinality operator is a snap. We don’t want to have to count the number of elements, since that’s expensive. Therefore we maintain the cardinality. This method just returns it.
```plaintext
procedure # self;
return cardinality;
end;
```
A.2.3 Domain
The domain of a bag is the same as the domain of its `pairs` instance variable. Notice that we must check whether there are any non-pair values, since if there are the bag is not a valid map.
```plaintext
procedure domain self;
   if #others /= 0 then
      abort("May not find domain of non-map BAG:\n" + str(self));
   end if;
   return domain(pairs);
end;
```
A.2.4 Bag Union
Perhaps union isn’t an appropriate name. What we really mean is the sum of two bags. Notice that we have only one version of this method, though in general there can be two for binary operators. We don’t allow mixed mode addition on bags, so we don’t need a method with `self` on the right. If an expression has a bag on the left we will be called. If not we should get a run-time error anyway.
The procedure we follow is pretty straightforward. We copy the left operand, then loop over the right operand adding all the elements to the result.
```plaintext
procedure self + right_bag;
   if type(right_bag) /= "BAG" then
      abort("Invalid operands for +:\nLeft => " + str(self) + "\nRight => " + str(right_bag));
   end if;
   result := self;
   result.cardinality +:= right_bag.cardinality;
   for right_map = right_bag.pairs{left}, count = right_map(right) loop
      result.pairs{left}(right) := (result.pairs{left}(right) ? 0) + count;
   end loop;
   for count = right_bag.others(left) loop
      result.others(left) := (result.others(left) ? 0) + count;
   end loop;
   return result;
end;
```
A.2.5 The npow Operator
The npow operator is interesting since its operands are ordinarily of different types, and it is commutative. We must have two forms therefore, one for each operand order. We will use a common procedure to do most of the work, to avoid code duplication.
```pascal
procedure self npow right;
   if not is_integer(right) then
      abort("Invalid operands for NPOW\nLeft => " + str(self) + "\nRight => " + str(right));
   end if;
   return npower(right);
end;

procedure left npow self;
   if not is_integer(left) then
      abort("Invalid operands for NPOW\nLeft => " + str(left) + "\nRight => " + str(self));
   end if;
   return npower(left);
end;

procedure npower(i);
   power_array := [[0, x] : x in self];
   powerset := {};
   loop
      if +/[c : [c,-] in power_array] = i then
         powerset with:= bag([e : [s,e] in power_array | s = 1]);
      end if;
      if not (exists n in [1 .. #power_array] | power_array(n)(1) = 0) then
         exit;
      end if;
      for j in [1 .. n - 1] loop
         power_array(j)(1) := 0;
      end loop;
      power_array(n)(1) := 1;
   end loop;
   return powerset;
end npower;
```
A.2.6 The from Operator
In most of the methods on bags we do not modify the current instance. The from method is an exception. Since from generally modifies its source set we also want to modify the source bag.
```pascal
procedure from self;
if cardinality = 0 then
return om;
end;
```
A.2.7 The < Operator
The < operator performs a subset test. It’s a crucial method, since it will be called for any of <, <=, >, or >=.
First we perform a quick cardinality test. If that fails we just return false. If it succeeds we have to perform a more expensive test, checking each element in the current instance.
```plaintext
procedure self < right_bag;
   if type(right_bag) /= "BAG" then
      abort("Invalid operands for <:\nLeft => " + str(self) + "\nRight => " + str(right_bag));
   end if;
   if cardinality >= right_bag.cardinality then
      return false;
   end if;
   for right_map = pairs(left), count = right_map(right) loop
      if count > right_bag.pairs(left)(right) ? 0 then
         return false;
      end if;
   end loop;
   for count = others(left) loop
      if count > right_bag.others(left) ? 0 then
         return false;
      end if;
   end loop;
   return true;
end;
```
A.2.8 Image Set Assignment
The map and image set assignment methods are particularly useful. In this example, we set the image set of one domain value in a bag. All we have to do is verify our operands, remove the old image set, and install a new one.
```
procedure self{left} := value;
   if type(value) /= "BAG" then
      abort("Invalid value for f{x} assignment\nValue => " + str(value));
   end if;
   if #others /= 0 then
      abort("May not assign image set to non-map BAG:\n" + str(self));
   end if;
   for count = pairs(left)(right) loop     -- remove old image set
      cardinality -:= count;
   end loop;
   pairs(left) := {};
   for right in value loop                 -- and install a new one
      pairs(left)(right) := (pairs(left)(right) ? 0) + 1;
      cardinality +:= 1;
   end loop;
end;
```
A.2.9 Iterators
We want to provide both iterator forms for bags, since we have operations similar to maps and multi-valued maps. Notice that we always return a tuple unless the bag is empty.
One strange thing to notice here: in the process of iterating over a bag we destroy it. This certainly isn’t necessary, but it is safe. Remember that we have preserved SETL2’s value semantics. This means that any other references to the bag are not affected by the iteration.
```
procedure iterator_start;
   null;
end iterator_start;

procedure iterator_next;
   if cardinality = 0 then
      return om;
   end if;
   cardinality -:= 1;
   if #pairs > 0 then
      [left, [right, count]] from pairs;
      if count > 1 then
         pairs(left)(right) := count - 1;
      end if;
      return [[left, right]];
   else
```
A.2.10 Print Strings
The default print string will be particularly ugly for bags, since we have split our data into two variables based on a transparent distinction, and because we keep quite a bit of information the user isn’t aware of. We’ll create print strings similar to sets, but we’ll use the delimiters {> and <}, to distinguish bags from sets.
```plaintext
procedure selfstr;
   first_element := true;
   for x in self loop
      if is_string(x) then
         x := "\"" + x + "\"";
      end if;
      if first_element then
         first_element := false;
         result := "{> " + str(x);
      else
         result +:= ", " + str(x);
      end if;
   end loop;
   if first_element then
      return "{> <}";
   else
      return result + " <}";
   end if;
end selfstr;
```
That’s all the methods we care to show here. The rest are fairly straightforward to code. They are included in an example file bags.stl distributed with the system.
A Pragmatic, Scalable Approach to Correct-by-construction Process Composition Using Classical Linear Logic Inference
Petros Papapanagiotou and Jacques Fleuriot
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
{ppapapan, jdf}@inf.ed.ac.uk
Abstract. The need for rigorous process composition is encountered in many situations pertaining to the development and analysis of complex systems. We discuss the use of Classical Linear Logic (CLL) for correct-by-construction resource-based process composition, with guaranteed deadlock freedom, systematic resource accounting, and concurrent execution. We introduce algorithms to automate the necessary inference steps for binary compositions of processes in parallel, conditionally, and in sequence. We combine decision procedures and heuristics to achieve intuitive and practically useful compositions in an applied setting.
Keywords: process modelling, composition, correct by construction, workflow, linear logic
1 Introduction
The ideas behind process modelling and composition are common across a variety of domains, including program synthesis, software architecture, multi-agent systems, web services, and business processes. Although the concept of a “process” takes a variety of names – such as agent, role, action, activity, and service – across these domains, in essence, it always captures the idea of an abstract, functional unit. Process composition then involves the combination and connection of these units to create systems that can perform more complex tasks. We typically call the resulting model a (process) workflow. Viewed from this standpoint, resource-based process composition then captures a structured model of the resource flow across the components, focusing on the resources that are created, consumed, or passed from one process to another within the system.
Workflows have proven useful tools for the design and implementation of complex systems by providing a balance between an intuitive abstract model, typically in diagrammatic form, and a concrete implementation through process automation. Evidence can be found, for example, in the modelling of clinical care pathways where workflows can be both understandable by healthcare stakeholders and yet remain amenable to formal analysis [10,15].
A scalable approach towards establishing trust in the correctness of the modelled system is that of correct-by-construction engineering [12,26]. In general,
this refers to the construction of systems in a way that guarantees correctness properties about them at design time. In this spirit, we have developed the WorkflowFM system for correct-by-construction process composition [21]. It relies on Classical Linear Logic (see Section 2.1) to rigorously compose abstract process specifications in a way that:
1. systematically accounts for resources and exceptions;
2. prevents deadlocks;
3. results in a concrete workflow where processes are executed concurrently.
From the specific point of view of program synthesis, these benefits can be interpreted as (1) no memory leaks or missing data, (2) no deadlocks, hanging threads, or loops, and (3) parallel, asynchronous (non-blocking) execution.
The inference is performed within the proof assistant HOL Light, which offers systematic guarantees of correctness for every inference step [11]. The logical model can be translated through a process calculus to a concrete workflow implementation in a host programming language.
There are numerous aspects to and components in the WorkflowFM system, including, for instance, the diagrammatic interface (as shown in Fig. 1), the code translator, the execution engine, the process calculus correspondence, and the architecture that brings it all together [21]. In this particular paper we focus on the proof procedures that make such resource-based process compositions feasible and accessible. These are essential for creating meaningful workflow models with the correctness-by-construction properties highlighted above, but without the need for tedious manual CLL reasoning. Instead, the user can use high level composition actions triggered by simple, intuitive mouse gestures and without the need to understand the underlying proof, which is guaranteed to be correct thanks to the rigorous environment of HOL Light.
It is worth emphasizing that our work largely aims at tackling pragmatic challenges in real applications as opposed to establishing theoretical facts. We rely on existing formalisms, such as the proofs-as-processes theory described below, in our attempt to exploit its benefits in real world scenarios. As a result, the vast majority of our design decisions are driven by practical experience and the different cases we have encountered in our projects.
Table 1 is a list of some of our case studies in the healthcare and manufacturing domain that have driven the development of WorkflowFM. It includes an indication of the size of each case study based on (1) the number of (atomic) component processes, (2) the number of different types of resources involved in the inputs and outputs of the various processes (see Section 3), (3) the number of binary composition actions performed to construct the workflows (see Section 4), and (4) the total number of composed workflows.
All of these case studies are models of actual workflows, built based on data from real-world scenarios and input from domain experts such as clinical teams and managers of manufacturing facilities. The results have been useful towards process improvement in their respective organisations, including a better qualitative understanding based on the abstract model and quantitative analytics obtained from the concrete implementation. As a result, we are confident that the
experience gained from these case studies reflects the needs and requirements of real applications, and that the approach and algorithms presented in this paper can provide significant value.
We note that the material accompanying this paper is available online [1].
2 Background
The systematic accounting of resources in our approach can be demonstrated through a hypothetical example from the healthcare domain [21]. Assume a process DeliverDrug that corresponds to the delivery of a drug to a patient. Such a process requires information about the Patient, the Dosage of the drug, and some reserved NurseTime for a nurse to deliver the drug. The possible outcomes are that the patient is either Treated or the drug Failed. In the latter case, we would like to apply a Reassess process, which, given some allocated clinician time (ClinTime), results in the patient being Reassessed. The graphical representation of the two processes, with the dashed edge denoting the optional outcome of DeliverDrug, is shown in Fig. 1.
If we were to assemble the two processes in a workflow where drug failure is always handled by Reassess, what would the specification (and more specifically the output) of the composite process be?
Fig. 1. The visualisation of the DeliverDrug and Reassess processes (top) and their sequential composition. The auxiliary triangles help display the outputs properly.
Based on the workflow representation in Fig. 1, one may be tempted to connect the Failed edge of DeliverDrug directly to the corresponding edge of Reassess,
leading to an overall output of either Treated or Reassessed. However, this would be erroneous, as the input ClinTime, is consumed in the composite process even if Reassess is never used. Using our CLL-based approach, the workflow output is either Reassessed which occurs if the drug failed, or Treated coupled with the unused ClinTime, as shown at the bottom of Fig. 1[21].
Systematically accounting for such unused resources is non-trivial, especially considering larger workflows with tens or hundreds of processes and many different outcomes. The CLL inference rules enforce this by default and the proof reflects the level of reasoning required to achieve this. In addition, the process code generated from this synthesis is fully asynchronous and deadlock-free, and relies on the existence of concrete implementations of DeliverDrug and Reassess.
2.1 Classical Linear Logic
Linear Logic, as proposed by Girard [9], is a refinement to classical logic where the rules of contraction and weakening are limited to the modalities ! and ?. Propositions thus resemble resources that cannot be ignored or copied arbitrarily.
In this work, we use a one-sided sequent calculus version of the multiplicative additive fragment of propositional CLL without units (MALL). Although there exist process translations of full CLL and even first-order CLL, the MALL fragment allows enough expressiveness while keeping the reasoning complexity at a manageable level (MALL is PSPACE-complete whereas full CLL is undecidable [14]). The inference rules for MALL are presented in Fig. 2.
\[
\frac{}{\vdash A, A^\perp}\ Id
\qquad
\frac{\vdash \Gamma, A \quad \vdash A^\perp, \Delta}{\vdash \Gamma, \Delta}\ Cut
\qquad
\frac{\vdash \Gamma, A \quad \vdash \Delta, B}{\vdash \Gamma, \Delta, A \otimes B}\ \otimes
\]
\[
\frac{\vdash \Gamma, A, B}{\vdash \Gamma, A ⅋ B}\ ⅋
\qquad
\frac{\vdash \Gamma, A}{\vdash \Gamma, A \oplus B}\ \oplus_L
\qquad
\frac{\vdash \Gamma, B}{\vdash \Gamma, A \oplus B}\ \oplus_R
\qquad
\frac{\vdash \Gamma, A \quad \vdash \Gamma, B}{\vdash \Gamma, A \& B}\ \&
\]
Fig. 2. One-sided sequent calculus versions of the CLL inference rules.
In this version of MALL, linear negation ($\cdot^\perp$) is defined as a syntactic operator with no inference rules, so that both $A$ and $A^\perp$ are considered atomic formulas. The de Morgan style equations in Fig. 3 provide a syntactic equivalence of formulas involving negation [27]. This allows us to use syntactically equivalent formulas, such as $A^\perp ⅋ B^\perp$ and $(A \otimes B)^\perp$, interchangeably. In fact, in the proofs presented in this paper we choose to present formulas containing $\otimes$ and $\oplus$ over their counterparts $⅋$ and $\&$ due to the polarity restrictions we introduce in Section 3.
In the 90s, Abramsky, Bellin and Scott developed the so-called proofs-as-processes paradigm [24]. It involved a correspondence between CLL inference and concurrent processes in the $\pi$-calculus [18]. They proved that cut-elimination
\[(A^\perp)^\perp \equiv A \qquad (A \otimes B)^\perp \equiv A^\perp ⅋ B^\perp \qquad (A \oplus B)^\perp \equiv A^\perp \& B^\perp \qquad (A ⅋ B)^\perp \equiv A^\perp \otimes B^\perp \qquad (A \& B)^\perp \equiv A^\perp \oplus B^\perp \]
**Fig. 3.** The equations used to define linear negation for MALL.
in a CLL proof corresponds to reductions in the \(\pi\)-calculus translation, which in turn correspond to communication between concurrent processes. As a result, \(\pi\)-calculus terms constructed via CLL proofs are inherently free of deadlocks.
The implications of the proofs-as-processes correspondence have been the subject of recent research in concurrent programming by Wadler [28], Pfenning et al. [3,5,25], Dardha [7,8] and others. Essentially, each CLL inference step can be translated to an executable workflow, with automatically generated code to appropriately connect the component processes. As a result, the CLL proofs have a direct correspondence to the “piping”, so to speak, that realises the appropriate resource flow between the available processes, such that it does not introduce deadlocks, accounts for all resources explicitly, and maximizes runtime concurrency. The current paper examines CLL inference and we take the correspondence to deadlock-free processes for granted.
### 2.2 Related work
Diagrammatic languages such as BPMN [20] are commonly used for the description of workflows in different organisations. However, they typically lack rigour and have limited potential for formal verification [23]. Execution languages such as BPEL [19] and process calculi such as Petri Nets [1] are often used for workflow management in a formal way and our CLL approach could potentially be adapted to work with these. Linear logic has been used in the context of web service composition [22], but in a way that diverges significantly from the original theory and compromises the validity of the results. Finally, the way the resource flow is managed through our CLL-based processes is reminiscent of monad-like structures such as Haskell’s arrows\(^2\). One of the key differences is the lack of support for optional resources, which is non-trivial as we show in this paper.
### 3 Process Specification
Since CLL propositions can naturally represent resources, CLL sequents can be used to represent processes, with each literal representing a type of resource that is involved in that process. These abstract types can have a concrete realisation in the host programming language, from primitive to complicated objects.
Our approach to resource-based composition is to construct CLL specifications of abstract processes based on their inputs (and preconditions) and outputs (and effects), also referred to as IOPEs. This is standard practice in various process formalisms, including WSDL for web services [6], OWL-S for Semantic Web services [16], PDDL for actions in automated planning [17], etc.
\(^2\) [https://www.haskell.org/arrows](https://www.haskell.org/arrows)
The symmetry of linear negation as shown in Fig. 3 can be used to assign a polarity to each CLL connective in order to distinctly specify input and output resources. We choose to treat negated literals, $\neg$, and $\&$ as inputs, and positive literals, $\otimes$, and $\oplus$ as outputs, with the following intuitive interpretation:
- Multiplicative conjunction (tensor $\otimes$) indicates a pair of parallel outputs.
- Additive disjunction (plus $\oplus$) indicates exclusively optional outputs (alternative outputs or exceptions).
- Multiplicative disjunction (par $⅋$) indicates a pair of simultaneous inputs.
- Additive conjunction (with $\&$) indicates exclusively optional input.
Based on this, a process can be specified as a CLL sequent consisting of a list of input formulas and a single output formula. In this, the order of the literals does not matter, so long as they obey the polarity restrictions (all but exactly one are negative). In practice, we treat sequents as multisets of literals and manage them using particular multiset reasoning techniques in HOL Light. The description of these techniques is beyond the scope of this paper.
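For example, under these conventions the two processes from the example in Section 2 could be specified as follows, with DeliverDrug as the first sequent and Reassess as the second (the literal names are taken from that example):

\[
\vdash Patient^\perp,\ Dosage^\perp,\ NurseTime^\perp,\ Treated \oplus Failed
\]
\[
\vdash Failed^\perp,\ ClinTime^\perp,\ Reassessed
\]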
The polarity restrictions imposed on our process specifications match the specification of Laurent’s Polarized Linear Logic (LLP) [13], which has been proven logically equivalent to full MALL. Moreover, these restrictions match the standard programming paradigm of a function that takes multiple input arguments and returns a single (possibly composite) result.
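To make this reading concrete, the following is a minimal sketch (ours, not the HOL Light implementation) of process specifications as sequents containing exactly one positive (output) formula:

```python
from dataclasses import dataclass
from typing import Union

# Minimal MALL formula terms: atoms, negated atoms, and the four binary connectives.
@dataclass(frozen=True)
class Atom:
    name: str
    negated: bool = False

@dataclass(frozen=True)
class Bin:
    op: str                    # "tensor", "plus", "par", "with"
    left: "Formula"
    right: "Formula"

Formula = Union[Atom, Bin]

def is_output(f: Formula) -> bool:
    """Positive polarity (top connective): positive atoms, tensor, plus."""
    if isinstance(f, Atom):
        return not f.negated
    return f.op in ("tensor", "plus")

def is_input(f: Formula) -> bool:
    """Negative polarity (top connective): negated atoms, par, with."""
    if isinstance(f, Atom):
        return f.negated
    return f.op in ("par", "with")

def is_process_spec(sequent: list) -> bool:
    """A process specification has exactly one output formula; all others are inputs."""
    outputs = [f for f in sequent if is_output(f)]
    inputs = [f for f in sequent if is_input(f)]
    return len(outputs) == 1 and len(outputs) + len(inputs) == len(sequent)

# Example: |- A^⊥, B^⊥, A ⊗ B is a valid process specification.
A, B = Atom("A"), Atom("B")
assert is_process_spec([Atom("A", True), Atom("B", True), Bin("tensor", A, B)])
```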
### 4 Process Composition
Using CLL process specifications as assumptions, we can produce a composite process specification using forward inference. Each of the CLL inference rules represent a logically legal way to manipulate and compose such specifications.
The axiom $\vdash A^\perp, A$ represents the so-called axiom buffer, a process that receives a resource of type $A$ and outputs the same resource unaffected.
Unary inference rules, such as the $\oplus_L$ rule, correspond to manipulations of a single process specification. For example, the $\oplus_L$ rule (see Fig. 2) takes a process $P$ specified by $\vdash \Gamma, A$, i.e. a process with some inputs $\Gamma$ and an output $A$, and produces a process $\vdash \Gamma, A \oplus B$, i.e. a process with the same inputs $\Gamma$ and output either $A$ or $B$. Note that, in practice, the produced composite process is a transformation of $P$ and thus will always produce $A$ and never $B$.
Binary inference rules, such as the $\otimes$ rule, correspond to binary process composition. The $\otimes$ rule in particular (see Fig. 2) takes a process $P$ specified by $\vdash \Gamma, A$ and another process $Q$ specified by $\vdash \Delta, B$ and composes them, so that the resulting process $\vdash \Gamma, \Delta, A \otimes B$ has all their inputs $\Gamma$ and $\Delta$ and a simultaneous output $A \otimes B$. Notably, the Cut rule corresponds to the composition of 2 processes in sequence, where one consumes a resource $A$ given by the other.
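Viewed operationally, the two binary rules act on specifications as follows; this is a simplified sketch in which a specification is just a list of input type names plus one output type (names are ours):

```python
# A specification is (inputs, output): a multiset of input type names plus one output type.
def tensor(p, q):
    """⊗ rule: combine all inputs; the outputs are paired in parallel."""
    (gamma, a), (delta, b) = p, q
    return (gamma + delta, ("tensor", a, b))

def cut(p, q):
    """Cut rule: P's output is consumed by a matching input of Q (sequential composition)."""
    (gamma, a), (delta, y) = p, q
    assert a in delta, "Cut requires Q to have an input matching P's output exactly"
    remaining = list(delta)
    remaining.remove(a)          # the cut resource is consumed, not duplicated or dropped
    return (gamma + remaining, y)

P = (["A", "C"], "D")            # |- A^⊥, C^⊥, D
Q = (["D"], "E")                 # |- D^⊥, E
print(tensor(P, (["B"], "E")))   # (['A', 'C', 'B'], ('tensor', 'D', 'E'))
print(cut(P, Q))                 # (['A', 'C'], 'E')
```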
Naturally, these manipulations and compositions are primitive and restricted. Constructing meaningful compositions requires several rule applications and, therefore, doing this manually would be a very tedious and impractical task.
Our work focuses on creating high level actions that use CLL inference to automatically produce binary process compositions that are correct-by-construction based on the guarantees described above. More specifically, we introduce actions for parallel (\(\text{TENSOR}\)), conditional (\(\text{WITH}\)), and sequential composition (\(\text{JOIN}\)).
Since we are using forward inference, there are infinitely many ways to apply the CLL rules and therefore infinitely many possible compositions. We are interested in producing compositions that are intuitive for the user. It is practically impossible to produce a formal definition of what these compositions should be. Instead, as explained earlier, we rely on practical experience and user feedback from the various case studies for workflow modelling (see Table 1).
Based on this, we have introduced a set of what can be viewed as unit tests for our composition actions, which describe the expected and logically valid results of example compositions. As we explore increasingly complex examples in practice, we augment our test set and ensure our algorithms satisfy them. Selected unit tests for the \(\text{WITH}\) and \(\text{JOIN}\) actions are shown in Tables 2 and 3 respectively. Moreover, as a general principle, our algorithms try to maximize resource usage, i.e. involve as many resources as possible, and minimize the number of rule applications to keep the corresponding process code more compact.
For example, row 3 of Table 3 indicates that a process with output \(A \oplus B\) when composed with a process specified by \(\vdash A^\perp\), \(B\) should produce a process with output \(B\). As we discuss in Section 8.3, a different CLL derivation for the same scenario could lead to a process with output \(B \oplus B\). This result is unnecessarily more complicated, and its complexity will propagate to all subsequent compositions which will have to deal with 2 options of a type \(B\) output. The unit test therefore ensures that the algorithm always leads to a minimal result.
<table>
<thead>
<tr>
<th>(P)</th>
<th>(Q)</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>(\vdash X^\perp, Z)</td>
<td>(\vdash Y^\perp, Z)</td>
<td>(\vdash (X \oplus Y)^\perp, Z)</td>
</tr>
<tr>
<td>(\vdash X^\perp, Z)</td>
<td>(\vdash Y^\perp, W)</td>
<td>(\vdash (X \oplus Y)^\perp, Z \oplus W)</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, B^\perp, Z)</td>
<td>(\vdash Y^\perp, Z)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, B^\perp, Z \oplus (Z \otimes A \otimes B))</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, Z)</td>
<td>(\vdash Y^\perp, B^\perp, W)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, B^\perp, (Z \otimes B) \oplus (W \otimes A))</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, C^\perp, Z)</td>
<td>(\vdash Y^\perp, B^\perp, C^\perp, W)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, B^\perp, C^\perp, (Z \otimes B) \oplus (W \otimes A))</td>
</tr>
<tr>
<td>(\vdash X^\perp, B^\perp, A \oplus B)</td>
<td>(\vdash Y^\perp, B \oplus A)</td>
<td>(\vdash (X \oplus Y)^\perp, A \oplus B)</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, Z \oplus A)</td>
<td>(\vdash Y^\perp, Z)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, Z \oplus A)</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, A \oplus Z)</td>
<td>(\vdash Y^\perp, Z)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, A \oplus Z)</td>
</tr>
<tr>
<td>(\vdash X^\perp, A^\perp, Z \oplus (Z \oplus A))</td>
<td>(\vdash Y^\perp, Z)</td>
<td>(\vdash (X \oplus Y)^\perp, A^\perp, Z \oplus (Z \oplus A))</td>
</tr>
</tbody>
</table>
*Table 2.* Examples of the expected result of the \(\text{WITH}\) action between \(X^\perp\) of a process \(P\) and \(Y^\perp\) of a process \(Q\).
All our algorithms are implemented within the Higher Order Logic proof tactic system of HOL Light. As a result, the names of some methods have the `_TAC` suffix, which is conventionally used when naming HOL Light tactics.
### 5 Auxiliary Processes
During composition, we often need to construct auxiliary processes that manipulate the structure of a CLL type in particular ways. We have identified 2 types of such processes: buffers and filters.
Buffers: Similarly to the axiom buffer introduced in the previous section, composite buffers (or simply buffers) can carry any composite resource without affecting it. This is useful when a process is unable to handle the entire type on its own, and some resources need to be simply buffered through. For example, if a process needs to handle a resource of type $A \otimes B$, but only has an input of type $A^\perp$, then $B$ will be handled by a buffer.
More formally, buffers are processes specified by $\vdash A^\perp, A$, where $A$ is arbitrarily complex. Such lemmas are always provable in CLL for any formula $A$. We have introduced an automatic procedure BUFFER_TAC that can accomplish this, but omit the implementation details in the interest of space and in favour of the more interesting composition procedures that follow.
We also introduce the concept of a parallel buffer, defined as a process $\vdash A_1^\perp, A_2^\perp, \ldots, A_n^\perp, A_1 \otimes A_2 \otimes \cdots \otimes A_n$. Such buffers are useful when composing processes with an optional output (see Section 8.3). Their construction can also be easily automated with a decision procedure we call PARBUF_TAC.
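The construction of buffers can be pictured with the following sketch, which mirrors the recursion BUFFER_TAC and PARBUF_TAC would perform on the type structure but simply produces identity-like forwarding functions (an illustration, not the actual tactics):

```python
# Buffers behave as identities on resources of a given (possibly composite) type.
# buffer(t) returns a process (a function) that forwards a value of type t unchanged.

def buffer(t):
    if isinstance(t, str):                      # atomic type: the axiom buffer |- A^⊥, A
        return lambda x: x
    op, l, r = t
    bl, br = buffer(l), buffer(r)
    if op == "tensor":                          # forward both components of a pair
        return lambda pair: (bl(pair[0]), br(pair[1]))
    if op == "plus":                            # forward whichever alternative arrives
        return lambda tagged: (tagged[0], bl(tagged[1]) if tagged[0] == "L" else br(tagged[1]))
    raise ValueError("buffers are built for output types (tensor, plus) only")

def parallel_buffer(types):
    """|- A1^⊥, ..., An^⊥, A1 ⊗ ... ⊗ An: forward n resources as one parallel output."""
    bufs = [buffer(t) for t in types]
    return lambda *xs: tuple(b(x) for b, x in zip(bufs, xs))

ident = buffer(("tensor", "A", ("plus", "B", "C")))
print(ident((1, ("L", 2))))      # (1, ('L', 2))
```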
Filters: Often during process composition by proof, resources need to match exactly for the proof to proceed. In some cases, composite resources may not match exactly, but may be manipulated using the CLL inference rules so that they end up matching. For example, the term $A \otimes B$ does not directly match $B \otimes A$. However, both terms intuitively represent resources $A$ and $B$ in parallel. This intuition is reflected formally to the commutativity property of $\otimes$, which is easily provable in CLL: $\vdash (A \otimes B)^\perp, B \otimes A$. We can then use the Cut rule with this property to convert an output of type $A \otimes B$ to $B \otimes A$ (similarly for inputs).
We call such lemmas that are useful for converting CLL types to logically equivalent ones, filters. In essence, a filter is any provable CLL lemma that preserves our polarity restrictions. We prove such lemmas automatically using the proof strategies developed by Tammet [24].
We give some examples of how filters are used to match terms as we go through them below. However, as a general rule the reader may assume that, for the remainder of this paper, by “equal” or “matching” terms we refer to terms that are equal modulo the use of filters.
A main consequence of this is that our algorithms often attempt to match literals that do not match. For example, the attempt to compose $\vdash A^\perp, B$ in sequence with $\vdash C^\perp, D^\perp, E$ would generate and try to prove 2 false conjectures $\vdash B^\perp, C$ and $\vdash B^\perp, D$ in an effort to match the output $B$ with any of the 2
inputs $C^\perp$ and $D^\perp$ before failing\textsuperscript{3}. This highlights the need for an efficient proof procedure for filters, with an emphasis on early failure.
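As an illustration of matching modulo filters, the sketch below only covers the simplest case we appeal to in the examples, namely equality up to associativity and commutativity of $\otimes$, and fails early otherwise (the real implementation proves full CLL filter lemmas instead):

```python
from collections import Counter

def tensor_leaves(t):
    """Flatten nested tensors into a multiset of leaves, so A ⊗ B and B ⊗ A compare equal."""
    if isinstance(t, tuple) and t[0] == "tensor":
        return tensor_leaves(t[1]) + tensor_leaves(t[2])
    return [t]

def matches_up_to_tensor_ac(out_type, in_type):
    """Cheap check standing in for 'a provable filter exists': same ⊗-leaves, any order."""
    return Counter(map(repr, tensor_leaves(out_type))) == \
           Counter(map(repr, tensor_leaves(in_type)))

print(matches_up_to_tensor_ac(("tensor", "A", "B"), ("tensor", "B", "A")))   # True
print(matches_up_to_tensor_ac("B", "C"))                                     # False: fail early
```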
<table>
<thead>
<tr>
<th>P</th>
<th>Pr.</th>
<th>Q</th>
<th>Selected Input</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>⊢ $X^\perp, A$</td>
<td>⊢ $A^\perp, Y$</td>
<td>$A^\perp$</td>
<td>⊢ $X^\perp, Y$</td>
<td></td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus B$</td>
<td>L</td>
<td>⊢ $A^\perp, Y$</td>
<td>$A^\perp$</td>
<td>⊢ $X^\perp, Y \oplus B$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus B$</td>
<td>L</td>
<td>⊢ $A^\perp, B$</td>
<td>$A^\perp$</td>
<td>⊢ $X^\perp, B$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \otimes B \otimes C$</td>
<td>L</td>
<td>⊢ $A^\perp, Y$</td>
<td>$A^\perp$</td>
<td>⊢ $X^\perp, Y \otimes B \otimes C$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus B$</td>
<td>L</td>
<td>⊢ $A^\perp, C^\perp, Y$</td>
<td>$A^\perp$</td>
<td>⊢ $X^\perp, C^\perp, Y \oplus (C \otimes B)$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus B$</td>
<td>L</td>
<td>⊢ $B^\perp, C^\perp, Y$</td>
<td>$B^\perp$</td>
<td>⊢ $X^\perp, C^\perp, (C \otimes A) \oplus Y$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \otimes B$</td>
<td>L</td>
<td>⊢ $(B \otimes A)^\perp, Y$</td>
<td>$(B \otimes A)^\perp$</td>
<td>⊢ $X^\perp, Y$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus (B \otimes C)$</td>
<td>L</td>
<td>⊢ $(B \otimes A)^\perp, Y$</td>
<td>$(B \otimes A)^\perp$</td>
<td>⊢ $X^\perp, Y \oplus (B \otimes C)$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus (B \otimes C)$</td>
<td>R</td>
<td>⊢ $(B \otimes A)^\perp, Y$</td>
<td>$(B \otimes A)^\perp$</td>
<td>⊢ $X^\perp, A \oplus (Y \otimes C)$</td>
</tr>
<tr>
<td>⊢ $X^\perp, A \oplus (A \otimes B)$</td>
<td>L</td>
<td>⊢ $(C \otimes A \oplus D)^\perp, Y$</td>
<td>$(C \otimes A \oplus D)^\perp$</td>
<td>⊢ $X^\perp, Y \oplus B$</td>
</tr>
<tr>
<td>⊢ $X^\perp, C \oplus (A \otimes B)$</td>
<td>L</td>
<td>⊢ $C^\perp, A \otimes B$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, A \otimes B$</td>
</tr>
<tr>
<td>⊢ $X^\perp, C \oplus (A \otimes (B \oplus D))$</td>
<td>L</td>
<td>⊢ $C^\perp, (B \oplus D) \oplus A$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, (B \oplus D) \oplus A$</td>
</tr>
<tr>
<td>⊢ $X^\perp, C \oplus (A \otimes B)$</td>
<td>L</td>
<td>⊢ $C^\perp, Y \oplus (B \otimes A)$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, Y \oplus (B \otimes A)$</td>
</tr>
<tr>
<td>⊢ $X^\perp, C \oplus (A \otimes B)$</td>
<td>L</td>
<td>⊢ $C^\perp, (B \otimes A) \oplus Y$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, (B \otimes A) \oplus Y$</td>
</tr>
<tr>
<td>⊢ $X^\perp, (A \otimes B) \oplus C$</td>
<td>R</td>
<td>⊢ $C^\perp, Y \oplus (B \otimes A)$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, Y \oplus (B \otimes A)$</td>
</tr>
<tr>
<td>⊢ $X^\perp, (A \otimes B) \oplus C$</td>
<td>R</td>
<td>⊢ $C^\perp, (B \otimes A) \oplus Y$</td>
<td>$C^\perp$</td>
<td>⊢ $X^\perp, (B \otimes A) \oplus Y$</td>
</tr>
</tbody>
</table>
Table 3. Examples of the expected result of the \texttt{JOIN} action between a process \texttt{P} and a process \texttt{Q}. Column \texttt{Pr.} gives the priority parameter (see Section 8.4).
### 6 Parallel Composition - The \texttt{TENSOR} Action
The \texttt{TENSOR} action corresponds to the parallel composition of two processes so that their outputs are provided in parallel. It trivially relies on the tensor ($\otimes$) inference rule. Assuming 2 processes, $\vdash A^\perp, C^\perp$, $D$ and $\vdash B^\perp$, $E$, the \texttt{TENSOR} action will perform the following composition:
$$
\dfrac{\vdash A^\perp, C^\perp, D \qquad \vdash B^\perp, E}{\vdash A^\perp, C^\perp, B^\perp, D \otimes E}\;\otimes
$$
### 7 Conditional Composition - The \texttt{WITH} Action
The \texttt{WITH} action corresponds to the \textit{conditional} composition of two processes. This type of composition is useful in cases where each of the components of an optional output of a process needs to be handled by a different receiving process.
For example, assume a process \texttt{S} has an optional output $A \oplus C$ where $C$ is an exception. We want $A$ to be handled by some process \texttt{P}, for example specified by $\vdash A^\perp, B^\perp, X$, while another process \texttt{Q} specified by $\vdash C^\perp, Y$ plays the role of the exception handler for exception $C$. For this to happen, we need
\textsuperscript{3} In practice, the user will have to select a matching input to attempt such a composition (see Section 8).
to compose $P$ and $Q$ together using the WITH action so that we can construct an input that matches the output type $A \oplus C$ from $S$. This composition can be viewed as the construction of an if-then statement where if $A$ is provided then $P$ will be executed (assuming $B$ is also provided), and if $C$ is provided then $Q$ will be executed in a mutually exclusive choice. The generated proof tree for this particular example is the following:
$$
\dfrac{
\dfrac{\vdash A^\perp, B^\perp, X}{\vdash A^\perp, B^\perp, X \oplus (Y \otimes B)}\;\oplus_L
\qquad
\dfrac{\dfrac{\vdash C^\perp, Y \qquad \vdash B^\perp, B}{\vdash C^\perp, B^\perp, Y \otimes B}\;\otimes}{\vdash C^\perp, B^\perp, X \oplus (Y \otimes B)}\;\oplus_R
}{
\vdash B^\perp, (A \oplus C)^\perp, X \oplus (Y \otimes B)
}\;\& \qquad (1)
$$
The $\text{WITH}$ action fundamentally relies on the $\&$ rule of CLL. The following derivation allows us to compose 2 processes that also have different outputs $X$ and $Y$:
$$
\dfrac{
\dfrac{\vdash \Gamma, A^\perp, X}{\vdash \Gamma, A^\perp, X \oplus Y}\;\oplus_L
\qquad
\dfrac{\vdash \Gamma, C^\perp, Y}{\vdash \Gamma, C^\perp, X \oplus Y}\;\oplus_R
}{
\vdash \Gamma, (A \oplus C)^\perp, X \oplus Y
}\;\& \qquad (2)
$$
The particularity of the $\&$ rule is that the context $\Gamma$, i.e. all the inputs except the ones involved in the $\text{WITH}$ action, must be the same for both the involved processes. In practice, this means we need to account for unused inputs. In the example above, $P$ apart from input $A^\perp$ has another input $B^\perp$ which is missing from $Q$. In the conditional composition of $P$ and $Q$, if exception $C$ occurs, the provided $B$ will not be consumed since $P$ will not be invoked. In this case, we use a buffer to let $B$ pass through together with the output $Y$ of $Q$.
More generally, in order to apply the $\&$ rule to 2 processes $P$ and $Q$, we need to minimally adjust their contexts $\Gamma_P$ and $\Gamma_Q$ (i.e. their respective multisets of inputs excluding the ones that will be used in the rule) so that they end up being the same $\Gamma = \Gamma_P \cup \Gamma_Q$. By “minimal” adjustment we mean that we only add the inputs that are “missing” from either side, i.e. the multiset $\Delta_P = \Gamma_Q \setminus \Gamma_P$ for $P$ and $\Delta_Q = \Gamma_P \setminus \Gamma_Q$ for $Q$, and no more.
In the previous example in (1), excluding the inputs $A^\perp$ and $C^\perp$ used in the rule, we obtain $\Delta_Q = \Gamma_P \setminus \Gamma_Q = \{B^\perp\} \setminus \{\} = \{B^\perp\}$. We then construct a parallel buffer (see Section 5) of type $\otimes \Delta_Q^\perp$ (converting all inputs in $\Delta_Q$ to an output; in this example only one input) using $\text{PARBUF_TAC}$. In the example, this is an atomic $B$ buffer. The parallel composition between this buffer and $Q$ results in the process $\vdash \Gamma_Q, \Delta_Q, C^\perp, Y \otimes (\otimes \Delta_Q^\perp)$. The same calculation for $P$ yields $\Delta_P = \emptyset$, so no change is required for $P$.
Since $\Gamma_P \uplus \Delta_P = \Gamma_Q \uplus \Delta_Q = \Gamma$ (where $\uplus$ denotes multiset union), the $\&$ rule is now applicable and derivation (2) yields the following process:
$$
\vdash \Gamma, (A \oplus C)^\perp, (X \otimes (\otimes \Delta_P^\perp)) \oplus (Y \otimes (\otimes \Delta_Q^\perp))
$$
(3)
$\otimes\{a_1, \ldots, a_n\}^\perp = a_1^\perp \otimes \ldots \otimes a_n^\perp$
The output \(Y\) of \(Q\) has now been paired with the buffered resources \(\Delta_Q\).
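The minimal context adjustment itself is a pair of multiset differences; a small sketch (function and variable names are ours):

```python
from collections import Counter

def with_contexts(gamma_p, gamma_q):
    """Compute the buffered inputs Δ_P, Δ_Q and the shared context Γ for the & rule."""
    cp, cq = Counter(gamma_p), Counter(gamma_q)
    delta_p = list((cq - cp).elements())   # inputs missing from P, buffered on P's side
    delta_q = list((cp - cq).elements())   # inputs missing from Q, buffered on Q's side
    gamma = list((cp | cq).elements())     # Γ = Γ_P ⊎ Δ_P = Γ_Q ⊎ Δ_Q
    return delta_p, delta_q, gamma

# Example from derivation (1): Γ_P = {B^⊥}, Γ_Q = {}.
print(with_contexts(["B"], []))            # ([], ['B'], ['B'])
```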
Finally, we consider the special case where the following holds:
\[
(X \otimes (\otimes \Delta^\perp_P)) = (Y \otimes (\otimes \Delta^\perp_Q)) = G
\]
(4)
In this case, the output of the composition in (3) will be \(G \oplus G\). Instead, we can apply the \(\&\) rule directly, without derivation (2), yielding the simpler output \(G\).
Note that, as discussed in Section 5, (4) does not strictly require syntactic equality. The special case can also be applied if we can prove and use the filter \(\vdash (X \otimes (\otimes \Delta^\perp_P))^\perp, Y \otimes (\otimes \Delta^\perp_Q)\).
These results and the complexity underlying their construction demonstrate the non-trivial effort needed to adhere to CLL’s systematic management of resources and, more specifically, its systematic accounting of unused resources. These properties, however, are essential guarantees of correct resource management offered by construction in our process compositions.
### 8 Sequential Composition - The \texttt{JOIN} Action
The JOIN action reflects the connection of two processes in sequence, i.e. where (some of) the outputs of a process are connected to (some of) the corresponding inputs of another. More generally, we want to compose a process \(P\) with specification \(\vdash \Gamma, X\), i.e. with some (multiset of) inputs \(\Gamma\) and output \(X\), in sequence with a process \(Q\) with specification \(\vdash \Delta, C^\perp, Y\), i.e. with an input \(C^\perp\), output \(Y\), and (possibly) more inputs in context \(\Delta\). We also assume the user selects a subterm \(A\) of \(X\) in \(P\) and a matching subterm \(A\) of the input \(C^\perp\) in \(Q\).
The strategy of the algorithm behind the JOIN action is to construct a new input for \(Q\) based on the chosen \(C\perp\) such that it directly matches the output \(X\) of \(P\) (and prioritizing the output selection \(A\)). This will enable the application of the Cut rule, which requires the cut literal to match exactly. In what follows, we present how different cases for \(X\) are handled.
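At a high level, the JOIN action can therefore be pictured as the skeleton below, where `construct_input` stands in for the INPUT_TAC procedure of Section 8.4 (a simplification, not the actual tactic):

```python
def join(p, q, selected_input, construct_input):
    """Sequential composition: derive an input of Q matching P's output, then Cut.

    p, q            : (inputs, output) specifications
    selected_input  : the input of Q chosen by the user (C^⊥ in the text)
    construct_input : stands in for INPUT_TAC, returning a transformed Q whose
                      inputs contain P's output type exactly
    """
    gamma, x = p
    q2_inputs, q2_output = construct_input(q, selected_input, target=x)
    assert x in q2_inputs, "INPUT_TAC must produce an input matching P's output"
    rest = list(q2_inputs)
    rest.remove(x)
    return (gamma + rest, q2_output)       # conclusion of the Cut rule

# Trivial case: the output already matches (Section 8.1).
identity_input = lambda q, sel, target: q
print(join((["A", "B"], "X"), (["X"], "Z"), "X", identity_input))   # (['A', 'B'], 'Z')
```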
#### 8.1 Atomic or Matching Output
If \(X\) is atomic, a straightforward use of the Cut rule is sufficient to connect the two processes. For example, the JOIN action between \(\vdash A^\perp, B^\perp, X\) and \(\vdash X^\perp, Z\) results in the following proof:
\[
\frac{\vdash A^\perp, B^\perp, X \quad \vdash X^\perp, Z}{\vdash A^\perp, B^\perp, Z}\ \text{Cut}
\]
The same approach can be applied more generally for any non-atomic \(X\) as long as a matching input of type \(X^\perp\) (including via filtering) is selected in \(Q\).
#### 8.2 Parallel Output
If $X$ is a parallel output, such as $B \otimes C$, we need to manipulate process $Q$ so that it can receive an input of type $(B \otimes C)^\perp$.
If $Q$ has both inputs $B^\perp$ and $C^\perp$, then we can use the ⅋ (par) rule to combine them. For example, the generated proof tree of the $\text{JOIN}$ action between $\vdash A^\perp, D^\perp, B \otimes C$ and $\vdash B^\perp, C^\perp, E^\perp, Y$ is the following:
\[
\dfrac{
\vdash A^\perp, D^\perp, B \otimes C
\qquad
\dfrac{\vdash B^\perp, C^\perp, E^\perp, Y}{\vdash (B \otimes C)^\perp, E^\perp, Y}\;⅋
}{
\vdash A^\perp, D^\perp, E^\perp, Y
}\;\text{Cut}
\]
As previously mentioned, the $\text{JOIN}$ action attempts to connect the output of $P$ to $Q$ maximally, i.e. both $B$ and $C$, regardless of the user choice. The user may, however, want to only connect one of the two resources. We have currently implemented this approach as it is the most commonly used in practice, but are investigating ways to enable better control by the user.
If $Q$ has only one of the two inputs, for example $B^\perp$, i.e. $Q$ is of the form $\vdash \Delta, B^\perp, Y$ and $C^\perp \notin \Delta$, then $C$ must be buffered. In this case, we use the following derivation:
\[
\dfrac{
\dfrac{\vdash \Delta, B^\perp, Y \qquad \vdash C^\perp, C}{\vdash \Delta, B^\perp, C^\perp, Y \otimes C}\;\otimes
}{
\vdash \Delta, (B \otimes C)^\perp, Y \otimes C
}\;⅋ \qquad (5)
\]
We use $\text{BUFFER_TAC}$ from Section 5 to prove the buffer of $C$.
Depending on the use of the $\otimes$ rule in (5), the resulting output could be either $Y \otimes C$ or $C \otimes Y$. We generally try to match the form of $P$’s output, so in this case we would choose $Y \otimes C$ to match $B \otimes C$. Our algorithm keeps track of this orientation through the $\text{orient}$ parameter (see Section 8.4).
#### 8.3 Optional Output
If $X$ is an optional output, such as $B \oplus C$, then we need to manipulate process $Q$ to synthesize an input $(B \oplus C)^\perp$. Assume $Q$ can handle $B$ (symmetrically for $C$) and thus has specification $\vdash \Delta, B^\perp, Y$. We construct a parallel buffer (using $\text{PARBUF_TAC}$, see Section 5) of type $(\otimes \Delta^\perp) \otimes C$ (converting all inputs in $\Delta$ to outputs). We then apply derivation (2) as follows:
\[
\dfrac{
\dfrac{\vdash \Delta, B^\perp, Y}{\vdash \Delta, B^\perp, Y \oplus ((\otimes\Delta^\perp) \otimes C)}\;\oplus_L
\qquad
\dfrac{\vdash \Delta, C^\perp, (\otimes\Delta^\perp) \otimes C}{\vdash \Delta, C^\perp, Y \oplus ((\otimes\Delta^\perp) \otimes C)}\;\oplus_R
}{
\vdash \Delta, (B \oplus C)^\perp, Y \oplus ((\otimes\Delta^\perp) \otimes C)
}\;\& \qquad (6)
\]
Similarly to the WITH action, the particular structure of the $\&$ rule ensures the systematic management of unused resources. In the example above, if $C$ is received then $Q$ will never be executed. As a result, any resources in $\Delta$ will remain unused and need to be buffered together with $C$. This is the reason behind the type $(\otimes\Delta^\perp) \otimes C$ of the constructed buffer (as opposed to plainly using type $C$).
The proof tree of an example of the JOIN action between process P specified by $\vdash A^\perp, D^\perp, B \oplus C$ and process Q specified by $\vdash B^\perp, E^\perp, Y$ is shown below:
\[
\dfrac{
\vdash A^\perp, D^\perp, B \oplus C
\qquad
\dfrac{
\dfrac{\vdash B^\perp, E^\perp, Y}{\vdash B^\perp, E^\perp, Y \oplus (C \otimes E)}\;\oplus_L
\qquad
\dfrac{\dfrac{\vdash C^\perp, C \qquad \vdash E^\perp, E}{\vdash C^\perp, E^\perp, C \otimes E}\;\otimes}{\vdash C^\perp, E^\perp, Y \oplus (C \otimes E)}\;\oplus_R
}{
\vdash E^\perp, (B \oplus C)^\perp, Y \oplus (C \otimes E)
}\;\&
}{
\vdash A^\perp, D^\perp, E^\perp, Y \oplus (C \otimes E)
}\;\text{Cut}
\]
It is interesting to consider a couple of special cases.
Case 1: If $\vdash \Delta, C^\perp, Y$ is a parallel buffer, (6) can be simplified as follows:
\[
\dfrac{\vdash \Delta, B^\perp, Y \qquad \vdash \Delta, C^\perp, Y}{\vdash \Delta, (B \oplus C)^\perp, Y}\;\&
\]
Case 2: If $Y = D \oplus E$ for some D and E such that $\vdash \Delta, C^\perp, D$ (or symmetrically $\vdash \Delta, C^\perp, E$) is a parallel buffer, then we can apply the following derivation:
\[
\dfrac{
\vdash \Delta, B^\perp, D \oplus E
\qquad
\dfrac{\vdash \Delta, C^\perp, D}{\vdash \Delta, C^\perp, D \oplus E}\;\oplus_L
}{
\vdash \Delta, (B \oplus C)^\perp, D \oplus E
}\;\&
\]
This may occur, for example, if $\Delta = \emptyset$ and $Y = C$. Such cases arise in processes used to recover from an exception. For instance, a recovery process $\vdash \text{Exception}^\perp, \text{Resource}$ can convert an output $\text{Resource} \oplus \text{Exception}$ to simply $\text{Resource}$ (which either was there in the first place, or was produced through the recovery process).
#### 8.4 Putting It All Together
In the general case, the output X of P can be a complex combination of multiple parallel and optional outputs. For that reason, we apply the above proof
<table>
<thead>
<tr>
<th>Target</th>
<th>Priority</th>
<th>(Q)</th>
<th>Result of INPUT_TAC</th>
</tr>
</thead>
<tbody>
<tr>
<td>(X = A \otimes (A \oplus B))</td>
<td>\text{Left}</td>
<td>(\vdash A^\perp, Y)</td>
<td>(\vdash X^\perp, Y \otimes (A \oplus B))</td>
</tr>
<tr>
<td>(X = A \otimes (A \oplus B))</td>
<td>\text{Right; Left}</td>
<td>(\vdash A^\perp, Y)</td>
<td>(\vdash X^\perp, A \otimes (Y \oplus B))</td>
</tr>
<tr>
<td>(X = A \oplus (B \otimes C))</td>
<td>\text{Left}</td>
<td>(\vdash (B \otimes A)^\perp, Y)</td>
<td>(\vdash X^\perp, Y \oplus (B \otimes C))</td>
</tr>
<tr>
<td>(X = A \oplus (B \otimes C))</td>
<td>\text{Right; Left}</td>
<td>(\vdash (B \otimes A)^\perp, Y)</td>
<td>(\vdash X^\perp, A \oplus (Y \otimes C))</td>
</tr>
</tbody>
</table>
Table 4. Examples of how the priority parameter can affect the behaviour of INPUT\_TAC. The selected subterms and the output of \(Q\) are highlighted in bold.
strategies in a recursive, bottom-up way, prioritizing the user selections. We call the algorithm that produces the appropriate input \(X^\perp\) (or equivalent) from \(Q\) “INPUT\_TAC” and it has the following arguments (see Algorithm 1):
- \texttt{sel}: optional term corresponding to the user selected input \(C^\perp\) of \(Q\).
- \texttt{priority}: a list representing the path of the user selected subterm \(A\) in the syntax tree of the output \(X\) of \(P\). For example, if the user selects \(B\) in the output \((A \otimes B) \oplus C\), the priority is \([\text{Left; Right}]\).
- \texttt{orient}: our latest path (left or right) in the syntax tree of \(X\) so that we add the corresponding buffers on the same side (see Section 8.2).
- \texttt{inputs}: a list of inputs of \(Q\). We remove used inputs from this to avoid reuse.
- \texttt{target}: the input term we are trying to construct. This is initially set to \(X\), but may take values that are subterms of \(X\) in recursive calls.
- \texttt{proc}: the CLL specification of \(Q\) as it evolves.
The priority parameter is useful when two or more subterms of the output either (a) are the same or (b) have the same matching input in \(Q\). Table 4 shows examples of how different priorities change the result of INPUT\_TAC.
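The priority list itself is simply the sequence of left/right choices leading to the selected subterm; a sketch:

```python
def priority_path(term, selected):
    """Path of Left/Right steps from the root of an output term to the selected subterm."""
    if term == selected:
        return []
    if isinstance(term, tuple):            # ("tensor" | "plus", left, right)
        for side, child in (("Left", term[1]), ("Right", term[2])):
            sub = priority_path(child, selected)
            if sub is not None:
                return [side] + sub
    return None

# Selecting B in (A ⊗ B) ⊕ C gives [Left, Right], as in the text.
X = ("plus", ("tensor", "A", "B"), "C")
print(priority_path(X, "B"))               # ['Left', 'Right']
```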
### 9 Conclusion
CLL’s inherent properties make it an ideal language to reason about resources. CLL sequents (under polarity restrictions) can be viewed as resource-based specifications of processes. The CLL inference rules then describe the logically legal, but primitive ways to manipulate and compose such processes.
We presented algorithms that allow intuitive composition in parallel, conditionally, and in sequence. We call these composition actions \texttt{TENSOR}, \texttt{WITH}, and \texttt{JOIN} respectively, and they are implemented in HOL Light. We analysed how each action functions in different cases and examples.
As a result of the rigorous usage of CLL inference rules, the constructed compositions have guaranteed resource accounting, so that no resources disappear or are created out of nowhere. The proofs-as-processes paradigm and its recent evolutions allow the extraction of process calculus terms from these proofs, for concurrent and guaranteed deadlock-free execution.
In the future, we intend to work towards relaxing identified limitations along 2 main lines: (a) functionality, by incorporating and dealing with increasingly more
Algorithm 1 Derives a new process specification from the given “proc” such that it includes an input of type “target”.
1: function INPUT_TAC(sel, priority, orient, inputs, target, proc)
2: Try to match target with sel (if provided) or one of the inputs
3: if it matches then return proc
4: else if target is atomic then
5: if priority ≠ None then fail ▷ we couldn’t match the user selected output
6: else Create a target buffer using □ depending on orient
7: end if
8: else if target is $L \otimes R$ then
9: if priority = Left then
10: proc’ = INPUT_TAC(sel, tail(priority), orient, inputs, L, proc)
11: proc = INPUT_TAC(None, None, Right, inputs - {L}, R, proc’)
12: else
13: proc’ = INPUT_TAC(sel, tail(priority), orient, inputs, R, proc)
14: proc = INPUT_TAC(None, None, Left, inputs - {R}, L, proc’)
15: end if
16: else if target is $L \oplus R$ then
17: if priority = Left then
18: proc = INPUT_TAC(sel, tail(priority), orient, inputs, L, proc)
19: Try derivation □ orElse Try derivation □ orElse Use derivation □
20: else if priority = Right then
21: proc = INPUT_TAC(sel, tail(priority), orient, inputs, R, proc)
22: Try derivation □ orElse Try derivation □ orElse Use derivation □
23: else
24: Try as if priority = Left orElse Try as if priority = Right
25: orElse Create a target buffer using □ depending on orient
26: end if
27: end if
28: return proc
29: end function
complex specifications including those requiring formulation of more complex filters, and (b) expressiveness, by extending the fragment of CLL we are using while keeping a balance in terms of efficiency.
Through this work, it is made obvious that intuitive process compositions in CLL require complex applications of a large number of inference rules. Our algorithms automate the appropriate deductions and alleviate this burden from the user. We have tied these with the diagrammatic interface of WorkflowFM [21], so that the user is not required to know or understand CLL or theorem proving, but merely sees inputs and outputs represented graphically. They can then obtain intuitive process compositions with the aforementioned correctness guarantees with a few simple clicks.
### Acknowledgements
This work was supported by the “DigiFlow: Digitizing Industrial Workflow, Monitoring and Optimization” Innovation Activity funded by EIT Digital. We would like to thank the attendees of the LOPSTR conference, 4-6 September 2018, in Frankfurt, Germany for their insightful comments that helped improve this paper.
References
Interprocedural Transformations
for Parallel Code Generation
Mary Hall
Ken Kennedy
Kathryn McKinley
CRPC-TR91149
April, 1991
Center for Research on Parallel Computation
Rice University
P.O. Box 1892
Houston, TX 77251-1892
Interprocedural Transformations for Parallel Code Generation
Mary W. Hall Ken Kennedy Kathryn S. McKinley
Department of Computer Science, Rice University, Houston, TX 77251-1892
Abstract
We present a new approach that enables compiler optimization of procedure calls and loop nests containing procedure calls. We introduce two interprocedural transformations that move loops across procedure boundaries, exposing them to traditional optimizations on loop nests. These transformations are incorporated into a code generation algorithm for a shared-memory multiprocessor. The code generator relies on a machine model to estimate the expected benefits of loop parallelization and parallelism-enhancing transformations. Several transformation strategies are explored and one that minimizes total execution time is selected. Efficient support of this strategy is provided by an existing interprocedural compilation system. We demonstrate the potential of these techniques by applying this code generation strategy to two scientific applications programs.
### 1 Introduction
Modern computer architectures, such as pipelined, superscalar, VLIW and multiprocessor machines, demand sophisticated compilers to exploit their performance potentials. To expose parallelism and computation for these architectures, the compiler must consider a statement in light of its surrounding context. Loops provide a proven source of both context and parallelism. Loops with significant amounts of computation are prime candidates for compilers seeking to make effective utilization of the available resources. Given that increased modularity is encouraged to manage program computation and complexity, it is natural to expect that programs will contain many procedure calls and procedure calls in loops, and the ambitious compiler will want to optimize them.
Unfortunately, most conventional compiling systems abandon parallelizing optimizations on loops containing procedure calls. Two existing compilation technologies are used to overcome this problem: interprocedural analysis and interprocedural transformation.
Interprocedural analysis applies data-flow analysis techniques across procedure boundaries to enhance the effectiveness of dependence testing. A sophisticated form of interprocedural analysis, called regular section analysis, makes it possible to parallelize loops with calls by determining whether the side effects to arrays as a result of each call are limited to nonintersecting subarrays on different loop iterations [12, 20].
Interprocedural transformation is the process of moving code across procedure boundaries, either as an optimization or to enable other optimizations. The most common form of interprocedural transformation is procedure inlining. Inlining substitutes the body of a called procedure for the procedure call and optimizes it as a part of the calling procedure.
Even though regular section analysis and inlining are frequently successful, each of these methods has its limitations [20, 23]. Compilation time and space considerations require that regular section analysis summarize array side effects. In general, summary analysis for loop parallelization is less precise than the analysis of inlined code. On the other hand, inlining can yield an explosion in code size which may disastrously increase compile time and seriously inhibit separate compilation [13]. Furthermore, inlining may cause a loss of precision in dependence analysis because of the complexity of the subscript expressions that result from translating references to array parameters at the call. For example, when the dimension size of a formal array parameter is also passed as a parameter, translating references of the formal to the actual can introduce multiplications of unknown symbolic values into subscript expressions. This situation occurs when inlining is used on the SPEC Benchmark program matrix300 [8].
In this paper, a hybrid approach is developed that overcomes some of these limitations. We introduce a pair of new interprocedural transformations: loop embedding, which pushes a loop header into a procedure called within the loop, and loop extraction, which extracts the outermost loop from a procedure body into the calling procedure. These transformations expose such loops to intraprocedural optimizations. In this paper, the intraprocedural optimizations considered are loop fusion, loop interchange and loop distribution. However, many other transformations that require loop nests will also benefit from embedding and extraction. Some examples are loop skewing [36] and memory hierarchy optimizations such as unroll and jam [10].
As a motivating example, consider the Fortran code in Example 1(a). The J loop in subroutine S may be made parallel, but the outer I loop in subroutine P may not. However, the amount of computation in the J loop is small relative to the I loop and may not be sufficient to make parallelization profitable. If the I loop is embedded into subroutine S as shown in (b), the
*This research was supported by the Center for Research on Parallel Computation, a National Science Foundation Science and Technology Center, by IBM Corporation, the state of Texas and by a DARPA/NASA Research Assistantship in Parallel Processing, administered by the Institute for Advanced Computer Studies, University of Maryland.
      SUBROUTINE P
      REAL A(M,N)
      INTEGER I
      DO I = 1, 100
        CALL S(A, I)
      ENDDO
      END

      SUBROUTINE S(F,I)
      REAL F(N,N)
      INTEGER I,J
      DO J = 1, 3
        F(J,I) = F(J,I-1) + 10
      ENDDO
      END

Example 1: (a) before transformation; (b) loop embedding; (c) loop interchange. Only version (a) is reproduced above; the embedded and interchanged versions are described in the text.
inner and outer loops may be interchanged as shown in (c). The resulting parallel outer J loop now contains plenty of computation. As an added benefit, procedure call overhead has been reduced.
Loop embedding and loop extraction provide many of the optimization opportunities of inlining without its significant costs. Code growth of individual procedures is nominal, so compilation time is not seriously affected. Overall program growth is also moderate because multiple callers may invoke the same optimized procedure body. In addition, the compilation dependences among procedures are reduced since the compiler controls the small amount of code movement across procedures and can easily determine if an editing change of one procedure invalidates other procedures.
Our approach to interprocedural optimization is fundamentally different from previous research in that the application of interprocedural transformations is restricted to cases where it is determined to be profitable. This strategy, called goal-directed interprocedural optimization, avoids the costs of interprocedural optimization when it is not necessary[8]. Interprocedural transformations are applied as dictated by a code generation algorithm that explores possible transformations, selecting a choice that minimizes total execution time. Estimates of execution time are provided by a machine model which takes into account the overhead of parallelization. The code generator is part of an interprocedural compilation system that efficiently supports interprocedural analysis and optimization by retaining separate compilation of procedures.
The remainder of this paper is organized into five major sections, related work, and conclusions. Section 2 provides the technical background for the rest of the paper. In Section 3, a compilation system is described which is powerful enough to support interprocedural optimization but also retains the advantages of a separate compilation system. Section 4 explains the interprocedural and intraprocedural transformations in more detail, and Section 5 presents a code generation algorithm that uses these to parallelize programs for a shared-memory multiprocessor. Section 6 describes an experiment where this approach was applied to the Perfect Benchmark programs spec77 and ocean.
### 2 Technical Background
#### 2.1 Dependence Analysis
Dependence analysis and testing have been widely researched, and in this paper a working knowledge of these is assumed [3, 7, 9, 17, 18, 27, 37]. In particular, the reader should be familiar with dependence graphs, where dependence edges are characterized with such information as dependence type and hybrid direction/distance vectors [25]. The dependence graph specifies a conservative approximation of the partial order of memory accesses necessary to preserve the semantics of a program. The safe application of program transformations is based on preserving this partial order.
#### 2.2 Augmented Call Graph
The program representation for interprocedural transformations requires an augmented call graph to describe the calling relationship among procedures and specify loop nests. The code generation algorithm considers loops containing procedure calls and loops adjacent to procedure calls. For this purpose, the program's call graph, which contains the usual procedure nodes and call edges, is augmented to include special loop nodes and nesting edges. If a procedure p contains a loop l, there will be a nesting edge from the procedure node representing p to the loop node representing l. If a loop l contains a call to a procedure p, there will be a nesting edge from l to p. Any inner loops are also represented by loop nodes and are children of their outer loop. The outermost loop of each routine is marked enclosing if all the other statements in the procedure fall inside the loop. Figure 1(a) shows the augmented call graph for the program from Example 1.
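A sketch of the augmented call graph as a data structure (field names are ours):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                  # "procedure" or "loop"
    name: str
    children: List["Node"] = field(default_factory=list)   # nesting edges
    enclosing: bool = False    # outermost loop that encloses the whole procedure body

# Example 1: P contains loop I, which contains a call to S; S contains loop J.
loop_j = Node("loop", "J", enclosing=True)
s = Node("procedure", "S", [loop_j])
loop_i = Node("loop", "I", [s], enclosing=True)
p = Node("procedure", "P", [loop_i])

def loops_containing_calls(n):
    """Yield loop nodes that directly contain a procedure call (candidates for embedding)."""
    if n.kind == "loop" and any(c.kind == "procedure" for c in n.children):
        yield n
    for c in n.children:
        yield from loops_containing_calls(c)

print([l.name for l in loops_containing_calls(p)])    # ['I']
```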
#### 2.3 Regular Section Analysis
A regular section describes the side effects to the substructures of an array. Sections represent a restricted set of the most commonly occurring array access patterns; single elements, rows, columns, grids and their higher dimensional analogs. This restriction on the shapes assists in making the implementation efficient.
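As a toy illustration only, the following sketch captures the kind of per-dimension information a section might carry and the disjointness check that licenses parallelizing a loop containing a call; the real representation also covers rows, columns, grids and their higher-dimensional analogs:

```python
from dataclasses import dataclass

@dataclass
class DimAccess:
    kind: str          # "element", "index", or "all"
    value: int = 0     # constant subscript when kind == "element"

def disjoint_across_iterations(mod_dims):
    """True if the modified section differs on every loop iteration, i.e. some
    dimension is subscripted directly by the loop index (a simplification)."""
    return any(d.kind == "index" for d in mod_dims)

# A call that writes A(*, I) on iteration I: the written sections do not intersect.
print(disjoint_across_iterations([DimAccess("all"), DimAccess("index")]))       # True
# A call that writes A(*, 1) on every iteration: the written sections intersect.
print(disjoint_across_iterations([DimAccess("all"), DimAccess("element", 1)]))  # False
```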
### 3 Support for Interprocedural Optimization
In this section, we present the compilation system of the ParaScope Programming Environment [11, 14]. This system was designed for the efficient support of interprocedural analysis and optimization. The tools in ParaScope cooperate to enable the compilation system to perform interprocedural analysis without direct examination of source code. This information is then used in code generation to make decisions about interprocedural optimizations. The code generator only examines the dependence graph for the procedure currently being compiled, not the graph for the entire program. In addition, ParaScope employs recompilation analysis after program changes to minimize program reanalysis [15].
#### 3.1 The ParaScope Compilation System
Interprocedural analysis in the ParaScope compilation system consists of two principal phases. The first takes place prior to compilation. At the end of each editing session, the immediate interprocedural effects of a procedure are determined and stored. For example, this information includes the array sections that are locally modified and referenced in the procedure. The procedure’s calling interface is also determined in this phase. It includes descriptions of the calls and loops in the procedure and their relative positions. In this way, the information needed from each module of source code is available at all times and need not be derived on every compilation.
Interprocedural optimization is orchestrated by the program compiler, a tool that manages and provides information about the whole program [14, 19]. The program compiler begins by building the augmented call graph described in Section 2.2. The program compiler then traverses the augmented call graph, performing interprocedural analysis, and subsequently, code generation. Conceptually, program compilation consists of three principal phases: (1) interprocedural analysis, (2) dependence analysis, and (3) planning and code generation.
Interprocedural analysis. The program compiler calculates interprocedural information over the augmented call graph. First, the information collected during editing is recovered from the database and associated with the appropriate nodes and edges in the call graph. This information is then propagated in a top-down or bottom-up pass over the nodes in the call graph, depending on the interprocedural problem. Section analysis is performed at this time. Interprocedural constant propagation and symbolic analysis are also performed, as these greatly increase the precision of subsequent dependence analysis.
Dependence analysis. Interprocedural information is then made available to dependence analysis, which is performed separately for each procedure. Dependence analysis results in a dependence graph. Edges in the dependence graph connect statements that form the source and sink of a dependence. If the source or sink of a dependence is a call site, a section may more accurately describe the portion of the array involved in the dependence. Dependence analysis also distinguishes parallel loops in the augmented call graph. Dependence analysis is separated from code generation for an important reason; it provides the code generator knowledge about each procedure without reexamining their source or dependence graph.
**Planning and Code Generation.** The final phase of the program compiler determines where interprocedural optimization is profitable. When more than one option for interprocedural transformation exists, it selects the most profitable option. Planning is important to interprocedural optimization since unnecessary optimizations may lead to significant compile-time costs without any execution-time benefit. To determine the profitability of transformations requires a machine model. To determine the safety of transformations, the dependence graph and sections are sufficient. Once profitable transformations are located, they are applied and parallelism is introduced in the transformed program.
The relationship among the compilation phases is depicted in Figure 2. Each step adds annotations to the call graph that are used by the next phase. Following program transformation, each procedure is separately compiled. Interprocedural information for a procedure is provided to the compiler to enhance *intraprocedural* optimization.
### 3.2 Recompilation Analysis
A unique part of the ParaScope compilation system is its recompilation analysis, which avoids unnecessary recompilation after editing changes to the program. Recompilation analysis tests that interprocedural facts used to optimize a procedure have not been invalidated by editing changes [15]. To extend recompilation analysis for interprocedural transformations, a few additions are needed. When an interprocedural transformation is performed, a description of the interprocedural transformations annotates the nodes and edges in the augmented call graph. On subsequent compilations, this information indicates to the program compiler that the same tests used initially to determine the safety of the transformations should be reapplied.
To determine if interprocedural transformations are still safe, the new and old sections are first compared, in most cases avoiding examination of the dependence graph. This means that dependence analysis is only applied to procedures where it is no longer valid, allowing separate compilation to be preserved. The recompilation process after interprocedural transformations have been applied is described in more detail elsewhere [19].
### 4 Interprocedural Transformation
We introduce two new interprocedural transformations, loop extraction and loop embedding. These expose the loop structure to optimization without incurring the costs of inlining. The movement of a single loop header is detailed below. Moving additional statements that precede or are enclosed by a loop is a straightforward generalization of these two transformations and for simplicity is not described. This section also describes the additional information needed to perform the applicability and safety tests for loop fusion and loop interchange across call boundaries. All of these are used in our code generation algorithm. The code generation algorithm also uses loop distribution, but does not apply it across call boundaries. Therefore, it may be performed with no additional information. Loop distribution is discussed in detail in Section 5.2.
#### 4.1 Loop Extraction
Loop extraction moves an enclosing loop of a procedure $p$ outward into one of its callers. This optimization may be thought of as partial inlining. The new version of $p$ no longer contains the loop. The caller now contains a new loop header surrounding the call to $p$. The index variable of the loop, originally a local in $p$, becomes a formal parameter and is passed at the call. The calling procedure creates a new variable to serve as the loop index, avoiding name conflicts. It is always safe to extract an outer enclosing loop from a procedure. Example 2(a) contains a loop with two calls to procedure $S$ and (b) contains the result after loop extraction. Note that (b) has an additional variable declaration for the loop index $J$ in $P$. It is included in the actual parameter list for $S$. In this example, the $J$ loop may now be fused and interchanged to improve performance.
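The shape of the transformation can be illustrated in Python rather than Fortran for brevity; this mirrors the description above and is not a reproduction of the paper’s Example 2:

```python
# Before extraction: the callee owns the loop.
def s_before(f):
    for j in range(3):
        f[j] += 10

def p_before(a):
    s_before(a)          # call site; the loop is hidden inside s_before

# After extraction: the loop header moves into the caller and the index
# becomes an explicit parameter of the callee.
def s_after(f, j):
    f[j] += 10

def p_after(a):
    for j in range(3):   # extracted loop, now visible to the caller's optimizer
        s_after(a, j)
```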
#### 4.2 Loop Embedding
Loop embedding moves a loop that contains a procedure call into the called procedure and is the dual of loop extraction. The new version of the called procedure requires a new local variable for the loop's index variable. If a name conflict exists, a new name for the loop's index variable must be created. This transformation is illustrated in Example 1.
#### 4.3 Loop Fusion
Loop fusion places the bodies of two adjacent loops with the same number of iterations into a single loop [1]. When several procedure calls appear contiguously or loops and calls are adjacent, it may be possible to extract the outer loop from the called procedure(s), exposing loops for fusion and further optimization. In the algorithm checkFusion, we consider fusion for an ordered set $S = \{s_1, \ldots, s_p\}$, where $s_i$ is either a call or a loop. There cannot be any intervening statements between $s_i$ and $s_{i+1}$ and each call must contain an enclosing loop which is being considered for fusion.
Fusion is safe for two loops $l_1$ and $l_2$ if it does not result in values flowing from the statements in $l_2$ back into the statements in $l_1$ in the resultant loop and vice versa. The simple test for safety performs dependence testing on the loop bodies as if they were in a single loop. Each forward dependence originally between $l_1$ and $l_2$ is tested. Fusion is unsafe if any dependences are reversed, becoming backward loop-carried dependences in the fused loop.
This test requires the inspection of the dependence source and sink variable references in $l_1$ and $l_2$. If one or more of the loops is inside a call, the variable references are represented instead as the modified and referenced sections for the call. The slices that annotate the sections correspond to the loops being considered for fusion and are tested identically to variable references (see Section 2.3). Unfortunately, while variable references are always exact, a section and its slice are not. If the slice is not exact, fusion is conservatively assumed to be unsafe. To be more precise would require the inspection of the dependence graphs for each called procedure, possibly a significant overhead.
```
checkFusion (S)
/* Input: S = {s_1, ..., s_p}; s_i is a call or a loop */
/* s_i is adjacent to s_{i+1} */
/* Output: returns true if fusion is safe \forall s_i */
F = \{s_1\}
for i = 2 to p
let l_i = the loop header of s_i
if the number of iterations of l_i differs from that of the loops already in F then
return false
for each forward dependence (src, sink) between F and s_i
if src or sink is not exact then
return false
if (src, sink) becomes backward loop-carried then
return false
endfor
F = F \cup \{s_i\}
endfor
return true
```
#### 4.4 Loop Interchange
Loop interchange of two nested loops exchanges the loop headers, changing the order in which the iteration space is traversed. It is used to introduce parallelism or to adjust granularity of parallelism. In particular, when a loop containing calls is not parallel or parallelizing the loop is not profitable, it may be possible to move parallel loops in the called procedures outward using loop interchange as in Examples 1 and 2. The safety of loop interchange may be determined by inspecting the distance/direction vector to ensure that no existing dependence is reversed after interchange [3, 37].
Our algorithm considers loop interchange only when a perfect nest can be created via loop extraction, embedding, fusion, and distribution. If a loop contains more than one call, it may be possible to fuse the outer enclosing loops of calls to create a perfect nest. Even if there are multiple statements and calls, it may be possible to use loop distribution to create a perfect nest. If a perfect nest may be safely created, testing the safety of interchange simply requires inspection of the direction vectors and slices for dependences between calls or statements in the nest.
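The direction-vector legality test can be sketched as follows. This is the standard check described above (swap the two entries of each direction vector and reject if any dependence's leftmost non-'=' entry becomes '>'), written generically and not taken from the paper.

```python
def interchange_is_safe(direction_vectors, i, j):
    """Check legality of interchanging the loops at nest positions i and j.
    Each direction vector is a tuple over the loop nest with entries
    '<', '=', or '>'.  Interchange is illegal if, after swapping the two
    entries, some dependence's leftmost non-'=' entry is '>' (the
    dependence would be reversed)."""
    for vec in direction_vectors:
        v = list(vec)
        v[i], v[j] = v[j], v[i]          # effect of the interchange
        for d in v:
            if d == '<':                  # still carried forward: fine
                break
            if d == '>':                  # reversed dependence: illegal
                return False
            # d == '=': keep scanning from outermost to innermost
    return True

# A dependence with vector ('<', '>') blocks interchange of the two loops,
# while ('<', '=') and ('=', '<') do not.
print(interchange_is_safe([('<', '='), ('=', '<')], 0, 1))  # True
print(interchange_is_safe([('<', '>')], 0, 1))              # False
```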
### 5 Interprocedural Parallel Code Generation
In this section we present an algorithm for the interprocedural parallel code generation problem. This algorithm moves loops across procedure boundaries when other transformations such as loop fusion, interchange, and distribution may be applied to the resulting loop nests to introduce or improve single-level loop parallelism. The goal of this algorithm is to only apply transformations which are proven to minimize execution time for a particular code segment. To determine the minimum execution time of a code segment, a simple machine model is used. This model includes the cost of arithmetic and conditional statements as well as operations such as parallel loops, sequential loops, and procedure call overhead. Both Polychronopoulos and Sarkar have used similar machine models in their research [33, 34].
#### 5.1 Machine Model and Performance Estimation
A cost model is needed to compare the costs of various execution options. First, a method for estimating the cost of executing a sequential loop is presented. Consider the following perfect loop nest, where \( w_1, \ldots, w_n \) are constants and \( B \) is the loop body.
\[
\begin{align*}
&\text{DO } i_1 = 1,\ w_1 \\
&\qquad \vdots \\
&\qquad \text{DO } i_n = 1,\ w_n \\
&\qquad\qquad B \\
&\qquad \text{ENDDO} \\
&\qquad \vdots \\
&\text{ENDDO}
\end{align*}
\]
In order to estimate the cost of running this loop on a single processor, a method for estimating the running time of the loop body is needed. If \( B \) consists of straight-line code, simply sum the time to execute each statement in the sequence. To handle control flow, we assume a probability for each branch and compute the weighted mean of the branches. Once the sequential running time of the loop body \( t(B) \) is computed, then the running time for the inner loop is given by the formula:
\[ w_n(t(B) + o), \]
where \( o \) is the sequential loop overhead. The running time for the entire loop nest is then given by the following:
\[ w_1(\ldots(w_n(t(B) + o)\ldots) + o). \]
In order to estimate the running time of a parallel loop, we need to take into account any overhead introduced by the parallel loop. Our experiments on uniform shared-memory machines indicate that this overhead consists of a fixed cost \( c_s \) of starting the parallel execution and a cost \( c_f \) of forking and synchronizing each parallel process. If there are \( P \) parallel processors, an estimate of the cost of executing the inner loop of the above example in parallel is given by the equation
\[ c_s + c_f P + \left\lceil \frac{w_n}{P} \right\rceil (t(B) + o). \]
This formula assumes that the iterations are divided into nearly equal blocks at startup time and the overhead of an iteration \( o \) remains the same. Given a perfect loop nest where just one loop is being considered for parallel execution, these two formulae may be generalized to compute the expected sequential and parallel execution time. If the parallel execution time is less than the sequential execution time, it is profitable to run the loop in parallel.
To enable the parallel code generator to compare the costs of different transformation choices, we introduce the following cost function:
\[ \text{cost}(\mathcal{L}, \text{how}, B), \]
where
\[ \mathcal{L} = \{l_1, \ldots, l_n\}, \] a perfect loop nest
\[ \text{how} \] indicates whether \( l_n \) is parallel (||) or sequential
\[ B = \] the loop body
The function \( \text{cost} \) estimates the running time of a loop nest \( l_1, \ldots, l_n \), where the inner loop \( l_n \) is specified as either parallel or sequential, and all outer loops are sequential. The loop body \( B \) may contain any types of statements, including calls and inner loop nests.
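A small numeric sketch of this machine model is shown below. The parameter values and the body cost `t_body` (standing in for \( t(B) \)) are invented for illustration; the paper does not supply concrete constants.

```python
import math

# Hypothetical machine parameters (the paper leaves them unspecified).
o   = 2.0     # per-iteration loop overhead
c_s = 50.0    # fixed cost of starting a parallel loop
c_f = 5.0     # per-process fork/synchronization cost
P   = 20      # number of processors

def seq_cost(widths, t_body):
    """w_1(...(w_n(t(B) + o)...) + o) for a perfect nest run sequentially."""
    t = t_body
    for w in reversed(widths):           # innermost loop first
        t = w * (t + o)
    return t

def par_cost(widths, t_body):
    """Innermost loop run in parallel: c_s + c_f*P + ceil(w_n/P)(t(B) + o),
    wrapped in the remaining (sequential) outer loops."""
    *outer, w_n = widths
    t = c_s + c_f * P + math.ceil(w_n / P) * (t_body + o)
    for w in reversed(outer):
        t = w * (t + o)
    return t

def cost(widths, how, t_body):
    """cost(L, how, B): 'how' says whether the innermost loop is
    parallel ('||') or sequential."""
    return par_cost(widths, t_body) if how == "||" else seq_cost(widths, t_body)

# Parallel execution of the inner loop is profitable when it is cheaper.
widths, t_body = [100, 100], 3.0
print(cost(widths, "||", t_body) < cost(widths, "sequential", t_body))  # True
```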
#### 5.2 Code Generation Algorithm
The goal of our interprocedural parallel code generation algorithm is to introduce effective loop parallelism for programs which contain procedure calls and loops. This algorithm applies the following transformations: loop fusion, loop interchange, loop distribution, loop embedding, loop extraction, and loop parallelization. These transformations are applied at call sites and for a loop nest containing call sites. The algorithm seeks a minimum cost single loop parallelization based on performance estimates.
Potential loop and call sequences that may benefit from these interprocedural transformations are adjacent procedure calls, loops adjacent to calls, and loop nests containing calls. To find candidates for interprocedural optimization, the augmented call graph is traversed in a top-down pass.
```
BestCost (S, L)
/* Input: a set of statements S = {s_1, ..., s_p} in a perfect loop nest L = {l_1, ..., l_n} */
/* Output: a tuple (\tau, T), where \tau is the minimum execution time and */
/*         T is the set of transformations that result in \tau */
(\tau, T) = (cost(L, sequential, S), \emptyset)
if (L = \emptyset) then
    if (checkFusion(S) & (fused loop l_f is ||)) then
        (\tau, T) = min((cost(l_f, ||, body(l_f)), {fuse, make l_f ||}), (\tau, T))
    return (\tau, T)
endif
for i = 1 to n
    if (l_i is ||) then
        (\tau, T) = min((cost({l_1, ..., l_i}, ||, body(l_i)), {make l_i ||}), (\tau, T))
        if (i != n) then return (\tau, T)
    endif
endfor
if (checkFusion(S)) then
    if (fused loop l_f is ||) then
        if (checkInterchange(l_n, l_f) & (l_f is || after interchange)) then
(1)         (\tau, T) = min((cost({l_1, ..., l_{n-1}, l_f}, ||, body(l_f)),
                             {fuse, interchange, make l_f ||}), (\tau, T))
        else
(2)         (\tau, T) = min((cost({l_1, ..., l_n, l_f}, ||, body(l_f)),
                             {fuse, make l_f ||}), (\tau, T))
        endif
    else if ((l_n is ||) & checkInterchange(l_n, l_f) & (l_n is || after interchange)) then
(3)         (\tau, T) = min((cost({l_1, ..., l_{n-1}, l_f, l_n}, ||, body(l_f)),
                             {fuse, interchange, make l_n ||}), (\tau, T))
    endif
endif
return (\tau, T)
```
If a candidate benefits from interprocedural transformation, the transformations are performed and no further optimization of that call sequence is attempted. Additional candidates for optimization may be created by using judicious code motion and loop coalescing (combining nested loops into a single loop) [33].
**BestCost Algorithm**
**BestCost** considers \( \mathcal{L} = \{l_1, \ldots, l_n\} \), a perfect loop nest with body \( S = \{s_1, \ldots, s_p\} \), where \( l_n \) is the innermost loop and \( \mathcal{L} \) may be the empty set \( \emptyset \). \( S \) consists of at least one call and may also contain other statements such as loops, control flow, and assignments.
The **BestCost** algorithm makes use of loop parallelization, fusion, interchange, extraction, and embedding (loop distribution is excluded) to determine a tuple \((\tau, T)\), such that \( \tau \) is the best execution time and \( T \) specifies the transformations needed to obtain this time. Unfortunately, finding the best ordering of a loop nest via loop interchange requires that all possible permutations (\( n! \)) be considered. Therefore, to restrict the search space and simplify this presentation, **BestCost** only considers loop interchange of \( l_n \), the innermost loop of the nest, and \( l_f \), the result of fusing \( S \). However, opportunities to test various interchange strategies are pointed out in the text.
The sequential execution time is computed first \((T = \emptyset)\). If there is no surrounding loop nest \((\mathcal{L} = \emptyset)\), \( S \) may be a group of adjacent calls and loops that can be fused. If fusion of all members of \( S \) is possible and produces a parallel loop, its execution time is computed and compared to the sequential cost using the function \( \min \). The function \( \min \) assigns \( \tau \) the minimum of the two times, and \( T \) the corresponding program transformation. If \( \mathcal{L} \neq \emptyset \), other transformations are considered as follows.
First, the outermost parallel loop of \( \mathcal{L} \) is sought and compared with the sequential time. If any of \( l_1 \ldots l_{n-1} \) are parallel, **BestCost** returns. Loop interchange outward of any of these parallel loops could also be considered. Otherwise, if all of \( S \) fuses into \( l_f \), three transformations on \( l_f \) and \( l_n \) are considered.
1. Interchanging a parallel \( l_f \) with \( l_n \) to make a parallel loop with increased granularity.
2. A parallel \( l_f \) in its current position.
3. Interchanging \( l_n \) and \( l_f \) to introduce inner loop parallelism.
Case 1 is illustrated in Examples 1 and 2. Further interchanging of \( l_f \) to enable a more outer loop to be parallel may also be tested here.
**Embedding versus Extraction**
To apply the set of transformations specified by \((\tau, T)\), the loops involved may need to be placed in the same routine. In particular, if \( T \) specifies interchange or fusion across a call, then one of embedding or extraction must be applied. If there is only one call, then embedding loop \( l_n \) into the called procedure is preferable because it reduces procedure call overhead. If there is more than one call and \( T \) requires fusion, extraction from all the calls is performed. Fusion, interchange, and parallelization may then be performed on the transformed loops.
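A compact sketch of this placement rule (the call-site representation and flag are purely illustrative):

```python
def choose_placement(call_sites, needs_cross_call_fusion):
    """Decide where loops should live before fusion, interchange, and
    parallelization are applied, following the rule described above:
    a single call favors embedding (less call overhead), while fusion
    across several calls requires extracting the enclosing loop from each."""
    if len(call_sites) == 1 and not needs_cross_call_fusion:
        return [("embed", call_sites[0])]
    return [("extract", c) for c in call_sites]

print(choose_placement(["S"], False))        # [('embed', 'S')]
print(choose_placement(["S", "T"], True))    # [('extract', 'S'), ('extract', 'T')]
```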
**Loop Distribution**
If $BestCost(S, \mathcal{L})$ cannot introduce parallelism, then it may be possible to use loop distribution to do so. Loop distribution seeks parallelism by separating independent parallel and sequential statements in $\mathcal{L}$. For example, loop distribution may create loop nests of adjacent calls and loops which $BestCost$ can optimize.
Ordered Partitions. Loop distribution is safe if the partition of statements into new loops preserves all of the original dependences [24, 32]. Dependences are preserved if any statements involved in a cycle of dependences, a recurrence, are placed in the same loop (partition). The dependences between the partitions form an acyclic graph that can always be ordered using topological sort [3, 28].
By first choosing a safe partition with the finest possible granularity and then grouping partitions, larger partitions may be formed. Any one of these groupings may expose the optimal parallelization of the loop. Unfortunately, there exists an exponential number of possible groupings [2].
To limit the search space, statement order is fixed based on a topological sort of all the dependences for $\mathcal{L}$. Ambiguities are resolved in favor of placing parallel partitions adjacent to each other. The advantage of this ordering is that loop-carried anti-dependences may be broken, allowing parallelism to be exposed.
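Concretely, the finest safe partition and its ordering can be obtained as the strongly connected components of the statement-level dependence graph, emitted in topological order. The sketch below uses Kosaraju's algorithm on an illustrative dependence graph; it is a generic construction, not the paper's implementation.

```python
from collections import defaultdict

def ordered_partitions(n_stmts, deps):
    """Finest safe loop-distribution partitions for statements 0..n_stmts-1.
    `deps` is a set of (src, dst) dependence edges.  Statements in a
    dependence cycle (a recurrence) must share a partition, so partitions
    are the strongly connected components of the dependence graph,
    returned in topological order (Kosaraju's algorithm)."""
    g, rg = defaultdict(list), defaultdict(list)
    for s, d in deps:
        g[s].append(d)
        rg[d].append(s)

    seen, order = set(), []

    def dfs(v, adj, out):
        """Iterative DFS collecting vertices in postorder."""
        stack = [(v, iter(adj[v]))]
        seen.add(v)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                out.append(node)

    for v in range(n_stmts):
        if v not in seen:
            dfs(v, g, order)

    seen, parts = set(), []
    for v in reversed(order):            # decreasing finish time
        if v not in seen:
            comp = []
            dfs(v, rg, comp)             # one SCC per tree in the reverse graph
            parts.append(sorted(comp))
    return parts

# Statements 1 and 2 form a recurrence; 0 feeds 1; 3 depends on 2.
print(ordered_partitions(4, {(0, 1), (1, 2), (2, 1), (2, 3)}))
# [[0], [1, 2], [3]]
```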
Grouping partitions via dynamic programming. A dynamic programming solution is used to compute the best grouping for the finest granularity ordered partitions. This algorithm is similar to techniques for calculating the shortest path between two points in a graph [31]. The algorithm is $O(N \cdot M^3)$. $N$ is the number of perfectly nested loops. $M$ is the maximum number of partitions and is less than or equal to the number of statements in the loop. Both $N$ and $M$ are typically small numbers.
The dynamic programming solution appears in Figure 3. The algorithm begins by finding the finest partition for the inner loop $l_i$ that satisfies its own dependences and the ordering constraints. On subsequent iterations, the initial partition is further constrained by including the dependences for the next outer loop. Since an inner loop may have more partitions than its enclosing loop, a map is constructed that correlates a statement's partition for the previous and current iteration; $map(j)$ returns the partition from $l_{i+1}$ that corresponds to $\pi_j$ in $l_i$.
For each loop level, $BestCost$ calculates the best execution time of each possible grouping of partitions. The grouping algorithm first tests the finest partition and then each pair of adjacent partitions. Increasingly larger groupings of partitions are tested for a particular loop level. At each level, the minimal execution time for each grouping analyzed is stored. The minimal grouping time is taken from the grouping at this level, as well as that of the previous inner loops. This strategy allows inner loop distributions to be used within an outer loop distribution to minimize overall execution time. On completion, the best execution time for the grouping of the entire loop nest is determined.
Each time the algorithm locates a grouping of partitions that improves execution time, a set $D$ is constructed to describe how partitions are grouped together. For a loop $l_i$, $D_{i,m}$ provides the best grouping of partitions at loop $l_i$. Upon termination of the algorithm, $D_{i,m}$ indicates the final grouping with the minimal cost. Implicit in $D$ is also a description of any additional transformations specified by $BestCost$.
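The grouping step can be sketched as a one-dimensional dynamic program over the ordered partitions. The cost function here is a made-up stand-in for the machine-model estimates of Section 5.1, and the real algorithm additionally iterates over loop levels and records the chosen groupings in the sets $D_{i,m}$.

```python
from functools import lru_cache

def best_grouping(partitions, group_cost):
    """Group an ordered sequence of finest partitions into contiguous loops
    so that the summed per-loop cost is minimal.  group_cost(i, j) is the
    estimated execution time of one loop containing partitions i..j-1; any
    profitable fusion of adjacent parallel partitions shows up as a cheaper
    group cost."""
    m = len(partitions)

    @lru_cache(maxsize=None)
    def best(i):
        if i == m:
            return 0.0, ()
        options = []
        for j in range(i + 1, m + 1):          # next loop covers partitions i..j-1
            tail_cost, tail_groups = best(j)
            options.append((group_cost(i, j) + tail_cost,
                            (tuple(range(i, j)),) + tail_groups))
        return min(options)

    return best(0)

# Toy cost: each partition costs 10 alone; fusing adjacent partitions saves
# 4 per extra member (hypothetical numbers, not the paper's estimates).
total, groups = best_grouping(
    ["p0", "p1", "p2", "p3"],
    lambda i, j: 10.0 * (j - i) - 4.0 * (j - i - 1))
print(total, groups)   # 28.0 ((0, 1, 2, 3),)
```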
Improvements. To leverage the dynamic programming solution, the distribution algorithm generates partitions based on a fixed statement order that satisfies all the dependences. A correct and less restrictive statement order uses only the dependences for the particular loop nest being distributed. In general, this ordering causes the map between solutions for adjacent loop partitions to be useless. It provides a single best solution for each nesting level of distribution instead of one overall best solution. In practice, experimentation will be needed to differentiate these strategies.
### 6 Experimental Validation
This section presents significant performance improvements due to interprocedural transformation on two scientific programs, spec77 and ocean, taken from the Perfect Benchmarks [16]. Spec77 contains 3278 non-comment lines and is a fluid dynamics weather simulation that uses Fast Fourier Transforms and rapid elliptic problem solvers. Ocean has 1902 non-comment lines and is a 2-D fluid dynamics ocean simulation that also uses Fast Fourier Transforms.
To locate opportunities for transformations, we browsed the dependences in the program using the ParaScope Editor [6, 25, 26]. Using other ParaScope tools, we determined which procedures in the program contained procedure calls. We examined the procedures containing calls, looking for interesting call structures. We located adjacent calls, loops adjacent to calls, and loops containing calls which could be optimized.
The rest of this section describes our experiences executing these programs on a 20-processor Sequent Symmetry S81. Since the optimizations used and the experimental methodology differed slightly for each program, they are described separately.
#### 6.1 Optimizing spec77
In spec77, loops containing calls were common. Overall, transformations were applied to 19 such loops. Embedding and interchange were applied to 8 loops which contained calls to a single procedure. The remaining 11 loops, which contained multiple procedure calls, were optimized using extraction, fusion and interchange. These loops were found in procedures dell, gloop and gwater.
For the 19 transformed loops, performance was measured among three possibilities: (1) no parallelization of loops containing procedure calls, (2) parallelization using interprocedural information, and (3) interprocedural information and transformations. To obtain these versions, the steps illustrated in Figure 4 were performed.
The Original version contains directives to parallelize the loops in the leaf procedures that are invoked by the 19 loops of interest. The IPinfo version parallelizes the 19 loops containing calls. For the IPtrans version, we performed interprocedural transformation followed by outer loop parallelization. The parallel loops in each version were also blocked to allow multiple consecutive iterations to execute on the same processor without synchronization. The compiler default is to create a separate process for each iteration of a parallel loop.
<table>
<thead>
<tr>
<th>Version (Processors = 7)</th>
<th>Time in optimized portion</th>
<th>Speedup</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>81.9s</td>
<td>5.7</td>
</tr>
<tr>
<td>IPinfo</td>
<td>80.9s</td>
<td>5.8</td>
</tr>
<tr>
<td>IPtrans</td>
<td>80.6s</td>
<td>5.8</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Version (Processors = 19)</th>
<th>Time in optimized portion</th>
<th>Speedup</th>
</tr>
</thead>
<tbody>
<tr>
<td>Original</td>
<td>45.8s</td>
<td>10.1</td>
</tr>
<tr>
<td>IPinfo</td>
<td>48.0s</td>
<td>9.7</td>
</tr>
<tr>
<td>IPtrans</td>
<td>36.4s</td>
<td>12.7</td>
</tr>
</tbody>
</table>
The results reported above are the best execution time in seconds for the optimized portions of each version. The speedups are compared against the execution time in the optimized portion of the program on a single processor, which was 463.7s. This accounted for more than 21 percent of the total sequential execution time.
With seven processors, the results are similar for all three versions, since each program version provided adequate parallelism and granularity for seven processors. On 19 processors, IPinfo was slower than the original program because the parallel outer loops had insufficient parallelism (only 7 to 12 iterations). The parallel inner loops of Original were better matched to the number of processors because they had at least 31 iterations. The interprocedural transformation version IPtrans demonstrated the best performance, a speedup of 12.7, because it combined the amount of parallelism in Original with increased granularity. The interprocedural transformations resulted in a 21 percent improvement in execution time over Original in the optimized portion.
Parallelizing just these 19 loops resulted in a speedup for the entire program of about 1.25 on 19 processors and 1.23 on 7 processors. Higher speedups might result from parallelizing the entire application.
#### 6.2 Optimizing ocean
There were 31 places in the main routine of ocean where we extracted and fused interprocedurally adjacent loops. They were divided almost evenly between adjacent calls and loops adjacent to calls. In all 15 cases where a loop was adjacent to a call, the loop was 2-dimensional, while the loop in the called procedure was 1-dimensional. Prior to fusion, we coalesced the 2-dimensional loop into a 1-dimensional loop by linearizing the subscript expressions of its array references. The resulting fused loops consisted of between 2 and 4 parallel loops from the original program, thus increasing the granularity of parallelism.
To measure performance improvements due to interprocedural transformation, we performed steps similar to those in Figure 4. Directives forced the parallelization and blocking of the individual loops in the Original version, and the fused loops in IPtrans. The execution times were measured for the entire program and just the optimized portion.
The speedups are relative to the time in the optimized portion of the sequential version of the program, which was 645.9 seconds. The optimized code accounted for about 5 percent of total program execution time. For the whole program, the parallelized versions achieve a speedup of about 1.06 over the sequential execution time.
Note that IPtrans achieved a 32 percent improvement over Original in the optimized portion. This improvement resulted from increasing the granularity of parallel loops and reducing the amount of synchronization. It is also possible that fusion reduced the cost of memory accesses. Often the fused loops were iterating over the same elements of an array. These 31 groups of loops were not the only opportunities for interprocedural fusion; there were many other cases where fusion was safe, but the numbers of iterations were not identical. Using a more sophisticated fusion algorithm might result in even better execution time improvements.
### 7 Related Work
While the idea of interprocedural optimization is not new, previous work on interprocedural optimization for parallelization has limited its consideration to inline substitution [4, 13, 23] and interprocedural analysis of array side effects [5, 9, 12, 20, 29, 30, 35]. The various approaches to array side-effect analysis must make a tradeoff between precision and efficiency. The section analysis used here loses precision because it only represents a few array substructures, and it merges the sections for all references to a variable into a single section. However, these properties make it efficient enough to be used extensively during code generation. In addition, experiments with regular section analysis on the LINPACK library demonstrated a 33 percent reduction in parallelism-inhibiting dependences, allowing 31 loops containing calls to be parallelized [20]. When these numbers are compared against published results of more precise techniques, the increased precision of the other techniques offered no additional benefit [29, 30, 35].
Sections inspired a similar but more detailed array summary analysis, data access descriptors, which stores access orders and expresses some additional shapes [5, 21, 22]. In fact, the slice annotation to sections could be obviated by using some of the techniques of Huelsergen et al. for determining exact array descriptors for use in dependence testing. However, slices are appealing due to our existing implementation and their simplicity.
### 8 Conclusions
This paper has described a compilation system; introduced two interprocedural transformations, loop embedding and loop extraction; and proposed a parallel code generation strategy. The usefulness of this approach has been illustrated on the Perfect Benchmark programs spec77 and ocean. Taken as a whole, the results indicate that providing freedom to the code generator becomes more important as the number of processors increases. Effectively utilizing more processors requires more parallelism in the code. This behavior was particularly evident in spec77, where the benefits of interprocedural transformations increased with the number of processors.
Although it may be argued that scientific programs structured in a modular fashion are rare in practice, we believe that this is an artifact of the inability of previous compilers to perform interprocedural optimizations of the kind described here. Many scientific programmers would like to program in a more modular style, but cannot afford to pay the performance penalty. By providing compiler support to effectively optimize procedures containing calls, we encourage the use of modular programming, which, in turn, will make these transformations applicable on a wider range of programs.
Acknowledgments
We are grateful to Paul Havlak, Chau-Wen Tseng, Linda Torczon and Jerry Roth for their contributions to this work. Use of the Sequent Symmetry S81 was provided by the Center for Research on Parallel Computation under NSF Cooperative Agreement # CDA8619893.
References
Embracing Technical Debt, from a Startup Company Perspective
Terese Besker 1, Antonio Martini 2a,b, Rumesh Edirisooriya Lokuge 3, Kelly Blincoe 3, Jan Bosch 1
1Computer Science and Engineering, Software Engineering, Chalmers University of Technology Göteborg, Sweden, besker@chalmers.se, jan.bosch@chalmers.se
2CA Technologies Strategic Research Team, Barcelona, Spain
2bProgramming and Software Engineering, University of Oslo Oslo, Norway, antonima@ifi.uio.no
3Dept. of Electrical and Computer Engineering, The University of Auckland Auckland, New Zealand, kblincoe@acm.org, redi099@aucklanduni.ac.nz
Abstract— Software startups are typically under extreme pressure to get to market quickly with limited resources and high uncertainty. This pressure and uncertainty is likely to cause startups to accumulate technical debt as they make decisions that are more focused on the short-term than the long-term health of the codebase. However, most research on technical debt has been focused on more mature software teams, who may have less pressure and, therefore, reason about technical debt very differently than software startups. In this study, we seek to understand the organizational factors that lead to and the benefits and challenges associated with the intentional accumulation of technical debt in software startups. We interviewed 16 professionals involved in seven different software startups. We find that the startup phase, the experience of the developers, software knowledge of the founders, and level of employee growth are some of the organizational factors that influence the intentional accumulation of technical debt. In addition, we find the software startups are typically driven to achieve a “good enough level,” and this guides the amount of technical debt that they intentionally accumulate to balance the benefits of speed to market and reduced resources with the challenges of later addressing technical debt.
Keywords— Technical Debt, Startup, Software development
### I. INTRODUCTION
Software startups are freshly created companies with no operating history and mainly oriented towards developing high-tech and innovative products, aiming to grow their business in highly scalable markets [18], [10]. Startups often operate with limited resources and under extreme time pressure as they strive to produce their product and avoid being beaten to market by a competitor or running out of capital [19]. Thus, startups typically develop early software versions to test and validate emerging ideas to avoid wasteful implementation of complicated software which may be unsuccessful in the markets [26]. Under these conditions, often the extra effort required to design and implement software with an optimal design is considered an unaffordable luxury and a potential waste of time and effort.
Software companies often make sub-optimal design decisions to allow them to get to market quickly [19]. For instance, the product might be built with an inflexible architecture that cannot be easily changed, in order to speed up time-to-market and let the startup put their product in users’ hands earlier, get feedback, and evolve it [3]. If and when the developed software becomes successful on the market, the pressure then turns to modifying the software to meet user needs (i.e., adding new features). This can cause startups to build upon the original inflexible architecture that was not designed to last for the long term and is not easily extendable.
The result of this situation is the accrual of what is described as Technical Debt (TD). The TD metaphor was first coined at OOPSLA ’92 by Ward Cunningham [8], to describe the need to recognize the potential long-term negative effects of immature code that is made during the software development lifecycle. A recent definition was provided by Avgeriou et al. [4] who define TD as “In software-intensive systems, technical debt is a collection of design or implementation constructs that are expedient in the short term, but set up a technical context that can make future changes more costly or impossible. Technical debt presents an actual or contingent liability whose impact is limited to internal system qualities, primarily maintainability and evolvability”.
TD has been the focus of much recent research, but this research has been mostly focused on mature software companies, where a large amount of TD is considered to be detrimental to the long-term success of software development [24]. For startups, however, deliberately accumulating TD could be much more beneficial since it can considerably speed up time-to-market, allowing them to release their product to end-users faster, get feedback, evolve the software, and preserve capital [14]. However, TD must be managed to ensure it is addressed at an appropriate time; unmanaged TD can have negative consequences, such as the death of the startup itself [7].
There is a current paucity of empirical research focusing specifically on TD and startups [25]. This paper reports on a qualitative study that examines the organizational factors that influence the introduction of TD and the benefits and challenges of deliberately taking on TD. Through interviews with 16 professionals at seven different startups, we identified six organizational factors that lead to TD. In addition, we present a list of benefits and challenges of TD in startups, which practitioners can consider to aid them in their TD decisions.
The remainder of this paper is structured as follows: In Section II we describe the background and related work. Our research methods are described in Section III. We describe the cases in Section IV. The results are presented in Section V. Finally, we discuss the implications and limitations of our work in Section VI, and offer a brief conclusion in Section VII.
### II. BACKGROUND AND RELATED WORK
In this section, we provide a complete description of a software startup, provide some background on the startup lifecycle, and review related work on TD in startups.
### A. Software Startups: A Definition
Giardino et al. [10] define software startups as those “organizations focused on the creation of high-tech and innovative products, with little or no operating history, aiming to aggressively grow their business in highly scalable markets”. Sutton [23] presents different characteristics that reflect both engineering and business concerns, which software startup companies must operate within. Software startups are relatively young and inexperienced compared to more established and mature development organizations, and they commonly have very little accumulated experience or history. Typically, their resources are limited, and they primarily focus on getting the product out, promoting the product, and building up strategic alliances. Their business is dependent on influences from various sources, such as investors, customers, partners, and competitors. The software these startup companies are developing is commonly a technologically innovative product, and its development often involves cutting-edge development tools and techniques [23].
### B. Software Startups Life Cycle
Crowne [9] identified four distinct stages for a software startup: startup, stabilization, growth, and maturity. Each stage has different types of critical product development issues that potentially can lead to company failure. The first “startup” phase refers to the period between product idea and the first sale. This stage is characterized by a product that does not yet meet the customer’s requirements, is unreliable, and fails frequently. Rectifying defects takes longer than expected and often creates additional defects [9]. The second “stabilization” phase begins when the first customer takes delivery of the product and ends when the product is stable enough to be commissioned without any overhead on product development. During this stage, a divide can be spotted between the developers who joined the company early and those recruited later, with the early developers mounting significant resistance to organizational change. During this stage, the non-functional requirements such as security, reliability, scalability, and performance gain additional attention, and the result of the previously introduced sub-optimal solutions becomes evident [9]. The third “growth” phase takes place when the product can be commissioned for new customers without creating any overhead on the development team. This phase ends when market size, share, and growth rate have been established, and all business processes necessary to support product development and sales are in place. In this stage, implementing new features requires a coordinated program of activities across functional areas including product development, professional services, support, and sales and marketing, which stresses the importance of having a repeatable process for software development implementation. The last “maturity” stage occurs when the company has evolved from a startup into a mature organization, where, e.g., market size, share, and growth rate have been established. In this stage, all processes necessary to support product development and sales are also in place [9].
### C. Startups and Technical Debt
There is a lack of research studies on TD management in software startups [25]. Giardino et al. [10] conducted an empirical study addressing how startups employ software development strategies, using a Greenfield Startup Model (GSM), which also covers startups and TD to some extent. Giardino et al. describe that, to be faster, startups may introduce TD as an investment, whose repayment may never come due, with long-term negative effects on morale, productivity, and product quality. Further, in their study they state that “Startups achieve high development speed by radically ignoring aspects related to documentation, structures, and processes”, and that “instead of traditional requirement engineering activities, startups make use of informal specification of functionalities through ticket-based tools to manage low-precision lists of features to implement, written in the form of self-explanatory user stories”.
Gralha et al. [21] investigated the evolution of requirements practices of software startups. They found that TD is one of the six factors that influence the requirements practices of a startup. They identified three phases regarding the accumulation of TD in startups. They also identified trigger points that cause startups to transition from one phase to the next. An increase in the number of employees and software features causes startups to transition from simply knowing and accepting TD to tracking and recording it. Then, when their client retention rate goes down, or they begin to see an increase in negative feedback, they begin to manage and control TD.
Another study which to some extent covers TD in startups is presented by Yli-Huumo et al. [28]. In that study, they investigate the relationship between business model experimentation and TD, with the goal of understanding whether conducting these types of experiments has any effect on the amount of TD occurring during the software life cycle. Business model experimentation in their study refers to a company validating assumptions made about a product with real customers before the actual product is created; an example is when a Minimum Viable Product (MVP) is used to test such assumptions before the actual product is built.
In a recent study, Klotins et al. [13] explore how startups estimate TD, the precedents for accumulating TD, and to what extent startups experience outcomes associated with TD. They found that TD peaks at the growth stage, that the number of people in a team amplifies precedents for TD, and that there is an association between a startup’s outcome and its TD management strategy.
Unterkalmsteiner et al.'s [25] research agenda for software startups states that researchers must build a more comprehensive, empirical knowledge base to support forthcoming software startups. They list several research questions related to TD and state that answering these questions could help clarify the role of design decisions in software development in the context of a software product roadmap, similarly to what happens in other engineering disciplines. Overall, the research questions listed by Unterkalmsteiner et al. [25] address how practitioners can make better decisions considering the characteristics of the current software product implementation.
### III. RESEARCH METHODOLOGY
The goal of this study is to understand how software startups reason about TD. In particular, we are interested in the organizational factors that impact TD together with the potential benefits and challenges of TD. We, therefore, aim at answering the following research questions:
**RQ1**: What organizational factors influence the accumulation of TD in software startups?
**RQ2**: What are the challenges and benefits of Technical Debt for software startups?
In order to answer these research questions, we investigated the strategy of software development in different software startup companies by interviewing 16 practitioners in seven different startup companies, working in seven different areas.
### A. Participants
We collected data from software professionals active in seven different software startup companies, shown in TABLE I. The sample population was selected using a non-probability sampling technique [27], where the selection of participant companies was obtained using convenience sampling. The startup companies were located in two different countries. The companies are described in more detail in Section IV.
### B. Data Collection
Initially, we ran two workshops (one in each country) with participants from four different startups (A, B, C, and D). The workshops included a presentation about TD by one of the authors, followed by a group discussion in which the participants explored their own experiences with TD within their startup companies. Each workshop lasted about 120 minutes, and in total 12 practitioners from the investigated startup companies participated.
The goal of these workshops was to introduce the participants to the study, to align and equip them with relevant knowledge about the concept of TD and to gather background and contextual information on each participating startup company in preparation for the following interviews.
We conducted semi-structured (as suggested in [20]), face-to-face interviews with 16 professionals from seven different companies in two different countries. To improve the reliability of collected data at least two of the authors participated in each interview session. Each interview lasted between 60 and 120 minutes and was digitally recorded and transcribed verbatim. The questions were prepared by three of the authors together.
The aim of the interviews was to understand the accumulation and refactoring of TD and what contextual aspects (related to the startup's environment) influenced such accumulation. We started by asking participants to describe their startup company and product. We asked follow-ups to learn about the contextual aspects of the startups (inspired by [18]). Next, we asked about TD. Specifically, we asked:
- Describe some critical TD issues.
- Which TD issues were refactored (and when)?
- Which TD issues are planned to be refactored (and when)?
- If TD issues are not planned to be refactored, why not?
- What value did the accumulated TD give the company?
- What cost was (or will be) paid to remove the TD?
- What extra costs were (or will be) paid because of the TD?
- What led to the accumulation of TD?
- What roles, processes, guidelines, and strategies were used for TD?
### TABLE I. STUDY PARTICIPANTS
<table>
<thead>
<tr>
<th>Role</th>
<th>Company</th>
<th>Country</th>
<th>Segment</th>
</tr>
</thead>
<tbody>
<tr>
<td>Developer</td>
<td>A</td>
<td>Sweden</td>
<td>Sport</td>
</tr>
<tr>
<td>Developer</td>
<td>B</td>
<td>Sweden</td>
<td>Energy</td>
</tr>
<tr>
<td>Developer</td>
<td>C</td>
<td>New Zealand</td>
<td>Retail</td>
</tr>
<tr>
<td>CEO / Developer</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Co-founder / Developer</td>
<td>C</td>
<td>New Zealand</td>
<td>Medical</td>
</tr>
<tr>
<td>CEO</td>
<td>D</td>
<td>New Zealand</td>
<td>Medical</td>
</tr>
<tr>
<td>Advisor (Business and Technology)</td>
<td>E</td>
<td>Sweden</td>
<td>Media</td>
</tr>
<tr>
<td>CEO</td>
<td>F</td>
<td>Sweden</td>
<td>Software Development</td>
</tr>
<tr>
<td>Chairman of the board</td>
<td>G</td>
<td>Sweden</td>
<td>Mental Health</td>
</tr>
</tbody>
</table>
Finally, to get more insight into the existing TD, we also jointly ran the software tools SonarQube [2] and AnaConDebt [1] during the interviews. None of the companies had previously used these tools, and they were not familiar with the output from the tools in advance. We asked questions on:
- What issues were revealed and were they already known?
- Would it have helped to use the tool (and when)?
- Will you use the tool in the next iterations?
### C. Data Analysis
We used thematic analysis [5] to identify, analyze, and report patterns and themes within the interview data. Thematic analysis involves searching across a dataset to find repeated patterns of meaning. The thematic analysis provides a flexible and useful research tool, which offers a detailed, and yet complex account of the collected data.
The thematic analysis was conducted using a six-phase guide. First, the audio-recorded qualitative data collected from interviews were transcribed, and we familiarized ourselves with the data through careful reading of the transcripts. The second step involved the production of initial codes from the data, where we organized the data into meaningful groups. The third phase focused on searching for themes by sorting the different codes into potential themes and collating all the relevant coded data extracts within each identified theme. Each extract of data was assigned to at least one theme and, in many cases, to multiple themes. For example, the citation “if it [the software from a third-party application] lifts and take off, we can build our own solution” was coded as “Third party” in the theme “Software development Process.” To ensure that the coding was performed in a consistent and reliable fashion and in order to triangulate the interpretation of the data and to avoid bias as much as possible, two authors synchronized some of the output of the coding, following guidelines provided by Campbell et al. [6]. The fourth phase focused on the revised set of candidate themes, involving the refinement of those themes. When needed, we revised the themes or created a new theme. The fifth phase focused on identifying the essence of each theme and determining what aspect of the data is captured by each theme. The final phase of the thematic analysis took place when we had a set of fully developed themes, and involved the final analysis and write-up of the publication. We have made a figure illustrating how the codes and the corresponding themes were assigned during the thematic analysis available at https://figshare.com/articles/Thematical_Analysis/6115172.
### IV. DESCRIPTION OF CASES
In this section, to provide more context for our study, we describe the companies in more detail. TABLE II summarizes the seven companies that participated in this study. As can be seen, there is diversity across all aspects. We also indicate the startup stage for each company (using the stages in Crowne’s [9] classification of startups, which we described in Section II.B). Across the seven cases, all stages are represented by at least one of the cases in this study.
Figure 1 shows how TD was accumulated or addressed in each stage. All companies reported accumulating significant TD in the startup phase. Surprisingly, two companies reported undertaking either a major refactoring or a complete redesign during the startup phase prior to securing their first customer. Both of these cases were due to unintentional issues with the code or the design. During the stabilization phase, most companies reported addressing the TD that accumulated in the previous stage either by taking on formal refactoring initiatives or by informally removing TD as needed. The two companies in the growth and maturity stages indicated that most of the TD had been addressed before entering these stages. Only two of the companies, C and F, had not yet performed a large refactoring or redesign, but both planned this for the future.
### TABLE II. DESCRIPTION OF CASES
<table>
<thead>
<tr>
<th>Company</th>
<th>Product</th>
<th>Domain</th>
<th>Years since founding</th>
<th>Founders SW Knowledge</th>
<th>Software developed</th>
<th>Current Employees</th>
<th>Experience of Software Developers</th>
<th>Development Practices</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Mobile app</td>
<td>Sport</td>
<td>2.5</td>
<td>None</td>
<td>Initially external then in-house</td>
<td>Founder, CTO, CMO, 3 developers, one salesperson</td>
<td>2 junior, 1 senior + senior CTO</td>
<td>Some agile practices (e.g. sprint planning)</td>
</tr>
<tr>
<td>B</td>
<td>Mobile and web apps</td>
<td>Energy</td>
<td>6</td>
<td>High</td>
<td>In-house</td>
<td>CEO, 5 developers, two sale reps</td>
<td>4 senior, 1 junior</td>
<td>Scrum</td>
</tr>
<tr>
<td>C</td>
<td>Web app</td>
<td>Retail</td>
<td>2</td>
<td>High</td>
<td>In-house</td>
<td>4 Founders</td>
<td>All junior</td>
<td>No formal process</td>
</tr>
<tr>
<td>D</td>
<td>Web app</td>
<td>Medical</td>
<td>2</td>
<td>None</td>
<td>In-house</td>
<td>3 Founders, 2 Technical staff</td>
<td>All senior</td>
<td>Some agile practices (e.g. Kanban, CI)</td>
</tr>
<tr>
<td>E</td>
<td>SaaS app</td>
<td>Media</td>
<td>9*</td>
<td>Low</td>
<td>In-house</td>
<td>35 employees (Two-thirds are developers)</td>
<td>All junior</td>
<td>Some agile practices</td>
</tr>
<tr>
<td>F</td>
<td>Web app</td>
<td>Software</td>
<td>2</td>
<td>High</td>
<td>Combination in-house and consultant</td>
<td>Founder + consultant as needed</td>
<td>Senior</td>
<td>Scrum</td>
</tr>
<tr>
<td>G</td>
<td>Mobile app</td>
<td>Mental Health</td>
<td>6</td>
<td>None</td>
<td>Initially external then in-house</td>
<td>Founder, CTO, 3 developers, 1 salesperson</td>
<td>3 junior + senior CTO</td>
<td>Scrum</td>
</tr>
</tbody>
</table>
* Today this startup is 9 years old, but the data collected for this startup reflects a time period of 3-5 years after they were founded
### V. RESULTS
The following subsections present results for the research questions presented in Section III and the results are grouped according to each research question.
**A. What organizational factors influence the accumulation of TD in software startups? (RQ1)**
Our analysis has identified many factors that influenced the amount of TD that the startups accumulated.
1) **Experience of software developers**
Our results indicate that the experience level of the software developers can have both positive and negative influence on the accumulation of TD. As startups are typically very small in terms of number of developers initially, the experience level of individual developers can be impactful.
Less experienced (junior) developers often unintentionally accumulate TD due to their lack of experience. As one interviewee from Company A stated, “It’s really good to have at least one guy that is more experience in the team.” Another interviewee from Company E explained this as: “Junior developer are less able to project outcome to the future about how the system is likely to evolve, which means that they have a tendency to focus on the ‘here and now’, and solve the today’s requirement whereas people that are experienced can often predict a little bit more easily what is likely to come in the future and already start to prepare the system for that.” Thus, junior developers are more likely to introduce unintentional TD due to their lack of experience.
More experienced (senior) software developers are more aware of and have accumulated more experience about the effect of introducing TD, compared to junior developers. Thus, having senior developers to guide the development is very beneficial. However, senior developers are more expensive, and startups typically cannot afford to have many senior developers. “I think that it would be very expensive to get another very experienced person. And maybe it’s not worth it.”
In addition to high salary costs, senior developers may be less likely to intentionally accumulate TD if they have experience working on more mature software products that are not under such extreme time pressures to get to market. A participant from Company D stated, “If we had had the knowledge or the insight, we probably would have taken on board technical debt earlier on, but I think because we ended up hiring senior developers that were used to working in certain ways with testing and re-testing everything. They ended up building, a fairly robust, as far as we can tell, but for our purposes, there might have been something over-engineered perhaps.” Senior developers may be less willing to operate in an unstructured and less quality oriented approach. For example, one interviewee from Company A said: “So, you need to be more flexible, and if you are senior maybe you aren’t ready to cope with that.” This could cause startups delays in getting to market if TD is always avoided in favor of producing high quality software.
2) **Software knowledge of startup founders**
We found that the knowledge of the founders, related to software development, has an impact on how TD is accumulated. Founders with limited software development knowledge are less likely to accumulate TD intentionally. Since they are unable to implement the product themselves, they are likely to employ an external consultancy company or hire in-house developers to implement the first software solution, which involves a significant investment prior to being able to receive revenue from the software. The founders typically expect a high-quality implementation in return for this investment since they tend to have no knowledge about the benefits of TD.
On the other hand, when the startup founders are experienced software developers, they are more likely to implement the product on their own. They often accumulate a large amount of TD because they focus on producing the first release quickly. They view the initial release as more expendable since they have not invested money towards its development (despite having invested their time).
3) **Employee growth**
We found that when startup teams were remaining stable in terms of the number of developers, they did not feel a need to reduce their TD since the issues related to the TD affected only the developers, not the customers. The participants did not believe their TD impacted product performance or usability. While the TD did make the code more difficult to extend or modify, the existing developers were already familiar with the TD in the code, so it was not necessary to reduce the TD.
However, the addition of new developers caused the TD to decrease for several reasons. First, the existing developers reduce the technical debt prior to hiring new developers. The developers want the code to be easier to understand so that new developers can be onboarded more quickly. They also do not want new developers to unintentionally introduce additional technical debt because they are modeling their own code on existing TD. For example, an interviewee at company B stated: “But as time goes on, the quality of real code, or its readability and how easy it is to work with, becomes more and more important. It is very easy when you as a developer comes into a project that you start writing code in the way of the existing code base. You kind of go ‘oh, this is how they do it here,’ and that is not always a positive thing. A lot of time that is quite a negative thing, because, you slip into those habits and before you know it, all the things that you personally hold true about what good code is, you are not doing that anymore’.” This fear of duplicating TD was also described by one interviewee from Company A stating: “And if you come in as a new developer, you might copy-paste some code, and you copy-paste that old thing of doing it, and we get the more messy code. And that is what we don’t want.”
In addition to the existing developers purposely reducing TD, new developers also remove TD because it is difficult to extend. The existing developers may no longer notice the problems, while they will be more obvious to the new developers. For example, a developer from Company D said “I mean there’s a big refactor when they brought me on. ...[we] ended up throwing a lot of code out and rewriting it. And that was probably because of the technical debt side of things in there, using constants throughout and the like.” Our results corroborate, to some extent, the results found by Klotins et al. [13] stating that “increase in team size is also associated with outcomes of technical debt”.
4) Uncertainty
In general, uncertainty about the future of the organization and the product is a very common characteristic of startup companies. Our results suggest that, not surprisingly, this uncertainty plays a major role when making decisions about TD. One of the interviewees from Company C put this as “with these sorts of projects, you need to build a business case, and you’d be silly to like build something with no technical debt in it until you’ve at least proven that it’s something you have to pay for. As soon as we confirm that there will be [revenue], and see the money starting to come in, that’s when you probably start to look at the repaying the technical debt”. Another participant from Company D stated “there was a point where basically we said, okay, now we just need to stop spending money because we don’t know if this is even going to be a viable project and if it’s going to generate any money or anybody’s going to want to buy it”. This uncertainty causes startups to accumulate significant TD so they can release a proof-of-concept as quickly as possible. Once their idea is validated and they have a number of paying clients, they can worry about paying off their TD, possibly by rewriting the entire codebase from scratch.
5) Lack of development process
None of the interviewed startup companies adopted a systematic software development process, and the need for such a process was not considered important by the interviewees during the first phases of the startups’ life-cycle. However, this topic was brought up as a challenge, especially when the startup grows and hires more developers. A lack of processes for the management, identification, and prioritization of TD means that TD decisions are often made ad hoc, and no consistent decisions are made across the team. This becomes especially important as the team grows, to ensure conformity. As one interviewee in Company A said: “Multiple ways of doing things, are spreading at the same time… I mean, it is quite important for me, when we start to grow, that we have the same way of writing code.”
6) Autonomy of developers (related to TD)
Related to the lack of a development process, developers often have full autonomy to decide when to take on TD and to plan when to refactor it. Developers typically do not discuss TD-related decisions with others. While this allows for flexible work and short decision paths, it means that developers, who are often not financially invested in the project, are making very important decisions, possibly without considering their financial repercussions.
This can be especially problematic when employing external software consultancies, since decisions tend to be made based on the benefits to the consultancy company rather than on what is best for the software product under development. The consultancy could decide to minimize TD because they want to maintain a high-quality reputation for their company and do not want to deliver software that is not maintainable. If the development is not on a fixed-price contract, this desire for perfection could cost the startup significant time and money. On the other hand, they may be driven to take on significant TD since they know they do not need to maintain the software and they are driven by the desire to save money during the development. For example, the interviewee from Company G stated: “the externally hired consultants, they just did what was asked of them in their contract, with the lowest possible development effort. That is commonly how it works with externally hired developers, they do not really care about Technical Debt, they care about delivering the software according to the given specification they are paid for.” We saw only one case where developers were not given full autonomy regarding TD decisions. The founders of this company found being involved in even trivial implementation decisions very useful. One of the founders of Company D said “I think that they got used to basically involving us in their decision-making even though on a relatively trivial scale so that they’d ask about everything... And then we could understand and be involved in making those decisions about, how much debt and things will take on, even though we didn’t call it debt. And there was a point probably about two-thirds of the way through the project where ‘cause we’d often get updates on estimates of hours required to complete certain tasks so we’d keep an eye on how much money we were spending.”
TABLE III. ORGANIZATIONAL FACTORS INFLUENCING TD IN STARTUPS
<table>
<thead>
<tr>
<th>Factor</th>
<th>Level</th>
<th>Effect on TD</th>
<th>Reason</th>
</tr>
</thead>
<tbody>
<tr>
<td>Experience of developers</td>
<td>low (junior)</td>
<td>increases</td>
<td>poor design decisions (unintentional)</td>
</tr>
<tr>
<td></td>
<td>high (senior)</td>
<td>increases</td>
<td>developers aware of benefits of TD (intentional)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>decreases</td>
<td>developers accustomed to producing high quality software</td>
</tr>
<tr>
<td>Software knowledge of founders</td>
<td>low</td>
<td>decreases</td>
<td>founders unaware of TD benefits; large investment for developers causes desire for high-quality</td>
</tr>
<tr>
<td></td>
<td>high</td>
<td>increases</td>
<td>founders develop product themselves; code seen as expendable</td>
</tr>
<tr>
<td>Employee growth</td>
<td>stable</td>
<td>stable</td>
<td>devs already familiar with code (and its TD); no impact to customer</td>
</tr>
<tr>
<td></td>
<td>increasing</td>
<td>decreases</td>
<td>existing devs refactor to make onboarding easier</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>existing devs refactor to prevent a culture of “bad” code</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>new devs refactor because code not readable</td>
</tr>
<tr>
<td>Uncertainty</td>
<td>high</td>
<td>increases</td>
<td>goal: reduce dev time and cost</td>
</tr>
<tr>
<td></td>
<td>decreasing</td>
<td>decreases</td>
<td>TD repaid after market validation</td>
</tr>
<tr>
<td>Lack of dev. process</td>
<td>---</td>
<td>varies</td>
<td>ad hoc decisions</td>
</tr>
<tr>
<td>Autonomy of developers</td>
<td>high</td>
<td>varies</td>
<td>developers make decisions without any guidance (possible poor business decisions)</td>
</tr>
<tr>
<td></td>
<td>low</td>
<td>varies</td>
<td>strategic decisions made</td>
</tr>
</tbody>
</table>
Answer to RQ1: We identified six organizational factors that influence the accumulation of technical debt: experience of developers, software knowledge of startup founders, employee growth, uncertainty, lack of development process, and the autonomy of developers regarding technical debt decisions. The results are summarized in TABLE III.
B. What are the challenges and benefits of deliberately introducing Technical Debt for software startups? (RQ2)
In this section, we explore how software startups determine and reason about both the challenges and benefits of intentionally introducing TD. In general, startup companies that deliberately introduce TD have a positive attitude toward doing so. They are also relatively aware of the harmful effects these decisions can have on the future software in terms of impeding innovation and the expansion of their software systems.
1) Benefits of intentional technical debt
We identified many benefits of intentionally introducing TD in software startups.
Cutting development time in order to be able to release the product as quickly as possible is seen as a large benefit for startups. Getting to market quickly can:
- enable fast feedback from the customers. An interviewee in Company A said: “We prefer to cut some corners to improve the speed, and get something out instead of making it more mature directly” .... “It is more important to get to the market fast and get feedback from the users, then to focus on avoiding TD, taking on TD is ok.”
- increase revenue. One of Company C’s founders said, “Yeah, we probably wouldn’t have got the contract earlier, right... Then we wouldn’t have the capital”. Another participant from Company A said “we are a startup, and we need to make money. We need to get things working, but they don’t need to be perfect”.
Another benefit is the preservation of startup capital since commonly startup companies have less money in the early stages. A participant from Company D stated, “it’s just that we had to get the code out the door. And we had to get it so that we could afford it.” Another participant from Company F said “by taking the first technical debt, we spent 10% of what we would have spent if we would have done the whole product without TD.”
Related to saving money and time, another benefit is the decreased risk. Since the startups involve uncertainty, it is sometimes wise to invest as little money and time as possible prior to validating the idea through evaluation of the product. A participant from Company F said: “In case the product would turn out to be a failure, we would have saved 90% of the money...we avoided a big risk, and we reduced uncertainty thanks to technical debt. It was a great decision, I think.”
Intentional TD also allows startups to stay flexible. When they do not spend large amounts of money or time developing new features, they are more willing to discard them and alter the product significantly when needed. Thus, the TD allows them to make more objective decisions. “If you put too much time and effort in there, it could be harder to throw it away in the next version. So, I think it’s not always bad that you don’t do the best”.
2) Challenges of intentional technical debt
Despite the benefits of intentional TD, we also identified challenges, since the sub-optimal solutions eventually need to be fixed. The two companies that initially hired an external consultancy company to implement the first software solution failed in doing so. Most of the initial implementation was later removed and replaced by in-house developers, causing significant delays and additional expenditures. In such extreme cases, TD can cause product failure or business disruption. Another challenge of TD is the reduced scalability it often introduces. “If you validated it and it’s looking good, you wanna be able to put your foot on the gas and go quickly and scale. And if the architecture’s not ready...” A first, light, and sub-optimal solution may only work in a specific setting and will need to be refactored in order to scale the software. One developer from Company A put it: “growing is not just like taking what we have and do the exact same thing because that will only scale to a specific limit...There was no segmentation of the code in any part. We started to split the code up, we started to segment and to separate the code, so that we also can scale different part of the code.”
The interviewees mentioned different TD types, such as architectural, infrastructural, and source-code-related TD, as having a substantial negative impact on the system’s growth. Another challenge is that the harmful effects of TD increase in severity as the software grows and as more developers become involved in the development process. Thus, the introduction of TD can have compounding effects on development time and resources, since it will take more time to develop code on top of existing TD. Then, if the TD is removed later, all of the code built on top of it will also potentially be impacted. As one interviewee at Company B put it: “In a greenfield project, I think there is an argument hacking together something that works quickly. But as time goes on, the quality of real code, or its readability and then how easy it is to work with, it becomes more and more important.” Another challenge is that fixing TD could increase risk: fixing TD might create new bugs in the code, adding to the amount of future work that needs to be done. “The bugs will probably grow, especially if we try and fix it, spend time trying to fix it.”
Finally, the introduction of TD requires the resulting loss of productivity to be managed later. We found that during the early phases, startups rarely manage their TD, decisions are often made on an ad hoc basis, and none of the interviewed startups used any software tools to assist their TD management strategy. In order to understand whether the startups would consider such tools beneficial, we jointly ran both SonarQube and AnaConDebt on four of the startups’ software products (A, B, C, and D). After running the tools, we went through the output and assessed whether the result was perceived as useful or not. All the startups using SonarQube found it particularly valuable for identifying specific areas within their codebase that could be improved through refactoring of TD. As the founder from Company D said, “I think this is very useful in terms of prioritizing the back end of what we have and what we need to sort of like work on.”
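The analysis configuration used in these sessions is not reported here; purely as an illustration of how lightweight such a scan can be, a SonarQube analysis can be launched from the command line against a locally hosted server (the project key, source path, and server URL below are placeholders, not the studied startups’ settings):

```
rem Illustrative SonarQube analysis invocation (project key, path, and URL are placeholders)
sonar-scanner.bat -Dsonar.projectKey=startup-product ^
  -Dsonar.sources=src ^
  -Dsonar.host.url=http://localhost:9000
```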
The result of running AnaConDebt, which provided the startups with estimates of the TD principal and interest and of their growth under different future scenarios, was also unanimously perceived as valuable to the startups’ TD management strategy. However, using these kinds of tools was not considered a good choice during the first startup phase, since it would have distracted the developers from delivering the first product release quickly. The output from running the tools cannot be reported due to confidentiality reasons.
3) Good Enough Level
When startup companies deliberately introduce TD, they implicitly decide what a Good Enough Level (GEL) of software quality is and what amount of TD is acceptable to take on. They weigh the benefits and challenges of the TD when making their decisions (illustrated in Fig. 2). However, it is not usually an easy decision. A founder of Company D said “It’s difficult to balance where you’re constantly making decisions do how we balance what we’re spending on this, versus the likelihood of producing these results.”
<table>
<thead>
<tr>
<th>Benefits</th>
<th>Challenges</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Shorter development time</td>
<td>• Product failure</td>
</tr>
<tr>
<td>• faster feedback</td>
<td>• Business disruption</td>
</tr>
<tr>
<td>• increased revenue</td>
<td>• Reduced scalability</td>
</tr>
<tr>
<td>• Preserved resources</td>
<td>• Compounding effects</td>
</tr>
<tr>
<td>• Decreased risk (current)</td>
<td>• Increased risk (future)</td>
</tr>
<tr>
<td>• More objective decisions</td>
<td>• Loss of Productivity</td>
</tr>
</tbody>
</table>
Fig. 2. Good Enough Level is achieved by considering the ideal balance between the benefits and challenges associated with intentional TD.
Answer to RQ2: Intentionally introducing technical debt allows startups to cut development time, enabling faster feedback and increased revenue, preserve their resources, decrease risk, and make more objective decisions. However, the technical debt causes reduced scalability, becomes more severe as the product grows, and introduces future development risks. Thus, deliberately introducing technical debt brings both benefits and challenges and startups must weigh these to determine a “Good Enough Level”.
VI. DISCUSSION
In this section, we discuss recommendations for startups, compare our results to existing knowledge on accumulation and refactoring of TD in other contexts, and describe the limitations of this study.
A. Recommendations for software startups
Based on the findings related to the organizational factors that influence TD in startups and the benefits and challenges associated with TD, we have the following recommendations for startups.
Balanced experience levels (of developers) needed. We found that the team should have a mix of both senior and junior developers. Senior developers are often more calculated in their TD decisions. However, senior developers may be less willing to take on TD if they have experience working on more structured, mature products where quality is paramount. A mix of both senior and junior developers seems ideal to find the right balance between TD and quality. These results are in line with the ideas of Crown [9], who states that “The principal developer for the company must be highly experienced, and familiar with all aspects of software engineering practice. This person must also be an accomplished technical leader, as they will need to influence their less experienced colleagues”. However, we advocate that junior developers are equally important.
Unbiased technical advisors needed. When the startup founders do not have software development knowledge, those implementing the software are likely to make decisions that benefit their own needs, rather than the startup company. For example, they may cut corners to save their own time, or they may gold plate the software to build up their own reputation (and to increase their own revenue). Thus, startup founders who lack software development expertise should consider seeking technical guidance from someone other than the company or developers they hire to implement the solution so they can obtain unbiased advice related to TD decisions. Depending on the stage of the startup (and the available capital), this advice could be obtained by the introduction of a CTO or from an external consultant.
Consider “contagiousness” of TD in prioritization. We found that TD is often removed as the number of developers increases. This is in line with the results of Gralha et al. [17]. We found there are various reasons for this decrease in TD. One of which is the removal of TD that could be “contagious” – new developers may model their code off existing TD or may directly duplicate poorly written code. Thus, in addition to prioritizing TD that might block key features planned in the upcoming iterations [8], contagious TD [16] should also be prioritized, especially during times of growth in the development team. If such TD is not removed, it can generate new TD in a vicious spiral, reducing the growth time and compromising the software quality [22], [12] and culture of the startup in the future.
Encourage autonomy with high-level guidance. We found that in most startups, developers make TD-related decisions with full autonomy. Thus, they could possibly be making poor business decisions without considering the strategic repercussions of their decisions. Providing overall guidance to the developers, so they know what level and types of TD are appropriate can mitigate this risk, while still maintaining developer autonomy.
B. Strategy to balance TD over time
Startups need to balance several factors affecting the accumulation of TD to reach a Good Enough Level. However, how do startups do this over time? We report, in Fig. 3, a first interpretation that helps to understand the strategy adopted by the studied cases in the different phases.
Fig. 3 shows the accumulation of TD with respect to each startup phase and key events. The black line suggests the accumulation of Technical Debt that has been preferred by the studied startups. We also show GELs (“Good Enough Levels”), i.e., thresholds under which TD needs to be kept via strategic refactoring, otherwise causing possible disruptive events (red lines and crosses). Finally, at the bottom of the figure, we outline which mechanisms have been reported by the participants to be necessary and effective to keep a GEL of TD in a specific phase. In the startup phase, startups recklessly accumulate TD. This has been reported to be not only necessary, but very valuable to quickly satisfy the first customers and to reduce risks and costs. However, too much TD can still be disruptive in the first phase, leading to product failure and business disruption, if the acquired TD prevents the successful delivery of the MVP itself. In particular, the cases report that the domain-specific technology needs to be well understood and that the usability of the product should not be overlooked (GEL1). In the stabilization phase, a partial refactoring (Stabilization refactoring) is recommended to reach GEL2. In this case, the TD to be prioritized is the one blocking key features planned in the upcoming iterations for the delivery of the product to key customers. In addition, TD that is judged to be especially contagious (likely to spread to new features and to be picked up by new developers) should at least be considered. The challenge if the startup fails to keep this level of TD is the difficulty (if not the halt) of evolving the system with new features, with the consequent loss of key customers. Additionally, while entering the growth phase, TD that is accessed by new developers can generate new TD in a vicious spiral, reducing the growth time and compromising the code and culture of the startup in the future. Here the high-level guidance and the experience of the developers are key to keeping the right level of TD, but a budget needs to be allocated for the refactoring to reach GEL2. During the growth phase, there is a need to remove some more TD (Growth refactoring) to reach GEL3. If the contagious debt was not removed in the previous phase, it needs to be removed here before hiring new developers. In addition, the code is optimized to be scalable and to be delivered to several customers in the market: the architecture of the system should be refactored to allow the productive management of customer variability and to reduce the cost of maintenance and operations for the developers, avoiding a loss of productivity. In the growth phase, several other mechanisms can be introduced not only to reduce the current TD, but also to prevent the accumulation of future TD (e.g., tools, processes). TD needs to be well communicated in order to make business decisions. In the maturity phase, startups seem to start behaving like mature companies. However, in this study, we do not have enough cases to report common practices related to this phase.
C. Comparison of TD Management with non-Startups
Looking at the current literature, we can see some differences in how startups accumulate and refactor TD compared to large and more mature organizations. Some large and mature organizations might have internal innovation projects whose context is more similar to startups, or might have a high turnover of junior developers. Since we did not find studies on TD in such contexts, such cases are excluded from the following analysis and will require additional studies. In both startups and mature organizations, there is often a peak of accumulated TD at the beginning of feature development [17]. However, in mature organizations, there is usually a defined quality threshold, in the form of the desired software architecture or other quality models. In such cases, TD refers to the divergence from such desired thresholds. Such reference points do not seem to exist in startups. Consequently, they tend to accumulate more TD, which is also considered a benefit. There is, naturally, some level of uncertainty in both startups and mature organizations at the start of a new project. However, the uncertainty in young startup companies is greater than in a mature company [11]. Thus, taking on the right amount of TD seems to be a well-established strategy to deal with the high levels of uncertainty. Another difference can be found in how inexperienced developers are regarded in startups and mature companies. Inexperienced developers seem to be considered less aware of the long-term effects of TD, which consequently leads them to be keener to accumulate it.
Fig. 3. TD balanced differently in different startup phases. (The figure sketches the preferred TD accumulation across the Startup, Stabilization, Growth, and Maturity phases and their key events: first release, first customers, first key features, and additional customers. It also marks the thresholds GEL1 (initial development), GEL2 (maturation), and GEL3 (growth), the mechanisms reported to keep TD at each level (experience, high-level guidance, time for development, and refactoring), and the disruptive effects that can occur when a GEL is exceeded.)
This choice seems to fit with the importance of accruing TD in startups. However, as we have seen in all the analyzed cases, an experienced developer (technical lead or CTO) is crucial in the startup team to keep the TD level within desired thresholds. In contrast, in mature organizations, it is preferred to have team members that have a higher understanding of TD and to make sure that TD is not accumulated [15]. One of the main reasons is that code developed by mature organizations, especially in large projects, is continuously integrated with a large codebase and needs to be available and reliable for other teams’ work. In other words, TD has a bigger impact. Such impact is not present in the startup and stabilization phases of startup companies, but comes into play when the startup enters the growth phase.
A similar difference can be seen with respect to processes and tools: a recent survey of large organizations [15] highlights that a third of the participants use tools to track TD. In startups, we could see a complete lack, and even conscious avoidance, of such processes and tools until the company reaches the growth phase. On the other hand, both in startups and partially (2/3 of the participants) in large organizations [15], we notice a lack of knowledge on how to implement such processes and which tools to use to keep TD at bay. Learning how to manage TD seems to be equally important for large companies and for startups entering the growth phase.
In summary, although some similarities exist regarding TD management between large, mature organizations and startups, the first three startup phases seem to stand out with respect to managing TD. This is due to the level of uncertainty, the environment, and the business context being different. Although this analysis includes a small sample of both startups and large companies, and more studies are needed to corroborate it, we have some initial evidence suggesting that the strategic management of TD in startups might differ from the best practices established for large organizations.
### D. Limitations and Threats to Validity
The main limitations of this study are related to the limited sample of startups investigated and to the qualitative nature of the investigation. However, these are limitations that can be considered acceptable in light of the exploratory purpose of this study. We preferred to gain a deep and rich understanding of the context of a few cases to build a holistic first theory rather than surveying the topic on a high level only.
Specific threats to validity include construct validity related to the concept of TD, external validity with respect to the limited contexts analyzed, and reliability of the results, which may be affected by the high level of interpretation that both interviewees and researchers might have injected into the study [20].
To mitigate the construct validity threat, we held a workshop with several of the participants from the startups to clearly define and align on what TD is. We gave concrete examples, we used the up-to-date definition of TD reported in the Dagstuhl seminar [4], and we asked the participants to share examples in order to test whether their understanding matched the community’s definition. Additionally, when asking questions, we always probed the claims by asking for additional concrete examples.
To mitigate the external validity threat, we collected information from two different countries in different geographical areas. In addition, the case companies represent different segments, and we interviewed different roles, from developers to CTOs to CEOs, to external advisors.
Although we do not claim to provide fully generalizable results in this exploratory study, we have aimed at maximizing the coverage of our cases. Furthermore, we plan to expand our sample in the future, to reach a higher degree of validation of our results. Reliability threats were mitigated by assuring that two researchers were always present when conducting interviews, that one of the researchers was always attending all workshops and interviews for consistency purposes, and that the analysis was organized in two groups where researchers analyzed the codes separately and then merged the findings. In other words, we made sure that different observers were contributing in different phases of the data collection and analysis, reducing the bias of single researchers.
### VII. Conclusion
This exploratory study set out to provide a first understanding of how software startups reason about TD. Through interviews with 16 software professionals in seven different startup companies, we identified six organizational factors that influence the accumulation of TD in software startups (experience of developers, software knowledge of startup founders, employee growth, uncertainty, lack of development process, and the autonomy of developers regarding TD decisions). We also found that startups must strive towards a Good Enough Level, over time, for their product, while weighing the benefits and challenges associated with taking on TD. This study provides a set of recommendations and a first strategy which can be used by software startups to support their decisions related to the accumulation and refactoring of TD.
REFERENCES
[1] https://anacondadebt.com/
Case Study: Transforming a Traditional Windows Client/Server Application Into a Secured ASP Offering

David Strubbe
Abstract:
Our software firm’s financial application was developed on a traditional client-server model. Individual user workstations run the application (on the Microsoft Windows Operating System) on a local area network against shared file, print, and database servers. Our customer required that remote users from five locations across the country access the application over remote connectivity. They needed to provide an Application Service Provider (ASP) service with these sites accessing the application on central common hardware. It was critical that the individual locations remain logically independent of each other.
Our financial application consists of millions of lines of code. It was not practical to rewrite it to operate effectively over a wide area network. Off the shelf technology, namely Citrix Metaframe and MS Terminal Server, was chosen to enable remote access to the application without major modification. Placing our application on Terminal Server and Citrix introduced new security concerns, as users no longer had dedicated workstations. Our application had resource requirements and security exposures that were a risk on shared hardware. We also had to consider the security of the network traffic to the remote users. This paper explores the process that we (the software vendor) and our client (the ASP provider) used to securely implement a solution.
Pre-Migration State
Overview of Application Needs
Our client wanted to securely provide our application to five distant offices using an Application Service Provider model. A fairly concise definition of Application Service Provider (ASP) is “a third-party software distribution and/or management service. Generally provides software via a wide area network from a centralized data center. [It] Allows companies to outsource and more efficiently upgrade software.”
Our client determined that it would be more cost effective to host our application centrally than to maintain a separate instance of the application at each office.
ASPs often service many different applications and offices on one platform. Although the users may share hardware and software, each client site’s activities must be secured from the others. The information security triad of confidentiality, integrity, and availability of the application must be maintained across all users.
Figure 1. The CIA Triad (the information security triad of confidentiality, integrity, and availability)
Many ASP applications (including our application) are financial in nature. Serious financial losses could result from the release of private information, the inability to process transactions, or the malicious exploitation of the application into creating unintended transactions.
Pre-Migration Application Overview
Our application is built on the traditional two-tier client server model. A 32-bit Windows client application runs on a dedicated end user workstation. A substantial portion of the processing takes place on this workstation. The client uses a database requester to communicate to a database server on a LAN, and there is a fair amount of network traffic travelling over the wire.
The application was never designed to work over a wide area network. The amount of wire traffic precludes simply installing the client at a remote site. The data stream from the remote client to the database server is often in plain text.
Our application also assumes that each user has substantial rights to a private workstation with their own unique environment (e.g. for temporary files, registry access, and file access).
There are millions of lines of Borland Delphi code to this application. As a result, re-coding the application for WAN access would require excessive resources, as well as extensive testing to confirm the proper port of the business logic.
ASP Model Application Threats to Consider
There are several threat vectors that we considered with the design of the ASP.
These included:
1. Authorized Application Users / External Clients attempting to cross into other client partitions.
2. Unauthorized Malicious Agents (External) attempting to access the system, inspect the data, or hijack a session.
3. Denial of service by authorized or unauthorized users (e.g. resource exhaustion, processor saturation).
4. Application Faults that cross to other application partitions (e.g. memory faults, buffer overruns).
5. Physical threats to the equipment (environmental, catastrophic).
Pre-Migration Application Technology Needs
The following table depicts the software used in our case-study financial application:
Figure 2. Current Application Technology
<table>
<thead>
<tr>
<th>Item</th>
<th>Component</th>
<th>Platform</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Client Operating System</td>
<td>Windows NT Workstation 4.0, Windows 2000 Professional, Windows XP Professional</td>
</tr>
<tr>
<td>2</td>
<td>Financial Application</td>
<td>32 bit Application consisting of 200+ executables located on a network share</td>
</tr>
<tr>
<td>3</td>
<td>Database requesters</td>
<td>Pervasive.SQL 2000i Client; MS SQL Server 2000 Client (added with the latest application release)</td>
</tr>
<tr>
<td>4</td>
<td>Server Operating System</td>
<td>Windows 2000 Server (Pervasive supports Novell Netware, but this is not recommended due to a different authentication system from MS Windows – Novell NDS)</td>
</tr>
<tr>
<td>5</td>
<td>Database Management Systems</td>
<td>Pervasive.SQL 2000i; MS SQL Server 2000 (added with the latest application release)</td>
</tr>
<tr>
<td>6</td>
<td>File Shares</td>
<td>Network shares must be used to share common files for a given instance of the application. Pervasive data files reside on shares.</td>
</tr>
<tr>
<td>7</td>
<td>Backup</td>
<td>Package backup solution (Backup Exec), Native MS SQL Server Backups</td>
</tr>
</tbody>
</table>
Overview of Proposed Technology Solution
We recommended that the client use Citrix Metaframe as the foundation for their ASP. Citrix Metaframe\(^3\), in conjunction with Microsoft Windows 2000 Server Terminal Services\(^4\), can be used to provide thin client access to an application in a secure manner. Citrix also offers load-balancing services that allow for redundancy and improved application response.
All of the major processing would take place on the centralized platform, and only the presentation (screen input and output, mouse navigation, and printing) would need to travel across a wide area network or the Internet.
The installations of the software and respective databases needed to be logically partitioned to ensure that users could only access their own data without impacting other users.
This “partition” concept is critical to a successful ASP. With this common hardware and software, there has to be an additional security layer between the overall platform and individual clients (sets of users).
Workstation and network access rights must be tuned according to the Principle of Least Privilege (PLP)\(^5\) to prevent access to unauthorized data and denial of service to other users.
The Citrix ICA Client was the thin client chosen to access the central ASP platform. MS Windows Security (i.e. domain accounts and associated rights) was used to apply access control to the file and database resources.
Connectivity to Citrix can be provided in a variety of modes. ASP Wide Area Networks generally implement TCP/IP connectivity using one of three options\(^6\):
- Private – dedicated (fractional) T1 or higher
- Semi-Private – frame relay
- Wide Open – Internet (possibly in conjunction with a Virtual Private Network).
For additional security, Citrix traffic can be tunneled through a secure connection (e.g. a VPN), or Metaframe itself provides various modes of native support for encryption. We will examine the Citrix encryption options later in this case study.
Our client decided to utilize leased fractional T-1 connectivity with TCP/IP as the primary protocol.
A Secure Migration Process – Step by Step
This implementation consisted of over two hundred users in five different locations with the need for five distinct instances of the application. We used our experience with similar configurations of a smaller scale to recommend a structured process for securely implementing this ASP model.
Figure 5. The Overall Process – Step by Step
1. Identify the customers and their specialized security needs.
2. Inventory the applications to be published by the ASP.
3. Analyze the application and modify it if required.
4. Provision hardware, software, and facility.
5. Provision secure connectivity.
6. Install the operating system environment.
7. Install the application.
8. Harden the configuration (application, rights, and authentication).
9. Test the application.
10. Deploy the application.
11. Maintain the ASP application (audit and update).
Step by Step – The Process in Detail
1. Identify the customers and their specialized security needs.
We had to evaluate the customer to determine if there were any specialized security needs. For example, health care related applications might need to comply with HIPAA. Financial firms may have record retention policies that they must adhere to.
For our application, it is essential that users have access to a minimum of two years’ worth of transactions, and service standards dictate that end of month full backup tapes must be retained indefinitely.
It must be determined whether the remote client, the ASP administrator, or both will be granted system administrator rights. Our client decided to retain all administrative functions at the central site.
We also had to consider the application rights. Our financial application has its own operator and rights database that is in addition to the operating system operator database. We had to assist with creating five distinct and separate user databases for the five separate user sites.
2. Inventory the applications to be published by the ASP.
The client may identify a core application (such as our financial application) as the primary application. Often additional applications may be required in addition to the core application. These applications are easy to forget, and can be lost in the planning process.
For example, our financial application also requires Seagate Crystal Reports and Microsoft Access.
Some of these applications could potentially expose data if rights are not properly secured. For example, Seagate Crystal Reports makes a great tool to use to inspect unauthorized databases (if the database access rights are not properly configured).
3. Analyze the application and modify it if required.
To develop an ASP, you must examine the application for resource requirements. These requirements include client software, registry rights, file rights, and application authentication needs. This information is critical to tuning the rights on the multi-user terminal servers. For example, you cannot simply assume that each user has their own temporary directory (e.g. C:\winnt\temp) on their own PC.
In particular, you must examine the application for functionality that can be exploited to compromise security. This process is often very manual. One large ASP provider admits, “humans do all the work – Push hasn’t found any automated tools that work as well as an engineer.”
Regmon and Filemon are two useful tools that we used to analyze our applications for an ASP model. They show what resources are accessed, as well as the nature of the access (read versus write). They allow systems analysts to review necessary application file and registry activity in an effort to minimize resource rights.
For example, when we hardened the rights to the Terminal Server \WINNT\TEMP directory, the reporting functions of our financial application would often fail. Filemon allowed us to see that the Borland Database Engine (BDE) was attempting to write out a temporary file to this directory. After expanding the user rights, the reporting function worked properly. The temporary files did not have any data that would place confidential information at risk.

7 Anderson, p. 57.
Figure 6. FILEMON and REGMON Filters, and a network share write caught from our application
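As a sketch of the kind of rights adjustment this analysis led to (the group name below is an illustrative placeholder, not the production account), the additional write access can be granted with the built-in cacls utility:

```
rem Grant the ASP client user group Change (write) access to the shared temp directory
rem "ASP_SiteA_Users" is an illustrative group name, not the actual production group
cacls C:\WINNT\TEMP /E /G ASP_SiteA_Users:C
```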
Applications may contain functionality that poses a risk to security. For example, an application may be used to launch additional executables. An application may also have resource connection dialogues that can be used to probe for additional databases or accounts. The application may need to have these functions removed or disabled.
Our application has a “favorites” functionality that can be exploited. Although the ASP may remove desktop shortcuts to applications, this function could still allow a user to find and run an application. We advised the client of this risk, and we plan to allow our customers to disable this feature in a future release.
Figure 7. Demonstration of an unauthorized command shell from our financial application
Of particular concern should be any user interface that allows the adjustment of database connection resources or authentication strings. A malicious user could use these dialogues to attempt to change databases or authenticate to unauthorized resources. Our application has an initial login screen that allows the user to adjust database connection strings.
To assist with such exposures, we added command line startup functionality, which allows these parameters to be hard coded into an application startup script. The user never sees any database connection parameters.
Figure 8. Example of a User Interface Configuration Exploit – Database Connection Parameters
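A minimal sketch of what such a scripted startup might look like follows; the /server, /db, and /operator switches are hypothetical placeholders rather than the application’s actual parameter names.

```
rem Hypothetical scripted startup with hard-coded connection parameters
rem (switch names are placeholders; the real application uses its own syntax)
start \\servername\apps\appname\program.exe /server:DBSERVER01 /db:SITEA_FIN /operator:%USERNAME%
```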
While implementing our application on the ASP platform, we had several working directories that required careful management. If they had not been secured with unique drives and directories, users may have inspected each other’s temporary data, or overwritten each other’s files causing a denial of service.
Figure 9. Work Directories Managed with Private Root Drive Mappings
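The exact mappings are site-specific, but the idea can be sketched with standard net use commands in a per-user logon script (the share layout below is illustrative only):

```
rem Map a private work drive for the current user (share layout is illustrative)
net use W: /delete >nul 2>&1
net use W: \\fileserver\work\%USERNAME%
```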
The following table summarizes the primary application functionality issues that we surveyed with our application in an ASP environment:
Figure 10. Summary of Application Functionality Concerns with our Financial Application
<table>
<thead>
<tr>
<th>Item</th>
<th>Threat</th>
<th>Description</th>
<th>Impact</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Cross Partition Temporary or Work File Access.</td>
<td>Temporary work files are visible across users, temporary files remain after logout, or users attempt to write to the same workspace.</td>
<td>Confidential data may leak across users.</td>
<td>Use scripting, application server rights, and drive mapping to ensure private work areas.</td>
</tr>
<tr>
<td>2</td>
<td>Execution of arbitrary commands from functionality.</td>
<td>Application allows pointing to command shells, registry editors, or other system commands.</td>
<td>Compromise system integrity or data confidentiality, and possible denial of service.</td>
<td>Modify the application to remove such functionality. Use policies to restrict execution of dangerous applications like REGEDIT.</td>
</tr>
<tr>
<td>3</td>
<td>Conflict of user work space or registry settings.</td>
<td>Application may assume that workspace and registries are private. E.g. Current User versus Local Machine Registry Settings.</td>
<td>Inadvertent denial of service if users compromise other user environments.</td>
<td>Modify the application, and script essential settings at application startup.</td>
</tr>
<tr>
<td>4</td>
<td>Access to database, host connection, and login parameters.</td>
<td>Malicious end users may vary these parameters or attempt to crack into additional resources.</td>
<td>Data confidentiality and integrity could be compromised.</td>
<td>Script connection strings and settings into application startup. Remove these settings from end user interface.</td>
</tr>
<tr>
<td>5</td>
<td>Work Files of Excessive Size.</td>
<td>Temporary Files and Work Files exhaust application server resources.</td>
<td>Denial of Service.</td>
<td>Institute disk quotas.</td>
</tr>
</tbody>
</table>
4. Provision hardware, software, and facility.
The hardware must be sized to adequately support the quantity of users for the application. For our application, our experience has demonstrated that approximately fifteen users can be supported on a single dual processor server with two gigabytes of memory. Our application ASP client provisioned twelve separate servers for their 5 site / 200 user installation base.
The physical facility (power, environment, and access) should also be secured and redundant. Our client had an enterprise level data center with electronic access control, redundant power, and redundant cooling systems for their central ASP platform.
Citrix also offers software add-ons that can assist in securing the ASP environment.
Our client opted to purchase both the Citrix Load Balancing and Resource Management features. Metaframe XP Advanced includes Load Balancing. This feature provides for the automatic failover to additional servers if a server should fail (e.g. a denial of service attack exhausts all processing power of a given server). Metaframe XP Enterprise includes Resource Management for monitoring use of storage, CPU and memory.
Both of these features help the client to secure their ASP. If a malicious user should cause an application server to fail, the load balancing services will distribute additional users to the remaining functioning servers.
Resource monitoring allows our client to closely inspect the memory, process, and processor usage on the application servers. If a user were to introduce a rogue process, the resource monitoring would inventory this process, as well as any suspicious disk, memory, or processor usage.
In a highly secure environment, an ASP could even consider separate hardware for each of their clients as another layer of partitioning.
The following table summarizes the primary physical and hardware issues that we considered for our application as an ASP:
Figure 11. Hardware, Software, and Facility Considerations
<table>
<thead>
<tr>
<th>Item</th>
<th>Threat</th>
<th>Description</th>
<th>Impact</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Physical Attack on the Server.</td>
<td>Malicious agents destroy or steal equipment.</td>
<td>Denial of service and possible data theft.</td>
<td>Secure facilities and recovery measures.</td>
</tr>
<tr>
<td>2</td>
<td>Rogue or runaway tasks.</td>
<td>Processor or Disk Resources are exhausted by unusual or unauthorized tasks.</td>
<td>Denial of Service.</td>
<td>Use Resource Management to watch for unusual or processor exhausting tasks; use load balancing to fail over to additional servers. Provision an adequate number of servers.</td>
</tr>
</tbody>
</table>
10 Mathers, pp. 592-594.
5. Provision secure connectivity.
As clients of an ASP are often located vast distances from the data center, ASP providers must ensure that the connectivity to their remote clients is available and secure.
Three connectivity aspects should be considered:
- Security of the activity transported between the ASP and the remote user
- Security of the ASP from the hostile Internet
- Security of the applications within the ASP itself.
For secure connectivity an ASP should consider a virtual private network (VPN) or native Citrix Encryption. VPNs, although more flexible, could pass various types of traffic, and increase the threat from end users to the ASP. Using native Citrix encryption reduces the breadth of the remote threat to Citrix traffic only.
Citrix SecureICA services can use the RSA RC5 algorithm with up to a 128 bit session key. You can also force remote ASP users to connect with a minimum key length.\(^3\)
Citrix also offers the Citrix SSL Relay and Citrix Secure Gateway. These add-ons allow Citrix to use SSL 3.0 for connectivity.\(^4\)
If possible, edge routers and firewalls should be adjusted to only pass Citrix traffic to client sites. This includes filtering based on ports and addresses.
The client in this case study opted to use dedicated leased connectivity from the five remote offices to the central office. As a result, they opted to simply use the basic encryption capability of Citrix Metaframe. We have had several clients utilize VPN technology with end users that telecommute from home.
We also recommend that our clients disable the Microsoft Remote Desktop Protocol (RDP) and leave only the Citrix ICA Protocol enabled on the terminal servers. This is consistent with the general security recommendation to disable unnecessary services.
Figure 13. Summary of Connectivity Considerations
<table>
<thead>
<tr>
<th>Item</th>
<th>Threat</th>
<th>Description</th>
<th>Impact</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Malicious Attackers on the Internet.</td>
<td>Malicious users probe and attack the ASP.</td>
<td>Confidentiality of the data may be compromised, or denial of service.</td>
<td>Utilize a DMZ Model, Implement the appropriate Access Control Lists on Routers, Utilize Intrusion Detection.</td>
</tr>
<tr>
<td>2</td>
<td>Attack on the ASP.</td>
<td>Physical or network based attack on the ISP used by the ASP.</td>
<td>Denial of Service.</td>
<td>Utilize two separate ISPs with Failover.</td>
</tr>
<tr>
<td>3</td>
<td>Traffic Snooping.</td>
<td>Malicious Agents sniff traffic for data or account credentials.</td>
<td>Confidentiality compromised.</td>
<td>Use a Virtual Private Network or Citrix Encryption.</td>
</tr>
<tr>
<td>4</td>
<td>Non-Citrix Traffic traversing the Internet.</td>
<td>The DMZ model may permit malicious traffic other than ICA traffic to or from the ASP.</td>
<td>The ASP may be attacked, or serve as a host to attack others.</td>
<td>Secure the DMZ to permit traffic only to the ASP on TCP port 1494\(^{13}\). Limit source IP addresses to known business partners.</td>
</tr>
</tbody>
</table>
6. Install the operating system environment.
For our financial application, the required operating system components included the following software:
- Microsoft Windows 2000 Server Operating System
- Windows 2000 OS Patches (SP2 or SP3)
- Compaq/HP Insight Management
- Metaframe XPe for Windows 2000
- Network Associates Antivirus
- Backup Exec Backup Agents
We recommended that all disks be formatted with the NTFS file system. This allows the appropriate security and usage quotas to be applied to the system.
We also recommended that the support for non-essential OS services and applications (e.g. the Microsoft Internet Information Server) should be removed from both the application and data servers.
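As an illustrative hardening step (assuming the sc.exe service-control utility is available on the server, for example from the Resource Kit), the IIS web service could be stopped and disabled from a script rather than through the GUI:

```
rem Stop and disable the IIS web service on the application servers (illustrative only)
net stop "World Wide Web Publishing Service"
sc config W3SVC start= disabled
```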
Note that the operating system environment not only includes the operating system, but applications that complement the security capabilities of the operating system (such as anti-virus software and backup software).
Some ASPs may want to implement a host based intrusion detection system (such as Tripwire\(^{14}\)) to proactively detect unauthorized system changes.
7. **Install the application.**
Installing an application within an ASP environment is more complicated than installing on typical workstations.
An inventory of all installed components and locations had to be compiled. Any files shared between user partitions had to be read-only. This was to prevent any leaking of information or introduction of malicious software across user partitions.
We recommended to the client that the application startup be completely scripted. This can be accomplished with scripting tools such as Windows Scripting Host, Winbatch, KiXtart, or a simple batch file. The script should guarantee that the software environment (search path, drive mappings, registry entries, and launched executables) is exactly what the application needs. If something is changed (maliciously or by accident), the scripting ensures that the appropriate setup is restored.
For this instance of our application, a simple batch file was used to start the application.
**Figure 14. Example Script for the Case Study Application**
```
rem this batch file starts up the application
rem rev 4/2003
rem optionally add a registry import command here
rem map the application drive
net use x: \\servername\apps
rem set the current directory
x:
cd \appname
rem start up the program
start x:\appname\program.exe
rem exit this script
exit
```
To present the application in Citrix, the application is “published” out to the authorized users. Generally, the ASP must publish the script that launches the application. In a load-balanced environment, the application must be installed on all application servers.
With Citrix, you can publish a complete desktop or a single application. For our application, we recommended publishing our financial application in a seamless window. As a result, the user only sees the application that they wish to run, and they cannot easily access any additional desktop functionality that could compromise the application server.
Figure 15. Desktop (showing programs) versus Seamless Published (with Notepad)
Installed software should be minimized to the essential components that are required to run the application. For our financial application, we recommended
that the Pervasive and Microsoft database tools should be removed or restricted. They present powerful tools to malicious users attempting to exploit the system.
Figure 16. Default Client Options for Pervasive, MS Platforms that should be Removed
The following table summarizes the critical items when installing our application in an ASP environment:
Figure 17. Summary of Application Installation Issues
<table>
<thead>
<tr>
<th>Item</th>
<th>Threat</th>
<th>Description</th>
<th>Impact</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Environment is Changed.</td>
<td>Paths, environment variables, or registry entries are maliciously modified.</td>
<td>Possible denial of service or breach of confidentiality.</td>
<td>Script the startup of the application to ensure the proper environment. Secure and protect the startup script from modification.</td>
</tr>
<tr>
<td>2</td>
<td>Unauthorized application access.</td>
<td>Malicious users may try to run administrative tools or other applications outside of the authorized ASP application.</td>
<td>Possible denial of service or breach of confidentiality.</td>
<td>Publish only the application that the users should see. Remove powerful and unnecessary software tools such as database utilities.</td>
</tr>
</tbody>
</table>
8. Harden the configuration (application, rights, and authentication).
To properly secure our application within an ASP, the client must take full advantage of the Microsoft Windows security model. All operating system and Citrix security features should be used to enforce the Principle of Least Privilege to ASP users.
We recommended that the following rights should be minimized:
- Rights to the application servers running Citrix
- Rights to the database servers housing application data
- Rights to databases themselves
- Rights to any file shares used by the application
It is critical that the databases, temporary work space, and file shares used by different ASP customers are securely partitioned between each other. Under no circumstances should system rights allow customers to cross over and see or modify another customer’s data.
Typical end users should not obtain rights beyond the Windows 2000 Users Group, which is more restrictive than the Windows NT 4.0 Users Group.
The Citrix integration with Windows Security (Domain or Active Directory) should be used to publish applications only to users that are authorized.
In the example application, each client has their own directory tree on the data server that is secured to the respective client via Domain Rights. Pervasive transactional files (Btrieve files) are simply secured by the operating system rights to the data files.
There are several additional Citrix and Windows 2000 settings that are essential to hardening an ASP application Server. Citrix has the ability to share back resources from the remote client to the application server. This can include the Windows clipboard, local disk systems, and printers. For our financial application, remote printing was enabled to provide the functionality back to the end users. We advised the client of the benefits and risks of clipboard and disk system sharing. For example, a sharing loop back to a remote workstation could increase the risk of accessing mal-ware located at the remote workstation.
Figure 18. Sharing Back of Resources, Metaframe Configuration Option to Disable
Users of a Windows system often have work files and directories stored in a temporary folder. Our financial application generates temporary work files from Crystal Reports and the Borland Database Drivers. These files and directories may contain sensitive information that could be compromised if later users attach to the same work directory. Terminal Server has an option that forces the cleanup of these work directories at the end of a session.15 We recommended enabling this option.
It is critical that Windows Security be used to partition different clients from each other. Citrix, Microsoft SQL Server 2000, and Pervasive / file sharing security all integrate to Windows. For our financial application, Windows groups were set up for each respective client. Discrete user accounts were then created for each individual user. Each individual user is assigned to one and only one client group. The resources for that client are then attached to that client group.
For this model to work, we recommended that the applications be published in Citrix Explicit Security mode (versus Anonymous). This requires the end user to enter Windows account credentials to access the application.
After securely structuring the user groups, these Windows groups can be used to secure the Citrix publishing, application server rights, and file share rights. They also should be used to secure database access. Our Pervasive databases are secured via file share rights. Our current product revision now integrates with MS SQL Server 2000, and we can leverage MS SQL Server Windows integrated security to the respective client database.
Figure 21. Example of Using Windows Groups to assign an ASP Client Group to a MS SQL Database with Minimal Database Rights

The following table highlights the critical steps that we considered while hardening the setup of our application for the ASP:
Figure 22. Summary of Hardening Steps
<table>
<thead>
<tr>
<th>Item</th>
<th>Threat</th>
<th>Description</th>
<th>Impact</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Cross Partition Resource Access.</td>
<td>Users access files or databases of other customers.</td>
<td>Confidentiality.</td>
<td>Utilize Windows Security to restrict access to databases and published application. Model Windows groups after the customer base. Minimize user rights attached to these groups.</td>
</tr>
<tr>
<td>2</td>
<td>Introduction of Mal-ware.</td>
<td>Malicious Users introduce trojan or other harmful executables.</td>
<td>Confidentiality or Denial of Service.</td>
<td>In addition to virus scanning software, disable client disk redirection in Citrix.</td>
</tr>
<tr>
<td>3</td>
<td>Leak of Temporary Work.</td>
<td>Temporary files remain from previous sessions, and are viewable by other clients.</td>
<td>Confidentiality.</td>
<td>Enable Citrix to delete temporary directories on end of session.</td>
</tr>
<tr>
<td>4</td>
<td>Resource Exhaustion.</td>
<td>Users fill up drive volumes (maliciously or inadvertently).</td>
<td>Denial of Service.</td>
<td>On NTFS volumes on application servers, activate Disk Quotas.(^{16})</td>
</tr>
</tbody>
</table>
9. Test the application.
After hardening the application platform, it is critical to test it.
The application should function as expected. Hardening may break some of the functionality, and the rights or application may need to be adjusted.
When our client initially tested their ASP, some of the functionality did not work. We found that we had to open up the rights to certain areas of the registry, as well as to certain temporary directories on the application servers. As stated earlier, Regmon and Filemon are invaluable for this process.
It is critical that an end user is not able to access the files, applications, or databases of other customers outside of their application partition. Engineers should test for any such exposures.
Vulnerability Scanners such as Nessus\(^\text{17}\) may be used to verify system exploits visible from within the DMZ, from the outside customer sites, or the Internet. Since our client decided to use secure leased connectivity for the remote connectivity, they did not utilize any vulnerability scanners in their testing.
10. Deploy the application.
Once tested, the ASP may begin to deploy the application into the field. The ASP must assist their remote offices to install ICA Clients on the remote clients.
There are alternative models for deployment from Citrix, such as the Nfuse web based front end or Embedded Clients (e.g. Java based from a browser).
For this ASP installation, the full Citrix ICA Client was deployed to the remote workstations. This client has had a better track record with our application than the embedded browser clients or Nfuse. The remote users simply had to download the ICA client installer, run the setup, and set the host connection properties.
11. Maintain the ASP application (audit and update).
Once it was deployed into the field, our client has had to actively manage their ASP platform to keep it secure. Critical activities they conduct include:
- Audit of the platform activity, including Citrix Servers and Database Servers.
- Regular Testing of the Backup and Recovery Solutions.
- Verification and application of Operating System, Citrix, and Database Server Patches.
Some additional steps that we suggest for a public connectivity ASP include:
- Regular Vulnerability Scans with an appropriate scanner.
- Audit of the IDS and Firewalls.
Tools such as the Microsoft Baseline Security Analyzer\textsuperscript{18} and resources such as the BugTraq mailing list\textsuperscript{19} are useful references for managing Terminal Server and Citrix exploits.
We recommend that an incident response team and written response plan should be assembled to address any potential system compromises that occur.
Appropriate off-site facilities (tape storage, and potentially hardware and connectivity) should be obtained to mitigate complete physical loss of the ASP site. For this case study, our client had a separate data center in another city to which they regularly shipped the backup tapes. There was a smaller hardware platform available for recovery if needed at the second site.
For our case study client, many of the details for recovery and support were agreed upon in a SLA (Service Level Agreement) between the ASP and the remote offices. This document was key to agreement on many of the important features of the ASP offering, including some of the security policies and procedures.
Summary and Conclusion
Deploying an application as an ASP has special security considerations. The connectivity between the ASP and remote client must be secure, reliable, and monitored.
Co-locating applications for different customers on common hardware requires an additional level of security that is not typical for a traditional Windows application. Different clients must not see each other’s data. There must be a secured logical partition for each client instance of an application.
Many features of Citrix Metaframe and MS Terminal Server assist with managing resources and connectivity security. Often Microsoft Windows’ security can be effectively utilized to secure each customer’s data properly.
Our application was never intended for an ASP on a multi-user platform. By engaging us in a consultative role, our client enabled us to help analyze and retrofit the application to function securely on multi-user application servers. We also helped advise the client on security exposures, as well as on platform configurations that could be used to mitigate these risks.
Once deployed, our client's ASP platform has required active management to ensure that the applications remain accessible and secure for the remote clients.
To date, we have had many other clients implement similar platforms with fairly good success. As outsourcing becomes more popular and IT expenditures decrease, we project that the ASP model of deploying our application will continue to become more popular.
List of References
<http://www.nessus.org/>.
Aggregation Path Index for Incremental Web View Maintenance
Li Chen
Worcester Polytechnic Institute, lichen@cs.wpi.edu
Elke A. Rundensteiner
Worcester Polytechnic Institute, rundenst@cs.wpi.edu
Suggested Citation
Retrieved from: https://digitalcommons.wpi.edu/computerscience-pubs/230
Aggregation Path Index for Incremental Web View Maintenance
by
Li Chen
Elke A. Rundensteiner
Computer Science Technical Report Series
WORCESTER POLYTECHNIC INSTITUTE
Computer Science Department
100 Institute Road, Worcester, Massachusetts 01609-2280
Aggregation Path Index for Incremental Web View Maintenance
Li Chen, Elke A. Rundensteiner
Department of Computer Science
Worcester Polytechnic Institute
Worcester, MA 01609-2280
{lchen|rundenst}@cs.wpi.edu
Abstract
As web data becomes more essential in our work and play and keeps growing explosively, web view mechanisms are extensively employed to offer customized value-added services to customers, and such views are usually materialized to achieve fast query response times. However, the dynamicity problems of the underlying web information are not as easy to tackle as in the context of conventional database systems. Developing maintenance techniques for materialized web views over dynamic web data sources is more challenging because there is no schema restricting the structure of all the web data sources, and because the shareability of web data sources enables an update on a single data source to potentially affect many others in the web data graph. To compute web view "patches" for incremental maintenance in response to an update, a large number of accesses back to base data is usually inevitable, yet clearly undesirable because of the likely impact of heavy network overhead and intense contention for base data. In this paper, given a web view specification defined over a hierarchical web data graph, we analyze the query pattern and conduct the evaluation strategy along aggregation paths so as to distill a subgraph of web data objects, for which we set up an index structure. By utilizing the precomputed value aggregation results stored in such an index, our algorithms show that both web view computation and its maintenance can be done more efficiently. A cost analysis and experimental studies of the gains of our incremental maintenance approach compared to state-of-the-art solutions are also presented.
Keywords: Web View, Incremental View Maintenance, Query Graph, Aggregation Path Index, Self-maintenability, Query Performance.
1 Motivation
1.1 Introduction
As web data becomes more essential, a lot of work focuses on developing database and XML tools to aid the modeling of web data [PGMW95] and the integration of diverse web data sources into one "unified" resource.* Techniques are being developed for querying web data sources as well as for building web views over them. Given that the volume of data available on the web is growing exponentially, web view mechanisms [LMSS95, SDJJ96] are extensively employed to offer customized value-added services to customers. They can serve as filters over the huge network of inter-connected web sources and integrate bits and pieces of "raw" web data into a "personalized" view.
* This work was supported in part by the NSF NYI grant #IRI 94-57609. We would also like to thank our industrial sponsors, in particular IBM for the IBM Partnership Award and our collaborators at IBM Toronto for their support.
To achieve fast query response time, web views are often materialized. However, the dynamicity problem of data sources joining in as new sources or leaving and becoming unavailable is not as easy to tackle as in conventional database systems, where materialized view mechanisms and their maintenance have long been a well-studied topic [AMR+98, GGMS97, GM95, GMS93, KLMM97, RKRC96, SLT91]. Developing maintenance techniques for materialized web views over dynamic web data sources is more challenging because there is no schema restricting the structure of all the web data sources, and because the shareability of web data sources enables each update on a single data source to potentially affect many others in the web data graph [CAW98]. To compute web view "patches" for incremental maintenance in response to an update, a large number of accesses back to base data is usually inevitable. It is, however, clearly undesirable because of the likely impact of heavy network overhead and intense contention for base data.
In this paper, we model the distributed web data sources as a hierarchical graph, over which a web view can be specified. We develop a strategy that separates web view evaluation into two phases. We analyze the query pattern, conduct the first evaluation phase along aggregation paths so as to distill a subgraph of web data objects, and then set up index structures for them. By storing in these indexes the value evaluation results computed along aggregation paths, the second evaluation phase can conduct both the web view computation and its maintenance more efficiently. Especially when maintaining a materialized web view, our approach can lead to large savings in base data access costs compared to alternate solutions in the literature, by integrating the updated objects with their precomputed aggregation results.
1.2 Related Work
Incremental view maintenance techniques attempt to reduce the number of references back to remote distributed base data sites through a better utilization of local data resources. The naive recomputation method would heavily impact the query performance and worsen the load on base data sites. An incremental view maintenance approach considers referencing back to base data as the last resort and investigates strategies to minimize the examining scope of base data. If incremental maintenance can be done using the local cache site information only, we call this view self-maintainable with respect to these updates.
There is some work tackling the view maintenance problem in the context of semi-structured data available on the web. Suciu et al. [Suc95] assume semistructured data to be rooted graphs, composed of a subset (subtrees) by union, concatenation, juxtaposition and recursion operations. For both the relational and the nested relational models that are subsumed, the queries are join-free but the lengths of traversal
paths are not restricted. Zhuge and Garcia-Molina [ZG98a] address general issues related to graph-structured views and their view maintenance. They simplify views by only considering select-project view specifications over tree-structured databases, and the resultant view is a flat collection of objects without any edges between objects.
Zhuge et al. [ZG98b] also study the characteristics of self-maintainability that can be exploited to avoid any access to base data for irrelevant updates. They also show how to perform those tests when different update information is available. This strategy is not an all-purpose solution, however: for relevant updates it falls back to conventional maintenance techniques, and no improvement is achieved in these cases. The limitation of their work also lies in the strong assumption of a simple view specification.
Abiteboul et al. [AMR+98] generalize these previous studies to cover arbitrary graph-structured databases. Their approach can handle joins and the resultant view is a structured sub-graph of the base graph instead of just a flat collection of objects. Their incremental maintenance algorithm minimizes the searching scope by directly applying the updated object instance to the view specification, thus it avoids the accesses to all the other objects within the same target set of the corresponding variable. This approach needs an auxiliary structure for the relevant objects of the variables that appear in the web view specification.
1.3 Contributions
The contributions of our work are:
- Propose the Query Graph (QG) to represent explicitly the path query pattern imposed over the data graph by a web view.
- Develop a view evaluation strategy to reuse the common aggregation path index structure among a set of web view specifications, which have the same path specification, but may differ from each other in value predication or view selection predicates.
- Establish the Aggregate Path Index (APIX) for objects that conform to the path specification and accommodate their value evaluation aggregation results.
- Describe algorithms for efficiently deriving a variety of view selections and maintaining the materialized web view upon updates by checking self-maintainability and cheaper computation of web view “patches” (in terms of minimizing accesses to base data) based on the APIX.
- Analyze the cost of our approach and demonstrate that it wins over alternative state-of-art solutions.
1.4 Outline
In Section 2, we give a detailed specification of the basic concepts of web views, the web data model, our assumptions, and the problem description. In Section 3, we present our basic solution approach surrounding the QG and the APIX structures. In Section 4, our maintenance algorithms are described under different update scenarios. A cost analysis and experimental comparison between our algorithm and state-of-the-art solutions is conducted in Section 5. We wrap up our discussion in Section 6.
2 Background on Web Views
2.1 Web Data Model
Numerous data models have been proposed in the literature for semi-structured data [Aro97, Mih96, AMM97, FFLS97, CRCK98]. Recently, XML is emerging as a standard universal data exchange format on the Internet; it utilizes a hierarchical structure with rich, powerful links and naming mechanisms. We believe that these qualities of XML make it a perfect fit for modeling web data sources. Hence, we envision that web data sources can be structured as a hierarchical graph model by parsing each tagged XML element as an object and by capturing each hyper-link of XML as a directed edge attached with a label indicating the type of this parent-child relationship. Basically, the model suggested for XML objects is quite similar to the Object Exchange Model (OEM) [PGMW95] but has some extensions (such as each object having knowledge of its parent object).

**Figure 1:** Motivating Web Database Example
Figure 1 shows the structure of our running example E-mall web database. An E-mall integrates information sources from a large number of shops, each of which has its products advertised. For instance, most of the shops within this E-mall have their names, sale categories and product information published. For each product, information such as its name, price and component items is expected to be provided. However, this product information structure is not fixed and can be flexible. We characterize the basic features of such a database below; a minimal code sketch of the model follows the list:
- A database is a single-rooted, labeled, acyclic directed graph. Root is the only entry point of it. Each node in the graph is an object with a unique id (such as 6/10) and a unique label (such as kit). Each
object can have multiple parent objects but with the same label linked to it. Each labeled edge represents a single-step path from a parent object to its child object.
- Each object in OEM is either atomic or complex. An atomic object has a value of one primary type (such as an integer, string, or image). The value for a complex object can be seen as a collection of subobjects taking the form of <label, oid> pairs. A complex object never has a primary type value of itself (as in XML, the value for one attribute of an XML element can be modeled as a child atomic object).
- An object with the Null value is a specific case. An object in such a state can either become a complex object by adding an outgoing edge to an atomic object, or turn out to be a real atomic object by changing the null value to a value of another primary type. On the other hand, a complex object can be changed back to an atomic object by removing all the links to its child objects.
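To make this model concrete, here is a minimal Python sketch of such an OEM-like object graph. It is our own illustration, not part of the paper; the class and the oids in the fragment are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class WebObject:
    """An OEM-like web data object: atomic if `value` is set, complex otherwise."""
    oid: str                                     # e.g. "&3"
    value: object = None                         # primary-type value for atomic objects
    children: dict[str, list["WebObject"]] = field(default_factory=dict)  # label -> child objects
    parents: list["WebObject"] = field(default_factory=list)

    def add_child(self, label: str, child: "WebObject") -> None:
        # adding an outgoing labeled edge turns a Null object into a complex one
        self.children.setdefault(label, []).append(child)
        child.parents.append(self)

# Tiny fragment in the spirit of the running E-mall example (oids are illustrative only).
root = WebObject("&1")                    # the E-mall root
shop = WebObject("&3"); root.add_child("shop", shop)
cat = WebObject("&8", value="toy"); shop.add_child("category", cat)
kit = WebObject("&9"); shop.add_child("kit", kit)
price = WebObject("&15", value=45); kit.add_child("price", price)
```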
### 2.2 Web View Specification
In this paper, we focus on exact path expressions that specify each single-step path. We give the general form of a WVS as:
```
Define web view favorite_products as
select c, k, p
from E-mall.shop s, s.category c, s.kit k, k.price p, k.item i
where c = "toy" and p < $50 and i = "book"
with k.category c, k.price p
```
**Figure 2: Example Web View Specification**
In our WVS definition, the selection list can specify more than one type of object to be returned by the path and value conditions that consider joins.
Given the example in Figure 1, the WVS shown in Figure 2 retrieves a collection of favorite_products, each of which is found in a "toy" shop of this E-mall, costs less than $50, and contains at least one item that is a book. This web view is constructed from each product object together with its category and price objects. Variables (such as s, k, c, p and i) attached to both ends of each path each designate a set of objects (for example, s binds to the object set {&2, &3, &4}). Variables can be distinguished according to their location in the global graph.
### 2.3 Basic Types of Web Updates
Like previous work [AMR+98], we consider three types of basic updates on web data source: Ins, Del and Chg. <Ins, o₁, l, o₂> and <Del, o₁, l, o₂> represent the insertion and deletion of the edge with label l from object o₁ to object o₂. <Chg, o, OldVal, NewVal> denotes the change of the value of the atomic object o from OldVal to NewVal. For both insertion and deletion, o₁ must be a complex object while o₂ can be either an atomic object or a complex one.
Note that these basic update transactions would affect nothing if \( o_1 \) is an unreachable object, and this reachability of \( o_1 \) will not be changed by an \textit{Ins} or \textit{Del} operation since it is at the start of the edge \( l \). On the other hand, \( o_2 \) together with all its descendant objects becomes reachable after an insertion operation while it may be unreachable after a deletion operation.
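As a small illustration (the record names are ours, not the paper's), the three basic update types can be written as plain records:

```python
from dataclasses import dataclass

@dataclass
class Ins:            # <Ins, o1, l, o2>: insert the edge labeled l from o1 to o2
    o1: str
    label: str
    o2: str

@dataclass
class Del:            # <Del, o1, l, o2>: delete the edge labeled l from o1 to o2
    o1: str
    label: str
    o2: str

@dataclass
class Chg:            # <Chg, o, OldVal, NewVal>: change the value of atomic object o
    o: str
    old_val: object
    new_val: object

updates = [Ins("&2", "kit", "&10"), Chg("&6", "fashion", "toy")]
```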
3 Evaluation Strategy for Web View
In the web data graph, there is no strict schema restriction. Note, however, that a WVS asserts a query specification over such a data graph; this specification imposes a query pattern that retains only objects conforming to their corresponding aggregation paths. In this section, we study this query pattern imposed by a WVS, characterize it by a structural graph of aggregation paths, and develop our evaluation strategy based on it. We then introduce an APIX structure for those objects to store their value evaluation results computed along these aggregation paths.
3.1 Query Pattern of Web View Specification
When initially setting up a web view, we need to access the base data to identify the relevant information. The WVS asserts a query pattern over the base data graph with two kinds of condition restrictions – \textbf{Path Conditions (PC)} and \textbf{Value Conditions (VC)}.
3.1.1 Path Conditions and Value Conditions
Web view evaluation involves path evaluation and value evaluation. The \textbf{PC} reflects the evaluation criteria on an object conforming the path specifications of the WVS. It corresponds conceptually to an overall path pattern graph structure, within which each object falls into one type of evaluation pattern on its outgoing aggregation path set. This road-map-like query pattern serves as a filter to distill out of the base data graph a small conforming subgraph. The \textbf{VC} on the other hand, only includes value evaluation criteria on atomic objects. In addition, there exists an implicit value aggregation function for each complex object to compute its aggregation value result along its required path set. The final web view consists of objects that satisfy both the \textbf{PC} and the \textbf{VC}. Based on the separation of these two types of conditions in a WVS, we propose a two-phase-evaluation strategy by first applying an overall path pattern graph against the base data graph for path evaluation and second by propagating bottom-upwards the computation of the aggregation values for the value evaluation starting from the primitive conditions at the atomic leaf objects.
3.1.2 Query Graph
\textbf{Definition 1 (Condition Relevant Path – CRP)} Each single-step path that is represented as a label in the \textit{from} as well as in the \textit{where} clauses of a WVS is relevant to the path evaluation, and thus is referred to as a \textbf{Condition Relevant Path (CRP)}. We refer to CRPs in two different contexts. For a complete path that concatenates adjacent single-step paths starting from the root variable and ending with some atomic variable, we use the term \(a\)-\(CRP\) in this global context. For a variable \(v\), the term \(v\)-\(CRP\) denotes the set of aggregation single-step paths imposed on \(v\) in this local context.
**Example 1** For the given example in Figure 1, the single-step paths (each with two variables attached at both ends) in the WVS (given in Figure 2) are \((e\) is the root variable for E-mall).
\[ e.\text{shop}\ s, \ s.\text{category}\ c, \ s.\text{kit}\ k, \ k.\text{price}\ p, \ k.\text{item}\ i \]
Three complete \(a\)-\(CRPs\) (attached variables are eliminated to avoid the interference):
\[ \text{E-mall.\text{shop.\text{category}}}, \ \text{E-mall.\text{shop.\text{kit.\text{price}}}}, \ \text{E-mall.\text{shop.\text{kit.\text{item}}}} \]
The \(v\)-\(CRP\) for the root variable \(e\) is: \(\text{shop}\). There is a procedure of computing \(v\)-\(CRP\) sets from a given set of complete \(a\)-\(CRPs\) of the WVS (given in Figure 2):
\[
\begin{align*}
a\text{-}CRP_1 &: \ \text{E-mall.shop.category} \\
a\text{-}CRP_2 &: \ \text{E-mall.shop.kit.price} \\
a\text{-}CRP_3 &: \ \text{E-mall.shop.kit.item}
\end{align*}
\]
(Figure: the three \(a\)-CRPs overlapped into the Query Graph)
For our given example, this is done as follows. \(a\)-\(CRP_2\) and \(a\)-\(CRP_3\) overlap with \(a\)-\(CRP_1\) on the segment E-mall.shop but not after variable \(s\), while they diverge from each other after their common segment E-mall.shop.kit. Thus we obtain for variable \(s\) (which represents E-mall.shop) the \(v\)-\(CRP\) set {category, kit}, and for \(k\) (which represents E-mall.shop.kit) the \(v\)-\(CRP\) set {price, item}.
**Definition 2 (Query Graph – QG)** We construct an overall path pattern graph structure by overlapping the common segments of the \(a\)-\(CRPs\) and refer to this resultant graph as the **Query Graph (QG)** of a WVS. The QG can act as an overall query pattern against the base data graph for conducting the path evaluation.
**Example 2** A Query Graph corresponds to the WVS given in Figure 2. Variables marked by * are those whose \(v\)-\(CRP\) set is composed of more than one member outgoing path; thus a value aggregation function is prepared for each object of such variables. The QG summarizes all the path conditions and graphically illustrates the \(v\)-\(CRP\) set of each variable within it.
Given a set of WVSs that have the same path specification part but may differ from each other in their value predicates or view selection parts, we can capture them by the same QG, i.e., the same path query pattern.
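To make the derivation of the v-CRP sets concrete, here is a small sketch of our own, assuming the a-CRPs are given as dotted label paths; every proper prefix corresponds to a QG node and collects the single-step labels leaving it:

```python
from __future__ import annotations
from collections import defaultdict

# Complete a-CRPs of the example WVS, written as dotted label paths from the root.
a_crps = [
    "E-mall.shop.category",
    "E-mall.shop.kit.price",
    "E-mall.shop.kit.item",
]

def v_crp_sets(paths: list[str]) -> dict[str, set[str]]:
    """Overlap the common prefixes of the a-CRPs: for every prefix (a node of the
    Query Graph) collect the set of single-step labels that leave it."""
    out: dict[str, set[str]] = defaultdict(set)
    for p in paths:
        labels = p.split(".")
        for i in range(1, len(labels)):
            prefix = ".".join(labels[:i])       # e.g. "E-mall.shop"
            out[prefix].add(labels[i])          # outgoing label, e.g. "kit"
    return dict(out)

print(v_crp_sets(a_crps))
# e.g. {'E-mall': {'shop'}, 'E-mall.shop': {'category', 'kit'}, 'E-mall.shop.kit': {'price', 'item'}}
```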
### 3.2 Our Two-Phase-Evaluation Strategy
Path evaluation is usually conducted in a Depth-First-Search (DFS) traversal process \([\text{AQM}^{+}97, \text{Abi}97]\). The path evaluation may be conducted for the same object several times, accessing it each time it is involved in the evaluation of one single path condition. Also, once we reach an atomic object via such a DFS path evaluation, we evaluate its VC (referred to as value evaluation) and then traverse upwards for other unprocessed path evaluations. This way, path evaluation and value evaluation are mixed, leading to evaluation inefficiency. Hence, we propose our two-phase-evaluation strategy that separates path evaluation from value evaluation. In particular, for each object touched in a Breadth-First-Search (BFS) traversal, we conduct a once-and-for-all path evaluation against its \textit{v-CRP} path set.

<table>
<thead>
<tr>
<th>c</th>
<th>&8</th>
<th>&12</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>p</th>
<th>&15</th>
<th>&18</th>
<th>&21</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>i</th>
<th>&16</th>
<th>&19</th>
<th>&22</th>
<th>&23</th>
</tr>
</thead>
<tbody>
<tr>
<td>T</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
(b) Seed truth values assigned according to the value evaluation of the atomic objects
**Figure 3: Evaluation Passage Graph**
With the \textit{QG} serving as the guide for the required query paths and variables within it indicating their \textit{v-CRP} sets, we first conduct the path evaluation. The result of the path evaluation pass is a subgraph of web data objects that are distilled from the base data graph. We refer to such a virtual subgraph structure as the \textit{Evaluation Passage Graph (EPG)} (see Figure 3) and build an \textbf{Aggregate Path Index (APIX)} for objects captured by it.
### 3.2.1 Aggregate Path Index
Path evaluation proceeds as a BFS traversal process starting from the root object. For each object encountered in the traversal, we set up its \textit{APIX} structure and initialize some auxiliary information needed for the second phase of computing value evaluation aggregation results.
The structure of the \textit{APIX} for each trapped object, as shown in Figure 4, is cross-tabular: one tuple for each distinct object (for example, object \&3); one column for each outgoing path member in its \textit{v-CRP} set (represented by its corresponding variable, i.e., the two columns for object \&3 are "category" \textit{c} and "kit" \textit{k}). Each cell hosts three measures of the child \textit{oid} set of one outgoing path type (for example, \{\&9, \&10\} is the child object set targeted by the "kit" path of object \&3): the \textit{oids} themselves, the \textit{Count} of child objects of that path type, and the cumulative truth value \( CT \) derived from that set of child objects. These three measures capture structural path information for the objects that conform to the \( QG \); hence we name it the Aggregation Path Index.

<table>
<thead>
<tr>
<th>&</th>
<th>c</th>
<th>k</th>
</tr>
</thead>
<tbody>
<tr>
<td>&3</td>
<td>oids</td>
<td>{&8, &9, &10}</td>
</tr>
<tr>
<td></td>
<td>Count</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>CT</td>
<td>1</td>
</tr>
<tr>
<td>&4</td>
<td>oids</td>
<td>{&12, &10, &13}</td>
</tr>
<tr>
<td></td>
<td>Count</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>CT</td>
<td>0</td>
</tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th>&</th>
<th>c</th>
<th>k</th>
</tr>
</thead>
<tbody>
<tr>
<td>&9</td>
<td>oids</td>
<td>{&15, &16}</td>
</tr>
<tr>
<td></td>
<td>Count</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>CT</td>
<td>0</td>
</tr>
<tr>
<td>&10</td>
<td>oids</td>
<td>{&18, &19}</td>
</tr>
<tr>
<td></td>
<td>Count</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>CT</td>
<td>1</td>
</tr>
<tr>
<td>&13</td>
<td>oids</td>
<td>{&21, &19, &22, &23}</td>
</tr>
<tr>
<td></td>
<td>Count</td>
<td>1</td>
</tr>
<tr>
<td></td>
<td>CT</td>
<td>1</td>
</tr>
</tbody>
</table>
**Figure 4:** Aggregation Path Index (APIX)
Besides these three explicit measures for each child object set of an object, as illustrated in Figure 4, there are further measures associated with each object: the aggregated truth value \( T \); the flag \( In_{EPG} \) indicating whether the object itself belongs to the \( EPG \) (if not, the space allocated for its index information can be released); and the parent object set (recall from Section 2 that each object except the root has a unique type of incoming edge).
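As a minimal sketch (our own layout, not the paper's), one APIX tuple could be represented as follows; the per-path measures mirror the oids/Count/CT cells of Figure 4, and `t`, `in_epg`, and `parents` are the extra object-level fields just described. The example values are illustrative.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PathMeasures:
    oids: list[str] = field(default_factory=list)   # child oids reached via this outgoing label
    count: int = 0                                   # Count of those children
    ct: int = 0                                      # cumulative truth value over the children

@dataclass
class ApixEntry:
    oid: str
    per_path: dict[str, PathMeasures] = field(default_factory=dict)  # one column per v-CRP label
    t: int = 0                                       # aggregated truth value of the object
    in_epg: bool = False                             # whether the object survived path evaluation
    parents: list[str] = field(default_factory=list)

# e.g. an entry for shop object &3 in the spirit of Figure 4 (values illustrative):
entry = ApixEntry("&3",
                  {"category": PathMeasures(["&8"], 1, 1),
                   "kit": PathMeasures(["&9", "&10"], 2, 1)},
                  t=1, in_epg=True)
```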
### 3.2.2 Path Evaluation
Now we illustrate the process of path evaluation by the procedure \( PE \) (see Figure 5). Starting from an object \( o \) (usually the root object, since BFS("root") is called), a BFS traversal of the base data graph conducts the evaluation of each encountered object against its aggregation path set. If the object has all the types of outgoing paths asserted by its corresponding \( v-CRP \) set, we mark its \( In_{EPG} \) as \( True \) and assign it an APIX tuple of its type. For example, since the \( v-CRP \) set for variable \( s \) is category and kit, object \&2 fails the path evaluation because it lacks a "kit" type of child object, while object \&3 meets the requirement and is thus distilled.
Along with the traversal, we fill in the oids and the Count information (see Figure 4). At the end of this path evaluation pass, an \( EPG \) of data objects has been distilled, with their path index as well as some initialization information captured in the APIX structure.
### 3.3 Aggregation Function for Value Evaluation
Value predicates specified in a WVS can be directly dealt with by the evaluation on atomic objects. Then these value evaluation results need to combined with the path evaluation results of the distilled subgraph of objects, each of which has an aggregation function attached for propagating upwards the value evaluation results of its children objects. For example, in Figure 3, the truth value of the object with oid \&9 is the
### Procedure BFS (Labels)
// Labels is a queue of labels still to be evaluated;
// BFS takes a label \( l \) from the front of Labels and evaluates the object set of \( l \).
get a label \( l \) from the front of the queue Labels; let \( v \) be the end variable of \( l \);
if \( v \) is a leaf variable
&nbsp;&nbsp;&nbsp;&nbsp;if Labels is empty, return;
else // set up the APIX structure for the objects of variable \( v \)
&nbsp;&nbsp;&nbsp;&nbsp;for each label \( l_i \) in the \( v \)-CRP set of \( v \), with end variable \( v_i \)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;put \( l_i \) at the end of the queue Labels;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for each object \( o \) bound to \( v \)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;// initialize the information for this child object set
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;o.Obj[\( v_i \)] = {}; o.Count[\( v_i \)] = 0; o.CT[\( v_i \)] = 0; o.In_EPG = True;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for each child object \( o_{ij} \) of \( o \) reached via \( l_i \)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;o.Obj[\( v_i \)] = o.Obj[\( v_i \)] + {\( o_{ij} \)}; o.Count[\( v_i \)] += 1;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if o.Count[\( v_i \)] = 0, o.In_EPG = False;
BFS(Labels);
#### Figure 5: Path Evaluation
... conjunction of the truth values of its children objects &15 and &16.
We start the value evaluation from the atomic objects of the \( EPG \), assigning each a truth value (true/false, or 1/0) based on whether it complies with the predicates of the WVS. Figure 3 shows the truth values attached to the atomic objects of the \( EPG \). For example, 1 is the truth value for object &8, which is bound to variable \( c \), since it satisfies the predicate asserted on \( c \): exists \( x \) in \( c \): \( x = \)"toy".
#### Theorem 1 (Up-Propagating Truth Values)
Each value aggregation function is decided by the aggregation path pattern of a variable, thus for objects with the same \( v \)-CRP pattern the function is the same. However, the aggregation result for each object is decided by its actual measures. The \( CT \) value for each of its children object sets is derived by a cumulative computation of the \( T \) value of all children object members within the set, then its own \( T \) value is computed via a conjunction of all the derived \( CT \) values for its children object sets.
The reason for the conjunctive method of computing the truth value \( T \) from all the \( CT \) values of its path divisions is straightforward: the aggregation paths, at their meeting point, naturally assert a conjunctive relationship among all the participating paths, reflecting both the path evaluation and the value evaluation. Within the same path division, however, all the children objects are under the same value evaluation and are thus considered by their parent object as one single contributor to the value evaluation result along this path. The aggregation conducted at different places is coordinated in a bottom-up way, like the reverse of the BFS process. At the end of this value evaluation pass, the truth values \( T \) of all objects of the \( EPG \) have been obtained via this up-propagation.
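A compact sketch of our own of this up-propagation, assuming, as in the example predicate, an existential quantifier within a path division (so CT is a disjunction over the children) and a conjunction across path divisions:

```python
def propagate_truth(oid, apix, seed_truth):
    """Bottom-up propagation of truth values (our sketch of Theorem 1).
    `apix[oid]` maps each outgoing label to the list of child oids; `seed_truth`
    holds the 0/1 value-evaluation results of the atomic objects."""
    if oid in seed_truth:                           # atomic object: seeded by the predicates
        return seed_truth[oid]
    t = 1
    for label, child_oids in apix[oid].items():
        ct = 1 if any(propagate_truth(c, apix, seed_truth) for c in child_oids) else 0
        t = t and ct                                # conjunction across the path divisions
    return t

# Kit object &9 with a price child &15 and an item child &16 (seed values illustrative).
apix = {"&9": {"price": ["&15"], "item": ["&16"]}}
seed = {"&15": 1, "&16": 0}
print(propagate_truth("&9", apix, seed))            # -> 0, since the item condition fails
```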
### 3.4 Deriving the Web View
For each data object distilled by the path evaluation, its \( APIX \) structure is constructed to capture its aggregation path pattern as well as to accommodate its materialized value evaluation result. Thus a variety...
of web views could be easily derived by reusing this APIX structure.
**Definition 3 (View Variables & View Paths)** In a WVS, a list of variables specified in the *select* clause indicates the desired data, called **View Variables (VV)**. A **View Path (VP)** is a path that leads to a VV.
**Example 3** Both the *a-CRP*s and VPs can be shown in one QG by augmenting the QG with dashed path segments for the VPs. The path segment where a VP overlaps the QG is called a **View Passing Path**. In the QG depicted in the figure "View Path and Query Graph" (not reproduced here), there are six complete *a-CRP*s, and the dashed path is the VP to a VV.
**Theorem 2 (View Objects Selection)** Each view object set can be selected via the objects that are along the View Passing Path and with satisfiable value evaluation results (T values are 1s).
The reason lies in the simple fact that the objects of the EPG with satisfiable T values indicate that they successfully passed both the path evaluation and the value evaluation and thus lead to the right view objects to be chosen. Thus no matter how many view variables are specified in the WVS, their view objects can easily be retrieved by utilizing the satisfiable objects along their View Passing Paths. Once we have
retained all desired view objects, restructuring among them becomes a trivial job using local computation costs only.
Satisfiable objects are applied to their corresponding variables in the WVS to generate the web view as shown by the Web View Object Selection procedure.
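A rough sketch under our own data layout of this selection step: starting from the root, follow the View Passing Path label by label and keep only objects whose aggregated truth value T is 1. The helper structures `children` and `t_values` are assumptions, not the paper's notation.

```python
def select_view_objects(root_oid, view_path, children, t_values):
    """Follow the View Passing Path label by label from the root and keep only
    objects whose aggregated truth value T is 1 (our sketch of Theorem 2)."""
    frontier = [root_oid]
    for label in view_path:                               # e.g. ["shop", "kit"]
        next_frontier = []
        for oid in frontier:
            for child in children.get(oid, {}).get(label, []):
                if t_values.get(child, 0) == 1:           # passed path and value evaluation
                    next_frontier.append(child)
        frontier = next_frontier
    return frontier

# Illustrative fragment: E-mall &1 -> shop &3 -> kits &9, &10, with only &10 satisfiable.
children = {"&1": {"shop": ["&3"]}, "&3": {"kit": ["&9", "&10"]}}
t_values = {"&3": 1, "&9": 0, "&10": 1}
print(select_view_objects("&1", ["shop", "kit"], children, t_values))    # ['&10']
```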
4 Approach for Materialized Web View Maintenance
To keep a materialized web view up-to-date with dynamic web data sources, we now propose efficient maintenance algorithms based on the cached APIX structure.
4.1 Checking Self-Maintainability
Updates are said to be irrelevant to the materialized web view if they would not cause any effect on it. With the local materialized auxiliary information stored in the APIX structure, we have a set of self-maintenance tests that avoid any remote access to base data in such cases. We give below the list of cases in which irrelevant updates are discovered by our algorithm. The last two of them cannot be identified by other approaches without an APIX [ZG98b].
- For a \(<\text{Chg}, o, \text{OldVal}, \text{NewVal}>\) update, if the value evaluation result of \text{OldVal} is the same as that of \text{NewVal}, then this update is irrelevant. For example, given the base database example shown in Figure 1, \(<\text{Chg}, \&6, \"fashion\", \"bakery\">\) is an obviously irrelevant update, since neither "fashion" nor "bakery" belongs to the category "toy".
- For an \(\text{Ins}\) or \(\text{Del}\) operation, if either \(o_1\) or \(o_2\) do not belong to any of the CRPs, then the update is irrelevant. For example, operation \(<\text{Ins}, \&2, \"location\", \&24>\) (assume the value of the atomic object \&24 is "boston") has nothing to do with the path evaluation thus no effect on the materialized web view.
- In a \(<\text{Del}, o_1, l, o_2 >\) case, if neither \(o_1\) nor \(o_2\) is an object with its information materialized in the APIX, then the update is irrelevant. An example for this case is \(<\text{Del}, \&2, \"name\", \&5>\): \&2 was not materialized in the APIX, since it did not pass the path evaluation, so deleting this edge has no effect on the materialized view.
- For a \(<\text{Chg}, o, \text{OldVal}, \text{NewVal}>\) transaction whose value evaluation result changes, we check whether any parent object of \(o\) is materialized in the APIX. If no such parent object exists, this update is irrelevant. For example, if the value of object \&6 is changed to "toy", there is still no need to re-evaluate it, since the path evaluation is stopped by its parent object \&2, which does not have an outgoing path "kit".
The above checks form a sequence from simple to complex. The first test only requires checking the Chg update type and the old and new values of the updated object. The second one needs to check the path relevance of the affected object pair. These first two kinds of self-maintenance checks also appear in other work [ZG98b]. Our self-maintenance tests are more effective, as they can also discover the last two cases of irrelevant updates based on the information materialized in the APIX.
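Under an assumed data layout of our own (the helper callables `in_apix`, `crp_labels`, `evaluate_value`, and `parents_in_apix` are hypothetical), the four tests could be sketched as follows:

```python
def is_irrelevant(update, in_apix, crp_labels, evaluate_value, parents_in_apix):
    """Self-maintainability tests of Section 4.1 (our sketch).
    Returns True when the update provably cannot affect the materialized view."""
    kind = update[0]
    if kind == "Chg":
        _, o, old_val, new_val = update
        if evaluate_value(old_val) == evaluate_value(new_val):
            return True                            # test 1: truth value unchanged
        if not parents_in_apix(o):
            return True                            # test 4: no parent materialized in the APIX
    else:                                          # Ins or Del
        _, o1, label, o2 = update
        if label not in crp_labels:
            return True                            # test 2: the path is not condition-relevant
        if kind == "Del" and not in_apix(o1) and not in_apix(o2):
            return True                            # test 3: neither endpoint is materialized
    return False

# Example: inserting a "location" edge is irrelevant, since "location" is not a CRP.
print(is_irrelevant(("Ins", "&2", "location", "&24"),
                    in_apix=lambda o: False,
                    crp_labels={"shop", "category", "kit", "price", "item"},
                    evaluate_value=lambda v: 0,
                    parents_in_apix=lambda o: False))       # -> True
```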
### 4.2 Accessing Base Data
After these self-maintainability tests, only relevant updates remain. Hence we now would need to refer back to base data for maintaining the materialized web view and also the APIX. The maintenance task of the materialized APIX includes adding/deleting object tuples and keeping their measures up-to-date according to the structural changes as well as the modified value evaluation results. The latter could in turn trigger maintenance procedures for maintaining the materialized web view. Later we show that the cost of this maintenance approach in terms of the number of accesses to base data is much reduced compared to the alternate solutions.
Procedure **Ins** ($o_1$, $l$, $o_2$)
1. if $o_2 \not\in$ EPG, run the two-phase evaluation (BFS) on the subgraph rooted at $o_2$;
2. if $o_2$ still $\not\in$ EPG or $o_2$.T = 0, judge the update irrelevant and exit;
3. if $o_1 \not\in$ EPG, re-evaluate the $v$-CRP set of $o_1$;
4. else cache $o_2$ in the EPG; $o_1$.Count[$l$]++; $o_1$.CT[$l$]++;
   if $o_1$.CT[$l$] > 0 and $o_1$.T = 0, recompute $o_1$.T as the conjunction of $o_1$.CT[$l_i$] over all labels $l_i$;
   if $o_1$.T becomes 1, propagate the effect to its parent objects.
**Figure 7:** Insertion Maintenance on APIX
Procedure **Del** ($o_1$, $l$, $o_2$)
1. if $o_1 \not\in$ EPG or $o_2 \not\in$ EPG, judge the update irrelevant and exit;
2. $o_1$.Count[$l$]--;
3. if $o_1$.Count[$l$] = 0, drop $o_1$ from the APIX and propagate the effect to its parent objects;
4. else $o_1$.CT[$l$]--;
   if $o_1$.CT[$l$] = 0, set $o_1$.T = 0 and propagate the effect to its parent objects.
**Figure 8:** Deletion Maintenance on APIX
### 4.2.1 Insertion Scenarios
Maintenance of the APIX upon an insertion case $<\text{Ins}, o_1$, $l$, $o_2>$ is shown in procedure Ins($o_1$, $l$, $o_2$) (see Figure 7). More tuples of objects are usually newly cached into the APIX due to the insertion updates.
The edge $l$ has been checked by the self-maintainability test and hence is sure to be one of the CRPs. We conduct the two-phase-evaluation on the data subgraph starting from $o_2$. If the aggregated value evaluation result of $o_2$ turns out to be 0, then this insertion is an irrelevant update. Otherwise the method $\text{Inc}_{CT}$ is
called for propagating the effect of the newly added edge to the satisfiable object $o_1$.
We have no materialized information about an object if it fails the path evaluation of its $v$-CRP set. Thus if $o_1$ wasn’t materialized in the API at that time, then the newly introduced edge $l$ from $o_1$ to $o_2$ causes us to re-evaluate the $v$-CRP set of $o_1$. Only if the path evaluation succeeds for $o_1$ and none of its parent objects also wasn’t materialized in the API, then the next upper level path evaluation is carried on. Otherwise, if the path evaluation for $o_1$ fails, we can stop the process since the update is already judged to be irrelevant. If the path evaluation for $o_1$ succeeds and $o_1$’s parent objects exist in the API, then the former broken path passages via these objects to $o_2$ now are conductive. Along with this upwards path evaluation, we carry on the value evaluation and accommodate their value evaluation results in the APIX (see Figure 9).
We present two insertion cases to illustrate the effect of the maintenance process on the materialized APIX.
**Scenario 1:** $<\text{Ins}, \&2, "kit", \&10>$

**Scenario 2:** $<\text{Ins}, \&10, "item", \&16>$

[Figure 9 tabulates, for each object tuple affected by the two insertions, its per-label oids, Count and CT entries in the updated APIX.]

**Figure 9:** Updated Aggregation Path Index (APIX)
### 4.2.2 Deletion Scenarios
Upon a deletion case $<\text{Del}, o_1, l, o_2>$, the update is first screened for irrelevancy by the self-maintainability test: if either $o_1$ or $o_2$ does not exist in the APIX, the deletion is irrelevant. If, however, the deleted edge is $o_1$'s only outgoing path of type $l$, the deletion invalidates the aggregation path restriction on $o_1$ and thus causes the deletion of its tuple from the APIX. Correspondingly, we propagate the effect of this deletion upwards. The deletion procedure $\text{Del}(o_1, l, o_2)$ is depicted in Figure 8.
### 4.2.3 Change Scenarios
A $<\text{Chg}, o, \text{OldVal}, \text{NewVal}>$ update is relevant if it yields different value evaluation results for $o$ before and after the value change. If the value evaluation of NewVal is 1 (i.e., the value evaluation of OldVal is 0), then it is equivalent to a set of $<\text{Ins}, o_1, l, o>$ insertions, each with $l$ standing for the only type of incoming edge of $o$ and with $o_1$ representing one of the parent objects of $o$. Similarly, if the value evaluation of NewVal is 0 (i.e., the value evaluation of OldVal is 1), then it is equivalent to a set of $<\text{Del}, o_1, l, o>$ deletions.
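As a small illustration, a relevant Chg update could be rewritten into the equivalent Ins or Del updates as sketched below; `parents`, `ins` and `del_` are hypothetical helpers standing in for the parent look-up and the maintenance procedures of Figures 7 and 8.

```python
def handle_chg(o, old_val, new_val, value_pred, incoming_label, parents, ins, del_):
    """Rewrite a <Chg, o, old_val, new_val> update into Ins or Del updates."""
    old_t, new_t = value_pred(old_val), value_pred(new_val)
    if old_t == new_t:
        return                             # irrelevant: the value evaluation did not change
    for o1 in parents(o):                  # o is reached from each parent via its only incoming label
        if new_t == 1:
            ins(o1, incoming_label, o)     # o now satisfies the value predicate
        else:
            del_(o1, incoming_label, o)    # o no longer satisfies the value predicate
```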
### 4.3 Computation of Web View “Patches”
The maintenance of the materialized APIX involves adding or deleting data object tuples and fixing the value evaluation results of some data objects. In the APIX, T values that newly become true (1), either through added data object tuples or through changed value evaluation results, trigger the ADD maintenance statements for computing web view “patches”. Conversely, T values that cease to be true (1), either through the deletion of data object tuples or through changed value evaluation results, trigger the DEL maintenance statements.
Example 4 For generating the view object “patches” to be added, we can apply ADD maintenance statements of the following form:

    ADD+ = select the view path list from the view paths,
           for each view passing path,
           applying each possible pair of bindings of objects
           with newly true (1) T values
### 5 Evaluation of Costs for Web View Maintenance
Like other work, we assume that the main cost of computing a web view can be estimated by the number of base objects being fetched. This is based on the fact that each object of the base databases may be quite large and its retrieval takes time. In fact, one could even assume that these objects (XML documents, for example) reside on different servers on the Internet. Hence each access to a base data object may require locating a URL and an HTTP transfer across the network.
### 5.1 Cost Factors
Next we consider the key factors that account for the cost. The first two depend on the query pattern, as shown by Figure 3.2.2, while the last two are overall measures of a base database.
- $C$ (object occurrences): how many object instances bind to a variable.
- $M$ (outgoing label diversity): how many types of outgoing query paths a variable has.
- $H$ (height of the base data graph): the length of the longest path from the root to an atomic object.
- $N$ (total number of base data objects): the size of the base database.
From the first two parameters, we can estimate the number of children of an object based on the characteristics of its binding variable: $M \times C$ is the rough number of children of an object if $C$ is fairly uniform. If the directed graph structure is balanced (the path lengths of atomic objects do not differ much from each other) and the deviation of $M$ across variables is small, then the fan-out from one level to the next can be estimated as $M \times C$. Thus, after descending $H$ levels, the number of atomic data instances is about $(M \times C)^H$. The total number of base data objects is the sum of the numbers of objects at each level.
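As a quick illustration of these estimates, the sketch below computes the per-level population and the total number of objects under the stated uniformity assumptions; the concrete parameter values are made up for illustration.

```python
def estimate_sizes(M, C, H):
    """Estimate object counts for a balanced base data graph.

    M: outgoing label diversity, C: object occurrences per label,
    H: height of the base data graph; the fan-out per object is roughly M * C.
    """
    fanout = M * C
    per_level = [fanout ** h for h in range(H + 1)]   # level 0 is the root
    return per_level, sum(per_level)

# Example: M = 2 labels, C = 3 occurrences per label, height H = 4.
levels, total = estimate_sizes(2, 3, 4)
print(levels[-1], total)   # about (2*3)**4 = 1296 atomic objects at the bottom level
```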
On the reverse side, we use $C_I$ to measure how many parent object instances an object has. The incoming label diversity $M_I$ is 1 according to our data model specification. Usually, $C_I$ is much smaller than $C$. Alternative maintenance techniques that access base data starting from the root object may have to examine a large space. Our maintenance approach works in the reverse direction. Starting from the touched object, our method examines the parent objects upwards; their structural and value evaluation information has already been computed in the initialization phase and materialized in the APIX structure. Integrating this precomputed aggregation information with that of the updated object, we can quickly derive the new effects. The cost is a function of $C_I$ instead of being on the order of $M \times C$.
The cost spent in the evaluation phase is determined by a Reduction Factor for each variable, which describes the ratio of the number of objects being filtered out to the total size of that object set. We formulate below the costs for the evaluation and maintenance phases.
### 5.2 Web View Computation Cost during the Evaluation Phase
Using the web view specification in Figure 2 to evaluate the base database shown in Figure 1, both the naive algorithm and Abiteboul’s algorithm [AMR+98] conduct a DFS traversal during their evaluation phase, and the total number of objects they access is:
\[
\begin{align*}
Cost_{naive}^{\text{eval}} &= C_e + C_s + C_c + C_k + C_p + C_i \quad (\text{with } C_e = 1) \\
Cost_{abit}^{\text{eval}} &= C_e + C_s + C_c + C_k + C_p + C_i
\end{align*}
\]
In both the naive algorithm and Abiteboul’s algorithm, an object is evaluated regardless of whether it really has a complete set of outgoing paths conforming to its $v$-CRP. For example (see Figure 1), object $\&6$ of variable $c$ is evaluated in both the naive algorithm and Abiteboul’s algorithm even though it does not have an outgoing path to objects of variable $k$. Such accesses to base data objects are wasted effort. Our evaluation strategy conducts a once-and-for-all path evaluation in the course of the BFS traversal and thus eliminates unqualified objects from the evaluation space much earlier. In this way, we considerably reduce the number of accesses to base data. We later refer to our approach as $APIX$ as opposed to the other two approaches, $naive$ and $abit$.
**Theorem 3 (Reduction Factor for Objects to be Evaluated)** During the evaluation phase, the amount by which we cut down the costs is determined by the Reduction Factor. Assuming a uniform distribution over all combinations of the outgoing paths of an object, the Reduction Factor of a variable is the ratio of the occurrences that encompass at least the required outgoing path set to the total occurrences. It is inversely proportional to an exponential function whose base is 2 and whose exponent is the number of outgoing paths required by its $v$-CRP set. This Reduction Factor also indicates the storage space needed by the APIX.
In more detail, suppose that for a variable $v$ we have $M$ joint paths (outgoing labels) to evaluate, and each of these paths leads to a variable $v_i$ ($i$ from 1 to $M$). Evaluating the subobjects of any variable $v_i$ is worthwhile only if the object also has subobjects of all the other $M - 1$ variables. Under a uniform distribution, the probability of having all the other $M - 1$ variables is $\frac{1}{2^{M-1}}$. Applying this to our example database: variable $e$ has just one joint path, leading to $s$, so $\frac{1}{2^{M-1}} = 1$ of the object occurrences of $s$ need to be evaluated; variable $s$ has two joint paths, so $\frac{1}{2^{M-1}} = \frac{1}{2}$ of the object occurrences of variables $c$ and $k$ need to be evaluated. The access cost of our approach is:
$$Cost_{API}^{\text{eval}} = C_e + 1 \cdot C_s + \frac{1}{2} C_c + \frac{1}{2} C_k + \frac{1}{2} C_p + \frac{1}{2} C_i \quad (\text{with } C_e = 1)$$
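The sketch below applies the per-variable reduction factor $1/2^{M-1}$ to compare the evaluation costs; the occurrence counts are illustrative placeholders loosely following the example database, not measured values.

```python
def eval_costs(occurrences, joint_paths, root="e"):
    """occurrences: objects per variable; joint_paths: parent variable -> child variables."""
    naive = sum(occurrences.values())           # every binding is visited
    apix = occurrences[root]                    # the root variable is always evaluated
    for parent, children in joint_paths.items():
        m = len(children)                       # number of joint paths of the parent
        factor = 1.0 / (2 ** (m - 1))           # fraction of child objects worth evaluating
        for child in children:
            apix += factor * occurrences[child]
    return naive, apix

occ = {"e": 1, "s": 1000, "c": 2000, "k": 100000, "p": 100000, "i": 1000000}
paths = {"e": ["s"], "s": ["c", "k"], "k": ["p", "i"]}
print(eval_costs(occ, paths))                   # naive cost vs. reduced APIX cost
```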
### 5.3 Referring Back Cost during the Maintenance Phase
In what follows, suppose an operation $<\text{Ins}, \&9, \text{"item"}, \&24>$ happens, with the value of the atomic object $\&24$ being “book”. Thus $\&9$ is $o_1$, $\&24$ is $o_2$, and $o_1$ binds to variable $k$.
The maintenance by the naive approach involves a total recomputation of the view against the base databases. Hence the cost is the same as that of the initialization phase.
$$Cost_{naive}^{\text{maint}} = C_e + C_s + C_c + C_k + C_p + C_i$$
Abiteboul’s algorithm still needs to go back to the root object and re-evaluate the base data. However, it can apply $\&9$ directly to variable $k$, saving the accesses to the other objects of $k$. For example, for $<\text{Ins}, \&9, \text{"item"}, \&24>$, Abiteboul’s algorithm needs to access all the objects attached to $e$, $s$ and $c$ and the one object $\&9$ of variable $k$, while ignoring the other ($C_k - 1$) objects. Also, the objects to be evaluated for variables $p$ and $i$ (descendant variables of variable $k$) are restricted to the descendants of $\&9$.
Let the object occurrences of variables $p$ and $i$ stemming from object $\&9$ be $C_p'$ and $C_i'$, respectively; then the total number of objects accessed by Abiteboul’s algorithm is:
$$Cost_{abit}^{\text{maint}} = C_e + C_s + C_c + 1 + C_p' + C_i'$$
As proposed in Section 4, our algorithms avoid a large number of accesses to the base databases by detecting irrelevant updates. Even in the worst case, when access to the base databases is inevitable, we access only the base data objects reachable from the affected one (i.e., $\&9$). Thus the accesses to the objects of $s$ and $c$ are saved. The maximum number of base objects evaluated by our algorithm, under the same situation as Abiteboul’s, is:
$$Cost_{API}^{\text{maint}} = 1 + \frac{1}{2} C_p' + \frac{1}{2} C_i'$$
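Under the same illustrative assumptions, the three maintenance cost formulas can be compared as below; `cp_sub` and `ci_sub` stand for $C_p'$ and $C_i'$, the occurrences reachable from the affected object $\&9$.

```python
def maint_costs(occ, cp_sub, ci_sub):
    """Compare maintenance costs for one <Ins, &9, "item", &24>-style update."""
    naive = sum(occ.values())                                    # recompute the view from scratch
    abit = occ["e"] + occ["s"] + occ["c"] + 1 + cp_sub + ci_sub  # re-evaluate from the root
    apix = 1 + 0.5 * cp_sub + 0.5 * ci_sub                       # start at the touched object
    return naive, abit, apix

occ = {"e": 1, "s": 1000, "c": 2000, "k": 100000, "p": 100000, "i": 1000000}
print(maint_costs(occ, cp_sub=1, ci_sub=10))   # e.g. one price and ten items below &9
```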
### 5.4 Cost Comparison of Experimental Results in Three Scenarios
Experimental tests of the maintenance costs under three different update scenarios on the base database (see Figure 1) are shown in Figure 10. The database contains one E-Mall, 1000 shops, 100 products and 2 categories per shop, 10 items and 1 price per kit, and possibly other portions of the database that are irrelevant to the WVS. We observe from the experimental results that, in an Ins update situation, the cost of maintenance is mainly determined by the size of the subgraph starting from the affected object. A deletion involves propagating the changed value evaluation result or dropping object tuples locally at the APIX site. The number of affected object tuples is linear in the height of the affected object. A Chg update is the most expensive, since it affects an atomic data object, which sits at the bottom of the data graph; its maintenance involves the longest reverse evaluation from the bottom upwards.


Figure 10 also shows that our approach wins significantly over the other two methods, especially in the Del and Chg situations. In the second experiment, we use a view specification containing a chain of eight one-step paths in the from clause:
\[ \text{select } z_i \text{ from } A.L1 \ z_1, \ z_1.L \ z_2, \ ... \ z_7.L \ z_8; \]
Figure 11 shows that our algorithm achieves a more significant improvement in maintenance costs for deletion cases when the deletion update occurs closer to the root object. This is because the higher an object is (as opposed to the lowest atomic objects), the shorter the paths through which its dropped or changed value evaluation information must be propagated upwards to maintain the APIX.
For the experiment shown in Figure 12, we use the example WVS of Figure 2. We increase the number of shops in the database from 1000 to 5000 while keeping the same average ratios of kits per shop, items per kit, etc. Therefore, when the number of shops is doubled, for example, the size of the relevant subgraph is doubled. We conduct three kinds of insertion operations, adding kit, category and item edges respectively to the database, and compare the costs of our algorithm against Abiteboul’s. We see that both sets of maintenance costs grow linearly with the size of the relevant subgraph. Our approach gains the most compared to the alternative when inserting a lower object such as *item* (the opposite preference from the deletion cases). The reason is that the base data access time of our insertion maintenance approach depends on the size of the subgraph stemming from the inserted data.
All three experiments are designed to be similar to Abiteboul's work [AMR+98], so as to set up a realistic, uniform testbed against which the experimental results can be compared. These experimental studies help us identify the cases for which our algorithm is well suited: (1) the WVS is rich in path conditions and strict value predicates, so that the base database is evaluated against a more complex QG and the evaluation screens out undesired objects more effectively; (2) the database has a large ratio of average object occurrences to label diversity, indicating a good reduction factor; (3) *Del* operations occur on upper objects while *Ins* operations occur on lower ones; (4) storage expense is less important than network communication or the number of connections to be established.
### 6 Conclusion
In general, previous techniques for incremental web view maintenance either recompute the view from scratch or simply integrate the updated object directly with the variable it binds to in the web view. We propose an index-like mechanism, *APIX*, which is constructed according to the aggregation path restrictions of the WVS and accommodates the conforming objects together with their value evaluation results. In this way, a set of web view specifications can reuse their common path evaluation criteria and compute the final view objects from which the web views are restructured. Moreover, updated objects can be explored to derive the web view “patches” to be integrated into the materialized web view.
We conduct a cost analysis and experimental studies comparing the maintenance performance with the alternative state-of-the-art solutions. Both the theoretical analysis and the experimental results show that our approach wins over its competitors most of the time, and in some cases the gains of our strategy are significant, owing to a higher probability of self-maintainability or fewer accesses to base data. We develop a set of efficient strategies both for the initialization phase of web view evaluation and for its incremental maintenance.
We use XML files to simulate web data sources and have implemented the web view mechanism on top of them. We plan to extend our web view specification to accommodate regular path expressions as well, and to develop a more general APIX structure that allows for such an extension. We find that the storage space of the APIX can be economized by compressing multi-step non-branching paths into single-step paths; the corresponding maintenance strategy is feasible. Finally, we would like to consider exploiting XML Schema, its linking mechanism and its query language to optimize web view maintenance.
### References

### A Appendix

### A.1 Pseudo Code for Path Evaluation Algorithm
```java
Procedure Path_Evaluation (o)
{ if BFS_CRPs ("root", {o}) = True
    generate an "EPG" from o by including
    the objects whose In_EPG = T
}

Boolean BFS_CRPs (Labels, Obs)
{ int has_obj = 0;
  get a label l from queue Labels;
  let v be the ending variable of l;
  if v is a leaf variable
    return True;
  else {
    for each li in v-CRP, put li in Labels;
    for each object o in Obs[l] {
      o.set_Labels;
      if o.In_EPG = T {
        has_obj ++;
        for each label li in o.Labels
          Obs[li] = Obs[li] + o.Obs[li];
      }
    }
    if has_obj = 0
      return False;
  }
  return BFS_CRPs (Labels, Obs);
}
```
(a) Path Condition Evaluation Conducted in a BFS Traversal
```java
Object ::
Member
{ Boolean In_EPG = F;  int T = 1;
  Set Labels = ∅;  Set Obs[ ] = ∅;
  int Count[ ] = 0;  int CT[ ] = 0;
}

Method set_Labels
{ bind this object o with its variable v;
  if v is a variable for leaf nodes
    In_EPG = True;
  else {
    for each li in v-CRP {
      Labels = Labels + {li};
      Obs[li] = null;  Count[li] = 0;  CT[li] = 0;
      for each subObj oij paired with o via label li {
        Obs[li] = Obs[li] + {oij};
        Count[li] ++;
      }
    }
    for each label li in Labels {
      if Count[li] = 0
        In_EPG = False;
    }
  }
}
```
(b) Joint Variable Objects Index Structure Initialization
Figure 13: Path Evaluation Algorithm
### A.2 Pseudo Code for Aggregation Function
```java
Procedure Compute_Truth (EPG)
//Aggregation Function for computing truth value;
{ According to the QG within this EPG,
sort the variables bottom-up in a partial order;
excluding leaf variables,
for each variable v {
for each o of variable v;
o.comp_CT;
}
}
```
```java
Object :: Method comp_CT
{ T = 1;
  for each label li in Labels {
    for each paired subObj oij via label li
      CT[li] = CT[li] + oij.T;
    // decide only after all subobjects of label li have been summed
    if (CT[li] == 0) {
      T = 0;
      exit;
    }
  }
}
```
Figure 14: Aggregation Function for CT Value Computation
### A.3 Pseudo Code for Insertion and Deletion Algorithm
**Procedure Ins** \(o1, l, o2\)
```plaintext
{
  If o2 ∉ EPG
  {
    PE(o2);                       // builds a sub-"EPG" rooted at o2
    If o2 ∉ "EPG" or o2.T = 0
    {
      Judged to be an Irrelevant Update;
      exit;
    }
  }
  if o1 ∉ EPG
    re-evaluate on o1;
  else
  {
    cache o2 in EPG;
    o1.Count[l]++;  o1.CT[l]++;
    if o1.CT[l] > 0 and o1.T = 0
    {
      for each of the other labels li
        o1.T = o1.T × o1.CT[li];
      if o1.T = 1
        propagate on its parents;
    }
  }
}
```
**Procedure Del** \(o1, l, o2\)
```plaintext
{
If \((o1 \notin EPG) \lor (o2 \notin EPG)\)
{
Judged to be an Irrelevant Update;
exit;
}
o1.Drop_Obj(l);
}
```
**Object ::**
**Method Inc_CT(l)**
```plaintext
{
  CT[l]++;
  if CT[l] = 1
  {
    for each label li in Labels other than l
      T = T × CT[li];
    if T = 1
      for each of its parents o’ with label ll
        o’.Inc_CT(ll);
  }
}
```
**Method Dec_CT(l)**
```plaintext
{
  CT[l]--;
  if CT[l] = 0
  {
    T = 0;
    for each of its parents o’ with label ll
      o’.Dec_CT(ll);
  }
}
```
**Method Drop_Obj(l)**
```plaintext
{
Count[l]--;
if Count[l] = 0
{
In_EPG = F;
for each parent o’ with label ll to it
o’.Drop_Obj(ll);
} else
Dec_CT(l);
}
```
*Figure 15: Maintenance Algorithms under Insertion and Deletion Scenarios*
We describe the algorithmic design of a worldwide location service for distributed objects. A distributed object can reside at multiple locations at the same time, and offers a set of addresses to allow client processes to contact it. Objects may be highly mobile like, for example, software agents or Web applets. The proposed location service supports regular updates of an object's set of contact addresses, as well as efficient look-up operations. Our design is based on a worldwide distributed search tree in which addresses are stored at different levels, depending on the migration pattern of the object. By exploiting an object's relative stability with respect to a region, combined with the use of pointer caches, look-up operations can be made highly efficient.
Received December 11, 1997; revised August 14, 1998
1. INTRODUCTION
As the Internet continues to grow exponentially, the problem of locating people, services, data, software and machines is becoming more severe. To compound the problem, increasingly many users are no longer tied to a single, fixed access point, but instead are using mobile hardware such as telephones, notebook computers and personal digital assistants. Applications must therefore take into account that a user will have to be located first in order to deliver any messages [1]. Likewise, the mobile user will possibly also have to find local, nonmobile resources at the location he or she is currently residing (e.g., a local laser printer) [2].
Mobile computing, which is generally tied to users migrating between different locations, is one aspect of mobility in the Internet. Another aspect is formed by mobile computations, by which software and data move within a computer network instead of users. For example, to support ubiquitous computing, it will be necessary to move a user’s personal environment from one location to another [3]. Another example of software mobility is the active transfer of Web pages to replication servers in the proximity of clients [4, 5]. Likewise, software agents may be roaming the network in search of information, representing their owner at servers, etc. [6]. Finally, with the introduction of Java, mobile code will form an important component of many future Web-based applications [7, 8].
In this paper, we use the term mobile object to collectively refer to any component—implemented in hardware, software or a combination thereof—that is capable of changing locations. We assume that a mobile object can be distributed or replicated across multiple locations, meaning that there may be several locations where the object resides at the same time. This can be the case, for example, with a whiteboard application shared between a number of mobile users.
The existence of (worldwide) mobile objects introduces a location problem: the need for a scalable facility that maintains a binding (i.e. a mapping) between an object’s permanent name and its current address(es). Such facilities are normally offered by wide-area naming systems such as the Internet’s Domain Name System (DNS) [9], DEC’s Global Name Service (GNS) [10] and the X.500 Directory Service [11].
However, existing naming systems are inadequate for mobile objects for two reasons. First, wide-area naming systems assume that name-to-address bindings hardly change. This assumption is necessary to allow effective use of data caches to improve look-up performance. In a mobile environment, however, we must be able to handle the case that bindings change regularly. Second, most naming systems distribute the name space across different globally distributed naming authorities, and subsequently use location-dependent names [12]. Unfortunately, location-dependent names make it harder to handle migration and replication. Each time an object changes location, or whenever a replica is added or removed, we have to adapt the object’s name(s) as well. Alternatively, we could change a name into a forwarding pointer, but this has serious scalability problems when applied in worldwide systems.
What is needed is a naming facility that allows bindings to change regularly and which offers complete location transparency to its users. We have recently completed the design of such a facility, which we call a location service, as part of the Globe project [13]1. The Globe location service is
1Information on the Globe project can be found at http://www.cs.vu.nl/~steen/globe/.
designed to handle trillions of mobile objects worldwide. It uses a worldwide distributed search tree in which addresses of an object’s present location are stored. All location operations (updating and looking up addresses) are based on the use of globally unique and location-independent object identifiers. The service can be used in combination with traditional naming services, which should then map user-defined names to object identifiers instead of addresses. Our approach distinguishes itself by (1) scaling worldwide and to trillions of objects, (2) allowing objects to frequently update name-to-address bindings and (3) supporting distributed objects that reside at multiple locations at the same time.
In this paper, we present the basic algorithms for updating and looking up locations. In Section 2 we give an outline of our approach, followed in Section 3 by a detailed description of our algorithms. Related work is presented in Section 4. We conclude and discuss future work in Section 5.
2. ARCHITECTURAL DESIGN
In this section, we outline the architecture of the Globe location service. An overview of our approach can also be found in [14].
2.1. Naming and locating objects
A naming and location service maintains a mapping between a user-defined name of an object and that object’s location. Traditional naming services generally store name-to-address bindings directly. In other words, each binding consists of a record containing the name and address of an object.
In this approach, we are forced to update the binding whenever the object changes its location. For example, if we move a Web server to a machine with a different IP address, we are generally forced to update the server’s DNS entry. Likewise, the name-to-address binding has to be updated whenever the user decides to change the object’s name. As an example, if system administration decides to assign different names to existing machines, we may be forced to change name-to-address bindings of Internet services as registered in DNS.
Consequently, by storing bindings between a user-defined name and an object’s location as records in a database, we create a dependence between two different, and in principle unrelated, kinds of updates. For a wide-area system, such a dependence may introduce serious management and scalability problems.
In Globe, we follow a different approach. We separate naming from location issues by introducing a two-layered naming hierarchy. The upper layer deals with hierarchically organized, user-defined, human-readable name spaces. The lower layer deals with keeping track of each object’s location independent of how that object is named by its users. The interface between the two layers is formed by object handles: a user-defined name is bound to an object handle, which in turn is bound to the address(es) where the object can be found.
An object handle is designed specifically for looking up an object’s present location. It contains a service-independent global unique identifier (SGUID) which is similar to a universal unique identifier in DCE [15]. A SGUID is a true object identifier [16]: (1) each SGUID refers to exactly one object, (2) each object has exactly one SGUID, (3) a SGUID is never reused and (4) an object will never get another SGUID than the one initially assigned to it.
An object handle will generally obey the same properties, although an object might have several object handles. An object handle may also contain information that can be used to assist in locating the object. An important property of an object handle is its stability: it is assigned once to an object and remains the same during that object’s lifetime, no matter where the object moves to. No two objects ever have the same object handle, even if generated 100 years apart in distant countries.
Mapping user-defined names to object handles is done by a naming service, and which can be based on existing technology. For example, because object handles do not change, an implementation can make effective use of caching name-to-handle bindings, analogous to the approach followed in DNS [9]. In fact, we can even use TXT records in DNS to implement our name-to-handle bindings.
In contrast, mapping an object handle to a set of addresses is the main task of a location service. In Globe, we adopt a model in which an object offers contact addresses to client processes. A contact address describes where and how an object can be reached [13].
A contact address consists of, for example, an IP address, a telephone number, or another kind of address, as well as additional information that identifies the place where the address lies. We allow an object to regularly change its location, that is, to regularly change the binding between its object handle and contact address. In addition, we also provide support for binding several addresses to a single object handle. In this way, it becomes much easier to handle replicated objects. In this model, a mobile, replicated object is characterized by having a set of contact addresses which may change over time.
2.2. General organization
To efficiently update and look up contact addresses, we organize the underlying wide-area network as a hierarchy of geographical, topological or administrative domains, similar to the organization of DNS. For example, a lowest level domain may represent a campus-wide network of a university, whereas the next higher level domain represents the city where that campus is located. Lowest level domains are also called leaf domains. Each domain $D$ is represented by a separate directory node, denoted $dir(D)$, leading to a worldwide search tree. Nodes may be internally partitioned for scalability reasons. The internal organization of the location service is entirely transparent to client processes.
A directory node stores information on objects in contact records. Each node has a separate contact record per object. A contact record contains a number of contact fields, one for each child of the node where the record is stored. A contact address of an object is always stored at exactly one directory node. In addition, a path of forwarding pointers from higher-level nodes down to that node is maintained.
The domain represented by a node N is denoted dom(N). In Figure 1, node N0 contains a contact record with three contact fields, one for each of its children. The field for child N1 contains two contact addresses, which both lie in domain dom(N1). As we put forward in Section 3.5, although contact addresses are normally stored in leaf nodes, higher level nodes may decide to store addresses as well. We follow the policy that in such cases, higher level nodes have priority over lower level ones. The contact field for child N2 contains a forwarding pointer, meaning that somewhere in the subtree rooted at N2 there should be at least one other contact address stored for the object. Finally, the contact field for node N3 contains no data at all, implying that there are no contact addresses that lie in domain dom(N3). If none of the contact fields of a contact record contains data, the contact record is said to be empty.
Storage of addresses and pointers is subject to a number of consistency conditions. In particular, when there are currently no update operations in progress for a specific object O, we require that the following three conditions are met.
C1: A contact address from a leaf domain D is stored at dir(D), or at the directory node of an enclosing (higher-level) domain of D.
This condition implies that a contact address from leaf domain D can be stored only at a directory node that lies on the path from the root to dir(D).
C2: For each node N, the contact record for O at node N stores a forwarding pointer to a child node of N if and only if the contact record for O at that child is nonempty.
This means that we do not accept dangling pointers in our tree. In other words, if we follow a forwarding pointer we should eventually find a contact record containing one or more addresses.
C3: A contact field can contain either a forwarding pointer or contact addresses, but not both.
Together with the previous conditions, this condition implies that as soon as we encounter a contact field containing contact addresses, we can be sure that we have found all contact addresses that lie in the subdomain represented by that contact field.
When these conditions are met, the tree is said to be globally consistent for O. As an example, the tree shown in Figure 1 is globally consistent.
As we discuss below, a contact address that lies in leaf domain D is always inserted or deleted by initiating a request at the directory node dir(D) of D. To simplify matters, we require that the identity of the leaf domain in which the address lies is encoded in the address. For example, a contact address could be represented by a record containing fields for the type of network address (such as ‘IPv6’), the actual network address, and a name such as ‘cs.vu.nl’ that identifies the leaf domain where that address lies. In contrast to most network addressing schemes, our contact addresses are thus seen to be location dependent.
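A contact address could, for instance, be modelled as the record below; the field names are an illustrative assumption, not Globe's actual representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContactAddress:
    addr_type: str    # e.g. "IPv6" or "phone"
    address: str      # the actual network address or number
    leaf_domain: str  # e.g. "cs.vu.nl": the leaf domain where the address lies

# The encoded leaf domain tells the service at which leaf directory node
# an insert or delete request for this address must be initiated.
addr = ContactAddress("IPv6", "2001:db8::1", "cs.vu.nl")
```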
2.3. Update algorithms
We require that an update operation on a globally consistent tree leaves the tree in a global consistent state after its completion (assuming that no other operations for the same object are still in progress). For an insert request initiated at leaf node dir(D), it is easily seen that global consistency implies that there can be only one node along the path from the leaf node to the root where all addresses from D are stored. In particular, if there is such a node N, then an insert request from any leaf domain enclosed by dom(N) should be forwarded to N.
If there is no node that is already storing addresses from D, we can choose one along the path to the root as long as the global consistency constraints are satisfied. We follow the policy that the highest level node that wants to store addresses from D, without violating global consistency, will be allowed to do so.

FIGURE 2. The general approach to inserting a contact address, by which an insertion request propagates upwards to the lowest-level node where the object is known (a), after which a downward path of forwarding pointers is set up (b).

As we explain in Section 3.5, this policy allows us to construct highly effective caches, even for mobile objects. Note that only those nodes are eligible for storing contact addresses from $D$ which either have an empty contact record or an empty contact field for a domain that encloses $D$.
Whenever an insert request arrives at a node that is willing and capable of storing the address, that node will thus have to check whether there is a higher level node along the path to the root where the address should actually be stored. The general approach to inserting an address is illustrated in Figure 2. When an address is to be inserted, the request is propagated to the first directory node where the object is known, which is $N_0$ in our example. Due to conditions C2 and C3, nodes higher than $N_0$ cannot store the address and thus need not be considered. Assuming node $N_0$ does not want to store the address (as we explain below), an acknowledgment is propagated back to the initiating leaf node while at the same time a path of forwarding pointers is established. In our example, both $N_1$ and leaf node $N_2$ want to store the address, in which case $N_1$ will be permitted to do so.
There may be several factors that determine whether or not a node wants to store addresses. For example, as we discuss in Section 3.5, when an object is highly mobile, meaning that it is inserting and deleting addresses at a relatively high frequency, a node may decide that it is more efficient to store addresses at a higher level node that covers the smallest domain in which the object is moving. This means that, although an insert operation is always initiated at a leaf node, the contact address may actually be stored at a higher level node. There may be other reasons as well that influence the willingness of a node to store addresses. However, we want to decouple our algorithms from such decisions and introduce, for each node, a boolean operation $\text{store}_{\text{here}}$ that returns true if and only if the node wants to store addresses. If, on the path from a leaf node to the root, there is no node willing to store addresses, we follow the policy that addresses are stored in the root node. We allow the outcome of $\text{store}_{\text{here}}$ to change in the course of time.
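A minimal sketch of this placement policy, under the assumption of node objects exposing `parent`, a `knows_object()` test for a nonempty contact record, and the `store_here()` predicate introduced above:

```python
def choose_storing_node(leaf, root):
    """Pick the node on the path leaf -> root that should store a new contact address."""
    candidates = []
    node = leaf
    while True:
        candidates.append(node)
        if node is root or node.knows_object():   # stop at the lowest node that knows the object
            break
        node = node.parent
    for node in reversed(candidates):             # highest-level willing node wins
        if node.store_here():
            return node
    return candidates[-1]                         # fall back to the topmost candidate (the root at worst)
```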
Deleting a contact address is straightforward and is done as follows. First, the address is found through a search path up the tree, starting at the leaf node where the address was initially inserted. Once the contact address has been found,
it is removed from its record. If a contact record becomes empty, the parent node is informed that it should delete its forwarding pointer to that record, possibly leading to the (recursive) deletion of forwarding pointers at higher level nodes.
Inserting and deleting contact addresses is targeted toward exploiting locality. Especially when contact addresses already exist in the domain where the operation is being performed, it is seen that the operations can be relatively cheap.
2.4. Look-up algorithm
Looking up addresses can be done completely independent of the update operations. In this paper, we consider only look-up operations for one contact address; operations that look up several addresses for the same object are easily devised.
We adopt a simple look-up policy. A look-up operation is always initiated at a leaf node (in particular, the one in the client’s domain), and forwarded along the path to the root until a node is reached having a nonempty contact record. If that record contains a contact address, then the address is returned to the client process. Otherwise, if the record contains only forwarding pointers, a depth-first search is initiated at an arbitrary child, until an address is finally found. This approach is shown in Figure 3.
Again, it is seen that we exploit locality: the look-up operation searches local domains first and gradually expands to larger domains as long as no contact addresses are found.
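The look-up policy of Figure 3 can be sketched as follows; `addresses()` and `pointer_children()` are assumed accessors over a node's (tentative) contact record for the object being located.

```python
def lookup(node):
    """Search upwards from a leaf node, widening to larger domains as needed."""
    while node is not None:
        addr = search_down(node)
        if addr is not None:
            return addr
        node = node.parent                    # no address in this domain: expand the search
    return None

def search_down(node):
    """Depth-first search below a node whose contact record is nonempty."""
    addrs = node.addresses()
    if addrs:
        return next(iter(addrs))              # any stored contact address will do
    for child in node.pointer_children():     # children reachable via forwarding pointers
        addr = search_down(child)
        if addr is not None:
            return addr
    return None
```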
3. ALGORITHMIC DESIGN
In this section we concentrate on the algorithmic design of our location service. We first present the basic data structures, after which we discuss in detail the insertion of addresses. Address deletion is then relatively straightforward, as well as our look-up algorithm. In the following, we concentrate only on operations for a single object, as operations for different objects are completely independent.
3.1. Preliminaries
Contact records. For each directory node, we model an object’s contact record as an (indexed) set of contact fields, one field for each child. Each contact field stores either a forwarding pointer, or a set of contact addresses, but never both. A leaf node has exactly one contact field. Adopting an Ada-like notation, we can describe these data types as shown in Figure 4. We assume that each node has a unique identifier of type NodeID that can be used as an index for sets of contact fields. An opaque data type Address is used to model contact addresses.
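Since the Ada-like declarations of Figure 4 are not reproduced in the text, the following Python rendering sketches the intended shape of the types: a contact record is an indexed set of contact fields, one per child, and each field holds either a forwarding pointer or a set of addresses, never both.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

NodeID = str      # unique node identifier, used to index contact fields
Address = str     # opaque contact address

@dataclass
class ContactField:
    forwarding: bool = False                      # True: forwarding pointer to this child
    addresses: Set[Address] = field(default_factory=set)

    def consistent(self) -> bool:
        # condition C3: never both a pointer and addresses in one field
        return not (self.forwarding and self.addresses)

@dataclass
class ContactRecord:
    fields: Dict[NodeID, ContactField] = field(default_factory=dict)

    def empty(self) -> bool:
        return all(not f.forwarding and not f.addresses for f in self.fields.values())
```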
Tentatively available data. As we make clear in the succeeding sections, update operations gradually propagate through the tree. While doing so, a decision is made where to actually store or remove data. For example, our update protocol prescribes that before storing an address addr at some node N, we first need permission from N’s parent. If we wait until that permission is granted, addr cannot yet be looked up, despite the fact that we already know that it is a valid contact address. Therefore, it makes sense to make the address tentatively available at the node where the operation is currently being performed, without giving guarantees that it will eventually also be stored there. To support tentative availability of updates, we introduce views and view series.
A view on a variable v is a statement expressing a change to the value of v. Evaluating a view leads to the tentative execution of the statement, returning the value that v would have had if the statement had actually been executed. Evaluating a view on v leaves the original value of v unaffected; it is like a kind of shadow version. View evaluation takes place only by means of view series. A view...
series associated with a variable $v$ is a FIFO-ordered list of views on $v$. The value of a view series is defined as the result of evaluating its views in the order that they have been appended to the series.
This mechanism is best illustrated by an example. In Figure 5, we declare integer variables $x$ and $y$, and an integer view series $vx$ that is associated with $x$. (The notation $(a, b, c)$ denotes a list of elements $a$, $b$, $c$, with $a$ being the head of the list.) In line 4, we append a view that expresses an increment of $x$ by 1. The pseudo-variable `self` points to the variable associated with the view series, in this case $x$. We then subsequently assign the value of $vx$ to $y$. At that point, the value of $y$ is 5, whereas $x$ is still 4. In line 5, another view is appended expressing a multiplication by 2, followed by an update of $y$, which now has the value 10. Note that at this point, the value of $vx$ is $2 \cdot (x + 1)$. Therefore, if we change the value of $x$ to 5, as in line 6, and update $y$ again, $y$ will become 12.
The view at the head of a view series, that is, the least recently appended one, can be applied by evaluating its expression and changing the value of the associated variable accordingly. The view is then removed from the view series. For example, in line 7, we apply the first view to $x$, thereby changing the value of $x$ to 6 by incrementing it by 1. At the same time, the view is removed, so that the view series $vx$ now reflects only the value 2 $\cdot$ $x$. A view can also be directly removed, that is, without applying it. Finally, the function `sizeof` returns the length of a given view series.
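One way to realize views and view series in code is sketched below; a view is stored as a function of the current value, and the series evaluates its views in FIFO order without touching the underlying variable.

```python
class ViewSeries:
    """FIFO list of views (functions) associated with one variable."""
    def __init__(self, get, set_):
        self._get, self._set = get, set_        # accessors for the associated variable
        self._views = []                        # pending views, oldest first

    def append(self, view):                     # view: current value -> new value
        self._views.append(view)

    def value(self):
        """Tentative value: evaluate all views without changing the variable."""
        v = self._get()
        for view in self._views:
            v = view(v)
        return v

    def apply_head(self):
        """Make the oldest view authoritative and drop it from the series."""
        self._set(self._views.pop(0)(self._get()))

    def remove_head(self):
        """Undo the oldest view by discarding it without applying it."""
        self._views.pop(0)

# Reproducing the example of Figure 5:
state = {"x": 4}
vx = ViewSeries(lambda: state["x"], lambda v: state.update(x=v))
vx.append(lambda s: s + 1); y = vx.value()      # y = 5, x still 4
vx.append(lambda s: s * 2); y = vx.value()      # y = 10
state["x"] = 5; y = vx.value()                  # y = 12
vx.apply_head()                                 # x becomes 6; vx now reflects only s * 2
```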
A contact record for an object $O$ at node $N$ has an associated view series `tentativeCR$(O, N)$`. Because we consider only operations for a specific pair of object and node, we omit the indices throughout the remainder of our discussion. This view series is an instance of the following data type:
```plaintext
type TentativeRecord is view series of ContactRecord;
```
As we shall see, all update operations first append a view to a contact record’s view series to reflect the intended update. However, this result is still tentative. Later, when the final decision can be made on the update, the previously appended view is either applied, making the result authoritative or undone by removing the view from the view series. Details are explained in the next section.
**Remote invocations.** Our algorithms are based on an RPC mechanism [17], by which a node invokes an operation at its parent and subsequently blocks until a reply is received. We assume that the execution of an update or look-up operation for a specific object runs to completion or until it blocks, without being pre-empted by competing operations. To ensure correctness of our algorithms, we require that invocation requests and the subsequent responses are handled in the order that they were issued. How these semantics are implemented is described in [18].
### 3.2. Address insertion
The insertion of an address for a specific object is done by two operations:
- `insert_addr` is invoked at a node when that node is requested to store the given address;
- `insert_chk` is invoked at a parent node to obtain permission to store the address at the invoking node, or one of its children.
Note that whenever either operation is invoked at a specific directory node, it is known at that point that the given address can be used to contact the object. In other words, the address can, in principle, be returned as the result of a look-up operation. The only thing that is not yet known is exactly at which node the address will be stored. For example, returning to Figure 2, we see that as soon as the insert request is initiated at leaf node N2 we can already make the address available to look-up operations from dom(N2). Likewise, when the request is propagated to N1, the address can be made available to look-up requests from dom(N1). In both
FIGURE 4. Data structures for storing contact addresses of a single object at a directory node.
(1) $x$ : Integer := 4;
(2) $y$ : Integer;
(3) $vx$ : view series of Integer := $x$;
(4) append view (self := self + 1) to $vx$;  $y$ := $vx$;   // now $y$ = 5, $x$ = 4
(5) append view (self := self * 2) to $vx$;  $y$ := $vx$;   // now $y$ = 10, $x$ = 4
(6) $x$ := 5;  $y$ := $vx$;                                  // now $y$ = 12
(7) apply view to $vx$;                                      // now $x$ = 6; $vx$ reflects only self * 2

FIGURE 5. A simple example of views and view series.
cases, we do not yet know where the address will actually be stored. Our insert operations, therefore, can start by making the address tentatively available at the present node without yet having permission from the parent. Making the address tentatively available means that either the address or a forwarding pointer to the calling node is tentatively stored.
Operation insert_addr. We start with the operation insert_addr, which is specified in Figure 6. We assume there is a function thisNode that returns the node identifier of the node where the function is called. As mentioned before, the variable tentativeCR denotes the view series associated with the object’s contact record at the current node. The operation starts with saving the state of the current contact record in line 2 after which it makes the address available to look-up operations by tentatively adding it to tentativeCR in line 6.
As a next step, the node has to check whether and how it should contact its parent. There are three occasions on which the parent needs to be contacted.
- If the contact record was empty when the operation was invoked, the node may choose to store the address. If it is not prepared to store the address, it should pass the request to its parent. This is expressed in lines 11–15. It also means that the previously appended view should be removed when the call to the parent returns (line 15). Note that the address is simply passed to the parent by calling insert_addr again in line 14.
- If the contact record was empty and the node wants to store the address, it will have to ask its parent for permission by invoking insert_chk in line 19.
- Permission is also needed when there are pending requests to the parent, that is, when a number of tentative results from previous operations still exist. In that case, the node cannot take any definitive decision on whether or not to store the address. This situation is also covered by the invocation of insert_chk in line 19.
Depending on whether the parent had been called, or what the response was, the operation eventually continues with either turning the previously appended view into authoritative data (line 22), or removing it altogether (line 23).
Operation insert_chk. The operation insert_chk is invoked at the parent node when the invoking node or one of its (grand)children wants to store the given address. The parent is asked for permission to store the address at one of its (grand)children.
If the parent agrees, it will, in turn, have to obtain permission from the next higher level node and so on up to the root of the tree. This permission results from our policy that the highest level node that wants to store addresses may do so provided global consistency is not violated. Permission is not needed if the parent had already stored a forwarding pointer to the calling child. When the invoked node permits its (grand)child to store the address it tentatively installs a forwarding pointer to the calling child, thereby making the address available for look-up operations.
FIGURE 6. Insertion of contact addresses.
in its domain. The pointer can be only tentatively installed as long as higher level nodes have not yet given their permission for storing the address at some lower level.
Alternatively, the parent may decide that it wants to store the address itself and that it can do so without violating global consistency. In that case, the invoking child, which will have made the address tentatively available, is instructed to remove the address or its forwarding pointer from its view. Removal is recursively propagated downwards to the lowest level node where the address is tentatively stored.
The operation \texttt{insert}_\texttt{chk} has a similar structure to \texttt{insert}_\texttt{addr} (see Figure 7). It decides whether to tentatively add the given address to its contact record or tentatively install a forwarding pointer to the calling child (lines 9–14). An address is always added if there are already contact addresses in the corresponding contact field. When the contact field was empty, that is, it also did not contain a forwarding pointer to the calling child, the node may decide to store the address using its \texttt{store} operation. When an address is (tentatively) added, the calling child must clear its contact record. This is accomplished by replying with \texttt{DELETE} (lines 10–11).
When the invoked node is not going to store the address, it gives the calling child permission to do so instead. The invoked node will not store the address because it either is not prepared to do so or because it already has a forwarding pointer to the calling child. (Note that whenever a contact field already has a forwarding pointer, it can never decide to store an address. In other words, we discard the outcome of \texttt{store}.) In any case, it will have to ensure that the address becomes (tentatively) available by having a forwarding pointer to the child. The latter is ensured by simply installing the pointer, as is done in lines 12–13.
There are two occasions when the invoked node has to pass the request to its parent.
- When there are still pending requests to the parent that have not been answered yet, the node cannot take an authoritative decision on whether or not to make the address available. In that case, the parent has to be asked for permission as well.
- When the node had an empty contact record when the insert request arrived, this invocation concerns currently the only address from the node’s domain. In that case, the parent is also unaware of the address and should be asked for permission, regardless whether the node is prepared to store the address or not.
These two cases are specified in lines 18–21. Finally, depending on the reaction of the parent, the previously appended view is either applied or removed as shown in lines 22–25.
3.3. Address deletion
Deleting an address is done by a single operation \texttt{delete}_\texttt{addr}. The operation must be invoked at the same leaf node where
FIGURE 7. Checking an insert operation with a parent.
the associated address insertion was initiated. (Note that we assume that the leaf domain in which a contact address lies is encoded in the address. We can thus easily identify the leaf node where the deletion should be initiated.) When a contact record at a node N becomes empty after deleting an address, the parent node should delete its forwarding pointer to N. The same operation also handles removing such a pointer at a parent node, for which case it has an additional Boolean parameter delPtr. The operation is specified in Figure 8.
Completely analogous to making newly inserted addresses tentatively available, we can also immediately announce that an address or forwarding pointer will be removed. In other words, as soon as a node N is requested to delete an address or forwarding pointer, it can do so without waiting for its parent to have completed the operation. Deletion takes place by appending a view by which the address or forwarding pointer is removed from the contact record. In this way, we even achieve that a previously inserted address for which the insert operation has not yet fully completed, that is, the address is yet only tentatively available at a node, is immediately made unavailable again to look-up operations at that node. Such effects are important in wide-area systems. An alternative, by which a deletion can come into effect only after the associated insertion has completed, is generally unacceptable due to unpredictable delays for the completion of an operation.
The operation delete_addr starts by undoing the effects of the previous insert operation (lines 3–12). It checks whether it stores the address (line 3) or a forwarding pointer (line 4), after which a view is appended reflecting the respective removal (lines 10–11).
There are two cases in which the parent should be called as well, as sketched below.
- If the contact record was already empty, or if it became empty on account of the current delete, the parent node should remove its forwarding pointer to the current node. This situation is specified in lines 15–17 for the case that the record became empty, and in line 25 for the case that it was already empty.
- If there were pending operations to the parent, the node does not yet know what the final situation will be when all previous requests have been processed. Therefore, the parent must be informed about the deletion as well. This situation is expressed in line 18 and also in line 25.
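The two conditions above can be captured in a tiny predicate. This is an illustrative sketch only; the parameter names are assumptions, and the real operation of Figure 8 additionally appends, applies or removes views.

def delete_must_call_parent(record_was_empty, record_now_empty, pending_parent_ops):
    # Either the record is (or has just become) empty, so the parent's forwarding
    # pointer must be removed, or earlier requests to the parent are still pending,
    # so the parent has to learn about the deletion as well.
    return record_was_empty or record_now_empty or pending_parent_ops

print(delete_must_call_parent(False, True, False))   # record just became empty -> True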
FIGURE 8. Deletion of contact addresses.
3.4. Address look-ups
An important design issue for our location service is that we wish to make update results available as soon as possible. This is important in a wide-area system, where propagation of updates may take a relatively long time due to network and node failures. Therefore, look-ups operate on tentatively available data, that is, on the value of view series, rather than on the authoritative data of contact records.
This policy works fine in a tree that is globally consistent, and even in a tree where some addresses have been made only tentatively available. Problems arise when some addresses are being deleted concurrently with look-up operations, for in that case we may decide to follow a path of forwarding pointers that is in the process of being deleted. We adopt a simple solution: if a path has been followed without success, we simply continue the look-up operation along another path, if possible. If all such attempts fail, the look-up operation proceeds with the next higher level node on the path to the root.
Our operation lookup is given in Figure 9. It starts by checking whether the current node has a nonempty contact record (line 4). If so, it tries to select an arbitrary contact field containing addresses. This is expressed by the choose any statement in line 7, which, in this case, takes an index as a free variable and tries to match it in the expression following the with keyword.
If the selection succeeded, the operation subsequently selects an arbitrary address from that contact field (again expressed as a choose any statement), and returns the address as the result to the calling node (lines 8–10). On the other hand, if there were no addresses in the contact record, the look-up operation continues by following an arbitrary path of forwarding pointers in one of the subtrees rooted at a child. Because each of these paths may be in the process of being deleted, all contact fields containing a forwarding pointer are checked (line 12). As soon as an address has been found in one of the subtrees, the operation stops by returning that address (line 14).
If no address could be found, we continue the look-up operation at a higher level node (line 19). This makes sense only when the operation was initially called by one of the children or by a client process, that is, caller ≠ parent. Otherwise, when no address was found, we have reached the root of the tree, and NIL, which is the present value of addr, can be returned (line 20). If we did find an address, we simply return that value.
FIGURE 9. Looking up a single contact address.
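The search order of the look-up operation can be mimicked with a self-contained toy model. The sketch below is not the code of Figure 9: it ignores view series, concurrency and failures, and the Node class and its fields are assumptions made purely for illustration. It only mirrors the order described above: local addresses first, then every subtree reachable through a forwarding pointer, and finally the parent.

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.addresses = []        # contact addresses stored at this node
        self.pointers = []         # children reachable through forwarding pointers

def lookup(node, came_from_parent=False):
    if node.addresses:
        return node.addresses[0]                    # any stored address will do
    for child in node.pointers:                     # try every pointed-to subtree
        addr = lookup(child, came_from_parent=True)
        if addr is not None:
            return addr
    if not came_from_parent and node.parent:        # nothing below: go one level up
        return lookup(node.parent)
    return None                                     # reached the root without success

root = Node(); mid = Node(root); leaf = Node(mid)
root.pointers = [mid]; mid.pointers = [leaf]
leaf.addresses = ["addr-of-object"]
print(lookup(mid))     # found in the subtree below mid -> 'addr-of-object'
print(lookup(root))    # found by following forwarding pointers from the root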
3.5. Discussion
If we ignore the use of view series, our algorithms are relatively straightforward and strongly resemble standard (recursive) implementations for search tree algorithms. The intricacies mainly come from the fact that we wish to make results available as soon as possible. This explains why every operation starts with appending its anticipated result to the view series associated with the current contact record. Effectively, view series allow us to propagate update results in increasingly expanding domains before the update has been fully completed. For a wide-area system, the availability of such tentative data is essential, as it may take considerable time before results become authoritative.
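The essence of this mechanism can be illustrated with a minimal, self-contained model of a contact record: an authoritative set of addresses plus an ordered list of tentative views. The class below is a sketch under assumed names, not the data structure used in the paper; it only shows how look-ups can see the effect of updates before they are applied, and how a view is later applied or discarded.

class ContactRecord:
    def __init__(self):
        self.authoritative = set()   # confirmed contact addresses
        self.views = []              # ordered, not-yet-confirmed views: ('add'|'del', addr)

    def append_view(self, op, addr):
        view = (op, addr)
        self.views.append(view)
        return view

    def value(self):
        # Tentative contents, as seen by look-up operations.
        current = set(self.authoritative)
        for op, addr in self.views:              # replay views in invocation order
            if op == 'add':
                current.add(addr)
            else:
                current.discard(addr)
        return current

    def apply(self, view):
        # The update completed successfully: make the view authoritative.
        op, addr = view
        if op == 'add':
            self.authoritative.add(addr)
        else:
            self.authoritative.discard(addr)
        self.views.remove(view)

    def discard(self, view):
        # The update was refused or undone: drop the tentative view.
        self.views.remove(view)

rec = ContactRecord()
v1 = rec.append_view('add', 'addr-1')
print(rec.value())          # {'addr-1'}: visible before the insert has completed
v2 = rec.append_view('del', 'addr-1')
print(rec.value())          # set(): the deletion is visible although the insert is pending
rec.apply(v1); rec.apply(v2)
print(rec.authoritative)    # set(): both updates eventually became authoritative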
To illustrate the benefit of our approach, assume the root node is temporarily unreachable due to a network or node failure. In that case, our location service is temporarily partitioned into a number of mutually unreachable parts (one for each child of the root node). However, each subtree continues to operate normally, although operations requested to be invoked at the root node will experience a significant delay. By additionally maintaining the order of invocations through view series, we, at worst, experience performance failures. Clearly, the look-up operation needs to be improved, as it is unacceptable that a client must wait until the tree recovers from a failure. Long or indefinite waiting can easily be dealt with by using time-out mechanisms.
Correctness. To assess the correctness of our algorithms, we initially expressed our update and look-up operations in the protocol verification language Promela [19], and conducted a number of state space searches. After an initial design phase, we constructed formal proofs of correctness. The latter can be found in an extended version of this paper [20].
Placement of contact addresses. There are several ways in which we can improve the working of the location service described so far. One important optimization consists of adding caches.
By default, a contact address is stored at the leaf node where it is inserted. However, this may not always be the best choice. Consider the situation that an object is regularly moving between two leaf domains $L_1$ and $L_2$. Let $D$ denote the lowest level domain that covers both leaf domains. Each time the object moves from $L_1$ to $L_2$, the location service creates and deletes a path of forwarding pointers from the directory node $\text{dir}(D)$ of $D$ to the leaf nodes $\text{dir}(L_1)$ and $\text{dir}(L_2)$, respectively. When the object is moving regularly, it makes sense to store the contact address in the object’s contact record at $\text{dir}(D)$. For example, by maintaining only the path from the root to $\text{dir}(D)$, we can save on costs for path maintenance.
In addition, there is another advantage of storing addresses at $\text{dir}(D)$. We know that, although the set of addresses stored at $\text{dir}(D)$ may change, the place where these addresses are stored is now stable. This permits us to effectively shorten search paths by caching pointers to contact records. Specifically, when returning the answer to the leaf node where a look-up request originated, we cache a pointer to the directory node containing the contact address at each node of the search path, as shown in Figure 10.
We now have the situation that the object which is moving between leaf domains can be easily located by looking up its present address in the node $\text{dir}(D)$ representing the smallest domain in which all its movements take place. By caching a pointer to $\text{dir}(D)$, the object may be tracked by just two successive look-up operations (assuming a cache hit at the leaf node): the first one at the leaf node servicing the requesting process, and the second one at $\text{dir}(D)$. This is a considerable improvement over existing approaches.
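The effect of such a pointer cache can be sketched with two dictionaries. The names below (dir(D), object-42, the single hard-wired directory node) are assumptions for illustration only; the point is merely that after one successful tree search, later look-ups reduce to a cache probe at the leaf followed by a single fetch at the storing node.

pointer_cache = {}     # at the leaf node: object handle -> directory node holding the record
directory = {"dir(D)": {"object-42": "addr-of-object-42"}}   # the stable storing node

def lookup_with_cache(handle):
    node = pointer_cache.get(handle)
    if node is not None:                       # cache hit: go straight to dir(D)
        return directory[node].get(handle)
    # Cache miss: fall back to the tree search sketched earlier, then remember
    # where the address was found (hard-wired to 'dir(D)' here for brevity).
    addr = directory["dir(D)"].get(handle)
    pointer_cache[handle] = "dir(D)"
    return addr

print(lookup_with_cache("object-42"))   # first call: tree search, cache is filled
print(lookup_with_cache("object-42"))   # later calls: leaf cache plus one fetch at dir(D)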
We are currently investigating how stable locations for storing addresses can be identified. Initially, we plan to use a timer-based approach. If a node detects that pointers in a relatively long-living contact record often change between the record’s fields, it can conclude that contact addresses instead of pointers should be stored in that record. Likewise, if an address has been stored for a relatively long time at some intermediate node, it is justified to store the address at a lower-level node.
Scalability. Our search tree as described so far obviously does not yet scale. In particular, higher level directory nodes not only have to handle a relatively large number of requests, but also have high storage demands. Our solution is to partition a directory node into one or more directory subnodes, such that each subnode is responsible for a subset of the records originally stored at the directory node. We can easily use hashing techniques on the object handles to identify subnodes at parents and children.
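A hash of the object handle is enough to route a request to the right subnode, both at a parent and at its children. The snippet below is a sketch of this idea; the choice of SHA-1 and the subnode count are arbitrary assumptions, not prescribed by the design.

import hashlib

def subnode_for(object_handle: str, num_subnodes: int) -> int:
    # Map an object handle deterministically onto one of the directory subnodes.
    digest = hashlib.sha1(object_handle.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_subnodes

print(subnode_for("object-42", 16))   # the same handle always maps to the same subnode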
When partitioning directory nodes, simple calculations show that storage requirements per subnode range between 10 and 100 gigabytes, which can be easily handled with current technology. Whether we can actually meet processing demands per subnode is somewhat speculative because of the lack of reference data. However, it is more likely that performance is limited by the capacities of the underlying communication network.
4. RELATED WORK
We have made a strict separation between a naming service, which is used to organize objects in a way that is meaningful to their users, and a location service, which is used strictly to contact an object given its unique identifier. Naming services can be used for finding information based on the meaning of a name, as is often done for Internet resource discovery services. In our scheme, information retrieval would start with finding relevant names, retrieving the associated object handles, and having the location service return a contact address for each object that was found to be potentially interesting.
Location services are particularly important when sources of information, that is objects, can migrate between different physical locations. They are becoming increasingly important as mobile telecommunication and computing facilities become more widespread. To relate our work to that of others, we therefore concentrate primarily on aspects of mobility, for which we make a distinction between mobile hosts and mobile objects.
Mobile computing
So far, much research has concentrated on mobile computing, which is generally based on a model in which users migrate between different network locations. Usually, mobility in these cases is tied to mobile hardware such as handheld telephones, personal digital assistants and notebook computers. An implicit assumption underlying mobile computing is that the mobile object is always at precisely one location. Replication is less of an issue, except when dealing with fault tolerance, as, for example, in the case of disconnected file operations [21].
Location management in mobile computing generally follows a home-based approach. This means that the system assumes that there is always a home location that keeps track of the object’s current location. Once the present location has been found through the home location, messages can be redirected. This is, for example, the way that mobile IP works [22]. Personal communication networks (PCNs) often work with a two-level search tree in which the second level consists of visitor location registers that contain addresses of visiting hosts in the current region. A distinctive feature of our approach compared to PCNs is that we have several levels, allowing us to exploit locality more effectively by inspecting successively larger regions at linearly increasing costs.
The main drawback of a home-based approach is that it does not scale well to worldwide systems. First, having to contact a possibly distant home location while the object may actually be very near to the calling process is not efficient: all locality is neglected. Second, the approach cannot adequately handle long-living objects, as the home location must remain responsible for all its objects forever. This also holds for situations in which an object has permanently moved to another location, perhaps even decades ago. As a consequence, assigning a lifetime telephone number is hard to realize efficiently with home-based approaches.
As an alternative, there are several proposals based on a hierarchically organized distributed database. A straightforward solution without any caching facilities and in which addresses are always stored in leaf nodes is described in [23]. Awerbuch and Peleg [24] propose a solution in which a moving object leaves a forwarding pointer which is removed only after a considerable distance has been traveled. In this way, a trade-off between costly update operations and scalable look-ups is achieved.
Jain [25] uses an approach to caching that is somewhat similar to ours. He also builds a hierarchical database in which the leaf nodes contain contact addresses and the intermediate nodes contain pointers similar to ours. Once an object has been located, a pointer to a node covering the domain in which the object is moving can be cached at nodes on the reversed search path. Our approach is different in that the address of a frequently moving object is stored at a higher-level node instead of just a pointer. Consequently, our look-up and update operations appear to be cheaper.
Alternatively, update and look-up strategies can be dynamically adapted to a user’s migration pattern as proposed by Krishna et al. [26]. In contrast, we propose to adapt the tree on a per-object basis by allowing addresses to be stored at higher levels when necessary. Our update and location policies remain the same. To avoid global look-ups that may involve many hops, Jannink et al. [27] propose to selectively replicate user profiles. This comes very close to allowing an object to have several contact addresses stored by the location service. In our approach, however, we let the object decide whether or not it wants to provide several contact addresses.
Using a hierarchically distributed database leads to the question of when and how updates are propagated through the tree. In most cases, an update becomes visible only when it has been completed. For wide-area systems, this approach is not acceptable because update propagation is slow. Instead, the results of update operations should be made available as soon as possible. Similarly, in wide-area systems, we cannot accept that an operation is delayed until a previous one has completed. To solve these problems, we introduced view series, which are used to implement a notion of tentative data. Our mechanism resembles queued RPCs as used in the Rover toolkit [28], except that we maintain the ordering of invocations. In this sense, view series are comparable to the sender-based message logging used for recovery from node and network failures as explained in [29].
Mobile object systems
An implicit assumption often made by location management services for mobile computing is that an object moves gradually through the network. For this reason, many algorithms work well because updates need not be propagated through the entire distributed database. In contrast to systems for mobile computing, mobile-object systems often deal with mobile computations. In these cases, one can imagine users to be fairly immobile, while objects move between locations for reasons of load balancing, dynamic replication, etc. An important difference with mobile computing is that objects travel at a speed dictated by the network and may pop up virtually anywhere. This requires a highly flexible approach to locating objects.
Mobile objects have mainly been considered in the context of local distributed systems. In Emerald, mobile objects are tracked through chains of forwarding pointers, combined with techniques for shortening long chains and a broadcast facility when all else fails [30]. Such an approach does not scale to worldwide networks. An alternative approach to handling worldwide distributed systems is location independent invocation (LII) [31]. By combining chains of forwarding references, stable storages and a global naming service, an efficient mechanism is derived for tracking objects. Most of the applied techniques are orthogonal to our approach and can easily be added to improve efficiency. However, the global naming service, which is essential to LII, assumes that the update-to-lookup ratio is small. We do not make such an assumption.
A seemingly promising approach that has been advocated for large-scale systems is SSP chains [32]. The principle has been applied to a system called Shadows [33]. SSP chains allow object references to be transparently handed over between processes. In essence, a chain of forwarding pointers is constructed from an object reference to the object. Consequently, there is no need for any location service, because an object reference can always be resolved through the chain of pointers. A drawback is that this approach neglects locality, making it hard to apply to worldwide systems.
5. CONCLUSIONS AND FUTURE WORK
The Globe location service provides a novel approach to locating objects in mobile computing and computation. Although the service has yet to be extensively tested in practice, simulation experiments and local implementations indicate that the service can scale efficiently worldwide. An important component of the service is formed by pointer caches. Further research and experimentation is needed to see whether and how our caching policy can indeed be deployed effectively and efficiently.
We are currently developing a prototype implementation of directory nodes that can be easily tested on the Internet. To that end, our research currently concentrates on minimal support for fault tolerance and security. We initially concentrate on an implementation that can support mobile and replicated Web pages and that can be seamlessly integrated with existing Web browsers.
REFERENCES
computing. Computer, 27, 4, 38–47.
environment. In Proc. Workshop on Object Replication and Mobile Computing, San Jose, CA, October, 1996. ACM Press, New York.
[4] Baetsch, M., Baum, L., Molter, G., Rothkugel, S. and Sturm, P. (1997) Enhancing the Web’s infrastructure: from caching to replication. IEEE Internet Comput., 1, 2, 18–27.
push-caching. In Proc. 5th HOTOS, Orcas Island, WA, May, 1996. IEEE, Los Alamitos, CA.
Mobile Agents: Are They a Good Idea. Technical Report, IBM T.J. Watson Research Center, Yorktown Heights, NY.
distributed computing. IEEE Micro, 17, 2, 44–53.
Facilities. RFC 1034.
In Proc. 4th ACM Symp. on Principles Of Distributed Computing. ACM.
and Deployment. International Thomson Computer Press, London.
naming service for improved performance and fault tolerance.
The architectural design of Globe: A wide-area distributed system. IEEE Concurrency, 7, 1.
[14] van Steen, M., Hauck, F., Homburg, P. and Tanenbaum,
and surrogates—object identifiers revisited. Theory Practice Object Syst., 1, 2, 101–114.
concurrent RPCs in the Globe location service. In Proc. 3rd ACSI Annual Conf., Heijen, The Netherlands, June 1997, pp. 28–33.
Globe Wide-Area Location Service. Technical Report IR-440, Vrije Universiteit, Department of Mathematics and Computer Science.
Berlin.
strategy for universal personal communication systems. IEEE J. Selected Areas Commun., 11, 6, 850–860.
users. J. ACM, 42, 5, 1021–1058.
hierarchical user location databases. In Proc. Int. Conf. on Comm. IEEE.
INDEX
A
A1 references in formulas, 113–115
vs. R1C1, 115
toggling with R1C1, 116–118
absolute references
converting to relative, 121–122
named ranges with, 119–120
Access application and tables
adding records to, 419–423
creating, 426–427
exporting, 423–426
QueryTables for, 356–358
step-by-step example, 427–429
accessing
UserForms, 281
VBA environment, 11–15
AccessToExcel macro, 423–426
activate events
workbooks, 154, 157
worksheet, 143
Activate method, 37, 52
ActivateWord macro, 400–401
activating
objects, 37
Word documents, 399–401
workbooks, 67–68
worksheets, 52
active elements, coloring, 373–375
active worksheet and workbook names, 243–244
ActiveCell object, 69
ActiveConnection property
Command object, 368
Recordset object, 367
ActiveX controls
collections of, 328
CommandButtons, 187–191
Control Toolbox, 186–187
vs. Form controls, 182
overview, 181
ActiveX Data Objects (ADO)
Command object, 368
Connection object, 367
introduction, 365–367
Recordset object, 367–368
with SQL, 368–372
add-ins
benefits, 336
closing, 349
code changes, 348
converting files to, 341–342
creating, 336–340
description, 335
installing, 342–346
removing, 349–350
step-by-step example, 350–352
user interface, 346–348
Add-Ins dialog box, 343–346, 349, 351
Add method
- charts, 201
- PowerPoint presentations, 431
- Word documents, 402
- workbooks, 67
- worksheets, 68–69
Add Watch dialog box, 262
AddCorrection macro, 108
AddFiveWorksheets macro, 102–103
AddItem method
- ComboBox controls, 292
- ListBox controls, 291
AddNewField macro, 428–429
addresses, extracting from hyperlinks, 242
AddSheetTest macro, 264–265
AddWorkbooks macro, 68
ADO. See ActiveX Data Objects (ADO)
AdvancedFilter
- deleting rows with duplicates, 161–164
- unique lists from columns, 167–168
Alignments button, 306
Alt+= keys, 193
Alt+D keys, 426
Alt+F1 keys, 204
Alt+F8 keys, 22, 39, 246
Alt+F11 keys, 25–26
Alt+O+E keys, 204
Alt+Q keys, 30
Alt+T+I keys, 344
American Standard Code for Information Interchange (ASCII), 289
Analysis ToolPak add-in, 346
Analysis ToolPak VBA add-in, 346
AND logical operators, 86
apostrophes (’), for comments, 36, 38
AppendRecords macro, 421–423
Application.Caller statement
- Form controls, 184–186
- UDFs, 240
Application object, 50, 67
Application.Volatile statement, 243–244
applications, variable scope in, 63
arguments in UDFs, 239
arrays
- benefits, 128–129
- boundaries, 132
- declaring, 129–130
- dynamic, 133–134
- with fixed elements, 132–133
formulas, 120–122
Option Base statement, 130–131
overview, 127–128
step-by-step example, 134–136
ArraySheets macro, 132
ArrayTest macro, 131
ArrayWeekdays macro, 132–133
As keyword, 55
ASCII (American Standard Code for Information Interchange), 289
Assign Macro dialog box, 183–184, 197–198
assigning
- shortcut keys, 19, 36
- values to variables, 56–57
asterisks (*) in SELECT statement, 369
Auto List Members option, 71–72
AutoCorrect list, updating, 108
automatically run macros, 5–6
automation, Office. See Office automation
AverageBowlingScores macro, 120
B
BackColor property, 327
BASIC (Beginner’s All-purpose Symbolic Instruction Code) programming language, 4
binding in Office automation, 392–394
Boolean data type, 58
boundaries for arrays, 132
Break button, 255
breakpoints, 259–261
Bring to Front button, 306
bugs. See debugging code
BuildDynamicString macro, 412
buttons
Form controls, 183–184
message boxes, 93
bypassing errors, 265–266
Byte data type, 58
calculate events, 144
CalculateSalary macro, 120
Call Stack dialog box, 263
calling UDFs from macros, 245–246
Caption property
CommandButtons, 188, 286
Label controls, 276, 287
UserForms, 274
Case keyword, 91–92
Cell object, 50, 51
cells
clearing, 70
color, 23–24
coloring, 373–376
data validation, 383–387
filling, 118–119
logging changes to, 380
ranges. See ranges and Range object
summing numbers in, 239–240
Cells property, 76
Centering button, 306
centuries, entering, 59
Change Chart Type dialog box, 200
change events
workbooks, 154–155
worksheets, 141–142, 144–148
CHAR function, 289
characters, extracting from strings, 241–242
Chart object, 199
chart sheets
adding charts to, 200–202
copying to slides, 433–435
ChartLocation macro, 84
ChartObject object, 199
charts, 199
adding to chart sheets, 200–202
adding to worksheets, 202–204
deleting, 207–208
locating, 82–84, 209
looping through, 206–207
moving, 204–205
PivotCharts, 223–226
renaming, 208
step-by-step example, 208–211
UserForms, 314–315
Charts collection, 53, 199
ChartSheetsToWorkbook macro, 205
ChartSheetToWorksheet macro, 204–205
CheckBox controls
color, 329–330
overview, 294–295
CheckBox1_Click macro, 295
class modules, 28, 321
benefits, 323–326
classes, 321–322
collections, 326
description, 322–323
objects
creating, 323
embedded, 326–330
step-by-step example, 330–334
ClearClipboard macro, 381–382
ClearContents method, 51–52, 70
ClearData macro, 183, 185
clearing
clipboard, 381–382
ranges, 51–52, 70
click events
CommandButtons, 286
workbooks, 155–156
worksheets, 142
clipboard, clearing, 381–382
Close button, disabling, 307–308
close events, 154
Close method
Connection object, 367
Recordset object, 368
CloseAllOtherWorkbooks macro, 68
CloseOneWorkbook macro, 105
CloseOneWorkbookFaster macro, 105
CloseWorkbooks macro, 104
closing
add-ins, 349
connections, 367
Recordset objects, 368
UserForms, 281–283
workbooks, 68, 104–105
worksheets, 104
cmdButtonGroup_Click macro, 327
cmdButtonGroup_MouseMove macro, 327
cmdCancel_Click macro, 286
cmdContinue_Click macro, 308
cmdLandscape_Click macro, 286
cmdOK_Click macro
add-in example, 338–339
checkboxes, 296
cmdPortrait_Click macro, 286
cmdSortDown_Click macro, 311–312
cmdSortUp_Click macro, 311
code
debugging. See debugging code
macros, 36
UserForms, 281
Code window, 27
Collection object, 52–53
collections
ActiveX controls, 328
creating, 326
For...Each...Next loops, 104
object model, 52–53
step-by-step example, 71–73
workbooks, 67–69
colon character (:) in Select Case structure, 92
color
cells, 23–24
CheckBox controls, 329–330
comments, 36, 38
hex codes, 327
Color property, 51
colored cells, summing numbers in, 239–240
coloring
active elements, 373–375
cells, 375–376
columns
coloring, 373–375
last, 80–81
ComboBox controls
overview, 292–294
populating, 312–314
pre-sorting items in, 310–311
ComboBox1_Change macro, 313
Command object, 368
CommandBar object, 346
CommandButton controls
ActiveX controls, 187–191
adding, 278–280
overview, 286
CommandButton1_Click macro
ActiveX controls, 190
hiding columns, 288
OptionButtons, 297–298
summing numbers, 290
CommandButton4_Click macro, 300
CommandText property
Access fields, 427
Command object, 368
CommandType property, 368
commas (,)
arguments, 239
ranges, 77
thousands separators, 16
variable declarations, 59
Comment2Text macro, 360
comments
cell change logs, 380
conditional formatting for, 244–245
listing unique items, 169–170
in macros, 36–39
Comments collection, 52
compatibility of macros, 34
conditional formatting in UDFs, 244–245
ConfirmExample macro, 93
Connection object, 367
ConnectionString property, 367
constants, 63–64
continuously populated ranges, 75–77
Control Toolbox, 186–187
controls, 274–280
Application.Caller for, 184–186
Buttons, 183–184
CheckBox, 294–295
ComboBox, 292–294
CommandButton, 187–191, 286
Control Toolbox, 186–187
Form and ActiveX, 181–182
Forms toolbar, 182–183
Frame, 298–300
frequently used, 285
Label, 287–288
ListBox, 290–292
MultiPage, 300–301
OptionButton, 296–298
step-by-step examples, 191–198, 301–304
TextBox, 288–290
ControlSource property, 288
ConvertAbsoluteToRelative macro, 121–122
converting
absolute and relative references, 121–122
files to add-ins, 341–342
ConvertRelativeToAbsolute macro, 121
CopyChartSheets macro, 434–435
CopyEmbeddedChart macro, 437–439
copying
chart sheets to slides, 433–435
to clipboard, 381–382
ranges
to PowerPoint presentations, 432–433
to Word documents, 402–403
CopyRange macro, 432–433
CountFormulas macro, 122
Create PivotTable dialog box, 213
CreateAccessTable macro, 426–427
CreateChartSameSheet macro, 202–203
CreateChartSheet macro, 200
CreateNewPresentation macro, 431–432
CreatePivot macro, 227
CreatePivotChart macro, 234–235
CreateTextFiles macro, 359
CreateWordDoc macro, 402
Ctrl+Alt+F9 keys, 243
Ctrl+Break keys, 243
Ctrl+F11 keys, 26
Ctrl+G keys, 28, 72
Ctrl+R keys, 26, 150
Ctrl+S keys, 350
Ctrl+Shift+Enter keys, 120
Ctrl+Shift+F9 keys, 261
Currency data type, 58
current cells, coloring, 375–376
CurrentQuarter macro, 92
CurrentRegion property
charts, 200–201
overview, 76–77
Custom Lists dialog box, 384–385
Customize Ribbon option, 15
CustomListDV macro, 385–387
CutCopyMode property, 381
DAO (Data Access Objects) library, 366
data access, ADO. See ActiveX Data Objects (ADO)
data ranges, identifying, 79
data types, 55
arrays, 127
dates and time, 58–59
declaring, 59–61
overview, 57–58
data validation in cells, 383–387
database management systems (DBMSs), 366
databases
Access. See Access application and tables
terms, 366
DataRangeLastRowsColumns macro, 80–81
dates and Date data type
declaring, 58–59
description, 58
filtering, 376–379
querying, 361–364
DateSerial function, 111, 376
DBMSs (database management systems), 366
deactivate events
workbooks, 154, 157–158
worksheet, 144
Debug toolbar, 254–255
Break button, 255
Design Mode button, 255
Reset button, 255
Run button, 255
stepping through code, 255–256
Step Into button, 257–258
Step Out button, 259
Step Over button, 258–259
Toggle Breakpoint button, 259–261
debugging code
Call Stack dialog box, 263
Debug toolbar. See Debug toolbar
errors
bypassing, 265–266
causes, 252–254
handling, 264–265
Immediate window, 261–262
Locals window, 261
overview, 251–252
Quick Watch window, 263
step-by-step example, 266–268
Watch window, 262–263
Decimal data type, 58
decisions, 85
If...Then statements, 88
If...Then...Else statements, 89
If...Then...ElseIf statements, 90
IIF statements, 90–91
logical operators, 85–88
Select Case structure, 91–92
step-by-step example, 94–97
user, 92–94
declaring
arrays, 129–130
dynamic, 133–134
with fixed elements, 132–133
constants, 63–64
variables, 55–56, 59–61
DELETE statement in SQL, 370–371
DeleteAllPivotTables macro, 232
DeleteAndCreate macro, 361
DeleteArrayColors macro, 167
DeleteChartSheets macro, 208
DeleteDupesColumnA macro, 162
DeleteDupesColumnD macro, 162–163
DeleteDuplicateRecords macro, 164–165
DeleteRows3YearsOld macro, 378–379
deleting
charts, 207–208
hyperlinks, 261
macros, 39
modules, 42–43
PivotTables, 232
rows
with duplicates, 161–167
filtered dates, 378–379
in SQL, 370–371
descriptions
in Insert Function dialog box, 246–248
for macros, 19
Design Mode button, 255
Design mode in Control Toolbox, 188, 191
Developer tab, 13–15
Dim statement, 129–130
dimensions, arrays, 129
Dir function, 107
disabling
Close button, 307–308
Frames, 298–299
worksheet events, 139–140
DisplayGridlines property, 88
displaying
photographs, 308–309
real-time charts, 314–315
Do...Loop Until loops, 109
Do...Loop While loops, 109
Do Until loops, 107–108
Do While loops, 106–107
double-click events
workbooks, 155–156
worksheets, 142
Double data type, 58
DoWhileExample macro, 107
duplicates
deleting rows with, 161–167
selecting range of, 171–172
step-by-step example, 173–179
unique lists from multiple columns, 167–170
dynamic arrays, 133–134
dynamic last rows and columns, 80–81
e-mail
creating, 410–411
example, 413–414
step-by-step example, 415–418
transferring ranges to, 411–413
worksheets, 415
early binding, 392–395
EarlyBindingTest macro, 393
editing macros, 37–39
efficiency, variables for, 57
elements, array, 127
EmailAttachmentRecipients macro, 416–418
EmailSingleSheet macro, 415
embedded charts
adding to worksheets, 202–204
copying to PowerPoint, 436–439
looping through, 206–207
moving, 204–205
embedded form controls. See Form controls
embedded objects, class modules for, 326–330
EmbeddedChartToAnotherWorksheet macro, 205
EmbeddedChartToChartSheet macro, 205
EmptyRecycleBin function, 382
EnableEvents property, 139–140
enabling worksheet events, 139–140
End Function statements, 238
End If statements, 88
end of ranges, 81–82
errors
deployment. See debugging code
UDFs, 242
Word applications, 400
Euro Currency Tools add-in, 346
events, 137
automatically run macros, 5–6
CommandButton, 187–191
Object Browser, 29
workbook. See workbook events
worksheet. See worksheet events
ExampleEmail macro, 413–414
Excel Options dialog box
add-ins, 344
for Developer tab, 13–14
formulas, 116
lists, 384–385
Option Explicit statement, 61
Exit For statements, 105
exiting
For loops, 104–105
VBE, 30
ExportFromExcelToWord macro, 403
exporting Access tables, 423–426
expressions in Watch window, 262–263
Extended ASCII characters, 289
external data, 353
ADO. See ActiveX Data Objects (ADO)
QueryTables
for Access, 356–358
from web queries, 353–356
step-by-step example, 361–364
text files for, 359–361
extracting
addresses from hyperlinks, 242
characters from strings, 241–242
ExtractLetters UDF, 241–242
ExtractNumbers UDF, 241, 246–247
F
F2 key, 23, 243
F4 key, 337
F5 key, 23, 70
F9 key, 260
F11 key, 202
False value in truth tables, 85–88
FavoriteMovies macro, 127
FavoriteMoviesLoop macro, 128
FavoriteMoviesRange macro, 128–129
field lists in PivotTables, hiding, 217–219
fields, database, 366
files, converting to add-ins, 341–342
FillBlankCellsFromAbove macro, 118–119
FilterBetweenDates macro, 376–378
FilterDateAfterToday macro, 378
FilterDateBeforeToday macro, 378
filters
AdvancedFilter, 161–164, 167–168
dates, 376–379
deleting rows with duplicates, 161–164
PivotTables, 214
Find_LastRow_LastColumn macro, 75
Find method
error bypass structure for, 266–268
ranges, 79
FindFormulas macro, 71
FindHello macro, 109
FindTest macro, 268
fixed elements, declaring arrays with, 132–133
fixed-iteration loops, 102
For...Next loops, 102–103
For...Each...Next loops, 104
forcing variable declarations, 59–61
ForeColor property, 327
Form controls
vs. ActiveX, 182
Application.Caller, 184–186
buttons, 183–184
Control Toolbox, 186–187
Forms toolbar, 182–183
overview, 181
step-by-step example, 191–198
Format Cells dialog box
color, 23
numbers, 193–194
PivotTables, 220–221
formatting
PivotTable numbers, 219–222
UDFs, 244–245
forms. See UserForms
Forms toolbar, 182–183
FormulaArray method, 120
FormulaR1C1 method, 114, 116
formulas, 113
array, 120–122
counting, 122
entering, 114–115
references, 113–115
A1 vs. R1C1, 115
converting absolute and relative, 121–122
mixed, in filling empty cells, 118–119
named ranges, 119–120
toggling between style views, 116–118
step-by-step example, 124–126
summing lists, 122–124
ForNextExample2 macro, 103
ForNextExample3 macro, 103
Frame controls, 276–277, 298–300
FROM clause in SELECT statements, 369
Function statement, 238
functions. See user-defined functions (UDFs)
G
GetComment UDF, 249, 350
GetObject function, 399–400
GetTextMessage macro, 361
Go To dialog box
accessing, 23
SpecialCells, 70, 244
Go To Special dialog box, 23
GroupName property, 298
Groups button, 306
grpCBX_Click macro, 329
H
Height parameter for charts, 209
hex codes for color, 327
Hide method for UserForms, 281
hiding
PivotTable field lists, 217–219
UserForms, 283
history of VBA, 4
hyperlinks
deleting, 261
events, 142–143
extracting addresses from, 242
I
icons, displaying, 12–13
identifying ranges, 79–80
If...Then statements, 88
If...Then...Else statements, 89
If...Then...ElseIf statements, 90
IIF statements, 90–91
Image controls, 309
Immediate window, 28, 31, 261–262
Import Data dialog box, 426
Importance property for e-mail, 411
ImportHistory macro, 356
importing
Access tables, 423–426
Word documents, 404–405
ImportStocks macro, 354–355
ImportToExcelFromWord macro, 404–405
indeterminate loops, 102
index numbers
arrays, 127–128, 131–132
charts, 205
lists, 383
worksheets, 68–69, 107
Index property
charts, 82
PivotTables, 229
infinite loops, 140
Initialize events
labels, 287
ListBox controls, 291
input boxes, 94
InputPassword macro, 110
Insert Chart dialog box, 225
Insert Function dialog box, descriptions in, 246–248
inserting
modules, 39–40
rows
on data changes, 172–173
databases, 369–370
input boxes for, 94
loops for, 106
InsertRows macro, 94, 106
installing add-ins, 342–346
instantiating
classes, 323
objects, 325
Integer data type
description, 58
variables, 55
IntelliSense tool, 71–73
interface for add-ins, 346–348
IsNumeric function, 146
iterations in loops, 101–102
J
JKP Application Development Services, 381
K
KeepOnlyArrayColors macro, 166–167
L
Label controls
OptionButtons, 330–331
overview, 287–288
UserForms, 276–277
last rows and columns, finding, 80–81
late binding
description, 394
vs. early, 394–395
step-by-step example, 395–397
LateBindingTest macro, 394
LBound function, 132
Left parameter for charts, 209
LEN function, 293
letters, extracting from strings, 241–242
liabilities of VBA, 8
libraries in Object Browser, 28–30
lifetime
constants, 64
variables, 61–63
Link UDF, 242
ListBox controls
overview, 290–292
populating, 312–314
pre-sorting items in, 310–311
ListBox1_Click macro, 292
lists
arrays as, 128–129
custom, 385–387
from multiple columns, 167–170
summing, 122–124
ListStyle property, 290
LoadPicture dialog box, 309
local macro scope, 62
Locals window, 261
Location property, 200
Locked property, 51
locking VBE, 43–44
logging cell changes, 380
logical errors, 253–254
logical operators, 85
AND, 86
NOT, 87–88
OR, 86–87
Long data type, 58
look and feel, simplifying, 7
LoopAllChartSheets macro, 207
LoopAllEmbeddedCharts macro, 206
loops
description, 101–102
Do...Loop Until, 109
Do...Loop While, 109
Do Until, 107–108
Do While, 106–107
embedded charts, 206–207
exiting, 104–105
For...Each...Next, 104
For...Next, 102–103
infinite, 140
nesting, 110–111
reverse, 105–106
step-by-step example, 111–112
types, 102
While...Wend, 110
Loop Twelve Months macro, 112
M
Macro dialog box, 21–22
Macro Options dialog box, 247–248
Macro Recorder limitations, 37
macros
automatically running, 5–6
buttons, 183–184
calling UDFs from, 245–246
code, 36
compatibility, 34
deleting, 39
description, 3–4
MailItem objects, 410–411
maximizing UserForms, 308
Me keyword, 306
message boxes, 92–93
methods
IntelliSense for, 71–73
Object Browser, 29
object model, 49, 51–52
MID function, 243–244
mixed references
filling empty cells, 118–119
named ranges with, 119–120
modal UserForms, 306–307
modeless UserForms, 306–307
modules
class. See class modules
deleting, 42–43
inserting, 39–40
renaming, 41–42
types, 28
UDFs, 238
UserForms, 281
variable scope in, 62–63
VBE, 34–35
moving charts, 204–205
MultiPage controls, 300–301
multiple columns
deleting rows with duplicates, 164–167
unique lists from, 167–170
MultiSelect property, 290, 292
Name property
UserForms, 274
worksheets, 51
named ranges, 119–120
names
active worksheets and workbooks, 243–244
charts, 205, 208
macros, 19, 22
modules, 41–42
PivotFields, 230
testing, 265–266
UDFs, 239
variables, 55–56
Names collection, 53, 70
NameWB UDF, 244
Naval Observatory, querying, 361–364
nesting loops, 110–111
New Formatting Rule dialog box, 245
new sheet events, 156–157
Next statements, 103
non-Excel applications, controlling, 7–8
noncontinuously populated ranges, 77
NOT logical operators, 87–88
number signs (#) for dates and time, 58–59
numbers
extracting from strings, 241–242
formatting in PivotTables, 219–222
summing, 239–240
Object Browser, 28–30
Object data type, 58
object-oriented programming
introduction, 49
object model
collections, 52–53
methods, 51–52
overview, 50–51
properties, 51
summary, 53
objects
creating, 323
embedded, 326–330
IntelliSense for, 71–73
ODBC (Open Database Connectivity), 366
Office automation, 391
Access. See Access application and tables
benefits, 391–392
binding, 392–395
Outlook. See Outlook application and e-mail
PowerPoint. See PowerPoint presentations
step-by-step example, 395–397
Word. See Word application and documents
OFFSET property, 78
OLEObject keyword, 328
OLEObjects keyword, 328
On Error GoTo statements, 264
On Error Resume Next statements, 265, 400
OnKey procedures, 202
Open Database Connectivity (ODBC), 366
open events, 153–154
Open method
Connection object, 367
Recordset object, 368
OpenAllFiles macro, 107
opening
databases, 367, 424
Outlook, 409–410
PowerPoint, 395–397
Recordset objects, 368
Word documents, 400–401, 406–408
workbooks, 107
OpenOrClosed UDF, 245–246
OpenOutlook macro, 409–410
OpenPowerPoint macro, 395–397
OpenRequestedWordDoc macro, 406–408
OpenTest UDF, 245
OptGroup_Click macro, 332
Option Base statement, 130–131
Option Explicit statement, 59–61
OptionButton controls
adding, 277–278
overview, 296–298
step-by-step example, 330–334
Options dialog box
Auto List Members, 71–72
view style, 116–117
OR logical operators, 86–87
ORDER BY statement, 369
Outlook application and e-mail, 409
creating, 410–411
example, 413–414
opening, 409–410
step-by-step example, 415–418
transferring ranges to, 411–413
worksheets, 415
Parent property, 52
parentheses ()
arrays, 129–130
message boxes, 93
Sub statement, 36
UDFs, 239
PasswordChar property, 288
passwords
entering, 110
step-by-step example, 94–97
UserForms, 288
VBE, 43
PasswordTest macro, 97
photographs, 308–309
PickSixLottery macro, 110–111
Picture property, 309
pie charts, 210–211
Pieterse, Jan Karel, 381
PivotCaches, 226–230
PivotCharts, 223–226
PivotFields, 230
PivotItems, 231
PivotTables, 52, 213
creating, 213–217
field list hiding, 217–219
formatting numbers in, 219–222
PivotCaches, 226–230
PivotCharts, 223–226
PivotFields, 230
pivoting data in, 222
PivotItems, 231
refreshing, 226, 232
step-by-step example, 232–235
workbook events, 156
worksheet events, 144
PivotTables collections, 231–232
points, 209
populating ListBox and ComboBox items, 312–314
page breaks, 379
PageBreakInsert macro, 379
PowerPoint presentations
binding, 395–397
copying chart sheets to, 433–435
copying ranges to, 432–433
creating, 431–432
running, 435–436
PowerPointSlideshow macro, 435–436
pre-sorting ListBox and ComboBox items, 310–311
prefixes for control names, 286
Preserve statements, 133–134
primary keys for databases, 366
print events, 157–160
printing Word documents, 403–404
PrintWordDoc macro, 403–404
prior selected cells, coloring, 375–376
Project Explorer window, 26–27, 150
prompts
input boxes, 94
message boxes, 93
properties and Property Window, 27
accessing, 339
IntelliSense for, 71–73
module names, 41
Object Browser, 29
object model, 49, 51
UserForms, 273–274
protecting
add-in code, 348
VBE, 43–44
PtrSafe keyword, 381, 424
Public scope
arrays, 130
UDFs, 238
PublicArrayExample macro, 130
queries, database, 366
QueryClose events, 307
QueryTables
for Access, 356–358
from web queries, 353–356
question marks (?) for Immediate window, 261–262
Quick Watch window, 263
QuickBASIC language, 4
quotes (")
column references, 76
ranges, 77
VALUES clause, 370
R
R1C1 references in formulas, 113–115
vs. A1, 115
toggling with A1, 116–118
RAND function, 124, 243
random numbers
lottery example, 110–111
volatility of, 124, 243
ranges and Range object, 50, 75
continuously populated, 75–77
copying
to PowerPoint presentations, 432–433
to Word documents, 402–403
with duplicates, 171–172
identifying, 79–80
last rows and columns, 80–81
named, 119–120
noncontinuously populated, 77
OFFSET property, 78
overview, 69–70
RESIZE property, 78
SpecialCells, 70–71
start and end, 81–82
step-by-step example, 82–84
transferring to e-mail, 411–413
readability, variables for, 57
real-time charts, 314–315
recalculating
calculate events for, 144
Volatile functions, 124, 243
Record Macro dialog box, 18
recording macros, 16–21
records
adding to Access, 419–423
databases, 366
Recordset object, 367–368
recurring tasks, 5
RecycleBinEmpty macro, 382
ReDim statements, 133–134
references in formulas, 113–115
A1 vs. R1C1, 115
converting absolute and relative, 121–122
mixed, for filling empty cells, 118–119
named ranges with, 119–120
toggling between style views, 116–118
Refresh method, 52
RefreshAll method, 232
refreshing
PivotCaches, 226
PivotTables, 232
QueryTables, 355–356
relational databases, 366
relative references
converting to absolute, 121–122
named ranges with, 119–120
removing add-in list items, 349–350
RenameCharts macro, 208
renaming
charts, 208
modules, 41–42
repeating actions with loops. See loops
repetitive tasks, 5
Require Variable Declaration option, 61, 68
Reset button, 255
RESIZE property, 78
reverse loops, 105–106
Ribbon interface, 11, 13
right click events
workbooks, 156
worksheets, 142
rows
coloring, 373–375
deleting
with duplicates, 161–167
filtered dates, 378–379
in SQL, 370–371
inserting
on data changes, 172–173
in databases, 369–370
input boxes for, 94
loops for, 106
last, 80–81
RowSource property
ComboBox controls, 292–293
ListBox controls, 291
Run button, 255
Run Macro button, 21
runtime errors, 253
Same Size button, 306
Save As dialog box, 341, 349–350
save events, 158
SaveCellValue macro, 361
Saved property, 51
scope
arrays, 130
constants, 63–64
variables, 61–63
ScreenUpdating, 175
searching
loops for, 109
in Object Browser, 30
Select Case structure, 91–92
Select Data Source dialog box, 357
Select method, 37
SELECT statement in SQL, 369
Select Table dialog box, 357–359
SelectCaseExample macro, 92
SelectDataRange macro, 79
selected cells, coloring, 375–376
SelectedWorksheets macro, 133–134
selecting
photographs, 308–309
range of duplicates, 171–172
worksheets, 107–108
selection change events
workbooks, 155
worksheets, 141–142
Selection object, 69
SelectSheet macro, 107–108
SelectUsedRange macro, 79
self-expiring workbooks, 382
Send to Back button, 306
SendEmail macro, 410–411
SendMail, 415
Set as Default Chart option, 200
SheetManager macro, 339, 346–347
SheetName UDF, 244
SheetPivotTableUpdate events, 156
Sheets collection, 69
ShellExecute function, 424
Shift+F9 keys, 263
shortcut keys for macros, 19, 22
Show Developer tab, 13–14
ShowAllItems macro, 231
ShowHidePivotChartFieldButtons macro, 225–226
ShowModal property, 306
ShowSingleItem macro, 231
ShowUserForm1 macro, 306
ShowUserForm2 macro, 307
Single data type, 58
64-bit version, 381
size
arrays, 129, 133–134
UserForms, 308
slides in PowerPoint presentations
copying chart sheets to, 433–435
copying ranges to, 432–433
Solver add-in, 346
Sort_Separate_ClientName macro, 172–173
SpecialCells method, 70–71, 118
splash screens, 309–310
SQL. See Structured Query Language (SQL)
standard modules
description, 28
UDFs, 238
start of ranges, 81–82
static random numbers, 124
Static scope of arrays, 130
StaticRandom UDF, 243
Step statements, 105–106
stepping through code, 255–256
Step Into button, 257–258
Step Out button, 259
Step Over button, 258–259
stock quotes QueryTables example, 353–356
Stop Recording toolbar, 20–21
strings and String data type
description, 58
extracting characters from, 241–242
Structured Query Language (SQL), 356, 368
DELETE statement, 370–371
examples, 371–372
INSERT statement, 369–370
SELECT statement, 369
UPDATE statement, 370
upper case for statements, 369
Sub statement, 36
SumAlongOneRow macro, 123–124
SumColor UDF, 240
SUMIF function, 239
summing
lists, 122–124
numbers in colored cells, 239–240
syntax errors, 252
tables
Access. See Access application and tables
arrays as, 128–129
PivotTables. See PivotTables
TestComment UDF, 244–245
TestPublicArrayExample macro, 130
TestSheetCreate macro, 265–266
text files for external data, 359–361
TextBox controls
adding, 276–277
collections of, 326
input filtering, 323–326
overview, 288–290
TextBox1_KeyPress macro, 289, 323–324
TextExport macro, 360–361
32-bit version, 381
ThisWorkbook module, 28, 150, 158, 202
time
declaring, 58–59
fractional values with, 287
querying, 361–364
Time function, 287
TimeAfterTime macro, 363
titles in message boxes, 93
Toggle Breakpoint button, 259–261
ToggleViews macro, 195–197
toolbars
Control Toolbox, 186–187
Debug. See Debug toolbar
Forms, 143, 182–183
Macro Recorder, 20
Stop Recording, 21
UserForms, 305–306
VB, 12–13
VBE, 33–34
Toolbox in VBE, 274–276
Top parameter for charts, 209
trapping errors, 264–266
triggers
for automatically run macros, 5–6
events. See events
True value in truth tables, 85–88
truth tables, 85–88
TryItPieChart macro, 209–211
two-dimensional arrays, 129
TwoDimensionalArray macro, 129
TxtGroup_KeyPress macro, 324
UBound function, 130, 132
UDFs. See user-defined functions (UDFs)
Ungroup button, 306
UnhideSheets macro, 104
unhiding worksheets, 104
unique lists from multiple columns, 167–170
UniqueList macro, 167–168
UniqueStoresToWorkbooks macro, 174–179
Unload method, 281
unloading UserForms, 282, 309–310
UPDATE statement in SQL, 370
UsedRange property, 79–80
user decisions, 92
input boxes, 94
message boxes, 92–93
user-defined functions (UDFs)
anatomy, 238–239
calling from macros, 245–246
characteristics, 237–238
conditional formatting, 244–245
creating, 7
description, 237
extracting addresses from hyperlinks, 242
extracting characters from strings, 241–242
Insert Function dialog box, 246–248
returning active worksheet and workbook
names, 243–244
step-by-step example, 248–249
summing numbers, 239–240
volatile, 243
user interface for add-ins, 346–348
UserForm_Initialize macro
charts, 314
ComboBoxes, 292–294
labels, 287
ListBoxes, 291, 310–311, 313
TextBoxes, 325–326
UserForm size, 308–309
UserForm_QueryClose macro, 307
UserForms
add-ins, 346
Close button, 307–308
closing, 281–283
code, 280–281
controls. See controls
creating, 272–273
description, 271
designing, 273–274
hiding, 283
ListBox and ComboBox items
populating, 312–314
pre-sorting, 310–311
modal vs. modeless, 306–307
modules, 28
photographs, 308–309
real-time charts, 314–315
showing, 280
size, 308
step-by-step examples, 283–284, 315–319
toolbar, 305–306
unloading, 282, 309–310
V
Val function, 289
Value Field Settings dialog box, 219–221
values, assigning to variables, 56
VALUES clause in INSERT statement, 370
variables
assigning values to, 56
data types
dates and time, 58–59
declaring, 59–61
overview, 57–58
declaring, 55–56, 59–61
names, 55–56
need for, 56–57
overview, 55–56
scope, 61–63
step-by-step example, 64–66
Watch window, 262–263
workbooks, 68
Variants data type
as default type, 59
description, 58
VB (Visual Basic) vs. VBA, 4
VBA overview, 3
benefits, 5–6
controlling non-Excel applications, 7–8
environment access, 11–15
history, 4
liabilities, 8
UDFs, 7
workbook look and feel, 7
VBAPrj - Project Properties dialog box, 43
VBE. See Visual Basic Editor (VBE)
vbModal value, 306
versions
Office, 392
VBA, 11–12
View Code option, 138
visibility
arrays, 130
variables, 61–64
Visual Basic Editor icon, 31
Visual Basic Editor (VBE)
description, 25
exiting, 30
getting into, 25–26
locking and protecting, 43–44
macros
code, 36
deleting, 39
editing, 37–39
locating, 33–35
modules, 28
deleting, 42–43
inserting, 39–40
renaming, 41–42
Object Browser, 28–30
step-by-step example, 30–31, 44–45
toolbars, 33–34
UserForms. See UserForms
windows, 26–28
Visual Basic toolbar, 12–13
Volatile functions, 124, 243
W
Watch window, 262–263
Web queries, QueryTables from, 353–356
WEEKDAY function, 88, 133
WeekdayTest macro, 90
WHERE clause
DELETE statement, 370–371
SELECT statement, 369
While...Wend loops, 110
Width parameter for charts, 209
Windows API, 381–382
With statements, 39
WithEvents keyword, 324
WithoutVariable macro, 56–57
WithVariable macro, 57
Word application and documents
activating, 399–401
copying ranges to, 402–403
creating, 402
early binding, 392–394
importing, 404–405
opening, 400–401
printing, 403–404
step-by-step example, 405–408
Workbook_Activate events, 154
Workbook_BeforeClose events, 154, 347
Workbook_BeforePrint events, 157
Workbook_BeforeSave events, 158, 359
Workbook_Deactivate events, 154
Workbook_NewSheet events, 156–157
Workbook object, 50, 67
Workbook_Open events
ActiveX controls, 328
add-ins, 347
CheckBox controls, 329–330
overview, 153–154
PivotTables, 232
Workbook_SheetActivate events, 157
Workbook_SheetBeforeDoubleClick events, 155–156
Workbook_SheetBeforeRightClick events, 156
Workbook_SheetChange events, 154–155
Workbook_SheetDeactivate events, 157–158
Workbook_SheetSelectionChange events, 155
workbooks
closing, 105
events. See workbook events
look and feel, 7
modules, 28
opening, 107
self-expiring, 382
working with, 67–69
Workbooks collection, 53, 67–69
Worksheet_Activate events, 143
Worksheet_BeforeDoubleClick events, 142, 171–172
Worksheet_BeforeRightClick events, 142
Worksheet_Calculate events, 144
Worksheet_Change events, 140
cell change logs, 380
overview, 140
PivotCharts, 229
PivotTables, 231–232
summing numbers, 144–148
unique items, 169–170
Worksheet_Deactivate events, 144
worksheet events, 137
common, 141–144
description, 137–138
enabling and disabling, 139–140
modules for, 138–139
step-by-step example, 144–148
Worksheet_Change, 141
Worksheet_FollowHyperlink events, 142–143
Worksheet object, 50
Worksheet_PivotTableUpdate events, 144
Worksheet_SelectionChange events
coloring active elements, 373–375
coloring cells, 376
overview, 141–142
worksheets
adding, 68–69
adding embedded charts to, 202–204
closing, 104
creating, 109
e-mailing, 415
events. See worksheet events
modules, 28
selecting, 107–108
unhiding, 104
Worksheets collection, 52, 69
WorksheetTest1 macro, 68
WorksheetTest2 macro, 69
X
.xla extension, 335
.xlam extension, 335
xlPath UDF, 239
.xlsx extension, 176
Y
years, entering, 59
YearSheets macro, 109
Z
zero-based array numbering, 130–131
Zoom button, 306
Real-Time Adaptive A* with Depression Avoidance
Carlos Hernández
Departamento de Ingeniería Informática
Universidad Católica de la Santísima Concepción
Concepción, Chile
Abstract
RTAA* is probably the best-performing real-time heuristic search algorithm at path-finding tasks in which the environment is not known in advance or in which the environment is known and there is no time for pre-processing. As most real-time search algorithms do, RTAA* performs poorly in presence of heuristic depressions, which are bounded areas of the search space in which the heuristic is too low with respect to their border. Recently, it has been shown that LSS-LRTA*, a well-known real-time search algorithm, can be improved when search is actively guided away of depressions. In this paper we investigate whether or not RTAA* can be improved in the same manner. We propose aRTAA* and daRTAA*, two algorithms based on RTAA* that avoid heuristic depressions. Both algorithms outperform RTAA* on standard path-finding tasks, obtaining better-quality solutions when the same time deadline is imposed on the duration of the planning episode. We prove, in addition, that both algorithms have good theoretical properties.
Introduction
Several real-world applications require agents acting in complex environments to repeatedly act quickly in a possibly unknown environment. Such is the case of virtual agents that repeatedly perform pathfinding tasks in commercial games (e.g., World of Warcraft, Baldur’s Gate, etc.). Indeed, it has been reported that game companies impose a time limit on the order of 1 millisecond for the path planning cycle that determines the move of every game character (Bulitko et al. 2010).
Real-time heuristic search (e.g. Korf 1990; Koenig 2001) is a standard approach used to solve these tasks. It is inspired by A* search (Hart, Nilsson, and Raphael 1968), but a mechanism is provided to bound the time taken to produce a movement. Real-time heuristic search algorithms repeatedly perform planning episodes whose computational time is bounded. Like standard A*, real-time heuristic algorithms use a heuristic function to guide action selection. Unlike A* search, the heuristic function is updated by the algorithm during execution. Such a process is usually referred to as the learning of the heuristic. The learning process guarantees that algorithms like LRTA* (Korf 1990) always find a solution if one exists in finite, undirected search spaces.
Researchers (e.g. Ishida 1992) have observed that algorithms like LRTA* perform poorly when they enter heuristic depressions. Intuitively, a depression is a bounded connected component of the search space, D, in which the heuristic underestimates too much the cost to reach a solution in relation to the heuristic values of the states in the border of D. Most state-of-the-art real-time heuristic search algorithms (Bulitko et al. 2010; Koenig and Likhachev 2006; Hernández and Meseguer 2005; 2007; Koenig and Sun 2009) deal with this issue either by doing more lookahead while planning, or by increasing the amount of learning.
In a previous paper, we proposed aLSS-LRTA*, a variant of LSS-LRTA* (Koenig and Sun 2009) that actively guides search away of heuristic depressions; a principle called depression avoidance (Hernández and Baier 2011). aLSS-LRTA* typically outperforms LSS-LRTA*, especially when the lookahead phase is limited to explore few states.
Although relatively popular, LSS-LRTA* does not seem to be representative of the state-of-the-art in real-time heuristic search when time is taken into account; RTAA* (Koenig and Likhachev 2006), on the other hand, does seem to be. Indeed, Koenig and Likhachev (2006) show that RTAA* is the correct choice when the time per planning episode is bounded; we have verified those results, but omitted them for space. RTAA* is similar in spirit to LSS-LRTA*; its learning mechanism yields a less informed heuristic but it is faster and simpler to implement.
In this paper we report on whether or not depression avoidance can be incorporated into RTAA*, and propose two new real-time heuristic search algorithms. The first, aRTAA*, is a rather straightforward adaptation of aLSS-LRTA*’s implementation of depression avoidance into RTAA*. As aLSS-LRTA* does, it avoids moving to states that have been identified as belonging to a depression. We show aRTAA* outperforms RTAA* in path-finding tasks. The second algorithm, daRTAA*, is a finer implementation of depression avoidance that, like aRTAA*, will prefer to move to states that do not belong to a depression, but, unlike aRTAA*, when no such states are found, it prefers moving to states that seem closer to the border of the depression. We show daRTAA* outperforms aRTAA*. We prove both algorithms have nice theoretical properties: they maintain the consistency of the heuristic, and they terminate, finding a solution whenever one exists.
Preliminaries
A search problem \( P \) is a tuple \((S, A, c, s_0, G)\), where \((S, A)\) is a digraph that represents the search space. The set \( S \) represents the states and the arcs in \( A \) represent all available actions. \( A \) does not contain elements of form \((x, x)\). In addition, the cost function \( c : A \rightarrow \mathbb{R}^+ \) associates a cost to each of the available actions. Finally, \( s_0 \in S \) is the start state, and \( G \subseteq S \) is a set of goal states. We say that a search space is undirected if whenever \((u, v)\) is in \( A \) then so is \((v, u)\). We assume that in undirected spaces \( c(u, v) = c(v, u) \), for all \((u, v) \in A \). We define \( k(u, v)\) as a function that returns the cost of the minimum-cost path between states \( u \) and \( v \). The successors of a state \( u \) are defined by \( \text{Succ}(u) = \{v | (u, v) \in A \} \). Two states are neighbors if they are successors of each other. A heuristic function \( h : S \rightarrow [0, \infty) \) associates to each state \( s \) an approximation \( h(s) \) of the cost of a path from \( s \) to a goal state. \( h \) is consistent if \( h(g) = 0 \) for all \( g \in G \) and \( h(s) \leq c(s, w) + h(w) \) for all states \( w \in \text{Succ}(s) \). We refer to \( h(s) \) as the \( h \)-value of \( s \). We assume familiarity with the A* algorithm (Hart, Nilsson, and Raphael 1968): \( g(s) \) denotes the cost of the path from the start state to \( s \), and \( f(s) \) is defined as \( g(s) + h(s) \). The \( f \)-value and \( g \)-value of \( s \) refer to \( f(s) \) and \( g(s) \) respectively.
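For concreteness, these definitions can be captured in a few lines of Python. This is only an illustrative sketch (the names `SearchProblem` and `is_consistent` are not from the paper), assuming the search space is given explicitly as successor sets and an edge-cost table.

```python
from dataclasses import dataclass

@dataclass
class SearchProblem:
    """Illustrative container for (S, A, c, s0, G) as defined above."""
    succ: dict     # state -> iterable of successor states (the arcs in A)
    cost: dict     # (u, v) -> positive cost c(u, v)
    start: object  # s0
    goals: set     # G

def is_consistent(problem, h):
    """h is consistent iff h(g) = 0 for all goals and
    h(s) <= c(s, w) + h(w) for every arc (s, w)."""
    if any(h[g] != 0 for g in problem.goals):
        return False
    return all(h[s] <= problem.cost[(s, w)] + h[w]
               for s in problem.succ for w in problem.succ[s])
```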
Real-Time Search
The objective of a real-time search algorithm is to make an agent travel from an initial state to a goal state performing, between moves, an amount of computation bounded by a constant. An example situation is pathfinding in previously unknown grid-like environments. There the agent has memory capable of storing its current belief about the structure of the search space, which it initially regards as obstacle-free (this is usually referred to as the free-space assumption (Koenig, Tovey, and Smirnov 2003)). The agent is capable of a limited form of sensing: only obstacles in the neighbor states can be detected. When obstacles are detected, the agent updates its map accordingly.
Most state-of-the-art real-time heuristic search algorithms can be described by the pseudo-code in Algorithm 1. The algorithm iteratively executes a lookahead-update-act cycle until the goal is reached. The lookahead phase (Line 5–7) determines the next state to move to, the update phase (Line 8) updates the heuristic, and the act phase (Line 9) moves the agent to its next position. The lookahead-update part of the cycle (Lines 5–8) is referred to as the planning episode throughout the paper.
The generic algorithm has three local variables: \( s \) stores the current position of the agent, \( c(s, s') \) contains the cost of moving from state \( s \) to a successor \( s' \), and \( h \) is such that \( h(s) \) contains the heuristic value for \( s \). All three variables may change over time. In path-finding tasks, when the environment is initially unknown, the initial value of \( c \) is such that no obstacles are assumed; i.e., \( c(s, s') < \infty \) for any two neighbor states \( s, s' \). The initial value of \( h(s) \), for every \( s \), is given as a parameter.
In the lookahead phase (Line 5–7), the algorithm determines where to proceed next. The lookahead (Line 5) generates a search frontier of states reachable from \( s \), which are stored in the variable \( \text{Open} \). In RTA* and LRTA* (Korf 1990) the frontier corresponds to all states at a given depth \( d \). On the other hand, LSS-LRTA* (Koenig and Sun 2009), RTAA* (Koenig and Likhachev 2006), and other algorithms carry out an A* search that expands at most \( k \) states. The number \( k \) is referred to as the lookahead parameter, and the states generated during lookahead form the so-called local search space. The variable \( \text{Open} \) (cf. Line 6) contains the frontier of the local search space. Also, we assume that after executing an A* lookahead, the variable \( \text{Closed} \) contains the states that were expanded by the algorithm. Finally, the next state to move to, \( s_{next} \), is assigned in Line 7, and generally corresponds to the state \( s' \) in \( \text{Open} \) that minimizes the sum \( k(s_{current}, s') + h(s') \), where \( k(s_{current}, s') \) is the cost of the optimal path from \( s_{current} \) to \( s' \).
When an A* lookahead is used with consistent heuristics, such a state is the one with minimum \( f \)-value in \( \text{Open} \) (see Algorithm 2).
**Algorithm 1:** A standard real-time heuristic search algorithm
```plaintext
Input: A search problem P, a heuristic function h, a cost function c.
1.  for each s ∈ S do
2.      h_0(s) ← h(s)
3.  s_current ← s_0
4.  while s_current ∉ G do
5.      LookAhead()
6.      if Open = ∅ then return no-solution
7.      s_next ← Extract-Best-State()
8.      Update()
9.      move the agent from s_current to s_next along the path identified by
        LookAhead; stop if an action cost along the path is updated
10.     s_current ← current agent position
11.     update action costs (if they have increased)
```
**Algorithm 2:** Selection of the Best State used by LSS-LRTA*, RTAA*, and others
```plaintext
Input: A search problem P, a heuristic function h, a cost function c.
1.  procedure Extract-Best-State()
2.      return argmin_{s' ∈ Open} g(s') + h(s')
```
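The generic loop and Algorithm 2 translate fairly directly into code. The sketch below, which reuses the hypothetical `SearchProblem` layout from the Preliminaries, implements a bounded A* lookahead that expands at most k states and returns the local search space (`Open`, `Closed` and the g-values); `extract_best_state` is then the one-liner of Algorithm 2. All names are illustrative, not from the paper.

```python
import heapq

def bounded_astar(problem, h, s_current, k):
    """A*-style lookahead that expands at most k states.
    Returns (open_states, closed, g) describing the local search space."""
    g = {s_current: 0.0}
    frontier = [(h[s_current], 0.0, s_current)]   # heap entries are (f, g, state)
    closed = set()
    while frontier and len(closed) < k:
        f, gs, s = heapq.heappop(frontier)
        if s in closed or gs > g[s]:
            continue                               # stale heap entry
        if s in problem.goals:                     # keep the goal on the frontier
            heapq.heappush(frontier, (f, gs, s))
            break
        closed.add(s)
        for t in problem.succ[s]:
            gt = gs + problem.cost[(s, t)]
            if gt < g.get(t, float("inf")):
                g[t] = gt
                heapq.heappush(frontier, (gt + h[t], gt, t))
    open_states = {s for _, _, s in frontier if s not in closed}
    return open_states, closed, g

def extract_best_state(open_states, g, h):
    """Algorithm 2: the frontier state with minimum f = g + h."""
    return min(open_states, key=lambda s: g[s] + h[s])
```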
Using the heuristic of all or some of the states in the frontier of the local search space (\( \text{Open} \)), the algorithm updates the heuristic value of states in the local search space (Line 8). Intuitively, after the lookahead is carried out, information is gained regarding the heuristic values of states in the frontier of the local search space. This information is used to update the $h$-value of states in the local search space in such a way that they are consistent with the $h$-values of the frontier. As before, different algorithms implement different mechanisms to update the heuristics. In what follows, we focus on LSS-LRTA* and RTAA*, since these are the most relevant algorithms to this paper.
LSS-LRTA* updates the values of each state $s$ in the local search space in such a way that $h(s)$ is assigned the maximum possible value that guarantees consistency with the states in $\text{Open}$. It does so by implementing the $\text{Update}$ procedure as a version of Dijkstra’s algorithm (see Koenig and Likhachev 2006).

We carried out an independent empirical evaluation over 12 game maps in which we confirmed the observation, mentioned in the introduction, that RTAA* is the better choice when planning time is limited. For example, given a deadline of 0.005 milliseconds for the planning episode, RTAA* finds solutions on average 11.6% cheaper than those found by LSS-LRTA*. For a deadline of 0.02 milliseconds, RTAA* finds solutions on average 41.5% cheaper than those found by LSS-LRTA*. We concluded that RTAA* seems superior to LSS-LRTA* when time per planning episode is restricted.
**Depression Avoidance**
A heuristic depression is a bounded region of the search space containing states whose heuristic value is too low with respect to the heuristic values of states in the border of the depression. Depressions exist naturally in heuristics used along with real-time heuristic search algorithms and are also generated during runtime.
Ishida (1992) was perhaps the first to analyze the behavior of real-time heuristic search algorithms in presence of such regions. For Ishida, a depression is a maximal connected component of states that defines a local minimum of $h$. Real-time search algorithms like LRTA* become trapped in these regions, precisely because movements are guided by the heuristic. As such, once an agent enters a depression, the only way to leave it is by raising the heuristic values of the states in the depression high enough as to make the depression disappear. Algorithms capable of performing more learning than LRTA*, such as LSS-LRTA* or RTAA*, also perform poorly in these regions because their movements are also only guided by the value of the heuristic.
In previous work, we proposed a definition for cost-sensitive heuristic depressions that is an alternative to Ishida’s and takes costs into account (Hernández and Baier 2011). Intuitively, a state $s$ is in a cost-sensitive heuristic depression if its heuristic value is not a realistic cost estimate with respect to the heuristic value of every state in the border, considering the cost of reaching such a state from $s$. Formally.
**Definition 1 (Cost-sensitive heuristic depression)** A connected component of states $D$ is a cost-sensitive heuristic depression of a heuristic $h$ iff for any state $s \in D$ and every state $s'$ in the boundary of $D$, $h(s) < k(s, s') + h(s')$, where $k(s, s')$ denotes the cost of the cheapest path that starts in $s$ and traverses states only in $D$ before ending in $s'$.
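Definition 1 can be checked directly on the explicit-graph sketch from the Preliminaries. The helper below (illustrative, not from the paper) runs a Dijkstra search restricted to a candidate set D from each of its states and tests the inequality against every border state.

```python
import heapq

def is_cost_sensitive_depression(problem, h, D):
    """True iff for every s in D and every border state b,
    h(s) < k(s, b) + h(b), where k only traverses states of D
    before its final step.  D is assumed to be connected."""
    D = set(D)
    border = {t for s in D for t in problem.succ[s] if t not in D}
    for s in D:
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u] or u not in D:
                continue                  # only expand states inside D
            for v in problem.succ[u]:
                nd = d + problem.cost[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        if any(h[s] >= dist.get(b, float("inf")) + h[b] for b in border):
            return False
    return True
```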
**Algorithm 3: RTAA*’s Update Procedure**
```plaintext
Input: A search problem P, a heuristic function h, a cost function c.
1.  procedure Update()
2.      f* ← min_{s ∈ Open} g(s) + h(s)
3.      for each s ∈ Closed do
4.          h(s) ← f* − g(s)
```
RTAA*, on the other hand, uses a simpler update mechanism. It updates the heuristic value of states in the interior of the local search space (i.e., those stored in $A^*$’s variable $\text{Closed}$) using the $f$-value of the best state in $\text{Open}$. The procedure is shown in Algorithm 3. The heuristic values that RTAA* learns may be less informed than those of LSS-LRTA*. The following two propositions establish this relation formally, and, to our knowledge, are not stated explicitly in the literature.
**Proposition 1** Let $s$ be a state in $\text{Closed}$ right after the call to $A^*$ in the $n$-th iteration of LSS-LRTA*. Then,
$$h_{n+1}(s) = \min_{s_b \in \text{Open}} k_n(s, s_b) + h_n(s_b),$$
where $h_n$ denotes the value of the $h$ variable at iteration $n$ and after the update of iteration $n - 1$. $k_n(s, s_b)$ denotes the cost of the cheapest path from $s$ to $s_b$, with respect to the $c$ variable at iteration $n$ and that only traverses states in $\text{Closed}$ before ending in $s_b$. For RTAA*, the situation is slightly different.
**Proposition 2** Right after the call to $A^*$ in the $n$-th iteration of RTAA*, let $s^*$ be the state with lowest $f$-value in $\text{Open}$, and let $s$ be a state in $\text{Closed}$. Then,
$$h_{n+1}(s) \leq \min_{s_b \in \text{Open}} k_n(s, s_b) + h_n(s_b).$$
However, if $h_n$ is consistent and $s$ is in the path found by $A^*$ from $s_{\text{current}}$ to $s^*$, then
$$h_{n+1}(s) = \min_{s_b \in \text{Open}} k_n(s, s_b) + h_n(s_b).$$
Proposition 2 implies that, when using consistent heuristics, RTAA*’s update yields possibly less informed $h$-values than those of LSS-LRTA*. However, at least for some of the states in the local search space, the final $h$-values are equal to those of LSS-LRTA*, and hence they are as informed as they can be.
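To make the contrast between the two update rules concrete, here is a small Python sketch of both, written against the hypothetical data layout used in the earlier sketches (`open_states`, `closed` and `g` as returned by the bounded lookahead). The first function is a direct transcription of Algorithm 3; the second is a Dijkstra-style sweep from the frontier inwards, in the spirit of Proposition 1, and is only a sketch of LSS-LRTA*'s rule.

```python
import heapq

def rtaa_update(open_states, closed, g, h):
    """Algorithm 3: h(s) <- f* - g(s) for every expanded state."""
    f_star = min(g[s] + h[s] for s in open_states)
    for s in closed:
        h[s] = f_star - g[s]

def lss_lrta_update(problem, open_states, closed, g, h):
    """Dijkstra-style update: give each expanded state the largest h-value
    that stays consistent with the frontier (cf. Proposition 1).
    Assumes an undirected space, so successors double as predecessors."""
    pq = [(h[s], s) for s in open_states]
    heapq.heapify(pq)
    for s in closed:
        h[s] = float("inf")
    remaining = set(closed)
    while pq and remaining:
        hs, s = heapq.heappop(pq)
        if hs > h.get(s, float("inf")):
            continue                              # stale entry
        remaining.discard(s)
        for t in problem.succ[s]:
            if t in remaining and problem.cost[(t, s)] + hs < h[t]:
                h[t] = problem.cost[(t, s)] + hs
                heapq.heappush(pq, (h[t], t))
```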
aLSS-LRTA* exploits the observation that the Dijkstra-style update will raise the heuristic of a state only when that state is in a cost-sensitive depression of the local search space: a state is marked as being part of a depression when its heuristic value is updated. To select the next move, the algorithm chooses the best state in Open that has not been marked as in a depression. If such a state does not exist the algorithm selects the best state in Open, just like LSS-LRTA* would do.
The selection of the state to move to is implemented by the function in Algorithm 4, where s.updated denotes whether or not the heuristic of s has been updated (i.e., whether or not a state is marked). The update procedure is modified appropriately to set this flag. Despite the fact that aLSS-LRTA* does not necessarily move to the best state in Open, it is guaranteed to find a solution in finite undirected search spaces if such a solution exists. In path-finding benchmarks, aLSS-LRTA* improves the solution cost over LSS-LRTA* by about 20% for small lookahead values and by about 8.4% for high values of the lookahead parameter.
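A minimal sketch of that selection rule (the Algorithm 4 referred to above) follows; `updated` is the per-state flag set by the update procedure, and all names are illustrative.

```python
def extract_best_state_avoiding(open_states, g, h, updated):
    """Prefer the best (minimum f) frontier state whose heuristic was not
    raised; if every frontier state is marked, fall back to the plain
    best state, exactly as LSS-LRTA*/RTAA* would."""
    unmarked = [s for s in open_states if not updated.get(s, False)]
    candidates = unmarked if unmarked else open_states
    return min(candidates, key=lambda s: g[s] + h[s])
```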
**RTAA* with Depression Avoidance**
In this section we propose two variants of the state-of-the-art RTAA* algorithm that implement depression avoidance. The first, aRTAA*, is based on aLSS-LRTA*. The second, daRTAA*, is a more fine-grained adaptation of depression avoidance which prefers moving to states that seem closer to the border of a depression.
**aRTAA***
aRTAA* is a straightforward port of aLSS-LRTA*’s implementation of depression avoidance into RTAA*. RTAA* is modified as follows. First, its update procedure is replaced by Algorithm 5, which implements the same update rule of RTAA* but, like aLSS-LRTA*, marks states that have been updated. Second, RTAA*’s procedure to select the next state is replaced by that of aLSS-LRTA* (Algorithm 4). As a result aRTAA* is a version of RTAA* that avoids depressions using the same mechanism that aLSS-LRTA* utilizes.
**Properties of aRTAA***
aRTAA* inherits a number of RTAA*’s properties. Since the update rule is not changed, we can use the same proofs by Koenig and Likhachev (2006) to show that h is non-decreasing over time, and that h remains consistent if it is initially so. Other properties specific to aRTAA* can also be shown.
**Theorem 1** Let s be a state such that s.updated switches from false to true between iterations n and n + 1 in an execution of aRTAA* initialized with a consistent heuristic h. Then s is in a cost-sensitive heuristic depression of $h_n$.
**Algorithm 5: aRTAA*’s Update Procedure**
```plaintext
1.  function Update()
2.      if first run then
3.          for each s ∈ S do s.updated ← false
4.      f* ← min_{s ∈ Open} g(s) + h(s)
5.      for each s ∈ Closed do
6.          h(s) ← f* − g(s)
7.          if h(s) > h_0(s) then s.updated ← true
```
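In the Python sketches started earlier, aRTAA*'s update is the RTAA* rule plus the marking step; the `h0` table of initial heuristic values and the `updated` flags are illustrative counterparts of h_0(s) and s.updated above.

```python
def artaa_update(open_states, closed, g, h, h0, updated):
    """Algorithm 5 sketch: RTAA*'s update plus the depression mark."""
    f_star = min(g[s] + h[s] for s in open_states)
    for s in closed:
        h[s] = f_star - g[s]
        if h[s] > h0[s]:
            updated[s] = True
```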
**Theorem 2** Let P be an undirected finite real-time search problem such that a solution exists. Let h be a consistent heuristic for P. Then aRTAA*, used with h, will find a solution for P.
**daRTAA***
daRTAA* is based on aRTAA*, but differs from it in the strategy used to select the next state to move to. To illustrate the differences, consider a situation where there is no state s in the frontier such that s.updated is false. In this case, aRTAA* behaves exactly as RTAA* does. This seems rather extreme, since intuitively we would still like the movement to be guided away of the depression. In such situations, daRTAA* will actually attempt to escape the depression by choosing the state with best f-value among the states whose heuristic has changed the least. The intuition behind this behavior is as follows: assume ∆(s) is the difference between the actual cost to reach a solution from a state s and the initial heuristic value of state s. Then if s1 is a state close to the border of a depression D and s2 is a state farther away from the border and “deep” in the interior of D, then ∆(s2) ≥ ∆(s1), because the heuristic of s2 is a less accurate estimate than that of s1. At execution time, h is an estimate of the actual cost to reach a solution, so h(s) − h_0(s) can be used as an estimate of ∆(s). In summary, daRTAA* always moves to a state believed not to be in a depression, but if no such state exists it moves to states that are regarded as closer to the border of the depression.
daRTAA* is implemented like RTAA* but the procedure to select the next state to move to is given by Algorithm 6.
**Algorithm 6: daRTAA*’s Selection of the Next State**
```plaintext
1.  function Extract-Best-State()
2.      Δmin ← ∞
3.      while Open ≠ ∅ and Δmin ≠ 0 do
4.          remove the state s_b with smallest f-value from Open
5.          if h(s_b) − h_0(s_b) < Δmin then
6.              s ← s_b
7.              Δmin ← h(s_b) − h_0(s_b)
8.      return s
```
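The same selection can be sketched in Python. Here the frontier is kept as a heap of (f, state) pairs, which mirrors the later observation that daRTAA* may pop several states per episode; as before, the names are illustrative only.

```python
import heapq

def dartaa_extract_best_state(open_heap, h, h0):
    """Algorithm 6 sketch: scan frontier states in f-order, keeping the one
    whose heuristic has grown the least, and stop early as soon as a state
    with an unchanged heuristic is found."""
    best, delta_min = None, float("inf")
    while open_heap and delta_min != 0:
        _, s_b = heapq.heappop(open_heap)
        delta = h[s_b] - h0[s_b]
        if delta < delta_min:
            best, delta_min = s_b, delta
    return best
```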
**Properties of daRTAA***
Since only the mechanism for selecting the next move is modified, daRTAA* inherits directly
most of the properties of RTAA\(^*\). In particular, \( h \) is non-decreasing over time, and consistency is maintained. We can also prove termination.
**Theorem 3** Let \( P \) be an undirected finite real-time search problem such that a solution exists. Let \( h \) be a consistent heuristic for \( P \). Then daRTAA\(^*\), used with \( h \), will find a solution for \( P \).
### Experimental Evaluation
We compared RTAA\(^*\), aRTAA\(^*\) and daRTAA\(^*\) at solving real-time navigation problems in unknown environments. For fairness, we used comparable implementations that use the same underlying codebase. For example, all algorithms use the same implementation for binary heaps as priority queues and break ties among cells with the same \( f \)-values in favor of cells with larger \( g \)-values.
We used 12 maps from deployed video games to carry out the experiments. The first 6 are taken from the game Dragon Age, and the rest are taken from the game StarCraft. The maps were retrieved from Nathan Sturtevant’s repositories.\(^2\)
We average our results over 6,000 test cases (500 test cases for each game map). Each test case is random. Maps are undirected, eight-neighbor grids, and the agent is capable of observing obstacles in neighbor cells. Horizontal and vertical movements have cost 1, whereas diagonal movements have cost \( \sqrt{2} \). We used the *octile distance* as heuristic.
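For reference, the octile distance used as the initial heuristic on these eight-neighbor grids has the following standard closed form, sketched here in Python:

```python
import math

def octile_distance(a, b):
    """Octile distance between cells a = (x1, y1) and b = (x2, y2):
    straight moves cost 1, diagonal moves cost sqrt(2)."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)
```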
Figure 2 shows average results for RTAA\(^*\), aRTAA\(^*\) and daRTAA\(^*\) for 17 different lookahead values. For reference, we include Repeated A\(^*\), a search algorithm for unknown environments that uses unbounded A\(^*\) in the lookahead phase, and that does not update the heuristic values. We observe that in terms of solution cost, for all lookahead values, aRTAA\(^*\) consistently outperforms RTAA\(^*\); moreover, daRTAA\(^*\) outperforms aRTAA\(^*\). Nevertheless, for any lookahead value, aRTAA\(^*\) spends more time per planning episode than RTAA\(^*\) does; this can be explained by the extra condition aRTAA\(^*\) has to check for each state that is updated. daRTAA\(^*\), in turn, spends even more time per planning episode. This increase is due to the fact that daRTAA\(^*\)’s selection of the next state to move to is less efficient. In RTAA\(^*\) this selection is quick, since it only involves extracting the best state in *Open*, which can be done in constant time with binary heaps. daRTAA\(^*\), on the other hand, may extract several states from the open list per planning episode. Thus we observe a higher number of heap percolations. The worst-case time complexity of daRTAA\(^*\)’s selection procedure is \( O(n \log n) \), where \( n \) is the size of *Open*.
The experimental results show that daRTAA\(^*\)’s more refined mechanism for escaping depressions is better than that of aRTAA\(^*\). For small values for the lookahead parameter, daRTAA\(^*\) obtains better solutions than aRTAA\(^*\) used with a much larger lookahead. For example, with a lookahead parameter equal to 1, daRTAA\(^*\) obtains better solutions than aRTAA\(^*\) with lookahead parameter equal to 19, requiring, on average, 10 times less time per planning episode.
daRTAA\(^*\) substantially improves RTAA\(^*\), which is probably the best real-time heuristic search algorithm known to date. daRTAA\(^*\) needs only a lookahead parameter of 25 to obtain solutions better than RTAA\(^*\) with lookahead parameter of 97. With those values, daRTAA\(^*\) requires about 2.6 times less time per planning episode than RTAA\(^*\).
Figure 1 shows a plot of the average best solution quality obtained by daRTAA\(^*\) and RTAA\(^*\) given fixed time deadlines per planning episode. For both algorithms, the solution quality improves as more time is given per planning episode. For every time deadline plotted, daRTAA\(^*\) obtains much better solutions, especially when the time deadline is small.
Experimentally, daRTAA\(^*\) is clearly superior to RTAA\(^*\). Of the 102,000 runs, daRTAA\(^*\) obtains a better solution quality than RTAA\(^*\) on 65.8% of the cases, they tie on 24.8% of the cases, and RTAA\(^*\) obtains a better-quality solution in only 9.4% of the cases. In addition, we computed the relative performance of the algorithms, which is given by the ratio between the solution costs for each solved problem. In the 10,000 cases in which the ratio is most favorable to RTAA\(^*\), the solutions obtained by RTAA\(^*\) are 1.43 times cheaper on average than those obtained by daRTAA\(^*\). On the other hand, in the 10,000 cases in which the ratio is most favorable to daRTAA\(^*\) over RTAA\(^*\), the former algorithm obtains solutions that are 2.79 times cheaper on average.
If one analyzes the performance on the hardest test cases for each algorithm, i.e. those for which the highest solution costs were obtained, we conclude again in favor of daRTAA\(^*\). Indeed, in the 10,000 hardest test cases for RTAA\(^*\), solutions obtained by RTAA\(^*\) are on average 8.37 times more expensive than those of daRTAA\(^*\). On the other hand, in the 10,000 hardest test cases for daRTAA\(^*\), solutions obtained by daRTAA\(^*\) are 5.57 times cheaper than those obtained by RTAA\(^*\).
Finally, we observe that all real-time algorithms generate worse solutions than Repeated A\(^*\). However, they do so in significantly less time. For example, daRTAA\(^*\) with lookahead 97 obtains a solution 5.1 times worse than Repeated A\(^*\), but uses an order of magnitude less time per planning episode.
---
\(^2\)http://www.movingai.com/ and http://hog2.googlecode.com/svn/trunk/maps/. For Dragon Age we used the maps brc202d, orz103d, orz702d, ost000a, ost000t and ost100d (size 481 x 530, 456 x 463, 939 x 718, 969 x 487, 971 x 487 and 1025 x 1024 cells, respectively). For StarCraft, we used the maps Enigma, FadingRealm, JungleSiege, Ramparts, TwistedFate and WheelofWar (size 768 x 768, 384 x 512, 768 x 768, 512 x 512, 384 x 384 and 768 x 768 cells, respectively).
<table>
<thead>
<tr>
<th>$k$</th>
<th>Solution Cost</th>
<th># Planning Episodes</th>
<th>Time per Episode (ms)</th>
<th>Total Time (ms)</th>
<th>Percolations per episode</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>553,152</td>
<td>510,579</td>
<td>0.0004</td>
<td>183.4</td>
<td>5.7</td>
</tr>
<tr>
<td>7</td>
<td>197,781</td>
<td>107,321</td>
<td>0.0016</td>
<td>174.7</td>
<td>39.9</td>
</tr>
<tr>
<td>13</td>
<td>109,887</td>
<td>43,835</td>
<td>0.0029</td>
<td>127.8</td>
<td>86.0</td>
</tr>
<tr>
<td>19</td>
<td>75,239</td>
<td>24,932</td>
<td>0.0042</td>
<td>104.6</td>
<td>139.9</td>
</tr>
<tr>
<td>25</td>
<td>56,989</td>
<td>16,606</td>
<td>0.0055</td>
<td>90.9</td>
<td>198.3</td>
</tr>
<tr>
<td>31</td>
<td>46,201</td>
<td>12,191</td>
<td>0.0107</td>
<td>68.4</td>
<td>457.6</td>
</tr>
<tr>
<td>37</td>
<td>38,839</td>
<td>9,457</td>
<td>0.0081</td>
<td>76.3</td>
<td>323.9</td>
</tr>
<tr>
<td>43</td>
<td>33,675</td>
<td>7,679</td>
<td>0.0094</td>
<td>71.9</td>
<td>389.8</td>
</tr>
<tr>
<td>49</td>
<td>29,658</td>
<td>6,403</td>
<td>0.0110</td>
<td>65.4</td>
<td>525.8</td>
</tr>
<tr>
<td>55</td>
<td>26,424</td>
<td>5,445</td>
<td>0.0120</td>
<td>65.4</td>
<td>525.8</td>
</tr>
<tr>
<td>61</td>
<td>23,971</td>
<td>4,745</td>
<td>0.0133</td>
<td>65.4</td>
<td>525.8</td>
</tr>
<tr>
<td>67</td>
<td>21,811</td>
<td>4,173</td>
<td>0.0147</td>
<td>61.2</td>
<td>659.8</td>
</tr>
<tr>
<td>73</td>
<td>20,264</td>
<td>3,761</td>
<td>0.0160</td>
<td>60.2</td>
<td>729.7</td>
</tr>
<tr>
<td>79</td>
<td>18,857</td>
<td>3,410</td>
<td>0.0173</td>
<td>59.1</td>
<td>794.3</td>
</tr>
<tr>
<td>85</td>
<td>17,524</td>
<td>3,099</td>
<td>0.0187</td>
<td>58.0</td>
<td>861.6</td>
</tr>
<tr>
<td>91</td>
<td>16,523</td>
<td>2,865</td>
<td>0.0201</td>
<td>57.5</td>
<td>926.3</td>
</tr>
<tr>
<td>97</td>
<td>15,422</td>
<td>2,632</td>
<td>0.0214</td>
<td>56.4</td>
<td>992.4</td>
</tr>
</tbody>
</table>
**Table 2:** The table presents average solution cost, number of planning episodes, time per planning episode in milliseconds, total search time in milliseconds, and number of heap percolations per planning episode for 6,000 path-planning tasks and 17 lookahead values ($k$). We performed our experiments on a Linux PC with a Pentium QuadCore 2.33 GHz CPU and 8 GB RAM.
**Discussion**
Although daRTAA* and aRTAA* clearly outperform RTAA* in our experiments, we believe it is possible to contrive families of increasingly difficult path-finding tasks in which daRTAA* or aRTAA* will find solutions arbitrarily worse than those found by RTAA*. Those situations exist for aLSS-LRTA* (Hernández and Baier 2011). Nevertheless, it is not clear that the existence of these families of “worst-case situations” is a real concern from a practical point of view. On the one hand, we did not spot a significant number of these cases in our experimental evaluation. On the other hand, it is easy to come up with families of problems in which daRTAA* and aRTAA* perform arbitrarily better than RTAA*.
### Summary and Conclusions
We have proposed aRTAA* and daRTAA*, two real-time heuristic search algorithms that implement depression avoidance on top of the state-of-the-art RTAA*. aRTAA* is a straightforward adaptation of the implementation of aLSS-LRTA*. daRTAA* is a more fine-grained implementation of depression avoidance that, when trapped in a depression, prefers to move to states that seem to be closer to the border of such a depression. We showed that both algorithms outperform RTAA*, and that daRTAA* is the best of the three. We believe the particular implementation of depression avoidance that we devised for daRTAA* holds promise, since it could be easily incorporated in other search algorithms in order to find better-quality solutions when time is bounded.
### References
Natural Deduction Proof and Disproof in Jape
Richard Bornat (richard@bornat.me.uk)
February 21, 2017
Preface
This manual is a how-to for Jape and my Natural Deduction encoding. Earlier versions contained a good deal of logic teaching. That’s been dropped, since now there’s a book (Logic for Programmers, OUP) which says it all a good deal better. You will learn about logic by playing with Jape, so I suppose there’s still teaching in there.
This document was written on a Mac, and the illustrations are all taken from the MacOS X version of Jape. All Japes, on Linux, Solaris, Windows, MacOS X or wherever, run exactly the same code but the interfaces can look slightly different. I haven’t produced different versions of the manual because I think — rather, I hope — that those differences in fonts, in window controls, in the placing of menus and so on, don’t really matter. If I’m wrong then I hope somebody will tell me.
Jape or NDJape?
This manual is really about how Jape, which is capable of working with many different logics, has been made to deal with Natural Deduction as described in my book. Instead of talking about what ‘Jape’ does when dealing with → formulae, I should really talk about what ‘Jape loaded with the I2L encoding’ does. But that’s such a mouthful that I just talk as if there is only one Jape, and all it does is Natural Deduction.
If you’re interested there’s a manual on the website (http://www.japeforall.org.uk) which tells you how to roll your own logic encodings. Best of luck.
Differences from the book
- Jape writes its premises on a single line, separated by commas, instead of using a separate line for each premise;
- Jape writes quantifications as ∀x.P(x) and ∃y.Q(y) — with a dot between bound variable and predicate — instead of ∀x(P(x)) and ∃y(Q(y))
So far as I know, these are the only significant differences.
Any comments?
If Jape doesn’t work for you, it isn’t working, and if it isn’t working I’d like to hear about it.
Please send any comments, thoughts, complaints or whatever to me (see title page for email address). I’ll do my best to reply quickly and if your message is of general interest I’ll put it up on the website (with your permission, and hiding your email address from the spambots at least).
Acknowledgements
I designed and built Jape between 1991 and 2002 whilst I was at Queen Mary College, University of London, working with Bernard Sufrin of the Computer Laboratory, Oxford University. I’ve continued to develop it at Middlesex University since. Bernard worked on the implementation with me for the first four years or so. That doesn’t describe all he did: his design insights were crucial and without him Jape wouldn’t exist or be half as good as it is. His program architecture, which he worked out in the first few months of our collaboration, is still in place even though Jape has been rebuilt from the ground up more than once. He still produces the Jape distributions and runs the www.japeforall.org.uk website.
For the rest of it I’ve been strongly influenced by colleagues at Queen Mary, many of whom have made seminal suggestions which have improved Jape. In alphabetical order I single out Jules Bean, John Bell, Peter Burton, Keith Clarke, Adam Eppendahl, David Pym, Graem Ringwood, Mike Samuels, Paul Taylor and Graham White.
Jape began as an experiment towards the end of an EPSRC research project on the use of symbolic calculators to teach formal reasoning, joint with Steve Reeves and Doug Goldson at Queen Mary, and Tim O’Shea and Pat Fung at the Open University. A later project, with Pat Fung and James Aczel from the OU, looked at students using Jape; James’s insightful summary of their difficulties led to a redesign of the treatment of natural deduction, and directly to this version of the program.
Jape’s proof engine was originally written in SML and compiled by SMLNJ, with interfaces for different operating systems written in C, tcl/tk, Python and I can’t remember what else. In 2002 I ported the engine to OCaml and wrote a system-independent interface module in Java. I’m grateful to the implementers of all those languages, especially for their decision to provide their software for free. Jape is free too, at http://www.japeforall.org.uk.
# Contents
1 Basics .......................................................... 7
1.1 Getting started .............................................. 7
1.2 Finishing a proof ........................................... 9
1.3 Saving and restoring your work ............................ 9
1.4 Printing and Exporting ..................................... 9
1.5 Making Jape work for you ................................. 9
1.5.1 Only reflect ........................................... 9
1.5.2 Be brave ............................................... 10
1.5.3 Never guess an assumption ......................... 10
2 Gestures: mouse clicks, presses and drags ..................... 11
2.1 Formula selection .......................................... 11
2.1.1 Ambiguous formulae click both ways ............... 11
2.1.2 Greying-out ........................................... 12
2.2 Subformula selection ....................................... 12
2.3 Dragging ..................................................... 13
3 Backward and forward steps .................................... 15
3.1 Making a forward step ..................................... 15
3.1.1 Selecting a target conclusion ...................... 17
3.1.2 What can go wrong with a forward step? .......... 17
3.2 Making a backward step ................................... 18
3.2.1 What can go wrong with a backward step? ......... 18
3.3 Steps which don’t need a selection ...................... 19
4 Rules of thumb for proof search ................................ 21
4.1 Look at the shape of the formula ......................... 22
5 The steps summarised ........................................... 23
5.1 ∧ steps ...................................................... 23
5.2 → steps ..................................................... 23
5.3 ∨ steps .................................................... 25
5.4 ¬ steps .................................................. 26
5.5 ⊥ (contradiction) steps ..................................... 28
5.6 ⊤ (truth) step .............................................. 28
5.7 ∀ steps .................................................. 28
5.8 ∃ steps .................................................. 30
6 Unknowns, hyp and Unify 31
6.1 Introducing an unknown ...................................... 31
6.2 Avoiding unknowns by subformula selection ............... 32
6.3 Eliminating an unknown with Unify .................. 33
6.4 Eliminating an unknown with hyp .................... 33
6.5 Provisos and the privacy condition ........................ 34
7 Disproof 35
7.1 Getting started .............................................. 35
7.2 Alternative sequents ........................................ 36
7.2.1 Selecting a situation .................................... 36
7.2.2 What’s forced? — colouring, greying, underlining .... 37
7.3 Making diagrams ............................................. 37
7.3.1 Dragging worlds ....................................... 38
7.3.2 Dragging lines .......................................... 38
7.3.3 Dragging formulae .................................... 38
7.4 Making individuals and predicate instances ................. 39
7.5 Exploring reasons .......................................... 39
7.6 Completing a disproof ...................................... 40
7.7 Printing disproofs .......................................... 40
8 Using theorems and stating conjectures 41
8.1 Using theorems .............................................. 41
8.2 Stating your own conjectures ............................... 43
9 Troubleshooting 45
9.1 Problems getting started .................................... 45
9.2 What if a proof step goes wrong? .......................... 45
Chapter 1
Basics
1.1 Getting started
If you don’t already have Jape, download it from http://www.japeforall.org.uk. Install it, following the instructions carefully.
You should have a directory containing the Jape application and a subdirectory called examples. Double-click Jape, and you will see a window like figure 1.1.¹
Using the Open New Theory command in the File menu, open examples/natural_deduction/I2L.jt. You should see several windows containing logical conjectures (claims to be proved), called panels in Jape. The Conjectures panel should look something like figure 1.2 (you will probably have to resize the window to make it look like this). Double-click any line to begin, or select a line and press the Prove button at the bottom of the panel. You will see a proof window like figure 1.3, and off you go!
---
¹ You may see minor differences. The illustrations in this manual are taken from MacOS X. On Windows and Linux you will see a menu bar in the window and the window title will be Jape. But those differences don’t matter much, so I won’t refer to them again.
Figure 1.2: The Conjectures panel
Figure 1.3: A proof window
Figure 1.4: A conjecture panel with a proved conjecture
Figure 1.5: A conjecture panel with proved and disproved conjectures
1.2 Finishing a proof
When you have made a proof of a conjecture — no more lines of dots in the proof window — you can save it: pull down the Edit menu and select Done. The proof window closes, and Jape records the fact that the conjecture is proved, marking its entry in the conjectures panel with a tick in the margin, as illustrated in figure 1.4. If you disprove a conjecture (see chapter 7) then you can record that too, and you get a cross in the margin. Because classical proof and constructive disproof overlap, you can even get both marks against the same conjecture, as illustrated in figure 1.5.
Proof of a conjecture makes it a theorem. You can use theorems in your proofs as if they were additional Natural Deduction rules (select the theorem in the panel, press the Apply button), and you can review their proofs using the Show Proof button. See chapter 8 for more information.
1.3 Saving and restoring your work
Jape offers to record your proofs — saved and unsaved — when you quit or when you choose “Save Proofs” or “Save Proofs As ...” from the File menu. It will reload saved proofs using “Open ...”, also from the File menu.
1.4 Printing and Exporting
You can print a proof using the Print Proof command on the Edit menu. If you have a disproof on the go (see chapter 7) you can print it using the Print Disproof command, or proof and disproof together using Print. To make a pdf or ps copy of a proof in a file, use Export Proof, Export Disproof or Export, all on the Edit menu.
1.5 Making Jape work for you
1.5.1 Only reflect
Jape is designed to be easy to use, which means that the mouse and menu and window stuff don’t get in the way of the logic. It’s so easy to use that you can have great fun clicking away, ‘solving’ lots of problems without always knowing exactly what you’re doing. That’s ok, because you can learn while you’re having fun, and you can do things for yourself without asking for help. But it’s obviously not the whole story. Because Jape is easy to use it brings you quickly to a point where you can ask interesting and important questions. The kind of question you are supposed to ask is “is the logic really supposed to work like that?” If there are experts around you can ask them, but if you are on your own you can still ask yourself. That isn’t a daft thing to do: educationalists call it reflection, and it’s one of the best ways to learn.
Some logical proofs are hard to believe at first. Some single logical steps are pretty surprising. I hope that you will always read through finished Jape proofs to see if you can believe them. If there is a surprise, ask yourself: where does the surprise come from? By undoing and redoing steps you can watch the surprise emerge and explain to yourself why it’s necessary.
Jape’s a machine, and that has disadvantages as well as advantages. The big advantage is that a machine can do formal calculation — proof and disproof — perfectly, without mistake. The big disadvantage is that it can’t understand anything about what it’s doing. Jape doesn’t know the difference between a nice proof and a nasty one. Sometimes reflection will show you that there is a shorter or prettier proof than the one you have made. You can always undo your work and try again!
1.5.2 Be brave
We read proofs top-to-bottom most of the time. Novices, reasonably but mistakenly, imagine that proofs are constructed top-to-bottom too: start at line 1 and work forwards. Well, sometimes it is done that way, but at least as often it’s done the other way round, bottom-to-top, starting with the last line and working backwards. Be brave and try it! If you stick to forward proof you make life very difficult for yourself, so bravery pays dividends.
Bravery is needed, too, when learning how proof steps work. It’s reasonable when playing with an interactive program to first try only steps that make small changes, but eventually you have to try everything. It’s just like riding a bike: once you’ve plucked up courage you can’t remember what it felt like to be scared of ∨ elim or ∧ intro or whatever. Just do it!
1.5.3 Never guess an assumption
A proof is really a *structure of deductions*, not a sequence of lines, and its assumption boxes show that structure. Reading a proof from top to bottom, every assumption that is introduced must also be discharged by the use of a rule which makes use of the box. Jape guarantees correct use of assumptions by combining introduction and discharge into a single step. Assumptions are introduced (and discharged) by using rules, and the rules that do it — some forward, some backward — are labelled in the menus. Jape helps you, in every case, by calculating the assumption that you need: there is *never* a need to guess an assumption.
Chapter 2
Gestures: mouse clicks, presses and drags
Like every other interactive program, Jape accepts instructions through the mouse and the keyboard. Mostly you will use the mouse to select a formula or a subformula in the proof window and then to command what to do with your selection by choosing a command from a menu. The ways that Jape uses the mouse are pretty standard.
2.1 Formula selection
Formula selection is made with a single click (left-click on a multi-button mouse). You click on a formula — a hypothesis above the line of dots or a conclusion below the dots — and Jape highlights your selection with an enclosing red box. If you click on a hypothesis you get a downward-facing box as in figure 2.1(a); if you click on a conclusion you get an upward-facing box as in figure 2.1(b). You can select both a hypothesis and a conclusion, as in figure 2.1(c).
Clicking on the background — a white part of the proof window — cancels all your selections.
Simple clicks will let you select one conclusion and one hypothesis at a time. Normally a second hypothesis click cancels the first, but if you hold down the Shift key while you click you can select more than one hypothesis, as shown in figure 2.1(d). And then, also using the Shift key whilst clicking, you can cancel individual formula selections.
There is no way of selecting more than one conclusion at a time.
2.1.1 Ambiguous formulae click both ways
In figure 2.2(a) there are open conclusions on lines 3 and 4, each preceded by a line of dots to show that there’s work to be done. Lines 1 and 2 aren’t conclusions, and can only be used as hypotheses to prove lines 3 and 4. Line 5 can’t be used at all: it’s a proved conclusion. Line 4 can only be a conclusion and not a hypothesis, because there’s nothing below it in the box. But line 3 is ambiguous: it has to be proved as a conclusion, perhaps using lines 1 and 2, and it can be used as a hypothesis to prove line 4.
Ambiguous formulae like $F \rightarrow G$ on line 3 of figure 2.2(a) can be selected in two ways. If you click on the top half of the formula you get a box open at the top — i.e. a conclusion selection — with a dotted line across the bottom, as shown in figure 2.2(b). If you click on the bottom half you get a box open at the bottom — i.e. a hypothesis selection — with a dotted line across the top, as shown in figure 2.2(c).
2.1.2 Greying-out
When you select a hypothesis you can only make a step towards conclusions below you and in the same box as the hypothesis you clicked. When you select a conclusion you can only make a backward step towards hypotheses above you and in the same box or enclosing boxes. Jape greys out all the formulae you can’t use, to help you see what’s going on, as shown by line 5 in figures 2.2(b) and 2.2(c).
If you click on a greyed-out formula Jape cancels your current selection(s). If you click on a conclusion that’s already been used up (one which isn’t immediately below a line of dots) then Jape greys it out and cancels all your selections.\(^1\)
2.2 Subformula selection
Occasionally you need to tell Jape to focus on part of a formula. You do this by a press-and-drag gesture:
- with a three-button mouse, middle-press-and-drag;
- otherwise by holding the Alt shift (sometimes labelled ’option’) during a press-and-drag.
---
\(^1\) This was once a bug. Now it’s become a feature, because I rather like it.
Jape highlights the subformula you’ve selected by changing the background colour to yellow, as shown in figure 2.3(a).
If you hold down the add-a-selection key (ctrl on Windows and Linux, command on MacOS X) you can make more than one subformula selection, as shown in figure 2.3(b). Using the same key combination, you can cancel and/or modify subformula selections you’ve already made.
Jape restricts subformula selections to well-formed subformulae, as you can discover by experiment. That means that if you option/alt/middle-click on a connective or a quantifier, a whole subformula is highlighted. If you option/alt/middle-click on a name, only that name is highlighted.
Option/alt/middle-clicking or pressing on a new subformula cancels all other subformula selections, unless you hold down the add-a-selection key (Command on MacOS, Ctrl on other systems). Option/alt/middle-clicking on the background — a white part of the proof window — cancels all your subformula selections.
Formula and subformula selections are independent: you can have one without the other, or both, or neither. Cancelling one kind of selection doesn’t cancel the other.
2.3 Dragging
In the disproof pane — see chapter 7 — you can drag formulae, tiles, worlds and lines around to make a diagram. You do it by press-and-drag. See chapter 7 for details.
Chapter 3
Backward and forward steps
Because we read proofs forward, top-to-bottom, forward steps are easiest to understand and most proof novices prefer working forwards. But lots of proofs are difficult working forwards, and some are almost impossibly difficult. To make proofs in Jape you need to be able to make backward steps as well, and I’ve gone to some trouble to force you to recognise the fact. In the Backward menu, shown in figure 3.1, the first group of steps above the line work best backwards and the group below that line can be made to work backward if you try hard. The Forward menu, in figure 3.2, similarly shows steps that work well forward before ones that work forward only with difficulty. ‘hyp’ is hard to classify, so it appears in both menus.
To make a step you select one or more formulae and choose a step from the Backward or Forward menu. The selections you need to make depend on the step you choose, and are detailed in the descriptions of the steps in later chapters, but in general for a forward step you must choose a hypothesis and for a backward step an open (unproved) conclusion.
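The same split shows up in other proof tools, which may help if you have met one. In a proof assistant such as Lean 4, a term proof is built forwards from the hypotheses, while a tactic proof works backwards from the goal. Here is a minimal sketch of both styles, using the conjecture of figure 3.3 (my own illustration, not part of Jape):
```
-- Forward: build the result directly from the hypotheses (term style).
example (E F G : Prop) (h : E → (F → G)) : (E → F) → (E → G) :=
  fun hef e => h e (hef e)

-- Backward: start from the goal and peel it apart (tactic style).
example (E F G : Prop) (h : E → (F → G)) : (E → F) → (E → G) := by
  intro hef e        -- two → intro steps, working backwards from the goal
  exact h e (hef e)  -- two → elim steps, working forwards from the hypotheses
```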
3.1 Making a forward step
Before you make a forward step you must always select a hypothesis formula. Depending on the kind of step, you may have to select more than one hypothesis: for details, see the description of the step later in this manual — or just try it and see what happens!
1: \( E \rightarrow F \rightarrow G \) premise
2: \( E \rightarrow F \) assumption
3: \( E \) assumption
4: \( G \)
5: \( E \rightarrow G \) → intro 3-4
6: \( (E \rightarrow F) \rightarrow (E \rightarrow G) \) → intro 2-5
(a) before
1: \( E \rightarrow F \rightarrow G \) premise
2: \( E \rightarrow F \) assumption
3: \( E \) assumption
4: \( F \rightarrow G \) → elim 1, 3
5: \( G \)
6: \( E \rightarrow G \) → intro 3-5
7: \( (E \rightarrow F) \rightarrow (E \rightarrow G) \) → intro 2-6
(b) after
Figure 3.3: A sample forward step
1: \( (E \rightarrow F) \land (E \rightarrow G) \) premise
2: \( E \) assumption
3: \( F \land G \)
4: \( E \rightarrow (F \land G) \) → intro 2-3
(a) hypothesis only selected
1: \( (E \rightarrow F) \land (E \rightarrow G) \) premise
2: \( E \rightarrow F \) ∧ elim 1
3: \( E \) assumption
4: \( F \land G \)
5: \( E \rightarrow (F \land G) \) → intro 3-4
(b) consequent below hypothesis
Figure 3.4: A forward step without a target conclusion
You can also select a conclusion as well, if you want to. Some steps require a conclusion selection: you can look up the step in this manual or you can try it out and see what happens.
Once you have made your selection(s), choose your step from the Forward menu. Jape writes the result of the step — the consequent deduced from the antecedent(s) you selected — just below your selection(s). For example, figure 3.3 shows an \( \rightarrow \) elim step with two hypothesis selections. The consequent is line 4 in the ‘after’ picture. Notice that the justification of the step is written against the consequent.
1: \( (E \rightarrow F) \land (E \rightarrow G) \) premise
2: \( E \) assumption
3: \( F \land G \)
4: \( E \rightarrow (F \land G) \) → intro 2-3
(a) hypothesis and conclusion selected
1: \( (E \rightarrow F) \land (E \rightarrow G) \) premise
2: \( E \) assumption
3: \( E \rightarrow F \) ∧ elim 1
4: \( F \land G \)
5: \( E \rightarrow (F \land G) \) → intro 2-4
(b) consequent above conclusion
Figure 3.5: A forward step with a target conclusion
3.1.1 Selecting a target conclusion
Normally Jape writes the result of a forward step just after the hypothesis selection. Sometimes that may not be convenient, and it may be tidier to write it lower down the proof, just above a conclusion you are working towards.
In figure 3.4(a) only the hypothesis on line 1 is selected. A forward step — in this case “∧ elim (ignoring right)” — puts its consequent just below the selected hypothesis, as shown in figure 3.4(b).
If you select a target conclusion as well, as in figure 3.5(a), the same step will put its consequent before the line of dots above the selected conclusion, as shown in figure 3.5(b).
3.1.2 What can go wrong with a forward step?
When Jape can’t make a forward step it’s for one of the following reasons. In all cases you’ll get an error message which explains the problem.
1. No selected hypothesis.
If you don’t select a hypothesis formula, you can’t make a forward step.
2. Not enough selections.
Some steps need more than a single hypothesis selection.
3. Wrong hypothesis shape.
Most forward steps apply to a particular shape of hypothesis formula. If you select the wrong shape, you can’t make the step.
4. No target conclusion.
Two forward steps — ∨ elim, ∃ elim — need a conclusion selection as well as a hypothesis, because the assumption boxes they introduce are written just above the target conclusion (see figure 3.6).
Figure 3.6: ∨ elim needs a target conclusion
3.2 Making a backward step
Backward steps work on an open conclusion — a line without a justification, written just below a line of three dots. They may prove it completely, if Jape can find hypotheses to match the antecedents, or the antecedents may become unproved conclusions. For example, figure 3.7(a) shows an open conclusion selection, and figure 3.7(b) shows the effect of a backward $\land$ intro step, where each antecedent has become a new open conclusion. Figure 3.8(a) shows the same conclusion selected when there are more hypothesis formulae available, and in figure 3.8(b) only one open conclusion is generated because line 3 matches the antecedent of the $\land$ intro step.
In every case the justification of the step is written next to the selected conclusion, the consequent of the step.
3.2.1 What can go wrong with a backward step?
1. Wrong conclusion shape.
1: \( (E \land F) \lor (E \land G) \) premise
2: \( E \)
3: \( F \lor G \)
4: \( E \land (F \lor G) \) \( \land \) intro 2,3
(a) \( E \) is the consequent
(b) \( F \lor G \) is the consequent
Figure 3.9: Conclusion selection to tell Jape where to work
Each backward step, except for contra, applies to a particular shape of consequent formula. If you select the wrong shape, you can’t make the step.
2. No selected consequent.
If there is more than one unproved conclusion, Jape doesn’t try to choose between them. In figure 3.7(b), for example, Jape wouldn’t know where to apply a backward step. Selecting an unproved conclusion, as in figure 3.9, resolves the ambiguity. (Notice in figure 3.9(a) that line 2 is an ambiguous hypothesis/conclusion formula, selected as a conclusion by clicking on its top half.)
3. Hypothesis selected.
Backward steps (except for \( \exists \) intro) don’t need and can’t make use of a selected hypothesis formula.
3.3 Steps which don’t need a selection
If there is only one formula in the proof that can be selected as a hypothesis, it would be annoying to be told to select it before you can make a forward step. If there’s only one unproved conclusion in the proof, it would be annoying to be told to select that to make a backward step. If the only possible target conclusion is on the next line to the hypothesis, it would be annoying to be told to select it when a step needs a target. So in all those cases Jape lets you get away without selection, and does the obvious thing.
Chapter 4
Rules of thumb for proof search
Rules of thumb\(^1\) are guesswork, approximate guides that don’t always work. Proof search is much easier if you recognise some simple principles.
1. Almost all the rules work on the *shape* of a hypothesis or consequent formula. Use the shape as a guide to help you choose a rule.
2. Use rules that introduce assumptions into the proof *as early as possible*. Those rules are:
- \(\rightarrow\) intro backwards (for the assumption);
- \(\lor\) elim forwards (for the assumptions);
- \(\neg\) intro backwards (for the assumption);
- \(\forall\) intro backwards (for the variable);
- \(\exists\) elim forwards (for the variable and the assumption).
3. \(\lor\) intro works backwards better than it does forwards; but since it throws away half the conclusion you apply it to, use it very carefully and as late as possible.
4. \(\neg\) elim works better forwards than it does backwards, once the contradictory formulae have been revealed.
5. If you use \(\neg\) elim backwards, look for a hypothesis to unify with the unproved conclusion \(\neg\_B\) (a negated unknown).
6. Classical contra — ‘proof by contradiction’ — can be used as a last resort in any situation. It introduces an assumption, and doesn’t mind what shape the consequent is. (Constructive contra is just as applicable, but since it doesn’t introduce an assumption, it isn’t usually much help.)
\(^1\) An inch was originally defined as the width of an adult male thumb, so a ‘rule of thumb’ is an approximate measure, then (by punning ‘rule’=measurer) an approximate guide.
There is a Greek word ‘heuristic’ for rule-of-thumb, but I prefer the old English.
Some people object to the English phrase because there was once a folk tradition in England that a man could legally beat his wife with a stick no thicker than his thumb, and it was popularly called rule of thumb. (There’s a reference, for example, in E.P. Thompson’s *Customs in Common*.) The tradition was repulsive and mistaken, but (a) it’s been forgotten, except by historians, and (b) the ‘approximate guide’ reading predates it.
7. If Jape won’t let you make the proof, you’re doing it wrong. There aren’t any bugs in Jape’s treatment of Natural Deduction, and all the proofs in the Conjectures and Classical conjectures panels are possible — I’ve done them all. Similarly, all the ones in the Invalid conjectures panel are impossible (they are included so that you can disprove them — see chapter 7).
8. Don’t be afraid to Undo and Redo to search alternative routes to proof. Jape permits multiple Undos and corresponding Redos.
4.1 Look at the shape of the formula
There is a general principle: logical steps in Natural Deduction almost always simplify a formula, removing a connective (→, ∧, ∨ or ¬) or a quantifier (∀ or ∃) and breaking the formula into its constituent parts. Persuasion (intro) rules do that working backwards, and use (elim) rules do it forwards. To choose a rule, look for the main connective (or the quantifier) in an unproved conclusion or a hypothesis, and use the rule which works on that connective (or quantifier).
The way that rules match formulae is so nearly mechanical that I could have set Jape up to choose the relevant rule when you merely double-click on a formula. Because I want you to learn about Natural Deduction and not just the use of the mouse, I’ve been grandad-ish and set it up so that you have to choose the rules for yourself.
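The shape-driven discipline is visible in a proof assistant too. In the Lean 4 sketch below (an illustration of mine, not Jape), the main connective of the goal picks the backward step and the main connective of the hypothesis picks the forward step:
```
example (A B : Prop) (h : A ∧ B) : B ∧ A := by
  constructor      -- the goal is _ ∧ _, so ∧ intro, backwards
  · exact h.right  -- the hypothesis is _ ∧ _, so ∧ elim, forwards
  · exact h.left
```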
Chapter 5
The steps summarised
5.1 ∧ steps
You use ∧ intro backwards by selecting an $A \land B$ conclusion and choosing “$\land$ intro” from the Backward menu. It generates new conclusion lines from the antecedents of the rule, as shown in figures 3.7 and 3.8 on page 18. I strongly recommend using ∧ intro backwards rather than forwards.
You use ∧ elim forwards by selecting an $A \land B$ hypothesis and choosing one of the two versions of the step from the Forward menu: “∧ elim (preserving left)” deduces $A$, and “∧ elim (preserving right)” deduces $B$. The effect is illustrated in figures 3.4 and 3.5 on page 16.
∧ intro forward is also possible: select two hypothesis formulae to be the antecedents and use “∧ intro” from the Forward menu. But it’s easier backwards, really it is. The problem in figure 5.1, for example, is easy if the first step is ∧ intro backwards — you have two things to prove and you have to prove them separately — and impossible almost any other way.
∧ elim backwards is possible, but for some reason or other I didn’t include it (and I’m sure I had a good reason, so I’m not going to add it now).
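For readers who know a proof assistant, the problem of figure 5.1 looks like this in Lean 4 (a sketch of mine, not part of Jape); the outer angle brackets are the ∧ intro step, and the `.left`/`.right` projections are the two ∧ elims:
```
example (E F G : Prop) (h : E → (F ∧ G)) : (E → F) ∧ (E → G) :=
  ⟨fun e => (h e).left,    -- prove E → F: ∧ elim (preserving left)
   fun e => (h e).right⟩   -- prove E → G: ∧ elim (preserving right)
```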
5.2 → steps
You use → intro backward by selecting an $A \to B$ conclusion and then choosing "→ intro" from the Backward menu. See figure 5.2, for example. Notice that the step introduces a new assumption and therefore a new box (and that’s why it should be used early).
You can use → elim forward by selecting an $A$ hypothesis and an $A \to B$ hypothesis, as shown by the trivial example in figure 5.3. (Shift-click to select the second hypothesis: if you select the wrong thing either cancel it with a shift-click or click on the background to cancel all your selections, and start again.)
You can also use → elim half-forward, half-backward if you select an $A \to B$ hypothesis and a target conclusion and apply "→ elim" from the Forward menu. Jape writes a new open conclusion $A$ followed by a line deducing the consequent $B$ from it, just above the target conclusion, as illustrated in figure 5.4.
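For comparison, the two → rules in a Lean 4 sketch (mine, not Jape’s):
```
-- → intro backwards: assume E, then prove the body.
example (E F : Prop) (hf : F) : E → F := fun _ => hf

-- → elim forwards: from E → F and E, deduce F.
example (E F : Prop) (hef : E → F) (he : E) : F := hef he
```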
```
1: E→(F∧G) premise
...
2: (E→F)∧(E→G)
```
Figure 5.1: ∧ intro backwards needed
(a) conclusion \( E \rightarrow (F \rightarrow G) \) selected
(b) hypothetical argument outlined
Figure 5.2: \( \rightarrow \) intro backward
Figure 5.3: \( \rightarrow \) elim forward
(a) hypothesis and target conclusion selected
(b) consequent \( G \) deduced from new open conclusion \( F \)
Figure 5.4: \( \rightarrow \) elim half backward, half forward
5.3 ∨ steps
Because $\lor$ elim forwards implements argument by cases, and because the consequent $C$ can’t be deduced from the $A \lor B$ antecedent, it generates big proof changes from a small gesture. That frightens novices, but be brave, because $\lor$ elim is one of the rules that generates assumptions, so you have to use it as early as possible in a proof.
$\lor$ intro backwards is destructive — it throws away half a conclusion — so you use it as late as possible, even though it’s very easy to use.
You use $\lor$ elim forwards by selecting a hypothesis which fits $A \lor B$, an open conclusion $C$ (if there’s only one available conclusion, and if it’s on the line below the hypothesis, Jape will let you off the conclusion selection) and “$\lor$ elim” from the Forward menu. See figure 3.6 on page 17, for example. Note that the boxes representing the $A$ leads to $C$ and $B$ leads to $C$ arguments (lines 7-8 and 9-10 in figure 3.6(a), for example) are written just above the selected conclusion $C$, and the justification is (of course) written against
(a) before: hypothesis $\neg(E \lor F)$ and target conclusion selected
(b) after: conclusion $E \lor F$ generated
Figure 5.7: $\neg$ elim forwards is easy
(a) before: conclusion $\neg F$ selected
(b) after: contradiction argument outlined
Figure 5.8: $\neg$ intro backwards is very easy
the selected conclusion.
$\lor$ intro is easy to use backwards: select an open conclusion, decide which half to keep and which to throw away, and apply the corresponding step ("$\lor$ intro (preserving left)" or "$\lor$ intro (preserving right)") from the Backward menu. See figure 5.5, for example.
For some reason that I’ve forgotten, I was persuaded to allow $\lor$ intro forward. I rather regret it, because this isn’t a step for novices to use. But, if you must: you select a hypothesis, decide which half of the consequent it has to be, and apply the corresponding step ("$\lor$ intro (inventing left)" or "$\lor$ intro (inventing right)") from the Forward menu. The step always invents an unknown, as illustrated in figure 5.6 by an “inventing right” step, and you have to deal with the unknown somehow (see chapter 6 for suggestions).
If unknowns frighten you, don’t use $\lor$ intro forwards (the easy way to solve the proof problem in figure 5.6, for example, is $\neg$ elim forwards from line 1 with target conclusion line 4, producing figure 5.5(a); then $\lor$ intro backwards preserving left produces figure 5.5(b)).
$\lor$ elim backwards would be possible, but not for novices, so I didn’t allow it.
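In a Lean 4 sketch (again my own illustration), `Or.inl` and `Or.inr` are ∨ intro preserving left and right, and `Or.elim` is the argument by cases that ∨ elim performs:
```
example (E F : Prop) (he : E) : E ∨ F := Or.inl he   -- ∨ intro (preserving left)

-- ∨ elim: to get E ∨ G from E ∨ F, argue by cases on the ∨ hypothesis.
example (E F G : Prop) (hfg : F → G) (h : E ∨ F) : E ∨ G :=
  h.elim (fun he => Or.inl he) (fun hf => Or.inr (hfg hf))
```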
5.4 $\neg$ steps
$\neg$ elim works best forwards, $\neg$ intro backwards.
To use \(\neg\) elim forwards, select a hypothesis \(\neg A\), or two hypotheses \(\neg A\) and \(A\), and choose “\(\neg\) elim” from the Forward menu. You can select a target conclusion too, if you like. The step generates a consequent \(\bot\) (plus a conclusion \(A\) if you only select one hypothesis). See figure 5.7, for example.
To use \(\neg\) intro backwards, select an open conclusion \(\neg B\) and “\(\neg\) intro” from the Backward menu. See figure 5.8, for example.
It’s possible to use \(\neg\) elim backwards, if you select an open conclusion \(\bot\). It generates an unknown \(\_B\) and two new open conclusions \(\_B\) and \(\neg\_B\), as illustrated in figure 5.9. It’s usually easier to do it forwards, though.
\(\neg\) intro forwards doesn’t make much sense, so I didn’t allow it.
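In Lean 4, ¬A is definitionally A → False, which makes the two ¬ rules one-liners (a sketch of mine, not Jape’s):
```
example (A : Prop) (hna : ¬A) (ha : A) : False := hna ha   -- ¬ elim forwards
example (E : Prop) (h : E → False) : ¬E := h               -- ¬ intro: ¬E just is E → False
```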
5.5 \( \bot \) (contradiction) steps
Classical contra is a hard rule to use (there ought to be a blues song about that), but you know you are going to have to use it for some problems. It only works backwards. Constructive contra is easier if you use it forwards.
To use classical contra, select an open conclusion \( A \) and choose “contra (classical)” from the Backward menu. It creates a hypothetical contradiction argument with assumption \( \neg A \), as shown in figure 5.10.
To use constructive contra forwards, select a \( \bot \) hypothesis and an open conclusion, and choose “contra (constructive)” from the Forward menu. See figure 5.11, for example.
Constructive contra backwards is destructive: it throws away whatever conclusion it is applied to. But you can do it if you want to. Classical contra forwards would be absurd.
5.6 \( \top \) (truth) step
You can always prove \( \top \) as an open conclusion: select it and apply “truth” from the Backward menu. And that’s all you can do with \( \top \): no forward step is possible.
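The contradiction and truth steps have familiar counterparts in a Lean 4 sketch (my own): constructive contra is `False.elim`, classical contra is `Classical.byContradiction`, and ⊤ is proved by its introduction rule.
```
example (A : Prop) (h : False) : A := h.elim      -- contra (constructive), forwards
example (A : Prop) (h : ¬A → False) : A :=
  Classical.byContradiction h                     -- contra (classical), backwards
example : True := True.intro                      -- the ⊤ (truth) step
```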
5.7 \( \forall \) steps
\( \forall \) intro works backwards, \( \forall \) elim forwards. \( \forall \) intro is worth using early, because it introduces a variable. The privacy condition on the \( \forall \) intro step won’t usually bother you, because Jape always invents a new variable (\( i \), \( i1 \), \( i2 \), and so on).
To use \( \forall \) intro backwards, select an open conclusion \( \forall x. P(x) \) and choose “\( \forall \) intro” from the Backward menu. Jape builds the outline of the generalised proof, as illustrated in figure 5.12.
To use \( \forall \) elim forwards, you must select a hypothesis \( \forall x. P(x) \) and also actual \( j \), then choose “\( \forall \) elim” from the Forward menu. The effect is illustrated in figure 5.13.
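In a Lean 4 sketch (mine, not part of Jape), ∀ intro binds a fresh variable, which is exactly what the privacy condition demands, and ∀ elim is application to an individual:
```
-- ∀ intro backwards: bind a fresh i and prove the body for it.
example (R S : Nat → Prop) (h : ∀ x, R x ∧ S x) : ∀ x, R x :=
  fun i => (h i).left

-- ∀ elim forwards: apply the ∀ hypothesis to the individual j.
example (R : Nat → Prop) (h : ∀ x, R x) (j : Nat) : R j := h j
```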
(a) before: \( \forall x. (R(x) \land S(x)) \) and actual \( i \) hypotheses selected
(b) after: \( R(i) \land S(i) \) consequent deduced
Figure 5.13: A ∀ elim step forwards
(a) before: \( \exists x. \neg R(x) \) hypothesis and target conclusion \( \bot \) selected
(b) after: generalised proof outline introduced, with private variable \( i \)
Figure 5.14: An ∃ elim step forwards
(a) before: \( \exists x. \neg R(x) \) conclusion and actual \( i \) hypothesis selected
(b) after: \( \neg R(i) \) antecedent deduced
Figure 5.15: An ∃ intro step backwards
5.8 \( \exists \) steps
\( \exists \) intro works backwards, \( \exists \) elim forwards. \( \exists \) elim is worth using early, because it introduces a variable. The privacy condition on the \( \exists \) elim step won’t usually bother you, because Jape always invents a new variable (\( i, i1, i2 \), and so on).
To use \( \exists \) elim forwards, you must select a hypothesis \( \exists x.P(x) \) and also a target conclusion, and then choose “\( \exists \) elim” from the Forward menu. The justification is written next to the target conclusion, and a generalised proof outline is introduced, as illustrated in figure 5.14.
To use \( \exists \) intro backwards, you must select a conclusion \( \exists x.P(x) \) and also a hypothesis actual \( j \), and then choose “\( \exists \) intro” from the Backward menu. The effect is illustrated in figure 5.15.
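A Lean 4 sketch of the two ∃ rules (my own illustration): ∃ intro supplies a witness, and ∃ elim opens the package with a fresh variable and an assumption, just like the generalised proof outline.
```
-- ∃ intro backwards: give the witness j and a proof about it.
example (P : Nat → Prop) (j : Nat) (h : P j) : ∃ x, P x := ⟨j, h⟩

-- ∃ elim forwards: use the package to reach a target conclusion C.
example (P : Nat → Prop) (C : Prop) (h : ∃ x, P x) (hc : ∀ i, P i → C) : C :=
  h.elim hc
```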
Chapter 6
Unknowns, hyp and Unify
When James Aczel looked at novices learning logic with Jape, he pointed out that they found incomplete steps quite disconcerting. Incomplete steps are ones that let you leave out important information so that you can fill it in later, when you’ve discovered what it ought to be. You might be allowed to leave out the variable in an ∃ intro step, for example, and fill it in later.
Jape uses unknowns — names starting with an underscore, like _B1 — to stand for formulae which can be filled in later. Even though James persuaded me to eliminate almost all incomplete steps from my treatment of natural deduction, my users pleaded with me to allow some. So it is possible to introduce unknowns into a Jape proof, and because of that it’s necessary to know how to get rid of them again.
To understand what follows you have to recognise the distinction between formula selection (red box round a conclusion or hypothesis formula) and subformula selection (yellow background behind part or all of a conclusion or hypothesis formula). Note, in particular, that a subformula selection that encompasses a complete formula is not the same thing as a formula selection.
6.1 Introducing an unknown
∨ intro backwards takes a formula with an ∨ connective and throws away half of it, because from A you can prove A ∨ B, and from B you can prove A ∨ B. Some obsessively-forward reasoners wanted me to allow ∨ intro forwards, and just to spite them I did: it’s there in the Forward menu, under the line. Figure 6.1 shows how you can deduce A ∨ B from A by choosing “∨ intro (inventing right)” from the Forward menu.
¬ elim backwards is another good way to get an unknown, illustrated in figure 6.2. There’s only one unknown in the proof, but it occurs twice.
These are by no means the only ways to introduce unknowns into a proof. One very interesting way is to select a conclusion, choose “Text command” from the File menu, type “apply cut” and press return. You will get an unknown, intermediate between the hypotheses and your conclusion. I leave you to work out how useful that can be.
6.2 Avoiding unknowns by subformula selection
If you know beforehand what should go in place of the unknown, you can tell Jape what to use at the time you make an ∨ intro or ¬ elim step, by subformula selection (alt/option/middle-press-and-drag over the formula or subformula you want to use). Figure 6.3 shows an example: note that the hypothesis is not selected, but ¬R(x) is subformula-selected.
6.3 Eliminating an unknown with Unify
If you have an unknown in your proof, and a subformula somewhere in the proof which you want to use to replace the unknown, then subformula selection and the “Unify” command from the Edit menu will do the job. You subformula-select two or more subformulae which you want to make the same by replacing unknowns (command/ctrl-alt/option/middle-press-and-drag is the gesture you need to make more than one subformula selection). Figure 6.4 shows an example. Note that there are no formula selections in figure 6.4(a), only subformula selections.
6.4 Eliminating an unknown with hyp
The hyp step is really just a cloak for the Unify command: make this selected conclusion the same as this selected hypothesis (or these selected hypotheses). For example see figure 6.5. Note that making $\neg \exists x. \neg R(x)$ the same as $\neg\_B1$ means making $\exists x. \neg R(x)$ the same as $\_B1$.
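Jape’s unknowns behave like the metavariables of a proof assistant. In the Lean 4 sketch below (an analogy of mine, not Jape’s machinery), the `_` is a hole standing for the ∃ witness; checking the proof term against the goal unifies the hole with 3, much as Unify or hyp fixes a Jape unknown:
```
-- Fill the witness yourself (like subformula selection before the step) …
example (P : Nat → Prop) (h : P 3) : ∃ x, P x := ⟨3, h⟩

-- … or leave a hole and let unification fix it (like Unify or hyp afterwards).
example (P : Nat → Prop) (h : P 3) : ∃ x, P x := ⟨_, h⟩
```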
6.5 Provisos and the privacy condition
Recall that the $\forall$ intro and $\exists$ elim steps introduce a variable — $i, i_1, i_2, ...$ — for private use within a generalised proof. If a proof attempt doesn’t include unknowns, Jape can enforce the privacy condition easily: simply use a variable that isn’t in use already, anywhere in the proof. If there are unknowns about, things aren’t so simple: Jape has to ensure that you don’t unify an unknown with a formula that uses the private variable, because that could violate the privacy condition.
Figure 6.6 shows what happens when Jape has to invent a variable and there is an unknown in the proof. It must prohibit the possibility that the formula $\_B1$ includes the variable $i$: it does this with the proviso “$i$ NOTIN $\_B1$” in the proviso pane below the proof. Figure 6.7 shows preparation for an attempt to violate the proviso with a Unify command (making $\_B1$, which appears outside the box, the same as $R(i)$ inside the box). Figure 6.8 shows the error message that Jape generates when you try the unification.
You get the same situation if you try the steps in the other order ($\forall$ intro backwards, $\forall$ intro forwards selecting only the hypothesis).
Chapter 7
Disproof
My encoding of natural deduction in Jape deals with disproof by allowing you to make Kripke diagrams which Jape will check against a sequent to see if they are examples, or counter-examples, or neither. You do it by manipulating blobs, lines and atomic formulae in a disproof pane above the proof.
Jape shows you what is going on by underlining and colouring formulae and subformulae in the sequent and by colouring blobs on the screen. This chapter should, therefore, be read in colour rather than in monochrome.¹
7.1 Getting started
At any point during a proof attempt you can invoke ‘Disprove’ from the Edit menu.² Jape splits the proof window and shows you a Kripke diagram with a single empty world and the proof-attempt sequent (you can select hypotheses and conclusions to construct an alternative sequent: see below) in a disproof pane above the proof pane. The sequent may already be underlined and coloured as in figure 7.1.
The main things to notice in this illustration are
- the blob in the diagram is ringed in red;
- some of the names and connectives in the sequent are coloured magenta, some are grey, and some are black;
- some of the formulae are underlined and some are not;
- the sequent uses the semantic turnstile ⊨;
- there’s a little green waste bin on the right;
- above the waste bin there are some tiles, each containing an atomic formula.
¹ I am aware that use of colour means that some people can’t use Jape as easily as others. This is only the tip of a nasty iceberg: Jape uses a visual presentation to make it easier to understand proof and disproof, and that excludes many people with reduced visual acuity from using it at all. Contrary to some propaganda, these problems aren’t easy to overcome. But I’d love to try, and I’d love to hear from anybody who has suggestions to make, advice to offer, technical information to share, guinea-piggery to volunteer, whatever. Messages to me at the title page address, as usual.
² There isn’t a Disprove button in the conjecture panels. Perhaps there should be! You have to hit Prove to start a proof attempt and then choose Edit:Disprove.
7.2 Alternative sequents
If you select a conclusion in the proof pane (or, if Jape doesn’t want to let you do that, the reason next to the conclusion) and then hit Edit:Disprove, Jape will use the disproof sequent consisting of the conclusion as consequent and all the hypotheses above it as antecedent formulae. If you select some of the hypotheses as well as the conclusion, they will be used as the antecedents.
You can choose a new disproof sequent at any time, even in the middle of a disproof attempt.
If you want to set up an entirely new challenge, enter it into one of the conjectures panels (New button), hit Prove and then Edit:Disprove in the proof window.
7.2.1 Selecting a situation
There is always one selected world in a diagram, ringed in red. The selected world is the root world of the situation evaluated by Jape. For example, in figure 7.2 the situation is the whole diagram; in figure 7.3 it’s only the part of the diagram at and above the selected world.
7.3 Making diagrams
To begin your disproof, Jape presents you with the simplest diagram: the isolated empty world. You add components to your diagram — worlds, formulae and lines — using drag-and-drop mouse gestures. You can delete components by dragging them to the waste bin. You can add tiles with double-clicks. You can Undo your actions to any degree that you like, and Redo likewise.\(^3\)
To *move* a component of a diagram you put the mouse pointer over it, press (left button on a multi-button mouse), hold a moment, then — still pressing — move to a new position, and finally release. Jape shows you a transparent image of the thing you are dragging, so you can see what you are doing.
To *duplicate* a component you hold down the alt/option key throughout the gesture (or use the middle button on a multi-button mouse) and you drag a new copy of the thing you pressed on.
Sometimes the purpose of the drag is to *drop* one component onto, or into, another. You drag the first component until the mouse pointer is over the one you want to drop onto/into. The recipient will change colour if it’s prepared to accept the drop, and you simply let go while it’s lit up. If the drop fails (you aren’t over a component, or it won’t light up) then the dropped component flies back to base.
7.3.1 Dragging worlds
You can move-drag worlds about the place to make your diagram look nicer. You press on the blob, and you drag the blob plus any formulae attached to it. Lines connected to the world stay connected, and you can drop it anywhere you like. To preserve the ‘connections only upwards’ principle, Jape will delete a line if you drag a world below its parent.
By duplicate-dragging a blob, you drag a copy of the world — the blob plus any formulae attached to it — with a line attaching it to the world it came from. The new world stays where you put it. If it’s above the world you dragged it from, they will be connected by a line. If it’s level or below, then no line.
You can drag a world onto a line (which lights up to show you are over it) or another world (which lights up ditto) and Jape does the obvious thing, adding whatever formulae are necessary to whatever worlds in order to maintain monotonicity.
If you drag a world to the waste bin it’s deleted, along with all the formulae attached to it and all the lines connected to it. The bin lights up when it will accept the world; it won’t accept the currently-selected world.
If you drop the world onto another the two worlds are merged. Jape does the correct monotonicity adjustments to the diagram by adding formulae to the children of the destination.
7.3.2 Dragging lines
You can drag lines and drop them onto worlds or into the waste bin, as you wish. Jape does the obvious thing in each case (if you can’t guess what that is, just try it!).
If you want to make a line between world A and world B, where A is below B, duplicate-drag A to B and drop the new world onto B. Jape makes the monotonicity adjustments, adding formulae to B and its children as necessary. Dragging from B to A will have some not-very-useful effect (but there’s always Undo!).
7.3.3 Dragging formulae
You can drop a formula on a world, or drag it away from a world, in one of several ways. If a formula is added to a world then, to preserve monotonicity, it is added to the children of that world as well. If a formula is deleted from a world then it is deleted from all ancestors of that world.
You can drag a formula from a tile and drop it onto a world (the world lights up, of course, to show that it’s eager to accept; worlds won’t accept a second copy of a formula they already have). The formula isn’t deleted from the tile, whichever kind of drag gesture you use, because tiles are infinite sources of formula-copies.
---
\(^3\) Undo and Redo apply to the last pane you used the mouse in, either proof or disproof.
7.4 Making individuals and predicate instances
If a sequent mentions an individual by including a hypothesis like actual \( j \), then there will be a tile for that individual (see figures 7.1, 7.2, 7.3 and 7.5, for example). If there are quantifiers but no individuals you will get a free actual \( i \) tile.
If you need more individuals, double-click any “actual ...” tile. Jape uses an obvious numbering algorithm to name the new individual — actual \( j \), for example, generates first actual \( j1 \) then actual \( j2 \), and so on.\(^4\)
If you want to make a new predicate instance, double-click a predicate-instance tile. Jape looks through your tiles for unused individuals and uses them to give you a new one. For example if you have a tile for \( S(j) \) and tiles for actual \( j \) and actual \( j1 \), then double-clicking \( S(j) \) gets you \( S(j1) \), as you might expect. If there’s more than one instance that could be made, Jape shows you the alternatives and asks you to choose. If you don’t have enough individuals to make a new instance, it tells you so.
7.5 Exploring reasons
The hardest thing to explain to a novice is why a particular formula is forced in a particular situation. Figure 7.6 shows a particularly ludicrous formula and its counter-example, which really needs explanation. Jape provides mechanisms which can be some help.
\(^4\) It would be nicer if it went \( i, j, k, i1, j1, k1, i2, ... \) One day I’ll make it do that.
The colouring of atomic (sub-)formulae and connectives is the first mechanism. But with the negative connectives — → and ¬ — and with the quantifiers, we often need a bit more help.
Jape allows you to select (click or left-click) or subformula-select (alt/option/middle-press-and-drag) in the sequent. It will colour magenta all the worlds which force the selected (sub)formula. For example, figure 7.7 shows where ¬¬E is forced in the difficult example. Figures 7.8 and 7.9 show information about some other subformulae.
If you choose more than one formula, Jape colours the worlds that force them all: see figures 7.10 and 7.11.
If you choose a quantifier formula in the sequent, Jape colours magenta not only the worlds which support it, but also the presence markers which support that formula: i.e. individuals at worlds which generate a forced formula if you instantiate the quantified formula with them. In figure 7.12, for example, worlds which force the selected quantifier are magenta; individuals which support it are magenta too. You can see that R(k) → R(j) ∧ R(k) is forced at each of the worlds in this example.
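If you want a precise statement of what ‘this world forces that formula’ means, here is a small self-contained Lean 4 sketch (my own formalisation, not Jape’s internals) of forcing for atoms, → and ¬ over a monotone Kripke model; the quantification over later worlds in the last two clauses is exactly why the negative connectives need the extra help described above:
```
inductive Form where
  | atom : String → Form
  | imp  : Form → Form → Form
  | neg  : Form → Form

structure Model where
  World : Type
  le    : World → World → Prop        -- the upward lines of the diagram
  val   : World → String → Prop       -- which atoms are attached to each world
  mono  : ∀ {w v : World} {a : String},
            le w v → val w a → val v a   -- monotonicity, which Jape maintains for you

def forces (M : Model) : M.World → Form → Prop
  | w, .atom a  => M.val w a
  | w, .imp A B => ∀ v, M.le w v → forces M v A → forces M v B
  | w, .neg A   => ∀ v, M.le w v → ¬ (forces M v A)
```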
7.6 Completing a disproof
When you have a disproof (all the premises underlined, the conclusion not underlined), you can register it with Jape using Edit:Done. Disproved conjectures are marked with a cross; proved conjectures get a tick. Because you can prove some conjectures classically and also disprove them constructively, it’s possible to get both a tick and a cross together. See figure 1.5 on page 8 for examples.
7.7 Printing disproofs
You can print or export an image of the disproof pane, using Edit:Print Disproof or Edit:Export Disproof.
Chapter 8
Using theorems and stating conjectures
8.1 Using theorems
A theorem is a proved conjecture, one with a tick next to it in a conjecture panel. Jape lets you use the theorem
\[ A_1, A_2, \ldots, A_n \vdash B \]
as if it was the rule
\[
\frac{A_1 \quad A_2 \quad \cdots \quad A_n}{B}
\]
You can apply a theorem backwards or forwards using the Apply button in its conjecture panel. To work backwards, just select an open conclusion in the proof window, select the theorem in its panel, and press the Apply button. Jape will apply the theorem like any rule, instantiating it to fit the conclusion, making antecedents of its premises and entering them as hypotheses or linking to existing hypotheses if possible. See figure 8.1, for example.
Because forward steps take a lot of careful use of behind-the-scenes technology, theorems work best backwards. But sometimes you must work forwards, and you can do that in two ways.
If your theorem has exactly one premise, you can select a hypothesis to match that premise and apply the theorem like a forward rule, as illustrated in figure 8.2.
If your theorem has no premises or more than one premise, don’t select a hypothesis but do select a conclusion and apply the theorem. It sounds daft but it works: Jape makes a version of the theorem in which all the variable and formula names are replaced by unknowns and inserts it into the proof, as illustrated in figure 8.3. You have to get rid of the unknowns using hyp steps or the Unify command, as described in chapter 6.
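Promoting a proved conjecture to a rule is the same idea as naming a lemma in a proof assistant and applying it later; a Lean 4 sketch with made-up names (mine, not Jape’s):
```
-- A proved conjecture …
theorem and_swap (A B : Prop) (h : A ∧ B) : B ∧ A := ⟨h.right, h.left⟩

-- … used later exactly like a rule, instantiated to fit the current goal.
example (E F : Prop) (h : E ∧ F) : F ∧ E := and_swap E F h
```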
(a) before: conclusion \( E \lor \neg E \) selected
(b) after: hypothesis \( \neg(\neg E \land \neg E) \) generated
Figure 8.1: Theorem applied backward
(a) before: hypothesis \((E \rightarrow F) \rightarrow E\) selected
(b) after: consequent \((E \land F) \rightarrow E\) generated
Figure 8.2: Single-premise theorem applied forward
(a) before: conclusion selected
(b) after: theorem inserted, with unknowns
Figure 8.3: Theorem applied to non-matching conclusion
Figure 8.4: The New Conjecture window
8.2 Stating your own conjectures
Press the “New...” button on any conjectures panel and you see a window like figure 8.4. You type your conjecture using the keyboard in the normal way, and you can use the buttons provided to get the fancy symbols that aren’t on your keyboard.
Names in a conjecture have to obey certain rules:
- **variables** must start with $x, y, z, i, j$ or $k$ and can continue with any letter or digit;
- **formula or predicate** names must start with $A, B, C, D, E, F, G, H, P, R, S$ or $T$ and can continue with any letter or digit.
If you make a mistake, like leaving out a dot or a bracket in a crucial spot, or typing a name it can’t recognise, Jape will complain. If you make more than one mistake, Jape only spots the first (reading left to right). Sometimes the error messages will be hard to understand: I’m sorry for that, but it’s really hard to improve them.
Chapter 9
Troubleshooting
9.1 Problems getting started
On MacOS / OS X your security settings may stop you running Jape, saying it comes from an unknown developer. There’s a way round this: ctrl-click Jape, choose Open from the little menu that appears, and say Open on the alert window that comes up. After that it can be double-clicked as normal.
On Windows and Linux Bernard distributes an ‘installation jar’ – a bundle of stuff that distributes itself round your filestore. From his website:
> It is never appropriate for you to unpack the .jar file, but it turns out that certain file-archiving software on these operating systems tell the operating system that .jar files are archives, and that opening means unpacking. On a Linux machine the way round this is to java -jar Install...jape.jar from a terminal window. On a Windows machine right-click on the jar file, and select the Open with ... java ... menu entry.
The Jape app can be put anywhere you like. The examples can be anywhere you like.
If you installed Jape and it won’t start properly, throw it all away and install it again. This time follow the instructions precisely. Then it will work.
9.2 What if a proof step goes wrong?
When you try to apply a rule one of two things can happen. Either the rule applies, and the step goes through, or it doesn’t, and Jape shows you an error message.
Even though a rule does apply and the proof step does go through, it may not turn out to be the right thing to do. Sometimes an apparently successful step can lead to a dead end. Sometimes a step works, but not in the way that you expected — perhaps lots of unknowns suddenly appear in the proof, or there are lots of extra lines that you didn’t expect, or lots of lines suddenly disappear.
Whenever something happens that isn’t what you expected, the first stage of a cure is to use the Undo command from the Edit menu. Undo takes you back one step in the proof, two Undos take you back two steps, and so on. Using several Undos can move you back from a dead end to an earlier position from which you can move forward in a different direction.
You can even recover from Undo! The Redo command (also in the Edit menu) reverses the effect of Undo, two Redos reverse two Undos, and so on. So if you decide, after Undoing, that you really did want to make
the step after all, Redo will make it again. (If you Undo and make a new proof step, then the one you Undid is gone for ever, like it or not.)
The Undo command allows you to explore if you don’t know what rule or theorem to apply in a proof: you can experiment with different rules and theorems from the menus until you find one that works. That can be a bad thing, if you just try things at random until you find one that happens to work, and don’t reflect on what you are doing. It’s also the slowest way of finding proofs. But sometimes we all need to search and experiment (aka “thrash about”), and then Undo and Redo are invaluable. I hope that when you do search and find a surprising avenue that happens to work, you will pause to ask yourself why it works. Jape is designed to support ‘reflective exploration’ — it helps with the exploring part, and you learn by reflecting on the results.
VE Architect-Driven Service-Oriented Business Network Process Realization
Alireza Khoshkbarforoushha
Dept. of Information Technology Engineering
Tarbiat Modares University
Tehran, Iran
a_khoshkbarforoushha@sbu.ac.ir
Mohammad Aghdas
Dept. of Information Technology Engineering
Tarbiat Modares University
Tehran, Iran
aghdasim@modares.ac.ir
Mehrnoush Shamsfard
Electrical and Computer Engineering Faculty
Shahid Beheshti University GC
Tehran, Iran
m-shams@sbu.ac.ir
Received: May 1, 2010 - Accepted: July 9, 2010
Abstract— Business opportunities are not permanent. To meet them instantly, enterprises collaborate with each other by realizing business network processes (BNP), in which their activities are carried out with various partners within a network. Recently, these business network processes have been enabled with service-oriented technologies; we call the result a Service-Oriented Business Network Process (SOBNP). In today’s dynamic and changing environment, Virtual Enterprise (VE) architects require a flexible framework through which they can design and realize an SOBNP instantly. There exist a number of frameworks that constitute SOBNPs, but they largely neglect two salient issues: a) covering and incorporating high-level (i.e. business-level) and low-level (i.e. technical-level) requirements in business process creation; b) adjusting to the VE architect without deep knowledge of computer science. Thus, the main objective of this paper is to propose a framework and related tools and techniques to constitute an SOBNP, as a main building block of the Instant Virtual Enterprise (IVE), which addresses the two above-mentioned issues. The framework, namely SOBNP Realization, consists of three phases: requirements specification, ontology-based partner search and selection, and BPEL (Business Process Execution Language) process synthesis. A prototype system is implemented to demonstrate the concept of VE architect-driven SOBNP realization in an IVE.
Keywords: Semi-automatic realization of business network process; Service-oriented Business Network Process; Ontology-based partner selection; Instant virtual enterprise; BPEL process.
I. INTRODUCTION
A rapidly changing business atmosphere and turbulent market conditions cause business opportunities to change over and over. To meet these business opportunities, enterprises need to collaborate with each other by realizing business network processes (BNP), in which their activities are carried out with various partners within a network [1]. In fact, the competitive market requires that these BNPs be realized in a highly agile, effective, and efficient manner. Such agility and effectiveness lead to the formation of highly dynamic virtual enterprises within supplier networks, which are referred to as instant virtual enterprises (IVE) [1]. In this regard, Presley et al. [2] stress that the rapid formation and reconfiguration of enterprises and their processes create complexities for process engineering and integration. In the context of this paper, a business process is composed of activities, and every single activity is defined as any organized behavior that transforms an input into an output through executing a sequence of actions.
BNPs can be realized through diverse technologies, including service-oriented computing (SOC) [3][4], agent-based approaches [5], and so on. Our approach to BNP realization is based on service-oriented technologies, including web services, the Business Process Execution Language (BPEL), etc. Consequently, in this article a BNP is called a Service-Oriented Business Network Process (SOBNP). In SOC, a business process is a coarse-grained composite web service executing a control flow to complete a business goal. Among various technologies, BPEL is a de-facto standard that is utilized to realize the required orchestration and choreography between diverse web services [6]. In fact, BPEL is a workflow-oriented composition model and provides flexible business processes.
There exist a number of frameworks that could constitute SOBNPs, but they largely neglect two important issues: covering business-level and technical-level requirements in business process realization, and being adjusted to the VE architect without deep knowledge of computer science. Thus, this paper proposes a framework and related tools and techniques to constitute SOBNPs that address these two issues. In other words, the framework, namely SOBNP Realization, not only embodies both high-level and low-level requirements, but is also tuned so that it can be employed by novice process owners, VE architects, business managers, or business domain experts.
The remainder of the paper is organized as follows. Section 2 elaborates the motivation of the work through a real-life scenario. Section 3 discusses the related work. Section 4 presents the SOBNP Realization framework; subsections A, B, and C explain its major components in detail. Section 5 discusses the preliminary implementation of the prototype system. Section 6 evaluates the framework using two approaches: scenario simulation and gathering experts’ judgments through a survey. We then sum up the discussion and provide conclusions in Sections 7 and 8, respectively.
II. MOTIVATING SCENARIO: COLLABORATIVE ONLINE BROKERAGE
Collaborative online brokerage is one of the important business processes of the banking industry. As the process map in Figure 1 shows, three parties, namely a customer, a bank, and a stock exchange, carry out securities transactions. In fact, efficiency is increased through the electronic support and automation of information and communication processes both within banks and between organizations [7].
By using the appropriate brokerage solution, the "new intermediaries" enable the entire business transaction to be carried out efficiently from the initiation of the transaction up to transaction execution. The greatest potential for this increased efficiency lies in the electronic support and automation of information and communication processes both within banks and between organizations.
In reality, such a business process must be supported by different partners within a network of organizations. Meanwhile, the highly flexible and changing environment causes the combination of partners executing such a business process to change over and over. Therefore, the VE architect requires flexible tools and techniques for business process realization in order to instantly meet new business opportunities.
III. RELATED WORK
To the best of our knowledge, SOBNP realization in IVEs with the proposed approach is almost non-existent in the literature; however, this section reviews the work closest to our approach.
In [3], the authors propose dynamic VE integration via business-rule-enhanced semantic service composition. Their composition architecture realizes the dynamic formation of business workflows through three steps: abstract workflow formation, concrete workflow formation, and workflow execution via web service selection. However, in this approach abstract workflows are pre-defined, which means the approach is not flexible. Besides, the paper does not introduce a partner selection procedure.
In [1], Grefen and his colleagues develop a novel approach that, firstly, focuses on dynamic, multi-party market scenarios, in which complex instant VEs are created to follow market movements, and secondly, covers the entire spectrum from high-level, global business goals down to low-level, local business processes. Even though their contribution is of high quality, it is not adjusted and tuned to the VE architect. Moreover, unlike SOBNP Realization, their framework is not semi-automatic; in fact, the procedure of partner selection in their framework is not automatic.
In [8], the authors present a goal-directed composition framework to support on-demand business processes. In their framework, composition schemas are generated incrementally by a rule inference mechanism based on a set of domain-specific business rules enriched with contextual information. Although the proposed framework is one of the high-quality service composition frameworks of recent years, it has some shortcomings. Firstly, their ontology matching algorithm primarily considers simple subsumption between the concepts in the ontology and ignores their detailed semantic differences; in other words, parameters such as concept definitions, path types between resources, etc. are neglected. Secondly, their approach has mainly been developed for business process realization within a single organization; it therefore cannot be used for business network processes, which must be constituted through the collaboration of various organizations within a network.
IV. SOBNP REALIZATION FRAMEWORK
In the proposed framework, depicted in Figure 2, the SOBNP is generated in three phases. In the first phase, requirement specification, the VE architect specifies requirements with a known business rule language, namely the Semantics of Business Vocabulary and Business Rules (SBVR) [9]. This phase is divided into two steps: steps 1 and 3. Step 1 takes the business-level (i.e. high-level) requirements of the VE architect, in which goals, opportunities, competencies, and desired resources can be conveyed. In a similar way, step 3 takes the technical-level (i.e. low-level) requirements, in which the VE architect can specify the desired Quality of Service (QoS) for the identified services.
The second phase deals with the selection of partners' activities. This phase is also divided into two steps: steps 2 and 4. According to the specified requirements, conveyed in business rule form in steps 1 and 3, qualified activities and services, respectively, are identified. It should be noted that steps 2 and 4 utilize the same algorithm for partner selection. In fact, our framework utilizes an ontology-based partner selection algorithm to effectively select the most appropriate partners within a network. The algorithm is, indeed, a semantic matchmaking method, which can play a vital and effective role in partner selection in virtual enterprises [10][11][12].
Finally, in the third phase, the VE architect specifies the control flows between the qualified services using workflow patterns (step 5). As a matter of fact, the VE architect must identify the required patterns among the selected services and generate the expected SOBNP with the aid of the provided tools. The third phase also includes a background activity, namely process optimization. This activity examines the designed SOBNP on the basis of a metrics suite. Our metrics suite comprises five metrics that analyze key quality features of BPEL business processes, including business value [13], reusability [14], context-dependency [15], complexity [16], and granularity [13]. In other words, these metrics help guarantee that the output BPEL process meets the key quality features. In the following subsections, the details of the phases are presented.
A. Requirement specification
In the first phase, the SOBNP Realization framework interacts with the VE architect in order to grasp his/her requirements. Requirements can be specified and conveyed via business rule languages. A business rule is a statement that defines or constrains some aspect of the business, and business rule languages are becoming a common language among various enterprises.

There are various languages for business rule specification, such as RuleML [17], SBVR [9], SWRL [18], and so on. Each of these languages has both advantages and disadvantages; in this regard, [19] explores the pros and cons of the state of the art in business rule languages.

The SOBNP Realization framework leverages the SBVR language, since SBVR has salient advantages, including a quite straightforward structure and notable ease of use for business people [19], that is, for someone without training in formal methods. Both features make it appropriate for our framework, since our approach should be close to the end user (i.e. the VE architect).
A.1. SBVR Language
One of possibly many notations that can be used to express the SBVR meta-model is SBVR Structured English [9]. In other words, SBVR Structured English is a notation used to define SBVR vocabulary, definitions, and statements. Even though the semantics of definitions and rules can be formally represented in terms of the SBVR vocabulary and, particularly, in terms of logical formulations, SBVR Structured English is natural and easy to use for business people.
A.2. Requirement Ontology: Transformation of SBVR from CIM to PIM
A requirement ontology is, in fact, the representation of a request using ontology languages that capture consensual knowledge of requirements in a formal way; it specifies the expected competencies of the desired partner. In this subsection, we discuss how the VE architect can verbalize requirements and, thereafter, how the corresponding requirement ontology is generated.

As discussed earlier, SBVR is conceptualized optimally for business people and designed to be used for business purposes independently of information system design. According to the Model Driven Architecture (MDA) models [20], the SBVR language is situated at the computation independent model (CIM) level [21]. Therefore, we have to transform it from the CIM level to the platform independent model (PIM) level to make it suitable for the required computations and processing. Since the SOBNP Realization framework leverages ontology-based techniques for partner selection, we have to transform the specified requirements, which are in SBVR, into a corresponding ontology. Among the various ontology languages, the framework leverages OWL-DL [22], because it provides the required expressiveness and most existing tools support it. Therefore, we have to generate the OWL-DL corresponding to the SBVR specification. The generated OWL-DL is then utilized to choose the desired activities semantically.
As stated, our framework needs to be semi-automatic; hence the transformation from SBVR to OWL-DL must be automatic. In this regard, our framework leverages Attempto Controlled English (ACE) [23]. ACE is a subset of English (i.e. a controlled English) that can be unambiguously translated into first-order logic (FOL). The produced FOL can then be translated to OWL-DL with the aid of the Attempto Parsing Engine (APE) web service, which produces the corresponding OWL for ACE sentences.

Owing to the fact that the business vocabulary and rules in SBVR are underpinned by first-order predicate logic, it is rational and feasible to establish the relationship between the ACE construction rules [24] and SBVR Structured English. In other words, it is easy to recognize to what extent ACE supports SBVR Structured English, since ACE function words such as determiners, quantifiers, prepositions, coordinators, negation words, etc. are predefined and cannot be changed by users.
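To make this pipeline more concrete, the minimal sketch below (not part of the original toolset) shows how a requirement sentence written in controlled English might be sent to an APE-style web service to obtain an OWL rendering. The endpoint URL, the parameter names, and the example rule are assumptions for illustration only and would have to be checked against the actual APE documentation.

```python
# Illustrative sketch only: the endpoint and parameter names below are
# assumptions, not the documented APE interface.
from urllib.parse import urlencode
from urllib.request import urlopen

APE_ENDPOINT = "http://attempto.ifi.uzh.ch/ws/ape/apews.perl"  # assumed URL


def requirement_to_owl(ace_sentence: str) -> str:
    """Send one ACE sentence to the (assumed) APE web service and return its OWL output."""
    params = urlencode({"text": ace_sentence, "solo": "owlxml"})  # assumed parameters
    with urlopen(f"{APE_ENDPOINT}?{params}") as response:
        return response.read().decode("utf-8")


if __name__ == "__main__":
    # An SBVR-style business rule rewritten as a controlled-English sentence (illustrative).
    rule = "Every order is executed by a stock-exchange."
    print(requirement_to_owl(rule))
```

The same call could be repeated for each rule the VE architect enters, with the returned OWL fragments merged into a single requirement ontology.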
A.3. SOBNP Ontology
After taking the VE architect's requirements (expressed in a business rule language) and translating them into a corresponding ontology, the framework utilizes a semantic matchmaking algorithm that tries to find the partner whose ontology matches the expressed requirement. There is therefore a key assumption in the proposed approach: every partner in the network defines and organizes relevant knowledge about activities, processes, organizations, skills, competencies, etc. using the OWL-DL ontology language. In reality, such an assumption is reasonable, since in the last decades many projects have aimed at creating ontologies for the domain of virtual enterprises, including the Collaborative Network Organization (CNO) ontology [25] and the TOronto Virtual Enterprise ontology (TOVE) [26]. However, these ontologies do not cover the scope and depth required by the SOBNP Realization framework.
To construct the required ontology, we follow some of the steps and recommendations of Noy and McGuinness's methodology [27], which relies on developing an ontology using the Protégé tool [28]. The methodology consists of seven steps: defining the ontology scope, reusing existing ontologies, enumerating major terms, defining classes and the class hierarchy, defining slots (i.e. class properties), defining the facets of slots, and creating instances.
To determine the scope of the ontology, we need to sketch a set of questions (i.e. competency questions) that the ontology should be able to answer [29]. Inspired by Hepp and Roman's work [30], the following are some of the questions used to determine the SOBNP ontology scope:
- What is a business opportunity?
- What are the goals of a particular SOBNP?
- Does a particular SOBNP contribute to a business opportunity?
- Which set of activities constitutes a particular SOBNP?
- What are the conditions of a qualified activity?
- What kinds of resources exist for a particular SOBNP?
- For each activity in a particular process, what are the pre-state and post-state?
After scope determination, existing ontologies should be reused to reduce the cost, time, and effort of building the ontology. There are two major efforts whose partial combination can make extensive progress in SOBNP ontology development: the CNO ontology [25] and the Multi Meta-Model Process Ontology (m3po) [31].
In [32], Plisson et al. propose the CNO ontology, which is also referred to as the Virtual organization Breeding Environment (VBE) ontology. The proposed ontology covers two different levels of knowledge in a network: the first level deals with common knowledge about the organizational structure itself, and the second copes with the domain-specific knowledge that such networks cover. Even though the CNO ontology satisfies some semantic needs of our framework, it does not supply the required depth of knowledge about each partner within a network, for instance, the sub-processes or activities of a particular process. Conversely, some elements of the CNO ontology are outside of our ontology scope, for instance, some kinds of CNO concepts (e.g. virtual team, professional virtual community), some VBE roles (e.g. VBE Support Institution), etc.
In this regard, m3po embodies five aspects of workflow specifications: functional and behavioral, informational, organizational, operational, and orthogonal. In the same way, we include some parts of the functional and behavioral aspects and exclude the other aspects on the basis of the SOBNP ontology scope.
Table 1. The characteristics of SOBNP ontology
<table>
<thead>
<tr>
<th>Characteristics</th>
<th>SOBNP Ontology</th>
</tr>
</thead>
<tbody>
<tr>
<td>Max Depth of Ontology</td>
<td>4</td>
</tr>
<tr>
<td>Number of Concepts</td>
<td>23</td>
</tr>
<tr>
<td>Number of Relationships</td>
<td>34</td>
</tr>
<tr>
<td>Super/SubClassOf Relationships</td>
<td>25</td>
</tr>
</tbody>
</table>
Instantiation of the SOBNP ontology can be achieved semi-automatically or manually. Moreover, provided that a partner within the network already has an existing ontology, it is beneficial to integrate it into the SOBNP ontology via ontology merging methods. Figure 4 represents some of the concepts from the SOBNP ontology, and Table 1 summarizes the characteristics of the designed SOBNP ontology.
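As a rough illustration of what a small slice of such an ontology could look like when built with a Semantic Web toolkit, the sketch below constructs a few SOBNP-style classes, properties, and one instance with rdflib. The class, property, and instance names are invented for illustration and do not reproduce the actual SOBNP ontology.

```python
# Hypothetical fragment of an SOBNP-like ontology; all names are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SOBNP = Namespace("http://example.org/sobnp#")

g = Graph()
g.bind("sobnp", SOBNP)

# A few concepts (OWL classes).
for cls in ("BusinessOpportunity", "Process", "Activity", "Partner"):
    g.add((SOBNP[cls], RDF.type, OWL.Class))

# Object properties (slots) relating the concepts.
for prop, domain, rng in [
    ("contributesTo", SOBNP.Process, SOBNP.BusinessOpportunity),
    ("hasActivity", SOBNP.Process, SOBNP.Activity),
    ("performs", SOBNP.Partner, SOBNP.Activity),
]:
    g.add((SOBNP[prop], RDF.type, OWL.ObjectProperty))
    g.add((SOBNP[prop], RDFS.domain, domain))
    g.add((SOBNP[prop], RDFS.range, rng))

# One instance, the kind of activity a competency question would ask about.
g.add((SOBNP.ExecuteOrder, RDF.type, SOBNP.Activity))
g.add((SOBNP.ExecuteOrder, RDFS.label, Literal("Execute securities order")))

print(g.serialize(format="turtle"))
```

Serialized in Turtle, such a fragment can then be merged with a partner's existing ontology using ordinary RDF graph union or more elaborate ontology merging tools.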
B. Ontology-based semantic partner selection
In this section, we are going to match the requirements, which are now in OWL-DL, against the SOBNP ontology. The proposed partner selection algorithm identifies the best partner through semantic similarity measurement between VE architect's requirement ontology and partners' ontologies.
The proposed SOBNP Realization framework utilizes the ontology-based partner selection algorithm that has been thoroughly discussed by the authors in [33][34]. The approach for semantic matchmaking consists of three phases: lexical-level matchmaking, conceptual-level matchmaking, and aggregation and comparison (Figure 4).
First of all, both the requirement ontology and the partners' enterprise ontologies are fed into the framework as inputs; the goal is to find the partner who satisfies the requirement as much as possible. In the first phase, the framework measures the syntactic similarity of resources (i.e. concepts or concept instances) between the two ontologies. Thereafter, in the second phase, the resulting sets, which are the outputs of the syntactic similarity analysis, are examined via semantics-based techniques, including gravitation of resources, path similarity, path weight, and definition similarity. In the third phase, the conceptual similarity values are compared in order to identify the qualified partner.
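The following is a much-simplified sketch of such a three-stage pipeline on toy data. It uses a standard-library string measure in place of the lexical measures of [33][34] and a hand-made weighted sum in place of the ontology path and definition analysis; the weights, partner names, and profile fields are assumptions for illustration only.

```python
# Toy illustration of lexical matchmaking, conceptual weighting, and aggregation.
# The similarity measures and weights are placeholders, not the algorithm of [33][34].
from difflib import SequenceMatcher


def lexical_similarity(a: str, b: str) -> float:
    """Phase 1: syntactic similarity between two resource names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def conceptual_similarity(lexical: float, path_weight: float, definition_sim: float) -> float:
    """Phase 2: refine the lexical score with semantic evidence (assumed weights)."""
    return 0.4 * lexical + 0.3 * path_weight + 0.3 * definition_sim


def rank_partners(requirement_concept: str, partners: dict[str, dict]) -> list[tuple[str, float]]:
    """Phase 3: aggregate per-partner scores and sort the candidates."""
    scores = []
    for name, profile in partners.items():
        lex = max(lexical_similarity(requirement_concept, c) for c in profile["concepts"])
        score = conceptual_similarity(lex, profile["path_weight"], profile["definition_sim"])
        scores.append((name, round(score, 3)))
    return sorted(scores, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    partners = {
        "StockExchangeA": {"concepts": ["ExecuteOrder", "SettleTrade"],
                           "path_weight": 0.9, "definition_sim": 0.8},
        "StockExchangeB": {"concepts": ["OrderRouting"],
                           "path_weight": 0.6, "definition_sim": 0.5},
    }
    print(rank_partners("Execute Order", partners))
```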
C. Synthesizing abstract BPEL process
This section describes the third phase of SOBNP creation. In this phase, the VE architect has to express the process patterns among the selected activities in order to synthesize the desired SOBNP. In other words, we have to synthesize the activities in the activity repository to generate the abstract BPEL of the desired SOBNP. Synthesis is the process of producing one specification from another at an appropriate level of abstraction, while significant features of the source specification are kept in the target one [35].
Although a body of work has been reported on generating process models in the area of service-oriented computing, most of it is not suitable for novice process owners, VE architects, business managers, or business domain experts. For instance, Yu et al. [35] propose an outstanding method for generating process models on the basis of temporal business rules. Their method uses the PROPSOL language [36] for specifying the rules. However, in our view the PROPSOL language requires that end users have a background in formal methods in order to express rules accurately.
Thus, since the SOBNP Realization framework needs an approach that is harmonized with these demands, we use workflow patterns for synthesizing the process model. With respect to our framework requirements, utilizing workflow patterns is an appropriate choice, since they are easy to use for VE architects and their semantics are already well defined. Workflow process schemas are defined to specify which activities need to be executed and in what order. Van der Aalst and his colleagues [37] introduce 26 workflow patterns, but not all of them are in common use in business process schema generation.
Table 2. Workflow patterns and their corresponding BPEL constructs
<table>
<thead>
<tr>
<th>Workflow Patterns</th>
<th>BPEL Construct</th>
<th>BPEL Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sequence</td>
<td><code><Sequence></code></td>
<td><code><sequence standard-attributes></code><br><code>standard-elements</code><br><code>activity+</code><br><code></sequence></code></td>
</tr>
<tr>
<td>Parallel Split</td>
<td><code><Flow></code></td>
<td><code><flow standard-attributes></code><br><code>standard-elements</code><br><code><links>?</code><br><code><link name="ncname"/>+</code><br><code></links></code><br><code>activity+</code><br><code></flow></code></td>
</tr>
<tr>
<td>Exclusive Choice</td>
<td><code><Switch></code></td>
<td><code><switch standard-attributes></code><br><code>standard-elements</code><br><code><case condition="bool-expr">+</code><br><code>activity</code><br><code></case></code><br><code><otherwise>?</code><br><code>Activity</code><br><code></otherwise></code><br><code></switch></code></td>
</tr>
<tr>
<td>Simple Merge</td>
<td></td>
<td>This pattern is supported directly by means of the <code><Switch></code> construct and alternatively by using links with disjunctive transition conditions inside a <code><Flow></code> construct.</td>
</tr>
</tbody>
</table>
Our framework and its supporting toolset, in the initial phases, utilize four basic control-flow patterns, namely Sequence, Simple Merge, Exclusive Choice, and Parallel Split, through which the VE architect can synthesize the expected SOBNP. These patterns have equivalent BPEL constructs; Table 2 shows the above-mentioned patterns and their corresponding BPEL constructs.
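As a rough illustration of what the synthesis step produces for the simplest of these patterns, the sketch below emits a skeletal abstract BPEL Sequence for an ordered list of selected services. The element names follow the constructs in Table 2, while the namespace URI, partner link names, and operation names are simplified placeholders rather than a faithful reproduction of the tool's output.

```python
# Minimal sketch: turn an ordered list of selected services into a skeletal
# abstract BPEL <sequence>. Partner link and operation names are placeholders.
import xml.etree.ElementTree as ET

BPEL_NS = "http://schemas.xmlsoap.org/ws/2003/03/business-process/"  # assumed namespace


def synthesize_sequence(process_name: str, services: list[str]) -> str:
    ET.register_namespace("", BPEL_NS)
    process = ET.Element(f"{{{BPEL_NS}}}process", {"name": process_name})
    sequence = ET.SubElement(process, f"{{{BPEL_NS}}}sequence")
    for service in services:
        # One <invoke> per selected partner service, in the order chosen by the VE architect.
        ET.SubElement(sequence, f"{{{BPEL_NS}}}invoke",
                      {"partnerLink": service, "operation": "execute"})
    return ET.tostring(process, encoding="unicode")


if __name__ == "__main__":
    print(synthesize_sequence("CollaborativeOnlineBrokerage",
                              ["PlaceOrder", "ExecuteOrder", "SettleTransaction"]))
```

The other patterns of Table 2 would be handled analogously, emitting <flow> or <switch> elements instead of <sequence>.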
V. SYSTEM IMPLEMENTATION
In the previous sections, we described an abstract architecture of the SOBNP Realization framework. In this section, we discuss the architecture and technologies that have been employed to realize the system. Overall, the system has an interface for business rule specification and three modules; Figure 5 shows the architecture of the implemented tool. The first module copes with generating the OWL-DL corresponding to SBVR. The second module deals with semantic-based partner selection, and the last one is concerned with synthesizing the composite process model.
For the user interface module, we have chosen the Eclipse rich client platform. For generating the OWL-DL corresponding to SBVR, we have used the ACE parsing engine. For the selection of partners' activities, we have used the Secondstring Java package [38] for syntactic similarity measurement and the Jena package [39] for semantics-based similarity measurement. Jena is a Java framework for building Semantic Web applications.
VI. SOBNP REALIZATION FRAMEWORK EVALUATION
This section evaluates and validates the proposed framework through two approaches: a) scenario simulation; b) gathering experts' judgments through a survey. The first approach demonstrates, firstly, how our framework works and, secondly, how easily and quickly a VE architect can constitute the expected SOBNP. The second approach examines whether the proposed framework meets the stated claims from the experts' point of view.
A. Scenario Simulation
In what follows, we show how our framework generates an SOBNP for the business scenario described in Section 2 (i.e., Collaborative Online Brokerage).
A.1. First Step of Phase One: High-level Requirement Specification
As discussed earlier, in the first step of phase one, the VE architect must express his/her high-level requirements, including goals, activities, opportunities, and competencies, through SBVR. Figure 6 shows a sample high-level requirement for the given scenario, in which a VE architect specifies his/her business requirements about the ExecuteOrder activity. In fact, even though there are several stock exchange organizations that offer such an activity, only some of them may satisfy the expected requirements. The output of this step is the ontology corresponding to the specified requirement, that is, the requirement ontology (Figure 6).
A.2. First Step of Phase Two: High-level Matchmaking
In the next step (step 1 of phase 2), the VE architect must match the requirement ontology with the partners' ontologies to identify and select the best partners. This step is done through the ontology-based partner selection tool depicted in Figure 7.
It should be noted that the VE architect could select the best partners for all of the activities at once. However, if we repeat these two steps for each activity separately, the obtained results are more desirable. This is due to the fact that each partner may excel at a particular activity; hence, if we analyze partners separately, each of them has a chance to be selected for participation.
A.3. Second Step of Phase One: Low-level Requirement Specification
In the second step of phase one, the VE architect must specify detailed requirements about each web service that supports the activities selected in the previous step. To be more specific, assume there is a web service that supports the ExecuteOrder activity. This web service may have various versions with different QoS; hence, the VE architect must express the expected QoS. Figure 8 shows sample low-level requirements that express the expected QoS for the ExecuteOrder web service. The output of this step is the low-level requirement ontology (Figure 8).
A.4. Second Step of Phase Two: Low-level Matchmaking
In the second step of phase two, the VE architect must match the low-level requirements with the partners' OWL-S ontologies. It should be noted that steps 2 and 4 utilize the same algorithm for partner selection. Figure 9 shows the ontology-based partner web service selection form.
A.5. Phase Three: BPEL Process Synthesis
After identifying the best partners, the VE architect must determine the appropriate patterns between the selected web services and synthesize the abstract BPEL for the desired SOBNP. For the given example, the patterns are set as depicted in Figure 10. Thereafter, the corresponding process model is synthesized by the tool and the output BPEL process is formed. For readability reasons, the produced abstract BPEL is modeled and represented through the Eclipse BPEL Designer plug-in [40].
B. Framework capabilities analysis
To evaluate the SOBNP framework, we have employed Sol's methodology framework [41], which pays explicit attention to all the important aspects of a development methodology. Sol's framework defines a set of essential factors that characterize an information system development process and classifies them into a way of thinking, a way of modeling, a way of working, and a way of controlling. The way of thinking provides an abstract description of the underlying concepts. The way of modeling structures the models that can be used in information system development. The way of working organizes the way in which an information system is developed; it defines the possible tasks that have to be performed as part of the development process. The way of controlling deals with specific management aspects of the development process in terms of resource management, actors' roles, and intermediate and final results.
We utilized the above-mentioned aspects to verbalize and determine both the general and the specific capabilities of our proposed process and framework, and then used these capabilities to design the questionnaires for the users' evaluation of our process, which was collected through a survey. To gain the experts' judgments, the framework was first introduced to them. Thereafter, we simulated some sample scenarios using the prototype system, through which they could observe whether the framework is usable and handy.
Table 3. Survey participants' profiles
<table>
<thead>
<tr>
<th>No of Experts</th>
<th>Profile</th>
<th>Practical experience in the IT field (years)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>PhD student in Industrial Eng. Tarbiat Modares University.</td>
<td>9</td>
</tr>
<tr>
<td>1</td>
<td>PhD student in Management. Tarbiat Modares University.</td>
<td>7</td>
</tr>
<tr>
<td>1</td>
<td>PhD student in Software Eng. Shahid Beheshti University.</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>BSc. and MSc. in Software Eng. Amirkabir University, I.A. University</td>
<td>7</td>
</tr>
<tr>
<td>3</td>
<td>MSc. students in Software Eng. Shahid Beheshti University.</td>
<td>4</td>
</tr>
</tbody>
</table>
The interviewees answered each question on a five-point Likert scale [42], ranging from (1) strongly disagree, (2) disagree, (3) neutral, (4) agree, to (5) strongly agree. We had nine participants in the survey, from both academia and industry. Table 3 shows the survey participants' profiles. These experts were selected based on the following criteria:
- They must have experience in business process engineering projects.
- They must have experience in development and deployment of information system projects.
After gathering the experts' judgments, the authors used statistical tests to gain confidence in the directions of the outcomes. Tables 4 and 5 show the questionnaire and the experts' answers to the provided questions. In these tables, M denotes the mean, that is, the average of the given grades; STD denotes the standard deviation; and NP represents the number of positive responses, i.e. responses of 4 or 5.
According to the obtained results, the statistical analysis of the answers shown in Tables 4 and 5 indicates a positive evaluation of our proposed framework by the persons involved in the framework explanation and its toolset simulation. Given that a mean value greater than 3.5 indicates that the statement is agreed with by the experts, among the 14 statements in the questionnaire only statement number 5 received a value of 3.22, which is clearly less than 3.5. Statement number 5 asks whether the framework can be employed at the scale of real enterprises and networks. Since the SOBNP Realization framework is one of the rare semi-automated approaches for IVE creation that has been adjusted to be employed by a VE architect, it seems natural that the work needs to mature further before it is ready for actual use in real-world networks of organizations.
VII. DISCUSSION AND FUTURE DIRECTIONS
A.1. Converting requirement from SBVR to OWL
One of the key challenges in our approach is concerned with our translation. It is not possible to transfer all the SBVR Structured English via ACE since not every English sentence is an ACE sentence.
<table>
<thead>
<tr>
<th>No</th>
<th>SOBNP Realization framework capabilities</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>NP</th>
<th>M</th>
<th>STD</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Both framework and its concerned toolset are easy to use, straightforward and trouble-free.</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>5</td>
<td>3</td>
<td>8</td>
<td>4.22</td>
<td>0.62</td>
</tr>
<tr>
<td>2</td>
<td>The overall performance of the framework and its toolset is acceptable.</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>6</td>
<td>1</td>
<td>7</td>
<td>3.66</td>
<td>1.05</td>
</tr>
<tr>
<td>3</td>
<td>The framework accelerates the process of IVE creation.</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>5</td>
<td>4</td>
<td>9</td>
<td>4.44</td>
<td>0.49</td>
</tr>
<tr>
<td>4</td>
<td>The framework simplifies the process of IVE creation.</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>2</td>
<td>5</td>
<td>7</td>
<td>4.33</td>
<td>0.81</td>
</tr>
<tr>
<td>5</td>
<td>The framework can be employed in the scale of real enterprises and networks.</td>
<td>0</td>
<td>3</td>
<td>3</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>3.22</td>
<td>1.13</td>
</tr>
<tr>
<td>6</td>
<td>The framework shows the importance of automated approaches for SOBNP creation.</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>6</td>
<td>3</td>
<td>9</td>
<td>4.33</td>
<td>0.47</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>No</th>
<th>SOBNP Realization framework capabilities</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>NP</th>
<th>M</th>
<th>STD</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>The VE architect could generate expected SOBNP without deep knowledge of computer science.</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>6</td>
<td>1</td>
<td>7</td>
<td>3.88</td>
<td>0.56</td>
</tr>
<tr>
<td>2</td>
<td>The framework could generate business processes at different level of abstractions.</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>7</td>
<td>0</td>
<td>7</td>
<td>3.55</td>
<td>0.95</td>
</tr>
<tr>
<td>3</td>
<td>The framework covers both high-level (i.e. business level) and low-level (technical-level) requirement in business process creation.</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>5</td>
<td>2</td>
<td>7</td>
<td>3.77</td>
<td>1.13</td>
</tr>
<tr>
<td>4</td>
<td>The framework realizes the requirement specification phase properly.</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>5</td>
<td>3</td>
<td>8</td>
<td>4.22</td>
<td>0.62</td>
</tr>
<tr>
<td>5</td>
<td>The framework realizes the partner selection phase properly.</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>2</td>
<td>5</td>
<td>7</td>
<td>4.33</td>
<td>0.81</td>
</tr>
<tr>
<td>6</td>
<td>The framework realizes the process model synthesis phase properly.</td>
<td>0</td>
<td>0</td>
<td>2</td>
<td>4</td>
<td>3</td>
<td>7</td>
<td>4.11</td>
<td>0.73</td>
</tr>
<tr>
<td>7</td>
<td>The developed methods and algorithms have appropriately been integrated into the framework.</td>
<td>0</td>
<td>0</td>
<td>3</td>
<td>2</td>
<td>4</td>
<td>6</td>
<td>4.11</td>
<td>0.87</td>
</tr>
<tr>
<td>8</td>
<td>The framework could be integrated with business modeling and software development environments.</td>
<td>0</td>
<td>1</td>
<td>3</td>
<td>3</td>
<td>2</td>
<td>5</td>
<td>3.66</td>
<td>0.94</td>
</tr>
</tbody>
</table>
In addition, OWL-DL does not capture the full semantics of SBVR [43]; there are FOL definitions expressing SBVR structures that cannot be rendered in OWL-DL. For that reason, fundamental work on improving the translation approach lies ahead.
A.2. Nested control constructs
The implemented prototype, in its initial state, does not support complex business processes in which the process has nested control constructs. Moreover, in those cases the determination of nested controls could be hard and complicated for the VE architect; hence, we aim to develop heuristics through which the VE architect could synthesize complicated SOBNPs straightforwardly. To be more specific, we want to provide an environment in which the VE architect can first specify some control-flow rules among the activities using SBVR; the system then infers the rules and generates some potential process models; finally, the VE architect selects the one that is exactly what he/she expected.
A.3. Metrics Suite
As stated in Section 4, the third phase includes a background activity, namely process measurement and optimization. This activity examines the designed SOBNP on the basis of a metrics suite. We are currently implementing the metrics suite module of our SOBNP Realization framework in the Automated Software Engineering Research group (ASER¹). Thereafter, we also intend to put the approach to the test in order to evaluate and verify the framework and its computations against real-life cases.
---
1. [http://aser.sbu.ac.ir/](http://aser.sbu.ac.ir/)
---
VIII. CONCLUSION
Dynamic market conditions require a flexible framework through which novice VE architects, business managers, or business domain experts can design and realize business processes instantly. In this paper, we presented a framework and associated techniques and toolset to semi-automatically realize SOBNPs in IVEs. The approach has two salient features that, in combination, make it stand out with respect to other approaches. Firstly, it has been tuned to the end user (i.e. the VE architect), who does not have deep knowledge of computer science. Secondly, it covers both business-level and technical-level requirements in business process creation. A proof-of-concept prototype system was implemented to demonstrate the concept of VE architect-driven service-oriented business network process realization in IVEs.
ACKNOWLEDGMENT
This work was supported by Iran Telecommunication Research Center (ITRC) under Contract No. T/500/1381.
The authors would like to thank the following persons for their insight, constructive criticism, enthusiastic guidance, and help: P. Jamshidi, S. Khoshnevis, M. Fahmideh, A. Nikravesh, A. Khorasanchi, B. Nahavandi, E. Malahi, A. Ghashari, and S. Farrokhi.
REFERENCES
[22] W3C: OWL Web Ontology Language Overview - http://www.w3.org/TR/owl-features/
Alireza Khoshkbarforoushha received his M.Sc. degree in Information Technology Engineering from Tarbiat Modares University, Tehran, Iran. He is a member of the Automated Software Engineering Research (ASER) Group at Shahid Beheshti University. His research interests include service-oriented business processes and workflows, service-oriented architecture, and software and process metrics.
Mohammad Aghdasi is an Associate Professor at the Department of Information Technology Engineering at Tarbiat Modares University, Tehran, Iran. He received his B.Sc. degree in Engineering from Sharif University of Technology, Tehran, Iran, in 1981, the M.Sc. degree in Management Engineering from University of Electro-Communications, Japan, in 1986, and his Ph.D. degree in Management Science Engineering from University of Tsukuba institute of Socio-Economic Planning, Japan, in 1989. His current research interests include Service Oriented Business Process, Business Process Management, and Business Process Reengineering.
Mehrnoush Shamsfard received her B.Sc. and M.Sc. degrees, both in computer software engineering, from Sharif University of Technology, Tehran, Iran. She received her Ph.D. in Computer Engineering - Artificial Intelligence from AmirKabir University of Technology in 2003. Dr. Shamsfard has been an assistant professor at Shahid Beheshti University since 2004. She is the head of the NLP Research Laboratory of the Electrical & Computer Engineering Faculty. Her main fields of interest are natural language processing, ontology engineering, text mining, and the semantic web.
|
{"Source-Url": "http://ijict.itrc.ac.ir/article-1-249-en.pdf", "len_cl100k_base": 9234, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 14196, "total-output-tokens": 10471, "length": "2e13", "weborganizer": {"__label__adult": 0.0003762245178222656, "__label__art_design": 0.0009379386901855468, "__label__crime_law": 0.00044417381286621094, "__label__education_jobs": 0.004390716552734375, "__label__entertainment": 0.00014197826385498047, "__label__fashion_beauty": 0.00026297569274902344, "__label__finance_business": 0.00384521484375, "__label__food_dining": 0.0004429817199707031, "__label__games": 0.0007190704345703125, "__label__hardware": 0.0009012222290039062, "__label__health": 0.0006384849548339844, "__label__history": 0.0004320144653320313, "__label__home_hobbies": 0.00012493133544921875, "__label__industrial": 0.0008392333984375, "__label__literature": 0.0005116462707519531, "__label__politics": 0.0005083084106445312, "__label__religion": 0.0004901885986328125, "__label__science_tech": 0.1309814453125, "__label__social_life": 0.0001537799835205078, "__label__software": 0.0237579345703125, "__label__software_dev": 0.828125, "__label__sports_fitness": 0.0002524852752685547, "__label__transportation": 0.0006923675537109375, "__label__travel": 0.0002474784851074219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43780, 0.01824]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43780, 0.22185]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43780, 0.91177]], "google_gemma-3-12b-it_contains_pii": [[0, 3666, false], [3666, 7260, null], [7260, 10482, null], [10482, 15984, null], [15984, 18971, null], [18971, 22486, null], [22486, 24747, null], [24747, 27558, null], [27558, 30596, null], [30596, 33579, null], [33579, 38419, null], [38419, 42217, null], [42217, 43780, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3666, true], [3666, 7260, null], [7260, 10482, null], [10482, 15984, null], [15984, 18971, null], [18971, 22486, null], [22486, 24747, null], [24747, 27558, null], [27558, 30596, null], [30596, 33579, null], [33579, 38419, null], [38419, 42217, null], [42217, 43780, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43780, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43780, null]], "pdf_page_numbers": [[0, 3666, 1], [3666, 7260, 2], [7260, 10482, 3], [10482, 15984, 4], [15984, 18971, 5], [18971, 22486, 6], [22486, 24747, 7], [24747, 27558, 8], [27558, 30596, 9], [30596, 33579, 10], [33579, 38419, 11], [38419, 42217, 12], [42217, 43780, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": 
[[0, 43780, 0.12759]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
4de84d7839c8721d4ac5bbf97801e25f2ef4cdf1
|
Piazza: Data Management Infrastructure for Semantic Web Applications
Alon Y. Halevy Zachary G. Ives Peter Mork Igor Tatarinov
University of Washington Box 352350 Seattle, WA 98195-2350
{alon,zives,pmork,igor}@cs.washington.edu
ABSTRACT
The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data model. To flourish, the Semantic Web needs to be able to accommodate the huge amounts of existing data and the applications operating on them. To achieve this, we are faced with two problems. First, most of the world’s data is available not in RDF but in XML; XML and the applications consuming it rely not only on the domain structure of the data, but also on its document structure. Hence, to provide interoperability between such sources, we must map between both their domain structures and their document structures. Second, data management practitioners often prefer to exchange data through local point-to-point data translations, rather than mapping to common mediated schemas or ontologies.
This paper describes the Piazza system, which addresses these challenges. Piazza offers a language for mediating between data sources on the Semantic Web, which maps both the domain structure and document structure. Piazza also enables interoperability of XML data with RDF data that is accompanied by rich OWL ontologies. Mappings in Piazza are provided at a local scale between small sets of nodes, and our query answering algorithm is able to chain sets of mappings together to obtain relevant data from across the Piazza network. We also describe an implemented scenario in Piazza and the lessons we learned from it.
Categories and Subject Descriptors
H.3.5 [Information Storage and Retrieval]: Online Information Services--Data sharing; H.2.5 [Database Management]: Heterogeneous Databases; H.2.3 [Database Management]: Languages—Data description languages (DDL)
General Terms
Algorithms, Management, Languages
Keywords
Semantic web, peer data management systems, XML
1. INTRODUCTION
HTML and the World Wide Web have had amazing impact on the process of distributing human-readable data to even casual computer users. Yet these technologies are actually quite limited in scope: Web data lacks machine-understandable semantics, so it is generally not possible to automatically extract concepts or relationships from this data or to relate items from different sources. The Web community is attempting to address this limitation by designing a Semantic Web [4]. The Semantic Web aims to provide data in a format that embeds semantic information, and then seeks to develop sophisticated query tools to interpret and combine this information. The result should be a much more powerful knowledge-sharing environment than today’s Web: instead of posing queries that match text within documents, a user could ask questions that can only be answered via inference or aggregation; data could be automatically translated into the same terminology; information could be easily exchanged between different organizations.
Much of the research focus on the Semantic Web is based on treating the Web as a knowledge base defining meanings and relationships. In particular, researchers have developed knowledge representation languages for representing meanings — relating them within custom ontologies for different domains — and reasoning about the concepts. Well-known examples include RDF and RDF Schema, as well as languages that build upon these data models: DAML+OIL and OWL, the recent standard emerging from the W3C.
The progress on developing ontologies and representation languages leaves us with two significant problems. The first problem (also noted by [28]) is that there is a wide disconnect between the RDF world and most of today’s data providers and applications. RDF represents everything as a set of classes and properties, creating a graph of relationships. As such, RDF is focused on identifying the domain structure. In contrast, most existing data sources and applications export their data into XML, which tends to focus less on domain structure and more around important objects or entities. Instead of explicitly spelling out entities and relationships, they often nest information about related entities directly within the descriptions of more important objects, and in doing this they sometimes leave the relationship type unspecified. For instance, an XML data source might serialize information about books and authors as a list of book objects, each with an embedded author object. Although book and author are logically two related objects with a particular association (e.g., in RDF, author writes book), applications using this source may know that this document structure implicitly represents the logical writes relationship.
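To make the book/author example concrete, the sketch below (ours, using illustrative element, URI, and property names) converts such a nested XML fragment into RDF triples in which the implicit relationship is named explicitly as writes.

```python
# Minimal sketch (not from the paper): mapping a nested XML "book with embedded
# author" document into explicit RDF triples. Names and URIs are illustrative only.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

XML_DOC = """
<books>
  <book title="Principles of Data Integration">
    <author name="A. Halevy"/>
  </book>
</books>
"""

EX = Namespace("http://example.org/pub#")


def xml_to_rdf(xml_text: str) -> Graph:
    g = Graph()
    g.bind("ex", EX)
    root = ET.fromstring(xml_text)
    for i, book in enumerate(root.findall("book")):
        book_node = URIRef(f"http://example.org/book/{i}")
        g.add((book_node, EX.title, Literal(book.get("title"))))
        for j, author in enumerate(book.findall("author")):
            author_node = URIRef(f"http://example.org/author/{i}_{j}")
            g.add((author_node, EX.name, Literal(author.get("name"))))
            # The XML nesting only implies the relationship; RDF names it explicitly.
            g.add((author_node, EX.writes, book_node))
    return g


if __name__ == "__main__":
    print(xml_to_rdf(XML_DOC).serialize(format="turtle"))
```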
The vast majority of data sources (e.g., relational tables, spreadsheets, programming language objects, e-mails, and web logs) use hierarchical structures and references to encode both objects and domain structure-like relationships. Moreover, most application development tools and web services rely on these structures. Clearly, it would be desirable for the Semantic Web to be able to interoperate with existing data sources and consumers — which are likely to persist indefinitely since they serve a real need. From the perspective of building semantic web applications, we need to be able to map not only between different domain structures of two sources, but also between their document structures.
The second challenge we face concerns the scale of ontology and schema mediation on the semantic web. Currently, it is widely believed that there will not exist a single ontology for any particular domain, but rather that there will be a few (possibly overlapping) ones. However, the prevailing culture, at least in the data management industry, entails that the number of ontologies/schemas we will need to mediate among is actually substantially higher. Suppliers of data are not used to mapping their schemas to a select small set of ontologies (or schemas): it is very hard to build a consensus about what terminologies and structures should be used. In fact, it is for this reason that many data warehouse projects tend to fail precisely at the phase of schema design [33]. Interoperability is typically attained in the real world by writing translators (usually with custom code) among small sets of data sources that are closely related and serve similar needs, and then gradually adding new translators to new sources as time progresses. Hence, this practice suggests a practical model for how to develop a large-scale system like the Semantic Web: we need an architecture that enables building a web of data by allowing incremental addition of sources, where each new source maps to whatever sources it deems most convenient — rather than requiring sources to map to a slow-to-evolve and hard-to-manage standard schema. Of course, in the case of the Semantic Web, the mappings between the sources should be specified declaratively. To complement the mappings, we need efficient algorithms that can follow semantic paths to obtain data from distant but related nodes on the web.
This paper describes the Piazza system, which provides an infrastructure for building Semantic Web applications, and addresses the aforementioned problems. A Piazza application consists of many nodes, each of which can serve either or both of two roles: supplying source data with its schema, or providing only a schema (or ontology). A very simple node might only supply data (perhaps from a relational database); at the other extreme, a node might simply provide a schema or ontology to which other nodes’ schemas may be mapped. The semantic glue in Piazza is provided by local mappings between small sets (usually pairs) of nodes. When a query is posed over the schema of a node, the system will utilize data from any node that is transitively connected by semantic mappings, by chaining mappings. Piazza’s architecture can accommodate both local point-to-point mappings between data sources, as well as collaboration through select mediated ontologies. Since the architecture is reminiscent of peer-to-peer architectures, we refer to Piazza as a peer data management system (PDMS).
We make the following specific contributions.
- We propose a language for mediating between nodes that allows mapping simple forms of domain structure and rich document structure. The language is based on XQuery [6], the emerging standard for querying XML. We also show that this language can map between nodes containing RDF data and nodes containing XML data.
- We describe an algorithm for answering queries in Piazza that chains semantic mappings specified in our language. The challenge in developing the algorithm is that the mappings are directional, and hence may sometimes need to be traversed in reverse. In fact, the algorithm can also go in reverse through mappings from XML to RDF that flatten out the document structure. Previous work [16] has presented an analogous algorithm for the simple case where all data sources are relational. Here we extend the algorithms considerably to the XML setting. (A simplified sketch of this chaining idea follows the list.)
- Finally, we describe an implemented scenario using Piazza and several observations from this experience. The scenario includes 15 nodes (based on the structures and data of real web sites) that provide information about different aspects of the database research community.
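As a highly simplified illustration of the chaining idea, and not the actual Piazza reformulation algorithm, the sketch below treats mappings as directed edges between nodes and walks them in both directions to find every node whose data could, in principle, be reached from the queried node. The node names are invented for illustration.

```python
# Toy sketch of chaining semantic mappings: mappings are directed edges between
# nodes; reformulation may traverse an edge forward or, when possible, in reverse.
from collections import deque


def reachable_nodes(mappings: list[tuple[str, str]], start: str) -> set[str]:
    """Return all nodes transitively connected to `start` through mappings,
    traversed in either direction (a stand-in for query reformulation)."""
    forward: dict[str, set[str]] = {}
    backward: dict[str, set[str]] = {}
    for source, target in mappings:
        forward.setdefault(source, set()).add(target)
        backward.setdefault(target, set()).add(source)

    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in forward.get(node, set()) | backward.get(node, set()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}


if __name__ == "__main__":
    mappings = [("DB-Projects", "DBLP"), ("DBLP", "CiteSeer"), ("ACM-DL", "DBLP")]
    print(reachable_nodes(mappings, "DB-Projects"))
```

The real algorithm must, of course, rewrite the query itself at each hop and decide whether a directional mapping can be safely inverted, which is where most of the technical difficulty lies.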
At a more conceptual level, we believe that Piazza paves the way for a fruitful combination of data management and knowledge representation techniques in the construction of the Semantic Web. In fact, we emphasize that the techniques offered in Piazza are not a replacement for rich ontologies and languages for mapping between ontologies. Our goal is to provide the missing link between data described using rich ontologies and the wealth of data that is currently managed by a variety of tools. See [19] for a discussion of additional challenges in this area.
The paper is organized as follows. Section 2 provides an overview of Piazza, and Section 3 describes the language for mapping between nodes in Piazza. Section 4 presents the key algorithm underlying query answering in Piazza. In Section 5 we offer our experiences from implementing the scenario. Section 6 describes related work, and Section 7 concludes.
2. SYSTEM OVERVIEW
We begin by providing an overview of the concepts underlying Piazza and our approach to building Semantic Web applications.
2.1 Data, Schemas, and Queries
Our ultimate goal with Piazza is to provide query answering and translation across the full range of data, from RDF and its associated ontologies to XML, which has a substantially less expressive schema language. The main focus of this paper is on sharing XML data, but we explain how to accommodate richer data as we proceed.
Today, most commercial and scientific applications have facilities for automatically exporting their data into XML form. Hence, for the purpose of our discussion, we can consider XML to be the standard representation of a wide variety of data sources (as do others [28]). In some cases, accessing the actual data may require an additional level of translation (e.g., with systems like [13, 31]). Perhaps of equal importance, many applications, tools, and programming languages or libraries have facilities for loading, processing, and importing XML data. In the ideal case, one could map the wealth of existing XML-style data into the Semantic Web and query it using semantic web tools; correspondingly, one could take the results of Semantic Web queries and map them back into XML so they can be fed into conventional applications.
RDF is neutral with respect to objects’ importance: it represents a graph of interlinked objects, properties, and values. RDF also assigns uniform semantic meaning to certain reserved objects (e.g., containers) and properties (e.g., identifiers, object types, references). Relationships between pairs of objects are explicitly named.
The main distinctions between RDF and unordered XML are that XML (unless accompanied by a schema) does not assign semantic meaning to any particular attributes, and XML uses hierarchy (membership) to implicitly encode logical relationships. (In this paper we consider only unordered XML; order information can still be encoded within the data.) Within an XML hierarchy, the central objects are typically at the top, and related objects are often embedded as subelements within the document structure; this embedding of objects creates binary relationships. Of course, XML may also include links and can represent arbitrary graphs, but the predominant theme in XML data is nesting. Whereas RDF names all binary relationships between pairs of objects, XML typically does not. The semantic meaning of these relationships is expressed within the schema or simply within the interpretation of the data. Hence, it is important to note that although XML is often perceived as having only a syntax, it is more accurately viewed as a semantically grounded encoding for data, in a similar fashion to a relational database. Importantly, as pointed out by Patel-Schneider and Simeon [28], if XML is extended simply by reserving certain attribute names to serve as element IDs and IDREFs, one can maintain RDF semantics in the XML representation.
As with data, the XML and RDF worlds use different formalisms for expressing schema. The XML world uses XML Schema, which is based on object-oriented classes and database schemas: it defines classes and subclasses, and it specifies or restricts their structure and also assigns special semantic meaning (e.g., keys or references) to certain fields. In contrast, languages such as RDFS, DAML+OIL [17] and OWL [9] come from the Knowledge Representation (KR) heritage, where ontologies are used to represent sets of objects in the domain and relationships between sets. OWL uses portions of XML Schema to express the structure of so-called domain values. In the remainder of this paper, we refer to OWL as the representative of this class of languages.
It is important to note that some of the functionality of KR descriptions and concept definitions can be captured in the XML world (and more generally, in the database world) using views. In the KR world, concept definitions are used to represent a certain set of objects based on constraints they satisfy, and they are compared via subsumption algorithms. In the XML world, queries serve a similar purpose, and furthermore, when they are named as views, they can be referenced by other queries or views. Since a view can express constraints or combine data from multiple structures, it can perform a role like that of the KR concept definition. Queries can be compared using query containment algorithms. There is a detailed literature that studies the differences between the expressive power of description logics and query languages and the complexity of the subsumption and containment problem for them (e.g., [21]). For example, certain forms of negation and number restrictions, when present in query expressions, make query containment undecidable, while arbitrary join conditions cannot be expressed and reasoned about in description logics.
Many different types of semantic mappings are required in converting within and between the XML and RDF worlds: one-to-one correspondences may occur between concepts, requiring simple renamings; more complex, n-to-m-arity correspondences may require join-like operations; there may be complex restructuring of concept definitions in going from one format to another (especially when XML is involved); and some complex concept definitions may require significant inference capabilities. For several reasons we focus on an XQuery-based approach to defining mappings: (1) it is important to be able to map existing XML data into RDF, and this requires the strong restructuring, joining, and renaming capabilities of XQuery; (2) existing, scalable, and practical techniques have been developed for reasoning about query-based mappings in the database community, and we can leverage these; (3) while XQuery views are less expressive than OWL concept definitions, they can capture many common types of semantic mappings, and we expect that they can be supplemented with further OWL constructs as necessary.
### 2.2 Data Sharing and Mediation
Logically, a Piazza system consists of a network of different sites (also referred to as peers or nodes), each of which contributes resources to the overall system. The resources contributed by a site include one or more of the following: (1) ground or extensional data, e.g., XML or RDF data instances, (2) models of data, e.g., XML schema or OWL ontologies. In addition, nodes may supply computed data, i.e., cached answers to queries posed over other nodes.
When a new site (with data instance or schema) is added to the system, it is semantically related to some portion of the existing network, as we describe in the next paragraph. Queries in Piazza are always posed from the perspective of a given site’s schema, which defines the preferred terminology of the user. When a query is posed, Piazza provides answers that utilize all semantically related XML data within the system.
In order to exploit data from other sites, there must be semantic glue between the sites, in the form of semantic mappings. Mappings in Piazza are specified between small numbers of sites, usually pairs. In this way, we are able to support the two rather different methods for semantic mediation mentioned earlier: mediated mapping, where data sources are related through a mediated schema or ontology, and point-to-point mappings, where data is described by how it can be translated to conform to the schema of another site. Admittedly, from a formal perspective, there is little difference between these two kinds of mappings, but in practice, content providers may have strong preferences for one or the other.
The actual formalism for specifying mappings depends on the kinds of sites we are mapping. There are three main cases, depending on whether we are mapping between pairs of OWL/RDF nodes, between pairs of XML/XML Schema nodes, or between nodes of different types.
**Pairs of OWL/RDF nodes**: OWL itself already provides the constructs necessary for mapping between two OWL ontologies. Specifically, OWL’s owl:equivalentProperty construct declares that two edge labels denote the same relationship. The owl:equivalentClass construct is even more powerful: one can use it to create a boolean combination of the classes in a source ontology and equate that to a class (or even another boolean combination) in a target ontology. In principle, the reasoning procedures for OWL can be used to provide reasoning across ontologies, and hence integrate data from multiple nodes. Performing such reasoning efficiently raises many interesting research questions.
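To make this concrete, the following Python sketch (our own illustration, not Piazza or OWL tooling; all identifiers are made up) shows the effect of an owl:equivalentProperty-style assertion when querying triples that come from two nodes with different property names: both names are collapsed to one canonical relationship before the query is evaluated.

```python
# Illustrative sketch only: a toy triple store in which two property names
# declared equivalent (in the spirit of owl:equivalentProperty) are merged
# into one canonical name before answering queries.

triples = [
    ("author1", "writes", "book1"),       # vocabulary of the first node
    ("author2", "isAuthorOf", "book2"),   # vocabulary of the second node
]

# An equivalence we assume was declared between the two ontologies.
equivalent_properties = {"isAuthorOf": "writes"}

def canonical(prop):
    """Map a property name to its canonical representative."""
    return equivalent_properties.get(prop, prop)

def query(prop):
    """Return all (subject, object) pairs for a property, across both vocabularies."""
    p = canonical(prop)
    return [(s, o) for (s, pr, o) in triples if canonical(pr) == p]

print(query("writes"))       # [('author1', 'book1'), ('author2', 'book2')]
print(query("isAuthorOf"))   # same result: the two names denote one relationship
```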
**Pairs of XML/XML Schema nodes**: This case is more challenging because it does not make sense to simply assert that two structures should be considered the same. To illustrate the challenges associated with designing a language for mapping between two XML nodes, consider the following example.
**Example 2.1.** Suppose we want to map between two sites: the target contains books with nested authors, and the source contains authors with nested publications. We illustrate partial schemas for these sources below, using a format in which indentation illustrates nesting and a * suffix indicates “0 or more occurrences of”, as in a BNF grammar.
¹ It is also possible to let the user narrow the set of sites considered in a query; this does not introduce any difficulties.
```
Target (S1):              Source (S2):
pubs                      authors
  book*                     author*
    title                     full-name
    author*                   publication*
      name                      title
    publisher*                  pub-type
      name
```
In general, it should be possible to specify mappings in either direction (for reasons we discuss in the next section), and mappings must have two important capabilities:
- **Translation of domain structure and terminology:** In the simple case, we must be able to perform simple renamings from one concept (XML tag label) to another, either globally or within a certain subtree or context. For instance, we want to state that every occurrence of the full-name tag in S2 matches the name tag in S1. On the other hand, if we create a mapping in the reverse direction, name in S1 only corresponds to full-name in S2 when it appears within an author tag. In some cases, the terminological translations involve additional conditions. For instance, a title entry in site S2 is only equivalent to a book title in S1 if the pub-type is book.
- **Translation of document structure:** We must be able to map between different nesting structures. Source S1 is book-centric and S2 is author-centric. In order to do this, we must be able to coalesce groups of items when they are associated with the same entity — every time we see a book with the same author name in S1, we should insert the book’s title (within a publication element and with a pub-type of book) into the same author element in S2. A small sketch of this regrouping appears below.
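As a rough illustration of this regrouping (our own sketch with made-up data, not the Piazza mapping machinery), the following Python code turns book-centric records into author-centric ones, coalescing all books by the same author under a single entry:

```python
# Illustrative sketch: restructuring book-centric data (title plus author list)
# into author-centric data (author plus publication list), coalescing entries
# that belong to the same author.
from collections import defaultdict

books = [
    {"title": "Title A", "authors": ["Smith", "Jones"]},
    {"title": "Title B", "authors": ["Smith"]},
]

authors = defaultdict(list)
for book in books:
    for name in book["authors"]:
        # Every book by the same author ends up under one author entry,
        # as a publication with pub-type "book".
        authors[name].append({"title": book["title"], "pub-type": "book"})

for name, pubs in authors.items():
    print(name, pubs)
```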
Section 3 describes our mapping specification language for mappings between XML/XML Schema nodes, which achieves these goals. The language is based on features of the XQuery XML query language [6], which is able to specify rich transformations.
**XML-to-RDF mappings:** There are two issues when mapping between XML and RDF/OWL data. The first is expressive power — clearly, we cannot map all the concepts in an OWL ontology into an XML schema and preserve their semantics. It is inevitable that we will lose some information in such a mapping. In practice, we need to ensure that the XML schema of a node is rich enough for the queries that are likely to be posed at the node.
The second issue is how to rebuild the appropriate document structure when transferring data from the OWL ontology into XML. We illustrate the challenge below.
**Example 2.2.** Suppose we have a simple network with three nodes: A and B are XML nodes and P is an RDF node with an associated OWL ontology. XML Node A contains author information, including books written by the author, nested within author elements (and a given book may appear under multiple author elements). Node B contains book information, including authors, nested within book elements (again, an author may appear within multiple books). Hence, nodes A and B contain the same data but in different structures. Finally, Node P is a rich OWL ontology describing the Publishing world. Among other concept definitions, it contains two classes (Author and Book) and one property (writes). The relationship in P can be encoded in RDF using the definition:
```xml
<rdf:Description rdf:about="authorID"
                 rdf:type="Author">
  <P:writes rdf:resource="bookID"/>
</rdf:Description>
```
The important point to note is that once data has been mapped (using the mapping language described in Section 3) from nodes A or B to RDF, it loses its original document structure. In fact, the two different structures of nodes A and B are mapped to the same RDF. Our mapping language can be used to map from the XML of A and B into XML-encoded RDF at P. We could also write mappings in the opposite direction, from the RDF to XML, that restore the document structure. However, we would like to avoid having to write two mappings in every case. In fact, as we explain in the next section, we may compromise expressive power by forcing mappings in both directions.
Hence, suppose we have two mappings $A \rightarrow P$ and $B \rightarrow P$ from the XML to the RDF. Answering a query over the RDF is conceptually easy. Note that the RDF query is oblivious to document structure. The interesting case occurs when a query is posed over one of the XML sources, say node B. Here, we must use P as an intermediate node for getting data from node A. Data from A is first mapped into RDF form using the $A \rightarrow P$ mapping, “flattening” it and relating it to the ontology at P. Then, we need to somehow use the mapping $B \rightarrow P$ in reverse in order to answer the B query. In Section 4 we describe an algorithm that is also able to use XML-to-RDF mappings in the reverse direction. With that algorithm, we can follow any semantic path in Piazza, regardless of the direction in which the mappings are specified.
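To see why the flattening step erases the distinction between the two nestings, consider the following Python sketch (our own toy encoding of nodes A and B; the identifiers are hypothetical): both structures map to exactly the same set of (author, writes, book) triples, so the RDF alone cannot tell us which document structure the data came from.

```python
# Illustrative sketch: nodes A and B nest the same facts differently,
# but both flatten to the same RDF-style (subject, property, object) triples.

node_a = {  # node A: books nested under authors
    "authorID1": ["bookID1", "bookID2"],
    "authorID2": ["bookID1"],
}
node_b = {  # node B: authors nested under books
    "bookID1": ["authorID1", "authorID2"],
    "bookID2": ["authorID1"],
}

def flatten_a(a):
    return {(author, "writes", book) for author, books in a.items() for book in books}

def flatten_b(b):
    return {(author, "writes", book) for book, authors in b.items() for author in authors}

assert flatten_a(node_a) == flatten_b(node_b)   # same triples; the nesting is gone
print(sorted(flatten_a(node_a)))
```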
In summary, the language we describe in Section 3 offers a mechanism for inter-operation of XML/XML Schema nodes and RDF/OWL nodes. It enables mapping between XML nodes and between an XML node and an RDF node.
### 2.3 Query Processing
Given a set of sites, the semantic mappings between them, and a query at a particular site, the key problem we face is how to process queries. The problem is at two levels: (1) how to obtain semantically correct answers, and (2) how to process the queries efficiently. In this paper we focus mostly on the first problem, called query reformulation. Section 4 describes a query answering algorithm for the Piazza mapping language: given a query at a particular site, we need to expand and translate it into appropriate queries over semantically related sites, as well. Query answering may require that we follow semantic mappings in both directions. In one direction, composing semantic mappings is simply query composition for an XQuery-like language. In the other direction, composing mappings requires using mappings in the reverse direction, which is known as the problem of answering queries using views [15]. These two problems are well understood in the relational setting (i.e., when data is relational and mappings are specified as some restricted version of SQL), but they have only recently been treated in limited XML settings.
### 3. MAPPINGS IN PIAZZA
In this section, we describe the language we use for mapping between sites in a Piazza network. As described earlier, we focus on nodes whose data is available in XML (perhaps via a wrapper over some other system). For the purposes of our discussion, we ignore the XML document order. Each node has a schema, expressed in XML Schema, which defines the terminology and the structural constraints of the node. We make a clear distinction between the intended domain of the terms defined by the schema at a node and the actual data that may be stored there. Clearly, the stored data conforms to the terms and constraints of the schema, but the intended domain of the terms may be much broader than the particular data stored at the node. For example, the terminology for publications applies to data instances beyond the particular ones stored at the node.
Given this setting, mappings play two roles. The first role is as storage descriptions that specify which data is actually stored at a node. This allows us to distinguish between the intended domain and the actual data stored at the node. For example, we may specify that a particular node contains publications whose topic is Computer Science and that have at least one author from the University of Washington. The second role is as schema mappings, which describe how the terminology and structure of one node correspond to those in a second node. The language for storage mappings is a subset of the language for schema mappings, hence our discussion focuses on the latter.
The ultimate goal of the Piazza system is to use mappings to answer queries; we answer each query by rewriting it using the information in the mapping. Of course, we want to capture structural as well as terminological correspondences. As such, it is important that the mapping capture maximal information about the relationship between schemas, but also about the data instances themselves — since information about content can be exploited to more precisely answer a query.
The field of data integration has spent many years studying techniques for precisely defining such mappings with relational data, and we base our techniques on this work. In many ways, the vision of Piazza is a broad generalization of data integration: in conventional data integration, we have a mediator that presents a mediated schema, and a set of data sources that are mapped to this single mediated schema; in Piazza, we have a web of sites and semantic mappings.
The bulk of the data integration literature uses queries (views) as its mechanism for describing mappings: views can relate disparate relational structures, and can also impose restrictions on data values. There are two standard ways of using views for specifying mappings in this context: data sources can be described as views over the mediated schema (this is referred to as local-as-view or LAV), or the mediated schema can be described as a set of views over the data sources (global-as-view or GAV). The direction of the mapping matters a great deal: it affects both the kinds of queries that can be answered and the complexity of using the mapping to answer the query. In the GAV approach, query answering requires only relatively simple techniques to “unfold” (basically, macro-expand) the views into the query so it refers to the underlying data sources. The LAV approach requires more sophisticated query reformulation algorithms (surveyed in [15]), because we need to use the views in the reverse direction. It is important to note that in general, using a view in the reverse direction is not equivalent to writing an inverse mapping.
As a result of this, LAV offers a level of flexibility that is not possible with GAV. In particular, the important property of LAV is that it makes it possible to describe data sources that organize their data differently from the mediated schema. For example, suppose the mediated schema contains a relationship Author, between a paper-id and an author-id. A data source, on the other hand, has the relationship CoAuthor that relates two author-id’s. Using LAV, we can express the fact that the data source has the join of Author with itself. This description enables us to answer certain queries — while it is not possible to use the source to find authors of a particular paper, we can use the source to find someone’s co-authors, or to find authors who have co-authored with at least one other. With GAV we would lose the ability to answer these queries, because we lose the association between co-authors. The best we could say is that the source provides values for the second attribute of Author.³ (Recall that the relational data model is very weak at modeling incomplete information.)
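The following Python sketch (ours, with made-up paper and author identifiers) mimics this relational example: the source materializes the self-join of Author as CoAuthor, so co-author queries can be answered from it, while paper-author queries cannot.

```python
# Illustrative sketch of the LAV example: the mediated schema has
# Author(paper_id, author_id); the source stores CoAuthor(a1, a2),
# i.e., the self-join of Author on paper_id.

author = [("p1", "a1"), ("p1", "a2"), ("p2", "a1")]   # hypothetical instance

# The source's contents, described LAV-style as a view over Author.
coauthor = {(x, y) for (p1, x) in author for (p2, y) in author
            if p1 == p2 and x != y}

# Answerable from the source: who has co-authored with a2?
print({x for (x, y) in coauthor if y == "a2"})        # {'a1'}

# Not answerable from the source: who wrote paper p1?
# CoAuthor has lost the paper ids, so the question cannot be posed over it.
```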
This discussion has a very important consequence as we consider mappings in Piazza. When we map between two sites, our mappings, like views, will be directional. One could argue that we can always provide mappings in both directions, and even though this doubles our mapping efforts, it avoids the need for using mappings in reverse during query reformulation. However, when two sites organize their schemas differently, some semantic relationships between them will be captured only by the mapping in one of the directions, and this mapping cannot simply be inverted. Instead, these semantic relationships will be exploited by algorithms that can reverse through mappings on a per-query basis, as we illustrated in our example above. Hence, the ability to use mappings in the reverse direction is a key element of our ability to share data among sites, and therefore the focus of Section 4.
Our goal in Piazza is to leverage this work — both LAV and GAV — from data integration, but to extend it in two important directions. First, we must extend the basic techniques from the two-tier data integration architecture to the peer data management system’s heterogeneous, graph-structured network of interconnected nodes; this was the focus of our work in [16]. Our second direction, which we discuss in this paper, is to move these techniques into the realm of XML, as well as XML serializations of RDF.
Following the data integration literature, which uses a standard relational query language for both queries and mappings, we might elect to use XQuery for both our query language and our language for specifying mappings. However, we found XQuery inappropriate as a mapping language for the following reasons. First, an XQuery user thinks in terms of the input documents and the transformations to be performed. The mental connection to a required schema for the output is tenuous, whereas our setting requires thinking about relationships between the input and output schemas. Second, the user must define a mapping in its entirety before it can be used. There is no simple way to define mappings incrementally for different parts of the schemas, to collaborate with other experts on developing sub-regions of the mapping, etc. Finally, XQuery is an extremely powerful query language (and is, in fact, Turing-complete), and as a result some aspects of the language make it difficult or even impossible to reason about.
### 3.1 The Mapping Language
Our approach is to define a mapping language that borrows elements of XQuery, but is more tractable to reason about and can be expressed in piecewise form. Mappings in the language are defined as one or more mapping definitions, and they are directional from a source to a target: we take a fragment of the target schema and annotate it with XML query expressions that define what source data should be mapped into that fragment. The mapping language is designed to make it easy for the mapping designer to visualize the target schema while describing where its data originates.
Conceptually, the results of the different mapping definitions are combined to form a complete mapping from the source document to the target, according to certain rules. For instance, the results of different mapping definitions can often be concatenated together to form the document, but in some cases different definitions may create content that should all be combined into a single element; Piazza must “fuse” these results together based on the output element’s unique identifiers (similar to the use of Skolem functions in languages such as XML-QL [10]). A complete formal description of the language would be too lengthy for this paper. Hence, we describe the main ideas of the language and illustrate it via examples.

³ Note that in principle it is possible to define a CoAuthor view in the mediated schema, and map the data source to the view. However, the algorithmic problem of query answering would be identical to the LAV scenario.
Each mapping definition begins with an XML template that matches some path or subtree of a legal instance of the target schema, i.e., a prefix of a legal string in the target DTD’s grammar. Elements in the template may be annotated with query expressions (in a subset of XQuery) that bind variables to XML nodes in the source; for each combination of bindings, an instance of the target element will be created. Once a variable is bound, it can be referenced anywhere within its scope, which is defined to be the enclosing tags of the template. Variable bindings can be output as new target data, or they can be referenced by other query expressions to correlate data in different areas of the mapping definition. The following is a basic example of the language for the sites in Example 2.1.
```xml
<pubs>
  <book>
    {: $a IN document("source.xml")/authors/author,
       $t IN $a/publication/title,
       $typ IN $a/publication/pub-type
       WHERE $typ = "book" :}
    <title> { $t } </title>
    <author>
      <name> {: $a/full-name :} </name>
    </author>
  </book>
</pubs>
```
We make variable references within { } braces and delimit query expression annotations by {: :}. This mapping definition will instantiate a new book element in the target for every binding of the variables $a, $t, and $typ, which are bound to the author, title, and publication-type elements in the source, respectively. We construct a title and author element for each occurrence of the book. The author name contains a new query expression annotation ($a/full-name), so this element will be created for each match to the XPath expression (for this schema, there should only be one match).
The example mapping will create a new book element for each author-publication combination. This is probably not the desired behavior, since a book with multiple authors will appear as multiple book entries, rather than as a single book with multiple author subelements. To enable the desired behavior in situations like this, Piazza reserves a special piazza:id attribute in the target schema for mapping multiple binding instances to the same target element: if elements are created with the same tag name and ID attribute, then they will be coalesced — all of their attributes and element content will be combined. This coalescing process is repeated recursively over the combined elements. We can modify our mapping to the following:
```xml
<pubs>
  <book piazza:id={$t}>
    {: $a IN document("source.xml")/authors/author,
       $t IN $a/publication/title,
       $typ IN $a/publication/pub-type
       WHERE $typ = "book"
       PROPERTY $t >= 'A' AND $t < 'B' :}
    <title> { $t } </title>
    <author>
      <name> {: $a/full-name :} </name>
    </author>
    {: <publisher> <name>
         PROPERTY $this = "PubsInc" OR $this = "PrintersInc"
       </name> </publisher> :}
  </book>
</pubs>
```
The first PROPERTY definition specifies that we know this mapping includes only titles starting with “A.” The second defines a “virtual subtree” (delimited by {: :}) in the target. There is insufficient data at the source to insert a value for the publisher name, but we can define a PROPERTY restriction on the values it might have. The special variable $this allows us to establish a known invariant about the value at the current location within the virtual subtree: in this case, it is known that the publisher name must be one of the two values specified. In general, a query over the target looking for books will make use of this mapping; a query looking for books published by BooksInc will not. Moreover, a query looking for books published by PubsInc cannot use this mapping, since Piazza cannot tell whether a book was published by PubsInc or by PrintersInc.
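The coalescing behaviour of piazza:id can be pictured with a small Python sketch (an approximation of the semantics described above, not the actual implementation; the fragment encoding is ours): target fragments produced with the same tag and id are merged, and the merge is applied recursively to their children.

```python
# Illustrative sketch: merge target fragments that carry the same (tag, piazza:id),
# recursively coalescing their children, as described for the piazza:id attribute.

def coalesce(fragments):
    """fragments: list of (tag, id, children) tuples; children are fragments too."""
    merged = {}
    for tag, ident, children in fragments:
        merged.setdefault((tag, ident), []).extend(children)
    return [(tag, ident, coalesce(children))
            for (tag, ident), children in merged.items()]

# Two bindings produce the same book (same title id) with different authors:
fragments = [
    ("book", "t1", [("author", "a1", [])]),
    ("book", "t1", [("author", "a2", [])]),
]
print(coalesce(fragments))
# [('book', 't1', [('author', 'a1', []), ('author', 'a2', [])])]
```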
### 3.2 Semantics of Mappings
We briefly sketch the principles underlying the semantics of our mapping language. At the core, the semantics of mappings can be defined as follows. Given an XML instance, $I_s$, for the source node $S$ and the mapping to the target $T$, the mapping defines a subset of an instance, $I_t$, for the target node. The reason that $I_t$ is a subset of the target instance is that some elements of the target may not exist in the source (e.g., the publisher element in the examples). In fact, it may even be the case that required elements of the target are not present in the source. In relational terms, $I_t$ is a projection of some complete instance $I'_t$ of $T$ on a subset of its elements and attributes. In fact, $I_t$ defines a set of complete instances of $T$ whose projection is $I_t$. When we answer queries over the target $T$, we provide only the answers that are consistent with all such $I'_t$'s (known as the certain answers [1], the basis for specifying semantics in the data integration literature). It is important to note that partial instances of the target are useful for many queries, in particular, when a query asks for a subset of the elements. Instances for $T$ may be obtained from multiple mappings (and instances of the sources, in turn, can originate from multiple mappings), and as we described earlier, may be the result of coalescing the data obtained from multiple bindings using the piazza:id attribute.
A mapping between two nodes can either be an inclusion or an equality mapping. In the former case, we can only infer instances of the target from instances of the source. In the latter case, we can also infer instances of the source from instances of the target. However, since the mapping is defined from the source to the target, using the mapping in reverse requires special reasoning. The algorithm for doing such reasoning is the subject of Section 4. Finally, we note that storage descriptions, which relate the node’s schema to its actual current contents, allow for either the open-world assumption or the closed-world assumption. In the former case, a node is not assumed to store all the data modeled by its schema (it describes a general concept more inclusive than the data it provides, e.g., all books published, and new data sources may provide additional data for this schema), while in the latter case it holds the complete set of all data relevant to its concept (e.g., all books published by major publishers since 1970). In practice, very few data sources have complete information.
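To make the notion of certain answers concrete, here is a toy Python sketch (ours, with instances reduced to sets of tuples): the partial target instance admits many completions, and only the answers returned on every completion are certain.

```python
# Illustrative sketch: certain answers are the answers that hold in *every*
# complete target instance consistent with the partial instance defined by the mapping.

partial = {("b1", "title1")}              # what the mapping lets us infer about T

# Two of the (many) possible completions of the target instance:
completions = [
    partial | {("b2", "title2")},
    partial | {("b3", "title3")},
]

def titles(instance):
    """The query: return all titles in an instance."""
    return {title for (_book, title) in instance}

certain = set.intersection(*(titles(c) for c in completions))
print(certain)   # {'title1'}: only answers common to all completions are certain
```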
### 3.3 Discussion
To complete the discussion of our relationship to data integration, we briefly discuss how our mapping language relates to the LAV and GAV formalisms. In our language, we specify a mapping from the perspective of a particular target schema — in essence, we define the target schema using a GAV-like definition relative to the source schemas. However, two important features of our language would require LAV definition in the relational setting. First, we can map data sources to the target schema even if the data sources are missing attributes or subelements required in the target schema. Hence, we can support the situation where the source schema is a projection of the target. Second, we support the notion of data source properties, which essentially describes scenarios in which the source schema is a selection on the target schema.
Hence, our language combines the important properties of LAV and GAV. It is also interesting to note that although query answering in the XML context is fundamentally harder than in the relational case, specifying mappings between XML sources is more intuitive. The XML world is fundamentally semistructured, so it can accommodate mappings from data sources that lack certain attributes — without requiring null values. In fact, during query answering we allow mappings to pass along elements from the source that do not exist in the target schema — we would prefer not to discard these data items during the transitive evaluation of mappings, or query results would always be restricted by the lowest-common-denominator schemas along a given mapping chain. For this reason, we do not validate the schema of answers before returning them to the user.
### 4. QUERY ANSWERING ALGORITHM
Given a set of mappings, our goal is to be able to answer queries posed over any peer’s schema, making use of all relevant (mapped) data. We do this at runtime rather than mapping the data once and later answering queries: this allows us to provide “live” answers as source data changes, and we can sometimes exploit “partial” mappings to answer certain queries, even if those mappings are insufficient to entirely transform data from one schema to another.
This section describes Piazza’s query answering algorithm, which performs the following task: given a network of Piazza nodes with XML data, a set of semantic mappings specified among them, and a query over the schema of a given node, efficiently produce all the possible certain answers that can be obtained from the system. A user’s query is posed over a node’s logical schema, which may be defined in terms of incomplete data sources (e.g., we may define the concept “all books published” but may not have complete knowledge of these books). Certain answers are those results that are guaranteed to appear in the query answer for every instance of the logical schema that is consistent with the mappings and the existing source data.
At a high level, the algorithm proceeds along the following lines. Given a query $Q$ posed over the schema of node $P$, we first use the storage descriptions of data in $P$ (i.e., the mappings that describe which data is actually stored at $P$) to rewrite $Q$ into a query $Q'$ over the data stored at $P$. Next, we consider the semantic neighbors of $P$, i.e., all nodes that are related to elements of $P$’s schema by semantic mappings. We use these mappings to expand the reformulation of query $Q$ to a query $Q''$ over the neighbors of $P$. In turn, we expand $Q''$ so it only refers to stored data in $P$ and its neighbors; then we union it with $Q'$, eliminating any redundancies. We repeat this process recursively, following all mappings between nodes’ schemas, and the storage mappings for each one, until there are no remaining useful paths.
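The traversal itself can be sketched as follows in Python (our own simplification; the function reformulate() is a hypothetical stand-in for the composition and answering-using-views steps described in the rest of this section, and is assumed to return None when a mapping is not useful for a query):

```python
# Illustrative sketch of the recursive expansion over the network of mappings.
# `storage[node]` is the node's storage description; `neighbours[node]` lists
# (neighbour, mapping) pairs for the node's semantic neighbours.

def answer_plan(node, query, neighbours, storage, reformulate, seen=None):
    """Collect executable queries over stored data, following mappings transitively."""
    seen = set() if seen is None else seen
    if node in seen:                      # avoid cycling through the same node
        return []
    seen.add(node)

    plans = []
    local = reformulate(query, storage[node])     # rewrite over the node's stored data
    if local is not None:
        plans.append((node, local))

    for neighbour, mapping in neighbours.get(node, []):
        q2 = reformulate(query, mapping)          # rewrite over the neighbour's schema
        if q2 is not None:
            plans.extend(answer_plan(neighbour, q2, neighbours, storage,
                                     reformulate, seen))
    return plans
```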
Ignoring optimization issues, the key question in designing such an algorithm is how to reformulate a query $Q$ over its semantic neighbors. Since semantic mappings in Piazza are directional from a source node $S$ to a target node $T$, there are two cases of the reformulation problem, depending on whether $Q$ is posed over the schema of $S$ or over that of $T$. If the query is posed over $T$, then query reformulation amounts to query composition: to use data at $S$, we compose the query $Q$ with the query (or queries) defining $T$ in terms of $S$. Our approach to query composition is based on that of [13], and we do not elaborate on it here.
The second case is when the query is posed over $S$ and we wish to reformulate it over $T$. Now both $Q$ and the mapping defining $T$ are expressed as queries over $S$. In order to reformulate $Q$, we need to somehow use the mapping in the reverse direction, as explained in the previous section. This problem is known as the problem of answering queries using views (see [15] for a survey), and is conceptually much more challenging. The problem is well understood for the case of relational queries and views, and we now describe an algorithm that applies to the XML setting. The key challenge we address for the context of XML is the nesting structure of the data (and hence of the query) — relational data is flat.
### 4.1 Query Representation
Our algorithm operates over a graph representation of queries and mappings. Suppose we are given the following XQuery for all advisees of Ullman, posed over source $S1$:
```
<result> {
  for $faculty in /S1/people/faculty,
      $name in $faculty/name/text(),
      $advisee in $faculty/advisee/text()
  where $name = "Ullman"
  return <student> {$advisee} </student>
} </result>
```
Figure 1: Matching a query tree pattern into a tree pattern of a schema mapping. The matching tree patterns are shown in bold. The schema mapping corresponding to the middle graph is shown on the right.

The query is represented graphically by the leftmost portion of Figure 1. Note that the `result` element in the query simply specifies the root element for the resulting document. Each box in the figure corresponds to a query block, and indentation indicates the nesting structure. With each block we associate the following constructs that are manipulated by our algorithm:
A set of tree patterns: XQuery’s FOR clause binds variables, e.g., `$faculty in /S1/people/faculty` binds the variable `$faculty` to the nodes satisfying the XPath expression. The bound variable can then be used to define new XPath expressions such as `$faculty/name` and bind new variables. Our algorithm consolidates XPath expressions into logically equivalent tree patterns for use in reformulation. For example, the tree pattern for our example query is indicated by the thick forked line in the leftmost portion of Figure 1.
For simplicity of presentation, we assume here that every node in a tree pattern binds a single variable; the name of the variable is the same as the tag of the corresponding tree pattern node. Hence, the node `advisee` of the tree pattern binds the variable `$advisee`.
A set of predicates: a predicate in a query specifies a condition on one or two of the bound variables. Predicates are defined in the XQuery WHERE clause over the variables bound in the tree patterns. The variables referred to in the predicate can be bound by different tree patterns. In our example, we have a single predicate: `name="Ullman"`. If a predicate involves a comparison between two variables, then it is called a join predicate, because it essentially enforces a relational join.
Output results: output, specified in the XQuery RETURN clause, consists of element or attribute names and their content. An element tag name is usually specified in the query as a string literal, but it can also be the value of a variable. This is an important feature, because it enables transformations in which data from one source becomes schema information in another. In our query graph of Figure 1, an element tag is shown in angle brackets. Hence, the element tag of the top-level block is result. The element tag of the inner block is student. The contents of the returned element of a query block may be a sequence of elements, attributes, string literals, or variables. (Note that our algorithm does not support “mixed content,” in which subelements and data values may be siblings, as this makes reformulation much harder). We limit our discussion to the case of a single returned item. In the figure, the variable/value returned by a query block is enclosed in curly braces. Thus, the top level block of our example query has empty returned contents, whereas the inner block returns the value of the `$advisee` variable.
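As a reading aid, the three constructs can be captured by a small Python structure (our own notation, not the system’s internal representation); the example query is shown as an instance.

```python
# Illustrative sketch of the graph representation of a query: each block has
# tree patterns, predicates over bound variables, an element tag, and output.
from dataclasses import dataclass, field

@dataclass
class QueryBlock:
    tree_patterns: list          # e.g. paths such as /S1/people/faculty/name
    predicates: list             # e.g. ('name', '=', 'Ullman')
    element_tag: str             # tag of the element this block constructs
    returns: str = ""            # variable whose value the block outputs, if any
    children: list = field(default_factory=list)

# The example query: a <result> block with a nested <student> block.
example = QueryBlock(
    tree_patterns=["/S1/people/faculty/name/text()",
                   "/S1/people/faculty/advisee/text()"],
    predicates=[("name", "=", "Ullman")],
    element_tag="result",
    children=[QueryBlock(tree_patterns=[], predicates=[],
                         element_tag="student", returns="advisee")],
)
print(example.children[0].returns)   # 'advisee'
```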
We use the same representation for mappings as for queries. In this case, the nesting mirrors the template of the target schema. The middle of Figure 1 shows the graph representation of the mapping shown on the right of the figure. The mapping is between two schemas, `S1` and `S2`, which differ in how they represent advisor-advisee information: `S1` puts advisee names under the corresponding faculty advisor, whereas `S2` does the opposite by nesting advisor names under the corresponding students.
### 4.2 The Rewriting Algorithm
Our algorithm makes the following simplifying assumptions about the queries and the mappings (we note that in the scenario we implemented, all the mappings satisfied these restrictions). First, we assume the query over the target schema contains a single non-trivial block, i.e., a block that includes tree patterns and/or predicates. The mapping, on the other hand, is allowed to contain an arbitrary number of blocks. Second, we assume that all “returned” variables are bound to atomic values, i.e., `text()` nodes, rather than XML element trees (this particular limitation can easily be removed by expanding the query based on the schema). In Figure 1 the variable `$people` is bound to an element; variables `$name` and `$student` are bound to values. Third, we assume that queries are evaluated under a set semantics. In other words, we assume that duplicate results are eliminated in the original and rewritten query. Finally, we assume that a tree pattern uses the child axis of XPath only. It is possible to extend the algorithm to work with queries that use the descendant axis. For purposes of exposition, we assume that the schema mapping does not contain sibling blocks with the same element tag. Handling such a case requires the algorithm to consider multiple possible satisfying paths (and/or predicates) in the tree pattern.
Intuitively, the rewriting algorithm performs the following tasks. Given a query `Q`, it begins by comparing the tree patterns of the mapping definition with the tree pattern of `Q` — the goal is to find a corresponding node in the mapping definition’s tree pattern for
every node in $Q$’s tree pattern. Then the algorithm must restructure $Q$’s tree pattern along the same lines as the mapping restructures its input tree patterns (since $Q$ must be rewritten to match against the target of the mapping rather than its source). Finally, the algorithm must ensure that the predicates of $Q$ can be satisfied using the values output by the mapping. The steps performed by the algorithm are:
**Step 1: pattern matching.** This step considers the tree patterns in the query, and finds corresponding patterns in the target schema. Intuitively, given a tree pattern, $t$ in $Q$, our goal is to find a tree pattern $t'$ on the target schema such that the mapping guarantees that an instance of that pattern could only be created by following $t$ in the source. The algorithm first matches the tree patterns in the query to the expressions in the mapping and records the corresponding nodes. In Figure 1, the darker lines in the representation of the schema mapping denote the tree pattern of the query (far left) and its corresponding form in the mapping (second from left). The algorithm then creates the tree pattern over the target schema as follows: starting with the recorded nodes in the mapping, it recursively marks all of their ancestor nodes in the output template. It then builds the new tree pattern over the target schema by traversing the mapping for all marked nodes.
Note that $t'$ may enforce additional conditions beyond those of $t$, and that there may be several patterns in the target that match a pattern in the query, ultimately yielding several possible queries over the target that provide answers to $Q$. If no match is found, then the resulting rewriting will be empty (i.e., the target data does not enable answering the query on the source).
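Under a deliberately simplified encoding of tree patterns as paths, Step 1 can be approximated by the Python sketch below (ours; the mapping of S1 paths to S2 paths is hypothetical and only meant to echo the advisor/advisee example): matching a query path amounts to a lookup, and the rewritten pattern is assembled from the corresponding target paths.

```python
# Illustrative sketch of Step 1: each node output by the mapping is associated
# with the source path that produced it, so the target-side tree pattern for a
# query is obtained by looking up the query's source paths.

# Hypothetical correspondence for the advisor/advisee example (S1 paths -> S2 paths).
mapping_paths = {
    "/S1/people/faculty/name":    "/S2/people/student/advisor",
    "/S1/people/faculty/advisee": "/S2/people/student/name",
}

def rewrite_patterns(query_paths):
    """Return the target-side pattern, or None if some query path has no match."""
    rewritten = []
    for path in query_paths:
        if path not in mapping_paths:
            return None          # no match: this mapping yields no rewriting
        rewritten.append(mapping_paths[path])
    return rewritten

print(rewrite_patterns(["/S1/people/faculty/name",
                        "/S1/people/faculty/advisee"]))
```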
**Step 2: Handling returned variables and predicates.** In this step the algorithm ensures that all the variables required in the query can be returned, and that all the predicates in the query have been applied. Here, the nesting structure of XML data introduces subtleties beyond the relational case.
To illustrate the first potential problem, recall that our example query returns advisee names, but the mapping does not actually return the advisee, and hence the output of Step 1 does not return the advisee. We must extend the tree pattern so that it reaches a block that actually outputs `$advisee`, but the `<advisor>` block, where `$advisee` is bound, does not have any subblocks, so we cannot simply extend the tree pattern. Fortunately, the `<advisor>` block includes an equality condition between `$advisee` and `$student`, which is output by the `<name>` block. We can therefore rewrite the tree pattern as `student/advisor`. Of course, it is not always possible to find such equalities, and in those cases there will be no rewriting for that pattern.
Query predicates can be handled in one of three ways. First, a query predicate (or one that subsumes it) might already be applied by the relevant portion of the mapping (or might be a known property of the data being mapped). In this case, the algorithm can consider the predicate to be satisfied. A second case is when the mapping does not impose the predicate, but returns all nodes necessary for testing the predicate. Here, the algorithm simply inserts the predicate into the rewritten query. The third possibility is more XML-specific: the predicate is not applied by the portion of the mapping used in the query rewriting, nor can the predicate be evaluated over the mapping’s output — but a different sub-block in the mapping may impose the predicate. If this occurs, the algorithm can add a new path into the rewritten tree pattern, traversing into the sub-block. Now the rewritten query will only return a value if the sub-block (and hence the predicate) is satisfied.
In our case, the query predicate can be reformulated in terms of the variables bound by the replacement tree pattern as follows:
```
<result> {
  for $student in /S2/people/student,
      $name in $student/name/text(),
      $advisor in $student/advisor/text()
  where $advisor = "Ullman"
  return <student> { $name } </student>
} </result>
```
Note that in the above discussion, we always made the assumption that a mapping is useful if and only if it returns all output values and satisfies all predicates. In many cases, we may be able to loosen this restriction if we know more information about the relationships within a set of mappings, or about the properties of the mappings. For instance, if we have two mappings that share a key or a parent element, we may be able to rewrite the query to use both mappings if we add a join predicate on the key or the parent element ID, respectively. Conversely, we may be able to make use of properties to determine that a mapping cannot produce any results satisfying the query.
In the full version of the paper we prove the following theorem that characterizes the completeness of our algorithm.
**Theorem 1.** Let $S$ and $T$ be source and target XML schemas, and $Q$ be a query over $S$, all of which satisfy the assumptions specified in the beginning of this section. Then, our algorithm will compute a query $Q'$ that is guaranteed to produce all the certain answers to $Q$ for any XML instance of $T$.
### 5. A PIAZZA APPLICATION
To validate our approach, we implemented a small but realistic semantic web application in Piazza. This section briefly reports on our experiences. While our prototype is still relatively preliminary, we can already make several interesting observations that are helping to shape our ideas for future research.
The Piazza system consists of two main components. The query reformulation engine takes a query posed over a node, and it uses the algorithm described in Section 4 in order to chain through the semantic mappings and output a set of queries over the relevant nodes. Our query evaluation engine is based on the Tukwila XML Query Engine [18], and it has the important property that it yields answers as the data is streaming in from the nodes on the network.
We chose our application, DB Research, to be representative of certain types of academic and scientific data exchange. Our prototype relates 15 nodes concerning different aspects of the database research field (see Figure 2, where directed arrows indicate the direction of mappings). The nodes of DB Research were chosen so they cover complementary but overlapping aspects of database
research. All of the nodes of DB Research, with the exception of DB-Projects, contribute data. DB-Projects is a schema-only node whose goal is to map between other sources. DB Research nodes represent university database groups (Berkeley, Stanford, UPenn, and UW), research labs (IBM and MSR), online publication archives (ACM, DBLP, and CiteSeer), web sites for the major database conferences (SIGMOD, VLDB, and PODS), and DigReview, which is an open peer-review web site. The Submissions node represents data that is available only to a PC chair of a conference, and not shared with others. The node schemas were designed to mirror the actual organization and terminology of the corresponding web sites. When defining mappings, we tried to map as much information in the source schema into the target schema as possible, but a complete schema mapping is not always possible since the target schema may not have all of the attributes of the source schema. We report our experiences on four different aspects.
Reformulation times: the second and third columns of Table 1 show the reformulation time for the test queries and the number of reformulations obtained (i.e., number of queries that can be posed over the nodes to obtain answers to the query). We observe that even with relatively unoptimized code, the reformulation times are quite low, even though some of them required traversing paths of length 8 in the network. Hence, sharing data by query reformulation along semantic paths appears to be feasible. Although we expect many applications to have much larger networks, we also expect many of the paths in the network to require only very simple reformulations. Furthermore, by interleaving reformulation and query evaluation, we can start providing answers to users almost immediately.
Optimization issues: the interesting optimization issue that arises is reducing the number of reformulations. Currently, our algorithm may produce more reformulations than necessary because it may follow redundant paths in the network, or because it cannot detect a cyclic path until it traverses the final edge. Minimizing the number of reformulations has been considered in the context of two-tier data integration systems.

### 6. RELATED WORK

Data integration systems [27, 22] address the problem of querying heterogeneous data sources, but they rely on a two-tier mediator architecture, in which data sources are mapped to a global mediated schema that encompasses all available information. This architecture requires centralized administration and schema design, and it does not scale to large numbers of small-scale collaborations. To better facilitate data sharing, Piazza adopts a peer-to-peer-style architecture and eliminates the need for a single unified schema — essentially, every node’s schema can serve as the mediated schema for a query, and the system will evaluate schema mappings transitively to find all related data. Our initial work in this direction focused on the relational model and was presented in [16]; a language for mediating between relational sources has recently been presented in [5]. Mappings between schemas can be specified in many ways. Cluet et al. suggest a classification of mapping schemes between XML documents in [8]; following their framework, we could classify our system as mapping from paths to (partial) DTDs. The important, but complementary, issue of providing support for generating semantic mappings between peers has been a topic of considerable interest in the database community [29, 11], and in the ontology literature [23, 12, 26]. The problem of estimating information loss in mappings has also been studied [24]. An important problem that we have not yet addressed is that of potential data source inconsistencies, but this problem has received recent attention in [3, 20].
A second goal of this paper is not only to address mediation between XML sources, but also to provide an intermediary between the XML and RDF worlds, since most real-world data is in XML but ontologies may have richer information. Patel-Schneider and Simeon [28] propose techniques for merging XML and RDF into a common, XML-like representation. Conversely, the Sesame system [7] stores RDF in a variety of underlying storage formats. Amann et al. [2] discuss a data integration system whereby XML sources are mapped into a simple ontology (supporting inheritance and roles, but no description logic-style definitions).
Table 1: Test queries over the DB Research network, with reformulation times and the number of reformulations produced.

| Query | Description | Reformulation time | # of reformulations |
|-------|-------------|--------------------|---------------------|
| Q1 | XML-related projects. | 0.5 sec | 12 |
| Q2 | Co-authors who reviewed each other’s work. | 0.9 sec | 25 |
| Q3 | PC members with a paper at the same conference. | 0.2 sec | 3 |
| Q4 | PC chairs of recent conferences + their projects. | 0.5 sec | 24 |
| Q5 | Conflicts-of-interest of PC members. | 0.7 sec | 36 |

The Edutella system [25] represents an interesting design point
in the XML-RDF interoperability spectrum. Like Piazza, it is built on a peer-to-peer architecture and it mediates between different data representations. The focus of Edutella is to provide query and storage services for RDF, but with the ability to use many different underlying stores. Thus an important focus of the project is on translating the RDF data and queries to the underlying storage format and query language. Rather than beginning with data in a particular document structure and attempting to translate between different structures, Edutella begins with RDF and uses canonical mappings to store it in different subsystems. As a result of its inherent RDF-mediated architecture, Edutella does not employ point-to-point mappings between nodes. Edutella uses the JXTA peer-to-peer framework in order to provide replication and clustering services.
The architecture we have proposed for Piazza is a peer-to-peer, Web-like system. Recently, there has been significant interest in developing grid computing architectures (see www.mygrid.org.uk, www.gridcomputing.com), modeled after the electric power grid system. The goal is to construct a generic parallel, distributed environment for resource sharing and information exchange, and to allow arbitrary users (especially scientific users) to “plug in” to the grid. As noted in the lively discussion in [30], there will be some interesting relationships between grid computing and the Semantic Web. We believe that Piazza provides a data management infrastructure to support data services on the grid.
Finally, we note that Piazza is a component of the larger Revere Project [14] that attempts to address the entire life-cycle of content creation on the Semantic Web.
### 7. CONCLUSIONS AND FUTURE WORK
The vision of the Semantic Web is compelling and will certainly lead to significant changes in how the Web is used, but we are faced with a number of technical obstacles in realizing this vision. Knowledge representation techniques and standardized ontologies will undoubtedly play a major role in the ultimate solution. However, we believe that the Semantic Web cannot succeed if it requires everything to be rebuilt “from the ground up”: it must be able to make use of structured data from sources that are not Semantic Web-enabled, and it must inter-operate with traditional applications. This requires the ability to deal not only with domain structure, but also with document structures that are used by applications. Moreover, mediated schemas and ontologies can only be built by consensus, so they are unlikely to scale.
In this paper, we have presented the Piazza peer data management architecture as a means of addressing these two problems, and we have made the following contributions. First, we described a mapping language for mapping between sets of XML source nodes with different document structures (including those with XML serializations of RDF). Second, we have proposed an architecture that uses the transitive closure of mappings to answer queries. Third, we have described an algorithm for query answering over this transitive closure of mappings, which is able to follow mappings in both forward and reverse directions, and which can both remove and reconstruct XML document structure. Finally, we described several key observations about performance and research issues, given our experience with an implemented semantic web application.
Although our prototype application is still somewhat preliminary, it already suggests that our architecture provides useful and effective mediation for heterogeneous structured data, and that adding new sources is easier than in a traditional two-tier environment. Furthermore, the overall Piazza system gives us a strong research platform for uncovering and exploring issues in building a semantic web. We are currently pursuing a number of research directions.
A key aspect of our system is that there may be many alternate “mapping paths” between any two nodes. An important problem is identifying and prioritizing the paths that preserve the most information, while avoiding paths that are too “diluted” to be useful. A related problem at the systems level is determining an optimal strategy for evaluating the rewritten query. We are also interested in studying Piazza’s utility in applications that are much larger in scale, and in investigating strategies for caching and replicating data and mappings for reliability and performance.
Acknowledgments
The authors would like to express their gratitude to Natasha Noy, Rachel Pottinger, and Dan Weld for their invaluable comments and suggestions about this paper.
### 8. REFERENCES
[19] V. Kashyap. The Semantic Web: Has the DB community missed the bus (again)? In Proceedings of the NSF Workshop on DB & IS.
"google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 77097, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 77097, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 77097, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 77097, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 77097, null]], "pdf_page_numbers": [[0, 1951, 1], [1951, 4605, 2], [4605, 5885, 3], [5885, 8900, 4], [8900, 11613, 5], [11613, 14368, 6], [14368, 17119, 7], [17119, 20056, 8], [20056, 23048, 9], [23048, 26042, 10], [26042, 28870, 11], [28870, 31567, 12], [31567, 33980, 13], [33980, 37106, 14], [37106, 40175, 15], [40175, 43367, 16], [43367, 46463, 17], [46463, 49575, 18], [49575, 52356, 19], [52356, 55582, 20], [55582, 58253, 21], [58253, 58645, 22], [58645, 61511, 23], [61511, 63879, 24], [63879, 66701, 25], [66701, 69448, 26], [69448, 72241, 27], [72241, 75140, 28], [75140, 77097, 29]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 77097, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
6b475a32d39b4b037a8da0d4975400a7886783fd
|
Rate Monotonic Analysis for Real-Time Systems
Lui Sha
Mark H. Klein
John B. Goodenough
March 1991
Rate Monotonic Analysis for Real-Time Systems Project
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
This technical report was prepared for the
SEI Joint Program Office
ESD/AVS
Hanscom AFB, MA 01731
The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange.
Review and Approval
This report has been reviewed and is approved for publication.
FOR THE COMMANDER
JOHN S. HERMAN, Capt, USAF
SEI Joint Program Office
This work is sponsored by the U.S. Department of Defense.
Copyright © 1991 by Carnegie Mellon University.
This document is available through the Defense Technical Information Center. DTIC provides access to and transfer of scientific and technical information for DoD personnel, DoD contractors and potential contractors, and other U.S. Government agency personnel and their contractors. To obtain a copy, please contact DTIC directly: Defense Technical Information Center, Attn: FDRA, Cameron Station, Alexandria, VA 22304-6145.
Copies of this document are also available through the National Technical Information Service. For information on ordering, please contact NTIS directly: National Technical Information Service, U.S. Department of Commerce, Springfield, VA 22161.
Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder.
# Table of Contents
1. Introduction
2. The Development of Rate Monotonic Theory
2.1. Selection of Rate Monotonic Theory
2.2. Scheduling Aperiodic Tasks
2.3. Handling Task Synchronization
2.4. The Requirements of the Mode Change Protocol
3. Analysis of Real-Time Paradigms
3.1. Reasoning About Time
3.2. Schedulability Models
4. Systems Issues
4.1. Overview of Futurebus+
4.2. The Design Space for Real-Time Computing Support
4.3. The Number of Priority Levels Required
4.4. Overview of Futurebus+ Arbitration
5. Summary
References
List of Figures
Figure 4-1: Schedulability Loss vs. The Number of Priority Bits
Rate Monotonic Analysis for Real-Time Systems
Abstract: The essential goal of the Rate Monotonic Analysis (RMA) for Real-Time Systems Project at the Software Engineering Institute is to catalyze improvement in the practice of real-time systems engineering, specifically by increasing the use of rate monotonic analysis and scheduling algorithms. In this report, we review important decisions in the development of RMA. Our experience indicates that technology transition considerations should be embedded in the process of technology development from the start, rather than as an afterthought.
"As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by the ideas coming from 'reality,' it is beset with very grave dangers. It becomes more and more pure aestheticizing, more and more purely l'art pour l'art....
There is grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much "abstract" breeding, a mathematical subject is in danger of degeneration." — John Von Neumann. "The Mathematician," an essay in The Works of Mind, Editor E. B. Heywood, University of Chicago Press, 1957.
1. Introduction
The essential goal of the Rate Monotonic Analysis for Real-Time Systems (RMARTS) Project\(^1\) at the Software Engineering Institute (SEI) is to catalyze an improvement in the state of the practice for real-time systems engineering. Our core strategy for accomplishing this is to provide a solid analytical foundation for real-time resource management based on the principles of rate monotonic theory. However, influencing the state of the practice requires more than research advances; it requires an ongoing interplay between theory and practice. We have encouraged this interplay between theory and practice through cooperative efforts among academia, industry, and government. These efforts include:
- Conducting proof of concept experiments to explore the use of the theory on test case problems and to understand issues related to algorithm implementation in commercially available runtime systems.
---
\(^1\)This is a follow-on to the Real-Time Scheduling in Ada (RTSIA) Project.
• Working with major development projects to explore the practical applicability of the theory and to identify the fundamental issues related to its use.
• Conducting tutorials and developing instructional materials for use by other organizations.
• Publishing important findings to communicate results to practitioners and to stimulate the research community.
• Working to obtain support from major national standards.
The interplay between research and application has resulted in our extending rate monotonic theory from its original form of scheduling independent periodic tasks [11] to scheduling both periodic and aperiodic tasks [24] with synchronization requirements [15, 16, 19] and mode change requirements [18]. In addition, we have addressed the associated hardware scheduling support [10, 22], implications for Ada scheduling rules [5], algorithm implementation in an Ada runtime system [1], and schedulability analysis of input/output paradigms [7]. Finally, we also have performed a number of design and analysis experiments to test the viability of the theory [2, 13]. Together, these results constitute a reasonably comprehensive set of analytical methods for real-time system engineering. As a result, real-time system engineering based on rate monotonic theory has been:
• Recommended by both the 1st and 2nd International Workshop on Real-Time Ada Issues for real-time applications using Ada tasking. The rate monotonic approach has been supported by a growing number of Ada vendors, e.g., DDC-I and Verdix, and is influencing the Ada 9X process.
• Used as the theoretical foundation in the design of the real-time scheduling support for IEEE Futurebus+, which has been widely endorsed by industry, including both VME and Multibus communities. It is also the standard adopted by the US Navy. The rate monotonic approach is the recommended approach in the Futurebus+ System Configuration Manual (IEEE 896.3).
• Recommended by IBM Federal Sector Division (FSD) for its real-time projects. Indeed, IBM FSD has been conducting workshops in rate monotonic scheduling for its engineers since April 1990.
• Successfully applied to both the active and passive sonar of a major submarine system of the US Navy.
• Selected by the European Space Agency as the baseline theory for its Hard Real-Time Operating System Project.
• Adopted in 1990 by NASA and its Space Station contractors for development of real-time software for the Space Station data management subsystem and associated avionics applications.
Many of the important results of the rate monotonic approach have been reviewed elsewhere [20, 21]. In this paper, we would like to illustrate the interplay between rate monotonic theory and practice by drawing on three examples. The first example, which is discussed in the next section, describes how this interplay influenced the developmental history of the theory itself. The second example, discussed in Chapter 3, outlines several issues arising from the use of the theory to understand the timing behavior of real-time input/output paradigms. Chapter 4 discusses the importance of considering standard hardware architectures.
2. The Development of Rate Monotonic Theory
The development of rate monotonic theory after the initial work of Liu and Layland [11] has been closely related to practical applications from its very beginning. Many of the significant results are the product of the close cooperation between Carnegie Mellon University, the Software Engineering Institute, IBM's Federal Sector Division, and other industry partners. The interplay between research and practice has guided us to develop analytical methods that are not only theoretically sound but also have wide applicability.
2.1. Selection of Rate Monotonic Theory
The notion of rate monotonic scheduling was first introduced by Liu and Layland in 1973 [11]. The term rate monotonic (RM) derives from a method of assigning priorities to a set of processes: assigning priorities as a monotonic function of the rate of a (periodic) process. Given this simple rule for assigning priorities, rate monotonic scheduling theory provides the following simple inequality—comparing total processor utilization to a theoretically determined bound—that serves as a sufficient condition to ensure that all processes will complete their work by the end of their periods.
\[
\frac{C_1}{T_1} + \ldots + \frac{C_n}{T_n} \leq U(n) = n(2^{1/n} - 1)
\]
\(C_i\) and \(T_i\) represent the execution time and period respectively associated with periodic task \(\tau_i\). As the number of tasks increases, the scheduling bound converges to \(\ln 2\) (69%). We will refer to this as the basic rate monotonic schedulability test.
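As a quick illustration, the basic test is only a few lines of code. The sketch below is ours, not the report's; the function name `rm_utilization_test` and the task parameters are hypothetical.

```python
def rm_utilization_test(C, T):
    """Basic rate monotonic schedulability test: accept the task set if total
    utilization is within the Liu & Layland bound U(n) = n(2^(1/n) - 1)."""
    n = len(C)
    utilization = sum(c / t for c, t in zip(C, T))
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Hypothetical task set: (execution time, period) in msec.
C = [20.0, 40.0, 100.0]
T = [100.0, 150.0, 350.0]
print(rm_utilization_test(C, T))   # True: utilization ~0.75 <= bound ~0.78
```

Note that the test is sufficient but not necessary: a task set that fails it may still be schedulable, which motivates the exact test discussed below.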
In the same paper, Liu and Layland also showed that the earliest deadline scheduling algorithm is superior since the scheduling bound is always 1:
\[
\frac{C_1}{T_1} + \ldots + \frac{C_n}{T_n} \leq U(n) = 1
\]
The 31% theoretical difference in performance is large. At first blush, there seemed to be little justification to further develop the rate monotonic approach. Indeed, most publications on the subject after [11] were based on the earliest deadline approach. However, we found that our industrial partners at the time had a strong preference for a static priority scheduling approach for hard real-time applications. This appeared to be puzzling at first, but we quickly learned that the preference is based on important practical considerations:
1. The performance difference is small in practice. Experience indicates that an approach based on rate monotonic theory can often achieve as high as 90% utilization. Additionally, most hard real-time systems also have soft real-time components, such as certain non-critical displays and built-in self tests that can execute at lower priority levels to absorb the cycles that cannot be used by the hard real-time applications under the rate monotonic scheduling approach.
2. Stability is an important problem. Transient system overload due to exceptions or hardware error recovery actions, such as bus retries, is inevitable. When a system is overloaded and cannot meet all the deadlines, the deadlines of essential tasks still need to be guaranteed provided that this subset of tasks is schedulable. In a static priority assignment approach, one only needs to ensure that essential tasks have relatively high priorities. Ensuring that essential tasks meet their deadlines becomes a much more difficult problem when earliest deadline scheduling algorithms are used, since under them a periodic task’s priority changes from one period to another.
These observations led members of the Advanced Real-Time Technology (ART) Project at Carnegie Mellon University to investigate the following two problems:
1. What is the average scheduling bound of the rate monotonic scheduling algorithm and how can we determine whether a set of tasks using the rate monotonic scheduling algorithm can meet its deadlines when the Liu & Layland bound is exceeded? This problem was addressed in Lehoczky et al. [9], which provides an exact formula to determine if a given set of periodic tasks can meet their deadlines when the rate monotonic algorithm is used. In addition, the bound for tasks with harmonic frequencies is 100%, while the average bound for randomly generated task sets is 88%. A small code sketch of such an exact test appears after this list.
2. If an essential task has a low rate monotonic priority (because its period is relatively long), how can its deadline be guaranteed without directly raising its priority and, consequently, lowering the system’s schedulability? This problem led to the discovery of the period transformation method [17], which allows a critical task’s priority to be raised in a way that is consistent with rate monotonic priority assignment. In addition, the period transformation method can be used to increase a task set’s scheduling bound should a particular set of periods result in poor schedulability.
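The exact test mentioned in item 1 can be phrased as a check of cumulative demand at a small set of scheduling points. The sketch below is our illustration of that idea from [9], not code from the report; tasks are assumed to be sorted by period, and the task set is hypothetical.

```python
import math

def rm_exact_test(C, T):
    """Exact schedulability check under rate monotonic priorities.
    Task i meets its deadline iff the cumulative demand of tasks 0..i fits
    at some scheduling point t (a multiple of a higher or equal priority
    period with t <= T[i])."""
    for i in range(len(C)):
        points = {k * T[j]
                  for j in range(i + 1)
                  for k in range(1, int(T[i] // T[j]) + 1)}
        if not any(sum(C[j] * math.ceil(t / T[j]) for j in range(i + 1)) <= t
                   for t in points):
            return False
    return True

# Utilization is about 95%, well above the Liu & Layland bound, yet the set passes.
print(rm_exact_test([40, 40, 100], [100, 150, 350]))   # True
```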
While these results were encouraging, the ART Project still faced the problem of scheduling both aperiodic and periodic tasks, as well as the handling of task synchronization in a unified framework.
2.2. Scheduling Aperiodic Tasks
The basic strategy for handling aperiodic processing is to cast such processing into a periodic framework. Polling is an example of this. A polling task will check to see if an aperiodic event has occurred, perform the associated processing if it has, or if no event has occurred, do nothing until the beginning of the next polling period. The virtue of this approach is that the periodic polling task can be analyzed as a periodic task. The execution time of the task is the time associated with processing an event and the period of the task is its polling period. There are two problems with this model:
- If many events occur during a polling period, the amount of execution time associated with the periodic poller may vary widely and on occasion cause lower priority periodic tasks to miss deadlines.
- If an event occurs immediately after the polling task checks for events, the associated processing must wait an entire polling period before it commences.
A central concept introduced to solve these problems is the notion of an aperiodic server [8, 24]. An aperiodic server is a conceptual task\(^2\) that is endowed with an execution budget and a replenishment period. An aperiodic server will handle randomly arriving requests at its assigned priority (determined by the RM algorithm based on its replenishment period) as long as the budget is available. When the server’s computation budget has been depleted, requests will be executed at a background priority (i.e., a priority below any other tasks with real-time response requirements) until the server’s budget has been replenished. The execution budget bounds the execution time, thus preventing the first problem with the polling server. The aperiodic server provides on-demand service as long as it has execution time left in its budget, thus preventing the second problem.
The first algorithm using this concept to handle aperiodic tasks was known as the priority exchange algorithm [8, 23]. This algorithm was shown to have very good theoretical performance and to be fully compatible with the rate monotonic scheduling algorithm. However, our industry partners were not pleased with the runtime overhead incurred by this algorithm.
This led to the design of the second algorithm known as the deferrable server algorithm [8]. This algorithm has a very simple computation budget replenishment policy. At the beginning of every server period, the budget will be reset to the designated amount. While this algorithm is simple to implement, it turns out to be very difficult to analyze when there are multiple servers at different priority levels due to a subtle violation of a rate monotonic scheduling assumption known as the deferred execution effect [8, 15]. It is interesting to note that the deferred execution effect appears in other contexts. This effect is further discussed in Chapter 3.
This problem led to a third revision of an aperiodic server algorithm known as the sporadic server algorithm [24]. The sporadic server differs from the deferrable server algorithm in a small, but theoretically important, way: the budget is no longer replenished periodically. Rather, the allocated budget is replenished only if it is consumed. In its simplest form, a server with a budget of 10 msec and a replenishment period of 100 msec will replenish its 10 msec budget 100 msec after the budget is completely consumed. Although more sophisticated replenishment algorithms provide better performance, the important lesson is that with relatively little additional implementation complexity, the deferred execution effect was eliminated, making the sporadic server equivalent to a regular periodic task from a theoretical point of view and thus fully compatible with the RMS algorithm.
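As a rough illustration of the simplest replenishment rule described above, the following sketch manages a server budget in that style. It deliberately ignores much of the full sporadic server algorithm of [24] (e.g., per-chunk replenishment); the class name and all numbers are ours.

```python
class SimpleSporadicServer:
    """Simplest-form sporadic server: the whole budget is restored one
    replenishment period after the moment it is completely consumed."""

    def __init__(self, budget, period):
        self.capacity = budget      # e.g., 10 msec of execution budget
        self.period = period        # e.g., 100 msec replenishment period
        self.budget = budget
        self.replenish_at = None    # time the budget returns, once exhausted

    def serve(self, now, demand):
        """Serve an aperiodic request arriving at time `now`.  Returns how much
        work runs at server priority; any remainder would run at background
        priority."""
        if self.replenish_at is not None and now >= self.replenish_at:
            self.budget, self.replenish_at = self.capacity, None
        granted = min(demand, self.budget)
        self.budget -= granted
        if self.budget == 0 and self.replenish_at is None:
            # Budget exhausted when the granted work finishes at now + granted;
            # it is restored one full period later.
            self.replenish_at = now + granted + self.period
        return granted

server = SimpleSporadicServer(budget=10, period=100)
print(server.serve(0, 4), server.serve(20, 8))   # 4 then 6; budget restored at t = 126
```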
The sporadic server algorithm represents a proper balance between the conflicting needs of implementation difficulty and analyzability. Such balance is possible only with the proper interaction between theory and practice.
\(^2\)It is conceptual in the sense that it may manifest itself as an application-level task or as part of the runtime system scheduler. Nevertheless, it can be thought of as a task.
2.3. Handling Task Synchronization
To provide a reasonably comprehensive theoretical framework, task synchronization had to be treated. However, the problem of determining necessary and sufficient schedulability conditions in the presence of synchronization appeared to be rather formidable [14]. The ART project team realized that for practical purposes all that is needed is a set of sufficient conditions coupled with an effective synchronization protocol that allows a high degree of schedulability. This led to an investigation of the cause of poor schedulability when tasks synchronize and use semaphores; this investigation, in turn, led to the discovery of unbounded priority inversion [3].
Consider the following three-task example that illustrates unbounded priority inversion. The three tasks are "High," "Medium," and "Low." High and Low share a resource that is protected by a classical semaphore. Low locks the semaphore; later High preempts Low's critical section and then attempts to lock the semaphore and, of course, is prevented from locking it. While High is waiting for Low to complete, Medium preempts Low's critical section and executes. Consequently, High must wait for both Medium to finish executing and for Low to finish its critical section. The duration of blocking that is experienced by High can be arbitrarily long if there are other Medium priority tasks that also preempt Low's critical section. As a result, the duration of priority inversion is not bounded by the duration of critical sections associated with resource sharing. Together with our industry partners, we initially modified a commercial Ada runtime to investigate the effectiveness of the basic priority inheritance protocol at CMU, and later, at the SEI, the priority ceiling protocol as solutions to the unbounded priority inversion problem [12].
Although the basic priority inheritance protocol solved the problem of unbounded priority inversion, the problems of multiple blocking and mutual deadlocks persisted. Further research resulted in the priority ceiling protocol, which is a real-time synchronization protocol with two important properties: 1) freedom from mutual deadlock and 2) bounded priority inversion, namely, at most one lower priority task can block a higher priority task during each task period [5, 19].
Two central ideas are behind the design of this protocol. First is the concept of priority inheritance: when a task \( \tau \) blocks the execution of higher priority tasks, task \( \tau \) executes at the highest priority level of all the tasks blocked by \( \tau \). Second, we must guarantee that a critical section is allowed to be entered only if the critical section will always execute at a priority level that is higher than the (inherited) priority levels of any preempted critical sections. It was shown [19] that following this rule for entering critical sections leads to the two desired properties. To achieve this, we define the priority ceiling of a binary semaphore \( S \) to be the highest priority of all tasks that may lock \( S \). When a task \( \tau \) attempts to execute one of its critical sections, it will be suspended unless its priority is higher than the priority ceilings of all semaphores currently locked by tasks other than \( \tau \). If task \( \tau \) is unable to enter its critical section for this reason, the task that holds the lock on the semaphore with the highest priority ceiling is said to be blocking \( \tau \) and hence inherits the priority of \( \tau \). As long as a task \( \tau \) is not attempting to enter one of its critical sections, it will preempt any task that has a lower priority.
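The locking rule itself is compact enough to sketch. The fragment below is only an illustration of the rule stated above, under an assumed data model (a larger number means a higher priority, `locked` maps each currently locked semaphore to its holder, and `ceiling` gives each semaphore's priority ceiling); it is not an implementation from the report.

```python
def may_lock(task, task_priority, locked, ceiling):
    """Priority ceiling rule: `task` may enter a critical section only if its
    priority is strictly higher than the ceilings of all semaphores currently
    locked by other tasks.  Otherwise it is blocked by the holder of the
    highest-ceiling semaphore, which then inherits task_priority."""
    others = [(ceiling[s], holder) for s, holder in locked.items() if holder != task]
    if not others or task_priority > max(c for c, _ in others):
        return True, None
    _, blocker = max(others)
    return False, blocker

# Hypothetical scenario: Low holds S1 (ceiling equal to High's priority),
# and High now asks to enter a critical section guarded by the unlocked S2.
locked = {"S1": "Low"}
ceiling = {"S1": 3, "S2": 3}
print(may_lock("High", 3, locked, ceiling))   # (False, 'Low'): Low inherits priority 3
```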
Associated with these results is a new schedulability test (also referred to as the *extended rate monotonic schedulability test*) that accounts for the blocking that may be experienced by each task. Let $B_i$ be the worst-case total amount of blocking that task $\tau_i$ can incur during any period. The set of tasks will be schedulable if the following set of inequalities are satisfied:
$$\frac{C_1}{T_1} + \frac{B_1}{T_1} \leq 1(2^{1/1} - 1) \text{ and}$$
$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \frac{B_2}{T_2} \leq 2(2^{1/2} - 1) \text{ and}$$
$$\ldots$$
$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \cdots + \frac{C_k}{T_k} + \frac{B_k}{T_k} \leq k(2^{1/k} - 1) \text{ and}$$
$$\ldots$$
$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \cdots + \frac{C_n}{T_n} \leq n(2^{1/n} - 1)$$
This set of schedulability inequalities can also be viewed as a mathematical model that predicts the schedulability of a set of tasks. Each task is modeled with its own inequality and there are terms in the inequality that account for all factors that impact that task's schedulability. This idea is discussed further in Chapter 3. The priority ceiling protocol was also extended to address the multi-processor issues [15, 16, 21].
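Translating these inequalities into code is direct. The sketch below assumes the tasks are already in rate monotonic order and that `B[i]` is task i's worst-case blocking per period (zero for the lowest priority task, which matches the last inequality above having no blocking term); the numbers are hypothetical.

```python
def extended_rm_test(C, T, B):
    """Extended rate monotonic schedulability test with blocking terms:
    one inequality per task, checked against the bound k(2^(1/k) - 1)."""
    n = len(C)
    for k in range(1, n + 1):
        util = sum(C[i] / T[i] for i in range(k))
        blocking = B[k - 1] / T[k - 1]
        bound = k * (2 ** (1.0 / k) - 1)
        if util + blocking > bound:
            return False
    return True

C = [20.0, 40.0, 100.0]    # execution times (msec)
T = [100.0, 150.0, 350.0]  # periods (msec)
B = [10.0, 10.0, 0.0]      # worst-case blocking per period (msec)
print(extended_rm_test(C, T, B))   # True: every per-task inequality holds
```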
### 2.4. The Requirements of the Mode Change Protocol
Potential users of the rate monotonic algorithm were uncomfortable with the notion of a fixed task set with static priorities. Their point was that in certain real-time applications, the set of tasks in the system, as well as the characteristics of the tasks, change during system execution. Specifically, the system moves from one mode of execution to another as its mission progresses. A change in mode can be thought of as a deletion of some tasks and the addition of new tasks, or changes in the parameters of certain tasks (e.g., increasing the sampling rate to obtain a more accurate result). Our dialogue with practitioners made it clear that the existing body of rate monotonic theory needed to be expanded to include this requirement. This precipitated the development of the mode change protocol.
At first sight, it appeared that the major design goal was to achieve near optimal performance in terms of minimal mode change delay.\(^3\) However, having surveyed the complaints about the difficulties associated with maintaining the software of a cyclical executive with embedded mode change operations, we realized that performance was only part of the requirement. To be useful, rate monotonic theory must address software engineering issues as well. The requirements include:
- **Compatibility**: The addition of the mode change protocol must be compatible with existing RM scheduling algorithms, e.g., the preservation of the two important properties of the priority ceiling protocol: the freedom from mutual deadlocks and the blocked-at-most-once (by lower priority tasks) property.
- **Maintainability**: To facilitate system maintenance, the mode change protocol must allow the addition of new tasks without adversely affecting tasks that are written to execute in more than one mode. Tasks must be able to meet deadlines before, during, and after the mode change. In addition, a task cannot be deleted until it completes its current transaction and leaves the system in a consistent state.
- **Performance**: The mode change protocol for rate monotonic scheduling should perform at least as fast as mode changes in cyclical executives.

\(^3\)The elapsed time between the initiation of a mode change command and the starting time of the new mode.
Once the mode change requirements were clear, designing the protocol was straightforward.
1. **Compatibility**: The addition and/or the deletion of tasks in a mode change may lead to the modification of the priority ceilings of some semaphores across the mode change. Upon the initiation of a mode change (a simplified code sketch of these ceiling adjustments appears after this list):
- For each unlocked semaphore $S$ whose priority ceiling needs to be raised, $S$'s ceiling is raised immediately and indivisibly.
- For each locked semaphore $S$ whose priority ceiling needs to be raised, $S$'s priority ceiling is raised immediately and indivisibly after $S$ is unlocked.
- For each semaphore $S$ whose priority ceiling needs to be lowered, $S$'s priority ceiling is lowered when all the tasks which may lock $S$, and which have priorities greater than the new priority ceiling of $S$, are deleted.
- If task $\tau$'s priority is higher than the priority ceilings of locked semaphores $S_1, ..., S_k$ which it may lock, the priority ceilings of $S_1, ..., S_k$ must be first raised before adding task $\tau$.
2. **Maintainability and Performance**: A task $\tau$, which needs to be deleted, can be deleted immediately upon the initiation of a mode change if $\tau$ has not yet started its execution in its current period. In addition, the spare processor capacity due to $\tau$'s deletion may be reclaimed immediately by new tasks. On the other hand, if $\tau$ has started execution, $\tau$ can be deleted after the end of its execution and before its next initiation time. In this case, the spare processor capacity due to $\tau$'s deletion cannot become effective until the deleted task's next initiation time. In both cases, a task can be added into the system only if sufficient spare processor capacity exists.
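To make the ceiling adjustments of step 1 concrete, here is a much simplified sketch. The data model (`ceiling`, `new_ceiling`, `locked`) and the idea of returning deferred actions for the caller to apply later are our own simplifications, not part of the protocol specification in [18].

```python
def start_mode_change(ceiling, new_ceiling, locked):
    """Apply the immediate ceiling adjustments and record the deferred ones:
    raises on unlocked semaphores happen at once, raises on locked semaphores
    wait for the unlock, and lowering waits until the higher priority users
    of the semaphore have been deleted."""
    deferred = []
    for s, target in new_ceiling.items():
        if target > ceiling[s]:
            if s in locked:
                deferred.append(("raise_after_unlock", s, target))
            else:
                ceiling[s] = target          # raised immediately and indivisibly
        elif target < ceiling[s]:
            deferred.append(("lower_after_deletions", s, target))
    return deferred

ceiling = {"S1": 5, "S2": 2}
print(start_mode_change(ceiling, {"S1": 7, "S2": 1}, locked={"S1"}))
# [('raise_after_unlock', 'S1', 7), ('lower_after_deletions', 'S2', 1)]
```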
Sha et al. [18] showed that the mode change protocol described above is compatible with the priority ceiling protocol in the sense that it preserves the properties of freedom from mutual deadlock and blocked-at-most-once. In addition, under this protocol tasks that execute in more than one mode can always meet their deadlines as long as all the modes are schedulable [18]. Since a task is not deleted until it completes its current transaction, the consistency of the system state will not be adversely affected.
Finally, Sha et al. [18] showed that the mode change delay is bounded by the larger of two numbers: the longest period of all the tasks to be deleted and the shortest period associated with the semaphore that has the lowest priority ceiling and needs to be modified. This is generally much shorter and will never be longer than the least common multiple (LCM) of all the periods. In the cyclical executive approach, the major cycle is the LCM of all the periods and a mode change will not be initiated until the current major cycle completes. In addition, the mode change protocol also provides the flexibility of adding and executing the most urgent task in the new mode before the mode change is completed.
The development of the mode change protocol illustrates how the interaction between the real-time systems development and research communities guided the extension of rate monotonic theory.
3. Analysis of Real-Time Paradigms
An important goal of the RMARTS Project is to ensure that the principles of rate monotonic theory as a whole provide a foundation for a solid engineering method that is applicable to a wide range of realistic real-time problems. One mechanism for ensuring the robustness of the theory is to perform case studies. In this vein, the concurrency architecture of a generic avionics system [13] was designed using the principles of rate monotonic theory. Additionally, an inertial navigation system simulator written at the SEI [1] is an example of an existing system that was subjected to rate monotonic analysis and improved as a consequence.
Another mechanism for ensuring the robustness of the theory is to apply it to common design paradigms that are pervasive in real-time systems. Klein and Ralya examined various input/output (I/O) paradigms to explore how the principles of rate monotonic scheduling can be applied to I/O interfaces to predict the timing behavior of various design alternatives [7]. Two main topics they explored [7] will be reviewed here:
- Reasoning about time when the system design does not appear to conform to the premises of rate monotonic scheduling.
- Developing mathematical models of schedulability.
3.1. Reasoning About Time
On the surface, it appears that many important problems do not conform to the premises of rate monotonic theory. The basic theory [11] gives us a rule for assigning priorities to periodic processes and a formula for determining whether a set of periodic processes will meet all of their deadlines. This result is theoretically interesting but its basic assumptions are much too restrictive. The set of assumptions that are prerequisites for this result are (see [1]):
- Task switching is instantaneous.
- Tasks account for all execution time (i.e., the operating system does not usurp the CPU to perform functions such as time management, memory management, or I/O).
- Task interactions are not allowed.
- Tasks become ready to execute precisely at the beginning of their periods and relinquish the CPU only when execution is complete.
- Task deadlines are always at the start of the next period.
- Tasks with shorter periods are assigned higher priorities; the criticality of tasks is not considered.
- Task execution is always consistent with its rate monotonic priority: a lower priority task never executes when a higher priority task is ready to execute.
Notice that under these assumptions, only higher priority tasks can affect the schedulability of a particular task. Higher priority tasks delay a lower priority task's completion time by preempting it. Yet we know there are many circumstances, especially when considering I/O services, where these assumptions are violated. For example:
- Interrupts (periodic or aperiodic) generally interrupt task execution, independent of the period of the interrupt or the importance of the event that caused the interrupt. Interrupts are also used to signal the completion of I/O for direct memory access (DMA) devices.
- Moreover, when a DMA device is used, tasks may relinquish the CPU for the duration of the data movement, allowing lower priority tasks to execute. This will, of course, result in a task switch to the lower priority task, which requires saving the current task's state and restoring the state of the task that will be executing.
- It is not uncommon that portions of operating system execution are non-preemptable. In particular, it may be the case that portions of an I/O service may be non-preemptable.
It appears that the above mentioned aspects of performing I/O do not conform to the fundamental assumptions of rate monotonic scheduling and thus are not amenable to rate monotonic analysis. To show how rate monotonic analysis can be used to model the aforementioned seemingly non-conforming aspects of I/O, we will examine:
- Non-zero task switching time.
- Task suspension during I/O.
- Tasks executing at non-rate monotonic priorities.
From [7] we know that task switching can be modeled by adding extra execution time to tasks. More specifically, let \( C_i \) represent the execution time of task \( \tau_i \), and let \( C_s \) denote the worst-case context switching time between tasks. Then \( C'_i = C_i + 2C_s \) is the new execution time that accounts for context switching. Thus, context switching time is easily included in the basic rate monotonic schedulability test.
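For instance, the adjustment can be folded in before running the basic test (reusing the hypothetical `rm_utilization_test` sketch from Chapter 2); the figures below are likewise hypothetical.

```python
C = [20.0, 40.0, 100.0]     # execution times (msec)
T = [100.0, 150.0, 350.0]   # periods (msec)
Cs = 0.2                    # worst-case context switch time (msec)

# Charge two context switches to every task, then apply the basic test.
C_adj = [c + 2 * Cs for c in C]
print(rm_utilization_test(C_adj, T))   # True: utilization rises only to about 0.76
```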
Task I/O time refers to the time interval when a task relinquishes the CPU to lower priority tasks. Clearly, this I/O time (or interval of suspension time) must be accounted for when considering a task's schedulability. A task's completion time is postponed by the duration of the I/O suspension. Notice, however, that this period of suspension is not execution time for the suspending task and thus is not preemption time for lower priority tasks.
On the surface, it appears as if lower priority tasks will benefit only from I/O-related suspension of higher priority tasks. This is not totally true. A subtle effect of task suspension is the jitter penalty (also known as the deferred execution effect), which is discussed in [7, 15, 21]. This is an effect that I/O suspension time for task \( \tau_i \) has on lower priority tasks. Intuitively, I/O suspension has the potential to cause a "bunching of execution time."
Imagine the case where the highest priority task has no suspension time. It commences execution at the beginning of every period and there is always an interval of time between the end of one interval of execution and the beginning of the next. This pattern of execution is built into the derivation of the basic rate-monotonic inequality. Now imagine if this same task is allowed to suspend and spend most of its execution time at the end of one period, followed by a period in which it spends all of its execution at the beginning of the period. In this case, there is a contiguous "bunch" of execution time. Lower priority tasks will see an atypical amount of preemption time during this "bunching." Also, this "bunching" is not built into the basic rate monotonic inequality. However, this situation can be accommodated by adding an extra term in the inequalities associated with lower priority tasks. Alternatively, Sha et al. discuss a technique for eliminating the jitter penalty completely by eliminating the variability in a task's execution [21].
Another factor that affects the schedulability of a task is priority inversion. Priority inversion was first discussed in [3] in the context of task synchronization, where the classic example of so called unbounded priority inversion was described. This synchronization-induced priority inversion motivated the creation of a class of priority inheritance protocols that allows us to bound and predict the effects of synchronization-induced priority inversion (briefly discussed in Chapter 2).
However, there are other sources of priority inversion. This becomes more apparent when we consider the definition of priority inversion: delay in the execution of higher priority tasks caused by the execution of lower priority tasks. Actually, we are concerned with priority inversion relative to a rate monotonic priority assignment. Intervals of non-preemptability and interrupts are sources of priority inversion. When a higher priority task is prevented from preempting a lower priority task, the higher priority task's execution is delayed due to the execution of a lower priority task. Interrupts in general preempt task processing independent of event arrival rate and thus clearly have an impact on the ability of other tasks to meet their deadlines. Once again, additional terms can be added to the schedulability inequalities to account for priority inversion.
3.2. Schedulability Models
The preceding discussion merely offers a sample of how rate monotonic analysis allows us to reason about the timing behavior of a system. In fact, we have found that the principles of rate monotonic scheduling theory provide analytical mechanisms for understanding and predicting the execution timing behavior of many real-time requirements and designs. However, when various input/output paradigms are viewed in the context of a larger system, it becomes apparent that timing complexity grows quickly. It is not hard to imagine a system comprised of many tasks that share data and devices, where the characteristics of the devices vary. The question is, how do we build a model of a system's schedulability in a realistically complex context?
We refer to a mathematical model that describes the schedulability of a system as a schedulability model. A schedulability model is basically a set of rate monotonic inequalities (i.e., the extended schedulability test) that captures the schedulability-related characteristics of a set of tasks. As described in [1], there is generally one inequality for each task. Each inequality has terms that describe or model various factors that affect the ability of a task to meet its deadline. For example, there are terms that account for preemption effects due to higher priority tasks; a term is needed that accounts for the execution time of the task itself; there may be terms to account for blocking due to resource sharing or priority inversion due to interrupts; and terms may be needed to account for schedulability penalties due to the jitter effect.
An incremental approach for constructing schedulability models is suggested by [7]. The approach basically involves striving to answer two fundamental questions for each task \( \tau_i \):
1. How do other tasks affect the schedulability of \( \tau_i \)?
2. How does task \( \tau_i \) affect the schedulability of other tasks?
In effect, answering these two questions is like specifying a schedulability interface for process \( \tau_i \): importing the information needed to determine its schedulability and exporting the information needed to determine the schedulability of other processes. This approach facilitates a separation of concerns, allowing us to focus our attention on a single task as different aspects of its execution are explored. It is not hard to imagine extending this idea of a schedulability interface to collections of tasks that represent common design paradigms.\(^4\) The person responsible for implementing the paradigm would need to determine how other tasks in the system affect the schedulability of the paradigm, and it would be incumbent upon this person to offer the same information to others. These ideas were illustrated in [7], where schedulability models were constructed for several variations of synchronous and asynchronous input/output paradigms. The analysis at times confirmed intuition and common practice, and at times offered unexpected insights for reasoning about schedulability in this context.
\(^4\)For example, the client-server model for sharing data between tasks [1].
4. Systems Issues
The successful use of RMS theory in a large scale system is an engineering endeavor that is constrained by many logistical issues in system development. One constraint is the use of standards. For reasons of economy, it is important to use open standards. An open standard is, however, often a compromise between many conflicting needs, and provides a number of primitive operations which usually allow a system configuration to be optimized for certain applications while maintaining inter-operability. The RMARTS Project has been heavily involved with an emerging set of standards including IEEE Futurebus+, POSIX real-time extension and Ada 9x.
The RMS theory belongs to the class of priority scheduling theory. Hence, it is important to ensure that primitives for priority scheduling are properly embedded in the standards. In this section, we will review some of the design considerations in the context of IEEE 896 (Futurebus+).
4.1. Overview of Futurebus+
The Futurebus+ is a specification for a scalable backplane bus architecture that can be configured to be 32, 64, 128 or 256 bits wide. The Futurebus+ specification is a part of the IEEE 896 family of standards. The Futurebus+ specification has become a US Navy standard and has also gained the support of the VMEbus International Trade Association and other major industry concerns. This government and industry backing promises to make the Futurebus+ a popular candidate for high-performance and embedded real-time systems of the 1990s. The important features of Futurebus+ include:
- A true open standard in the sense that it is independent of any proprietary technology or processor architecture.
- A technology-independent asynchronous bus transfer protocol whose speed will be limited only by physical laws and not by existing technology. Transmission line analysis [4] indicates that Futurebus+ can realize 100M transfers of 32, 64, 128, or 256 bits of data per second.
- Fully distributed connection, split transaction protocols and a distributed arbiter option that avoid single point failures. Parity is used on both the data and control signals. Support is available for dual bus configuration for fault tolerant applications. In addition, Futurebus+ supports on-line maintenance involving live insertion/removal of modules without turning off the system.
- Direct support for shared memory systems based on snoopy cache. Both strong and weak sequential consistency are supported.
- Support for real-time mission critical computation by providing a sufficiently large number of priorities for arbitration. In addition, there is a consistent treatment of priorities throughout the arbitration, message passing and DMA protocols. Support is available for implementing distributed clock synchronization protocols.
From the viewpoint of real-time computing, the Futurebus+ is perhaps the first major national standard that provides extensive support for priority-driven preemptive real-time scheduling. In addition, the support for distributed clock synchronization protocols provides users with accurate and reliable timing information. In summary, Futurebus+ provides strong support for the use of priority scheduling algorithms that can provide analytical performance evaluation such as the rate monotonic theory [11, 10, 15, 20]. As a result, the Futurebus+ architecture facilitates the development of real-time systems whose timing behavior can be analyzed and predicted.
In the following, we provide an overview on the design considerations. Readers interested in a more comprehensive overview of this subject may refer to [22]. Those who are interested in the details of Futurebus+ are referred to the three volumes of Futurebus+ documents. IEEE 896.1 defines the logical layer that is the common denominator of Futurebus+ systems. IEEE 896.2 defines the physical layer which covers materials such as live insertion, node management, and profiles. IEEE 896.3 is the system configuration manual which provides guidelines (not requirements) for the use of Futurebus+ for real-time systems, fault-tolerant systems, or secure computing environments.
4.2. The Design Space for Real-Time Computing Support
It is important to realize that the design space is highly constrained not only by technical considerations but also by cost and management considerations. The final specification is determined by a consensus process among representatives from many industrial concerns. The constraints include:
- **Pin count**: The pin count for the bus must be tightly controlled. Each additional pin increases power requirements in addition to increasing weight and imposing connector constraints. Many of these costs are recurring.
- **Arbitration logic complexity**: The complexity of bus arbitration driver logic is low in the context of modern VLSI technology. The addition of simple logic such as multiplexing arbitration lines for dual or multiple functions is not a recurring cost once it has been designed. The major constraint here is managerial. The development process must converge to a standard that would meet the manufacturers' expected schedules. This implies that a good idea that comes too late is of little value.
- **Arbitration speed**: Priority levels can be increased by multiplexing the same priority pins over two or more cycles. While such straightforward multiplexing of the arbitration lines will increase the priority levels without adding pins, it will also double the arbitration time for even the highest priority request.
- **Bus transaction complexity**: While specialized bus transaction protocols can be introduced for functions useful for real-time systems (such as clock synchronization), each additional transaction type can add to the size and complexity of the bus interface chips. In other words, whenever possible, existing transaction types should be used to achieve real-time functions like clock synchronization.
To summarize, support mechanisms for real-time systems must not add non-negligible overhead to either the performance or the manufacturing cost of a bus that is designed primarily for general data processing applications. However, these constraints do not have an equal impact on different support features for real-time computing. For example, while they heavily constrain the design space of arbitration protocols, they are essentially independent of the design of real-time cache schemes.\(^5\)
### 4.3. The Number of Priority Levels Required
Ideally, there should be as many priority levels as are required by the scheduling algorithm, and a module must use the assigned priority of the given bus transaction to contend for the bus. For example, under the rate-monotonic algorithm [11], if there are 10 periodic tasks each with a different period, each of these tasks should be assigned a priority based on its period. The bus transactions executed by each of these tasks should reflect the task priority. From the viewpoint of backplane design, only a small number of pins should be devoted to arbitration and the degree of multiplexing for arbitration speed should be limited.
As a result, we need to find a way that can use a smaller number of priority levels than the ideal number for the rate monotonic algorithm. When there is a smaller number of priority levels available compared with the number needed by the priority scheduling algorithm, the schedulability of a resource is lowered [10]. For example, suppose that we have two tasks \(\tau_1\) and \(\tau_2\). Task \(\tau_1\) has 1 msec execution and a period 100 msec while task \(\tau_2\) has 100 msec execution time and a period of 200 msec. If we have only a single priority to be shared by these two tasks, it is possible that task \(\tau_2\) may take precedence over task \(\tau_1\) since ties are broken arbitrarily. As a result, task \(\tau_1\) will miss its deadline even though the total processor utilization is only 51%.
Fortunately, the loss of schedulability due to a lack of a sufficient number of priority levels can be reduced by employing a constant ratio priority grid for priority assignments. Consider a range of the task periods such as 1 msec to 100 seconds. A constant-ratio grid divides this range into segments such that the ratio between every pair of adjacent points is the same. An example of a constant ratio priority grid is \(\{L_1 = 1\text{ msec, } L_2 = 2\text{ msec, } L_3 = 4\text{ msec, }\ldots\}\) where there is a constant ratio of 2 between pairs of adjacent points in the grid.
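The assignment itself is easy to express. The sketch below maps a period onto its grid interval (interval 0 being the highest priority); the helper name and parameters are ours and follow the 1 msec, ratio-2 example above.

```python
import math

def grid_level(period, shortest=1.0, ratio=2.0):
    """Constant-ratio priority grid: all periods in
    [shortest * ratio**k, shortest * ratio**(k + 1)) share level k,
    and a smaller level number means a higher priority."""
    return int(math.log(period / shortest, ratio))

for p in (1.5, 3.0, 900.0):        # periods in msec
    print(p, grid_level(p))        # levels 0, 1, and 9
```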
With a constant ratio grid, a distinct priority is assigned to each interval in the grid. For example, all tasks with periods between 1 to 2 msec will be assigned the highest priority, all tasks with periods between 2 to 4 msec will have the second highest priority and so on when using the rate-monotonic algorithm. It has been shown [10] that a constant ratio priority grid is effective only if the grid ratio is kept smaller than 2. For the rate-monotonic algorithm, the percentage loss in worstcase schedulability due to the imperfect priority representation can be computed by the following formula [10]:
\[\text{Percentage Loss} = \frac{\text{Grid Ratio} - 1}{\text{Grid Ratio}}\]
\(^5\)Readers who are interested in using cache for real-time applications are referred to [6].
\[
\text{Loss} = 1 - \frac{\ln(2/r) + 1 - 1/r}{\ln 2}
\]
where \( r \) is the grid ratio.
---
**Figure 4-1: Schedulability Loss vs. The Number of Priority Bits**
For example, suppose that the shortest and longest periods in the system are 1 msec and 100,000 msec respectively. In addition, we have 256 priority levels. Let \( L_0 = 1 \) msec and \( L_{256} = 100,000 \) msec respectively. We have \((L_1/L_0) = (L_2/L_1) = \ldots = (L_{256}/L_{255}) = r\). That is, \( r = (L_{256}/L_0)^{1/256} = 1.046\). The resulting schedulability loss is \((1 - \frac{\ln(2/r) + 1 - 1/r}{\ln 2}) = 0.0014\), which is small.
Figure 4-1 plots the schedulability loss as a function of priority bits under the assumption that the ratio of the longest period to the shortest period in the system is 100,000.\(^6\) As can be seen, the schedulability loss is negligible with 8 priority bits. In other words, the worst case obtained with 8 priority bits is close to that obtained with an unlimited number of priority levels. As a result, Futurebus+ arbiters have real-time options that support 8 priority bits for arbitration.
---
\(^6\)The ratio of 100,000 was chosen here for illustration purposes only. The equation for schedulability loss indicates that 8 priority bits (256 priority levels) are effective for a wide range of ratios.
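The calculation behind Figure 4-1 can be reproduced in a few lines; the helper name is ours, and the formula is the one given above (meaningful only while the resulting grid ratio stays below 2).

```python
import math

def schedulability_loss(priority_bits, period_ratio=100_000):
    """Worst-case schedulability loss for a constant-ratio grid with
    2**priority_bits levels spanning a longest-to-shortest period ratio of
    `period_ratio`, using the formula from [10] (valid for grid ratio r < 2)."""
    r = period_ratio ** (1.0 / 2 ** priority_bits)
    return 1 - (math.log(2 / r) + 1 - 1 / r) / math.log(2)

for bits in range(5, 9):
    print(bits, round(schedulability_loss(bits), 4))
# Loss falls from roughly 0.08 at 5 priority bits to about 0.0014 at 8 bits.
```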
4.4. Overview of Futurebus+ Arbitration
The Futurebus+ supports up to 31 modules. Each module with a request contends during an arbitration cycle, and the winner of an arbitration becomes bus master for one transaction. Futurebus+ designers can choose between one of two possible arbiters:
- A distributed arbiter scheme: as the name implies, the arbitration of multiple requests happens in a distributed fashion in this model. Its chief advantage is that its distributed nature tends to make it fault-tolerant. However, the arbitration procedure is relatively slow, because the request and grant process has to be resolved over the backplane wired-or logic bit by bit.
- A central arbiter scheme: in this scheme, all requests for bus access are transmitted to the central arbiter, which resolves the contention and grants the bus to one module at a time. The obvious disadvantage is that the central arbiter could cause single point failure unless a redundant arbiter is employed. On the other hand, fault tolerance is not a major concern in workstation applications, and a central arbiter operates faster since there is no contention over the dedicated request and grant lines for each module.
The difference between the two is performance vs. reliability. One can, however, combine the two schemes to achieve both performance and reliability. For example, Texas Instruments' Futurebus+ Arbitration Controller chip set, TFB2010, allows one to first operate in centralized arbitration mode after initialization for performance. If the central arbiter fails, the system can switch into the slower but more robust distributed arbitration mode.
5. Summary
The essential goal of the Real-Time Scheduling in Ada Project at the SEI is to catalyze an improvement in the state of the practice for real-time systems engineering. Our goals naturally include contributing to the advancement of the state-of-the-art, but we are equally concerned with advancing the state-of-the-practice. While research is central to changing the state-of-the-practice, research alone is not sufficient. We have tried to illustrate through several examples the importance of the interplay between research and practice, which at times forces tradeoffs between solving theoretically interesting problems versus producing practicable results.
The first example illustrated this interplay by examining the rationale for selecting a research agenda. The second example illustrated several issues concerning the use of the theory in a potentially complex but realistic setting. The third example exposed the problem of having to consider the current and future technology infrastructure when attempting to push a technology from research to practice. In summary, our experience indicates that technology transition considerations should be embedded in the process of technology development from the start, rather than as an afterthought.
References
Project-Team Jacquard
Weaving of Software Components
Futurs
# Table of contents

1. **Team**
2. **Overall Objectives**
2.1.1. J.M. Jacquard and the weaving machines
3. **Scientific Foundations**
3.1. Weaving of Software Components
3.2. OpenCCM
3.2.1. Open Middleware for the CCM
3.2.2. Open Containers
3.2.3. Open Environment
3.3. Aspects Oriented Design of Dynamic Components Assemblies
3.3.1. Early aspects
3.3.2. Aspects at design-time
3.3.3. Aspects at run-time
3.4. Functional Aspects for Components Applications
4. **Application Domains**
5. **Software**
5.1. Apollon
5.2. Fractal Explorer
5.3. GoTM
5.4. OpenCCM
5.5. Java Aspect Components
5.6. UML Profile
6. **New Results**
6.1. Open Middleware for the CCM
6.1.1. Extensible Containers
6.1.2. Middleware Infrastructures to Deploy Distributed Component-Based Applications
6.1.3. Component-Based Software Framework for Building Transaction Services
6.1.4. Middleware Benchmarking
6.2. Aspect Oriented design of dynamic components assemblies
6.2.1. Aspects at design-time
6.2.2. Architectures and points of view
6.2.3. Aspects and Components for Software Architectures
6.3. Functional Aspect and MDE
6.3.1. Functional Aspects
6.3.2. Model Driven Engineering
7. **Contracts and Grants with Industry**
7.1. RNTL ACCORD
7.2. France Telecom
7.3. NorSys
8. **Other Grants and Activities**
8.1. Regional Initiatives
8.1.1. IRCICA
8.1.2. MOSAIQUES
8.2. National Initiatives
8.2.1. AS MDA
8.3. European Initiatives
8.3.1. ObjectWeb
8.3.2. IST COACH
8.3.3. ITEA OSMOSE
8.3.4. AOSD-Europe
8.4. International Initiative
8.4.1. OMG
8.4.2. AOP Alliance
9. Dissemination
9.1. Scientific community animation
9.1.1. Examination Committees
9.1.2. Journals, Conferences, Workshop
9.1.3. Miscellaneous
9.2. Teaching
9.3. Miscellaneous
10. Bibliography
1. Team
Jacquard is a joint project between INRIA, CNRS and Université des Sciences et Technologies de Lille (USTL), via the Computer Science Laboratory of Lille : LIFL (UMR 8022).
Head of project-team
Jean-Marc Geib [Professor, USTL]
Administrative Assistant
Axelle Magnier [Assistant project INRIA since September 1st 2004]
Staff member INRIA
Philippe Merle [Research associate]
Renaud Pawlak [Research associate since October 1st 2004]
Lionel Seinturier [Research associate (secondment INRIA)]
Staff member LIFL
Olivier Caron [Associate Professor Polytech’Lille]
Bernard Carré [Associate Professor Polytech’Lille]
Laurence Duchien [Professor USTL]
Anne-Françoise Le Meur [Associate Professor USTL since September 1st 2004]
Raphaël Marvie [Associate Professor USTL]
Gilles Vanwormhoudt [Associate Professor Telecom Lille I]
Ph. D. student
Olivier Barais [MESR grant]
Dolorès Diaz [NorSys CIFRE grant]
Frédéric Loiret [CEA grant since October 1st 2004]
Alexis Muller [Assistant professor, IUT, USTL]
Nicolas Pessemier [France Télécom grant since October 1st 2004]
Romain Rouvoy [INRIA-Région grant]
Mathieu Valdet [THALES CIFRE grant until December 1st 2004]
Post-doctoral fellow
Patricia Serrano-Alvadaro [Post-doctoral fellow, 2004-2005 since December 1st 2004]
Project technical staff
Pierre Carpentier [Project staff - IST COACH until June 1st 2004]
Christophe Contreras [Project staff - IST COACH - ITEA OSMOSE]
Christophe Demarey [Project staff - ITEA OSMOSE until September 1st 2004]
Cédric Dumoulin [Project staff - ITEA OSMOSE since October 1st 2004]
Areski Flissi [Technical staff CNRS]
Fabien Hameau [Project staff - IST COACH until April 1st 2004]
Jérôme Moroy [Project staff - ITEA OSMOSE]
Tran-Anh Missi [Project staff - EDF until December 1st 2004]
2. Overall Objectives
Keywords: Aspect-Oriented Programming (AOP), Component Models, Component Weaving, Component-Based Adaptive Middleware (CBAM), Integrated Tools for Production and Exploitation of Software Components, Model-Driven Software Engineering (MDSE), Run-time Containers, Separation of Concerns (SoC).
The Jacquard project focuses on the problem of designing complex distributed applications, i.e., those composed of numerous cooperative and distributed software components, which are constrained by various requirements, such as persistency, security and fault tolerance. We want to investigate the ability of software
engineers to produce new component-oriented platforms and new methodological and technical approaches to design and exploit these applications. In particular, we explore the use of component models, separation of concerns and weaving in the different phases of an application’s life cycle (i.e., modelling, design, assembling, deployment, and execution). Our goal is to produce fully functional platforms and tools. Finally, we are members of standardization organizations (OMG) and the open source software world (ObjectWeb).
2.1.1. J.M. Jacquard and the weaving machines
One of the first historical steps towards programming appeared in 1725 on a weaving machine. The Frenchman Basile Bouchon, from Lyon, first gave instructions to a weaving machine using perforated paper. His assistant, Mr Falcon, replaced the fragile paper with more robust perforated cards. Later, Mr Vaucanson replaced the cards with a metallic cylinder and a complex hydraulic system, which gave the machine a cyclic flow of instructions: a program!
But history remembers Joseph-Marie Jacquard, who created and commercialised the first automatic weaving machine at the beginning of the 19th century. The machine was so precise that Joseph-Marie Jacquard designed a program that wove his own face into a fabric. Jacquard's innovations, with perforated cards as the support for programs, greatly contributed to the first steps of computer science. The idea of independent programs for a programmable machine was born!
3. Scientific Foundations
3.1. Weaving of Software Components
The software components challenge requires new models and platforms to allow large-scale interoperability of components for designing complex distributed applications. Some models already exist: Enterprise Java Beans by Sun Microsystems, .Net by Microsoft, and the CORBA Component Model in the OMG's CORBA 3 standard [71]. These models and platforms are clearly not satisfactory because they lack functional completeness and interoperability. Moreover, the industrial proposals mostly address technical problems in capturing the software component notion, but largely ignore the need to manipulate the models of components and applications independently of the technical aspects. This point has recently been tackled by the OMG with its Model Driven Architecture (MDA) initiative [69][73]. We believe that these points (component models, component-oriented platforms, and model-driven engineering) lead to new research problems, with the goal of producing a better integrated product line, from analysis to exploitation, for component-based applications.
Jacquard members have extensive research experience in two computer science domains related to the goals of the project: Jean-Marc Geib, Philippe Merle and Raphaël Marvie have made important contributions in the area of distributed object-based platforms [51], and Laurence Duchien, Bernard Carré and Olivier Caron on the specification and use of separation of concerns for complex applications. For example, we can cite the contributions to the OMG standardization work with the CorbaScript language [67] (proposed in response to the Scripting Language for CORBA RFP, and accepted as the CORBA Scripting Language chapter of CORBA 3 [60]) and with the CCM (CORBA Component Model) chapter, for which we lead the response group and the revision task force. Other examples are the JAC (Java Aspect Components) platform, one of the leading platforms for dynamic weaving of aspects [72], and the View approach for structuring the design of information systems [76].
We aim to combine these experiences to design and produce an ambitious new platform for component-based complex applications, with new methodological and technical means for structuring the large set of loosely related problems that arise in supporting these applications. Models, platforms and applications have to benefit from new open middleware using separation of concerns and weaving. Our contributions aim to understand how better-structured models and platforms can yield better software for complex applications.
For the next four years, the project's goals are:
- to produce a full platform for the CCM model. This platform, called OpenCCM, has to contribute to the OMG standardization work. Moreover it will provide new adaptable containers allowing the weaving of system
aspects, dynamically following the application requirements. It will also provide an integrated environment to manipulate, deploy and exploit assemblies of components.
- to define a complete design and technical environment for assembling components and aspects, via a dedicated modelling tool for composition and a dynamic component- and aspect-oriented platform that will be the next step of our aspect platform.
3.2. **OpenCCM**
This part of the project deals with the design and production of new tools for component-based platforms. This work was initiated in the Computer Science Laboratory of Lille (LIFL) and is now one of the projects of the ObjectWeb Consortium [53] under the name OpenCCM. Our goal is a full platform for the OMG's CORBA Component Model (CCM). We want to fully capture all the aspects of this standard and contribute to it. Our ambition is to produce the first reference CCM platform in open source form. OpenCCM is already available as LGPL software at [http://openccm.objectweb.org](http://openccm.objectweb.org). Beyond this production, we aim to investigate three research topics: opening the platform to allow extensibility and adaptability, opening the run-time containers to weave non-functional aspects, and giving the capability to freely assemble components in an open environment. These three points are detailed in the next sections. This work is related to other work on open middleware: the Fractal model [75] for component middleware (ObjectWeb, Inria Sardes project, France Telecom), reflective middleware approaches (Dynamic TAO [58], Flexinet [52], OpenCorba [63], OpenORB [74]), adaptable middleware approaches (ARCAD RNTL project [62]), virtual machines (VVM), and QoS-driven middleware [54].
3.2.1. **Open Middleware for the CCM**
The OpenCCM project proposes an open framework to produce and exploit CORBA components. One specifies such a component in the new OMG IDL3 language, which is an extension of the older CORBA IDL2 language. The framework can produce IDL2 schemas from IDL3 descriptions, and the associated stubs for various programming languages (Java, C++, IDLscript, ...) [66]. The framework is itself composed of reusable components around an IDL3 global repository. This architecture is open and extensible. The components are written in Java and are also CORBA components, so they can be assembled to create several configurations. The platform can thus be instantiated in several ways for middleware such as ORBacus, OpenORB or Borland Enterprise Server.
Current work plans to complete the framework with the Component Implementation Definition Language, the Persistent State Definition Language, and the JORM framework. This will allow the platform to automatically generate containers with persistency capabilities. We are also working on the assembly and packaging tools using the XML descriptors of the CCM, and on transformation tools targeting C++.
3.2.2. **Open Containers**
A major goal of component based platforms is to be able to separate functional aspects (ideally programmed by an expert of the tackled domain) from the non functional aspects (ideally programmed by an expert of the computer system techniques). This separation can be implemented by a technical separation between the components (functional aspects) and the containers (non functional aspects). A container hosts components, so that the components inherit the non functional aspects of the container.
In practice, containers (such as EJB or CCM containers) support only a limited set of non-functional aspects (activation/termination, communications and events, security, transactions and persistency). These containers are extensible neither statically nor dynamically, so they cannot respond to specific needs such as fault tolerance, replication, load balancing, real-time or monitoring.
We plan to design such open containers. We investigate a generic model for containers and the weaving mechanisms that will allow an application to specify particular needs, so that an application will be able to request the deployment of well-fitted containers. We work on a specific API to develop non-functional aspects for our containers. In a first step, we have to specify a large set of non-functional aspects in order to find a way to compose them. Non-functional aspects can be seen as interceptors, so we work on the composition of interceptors to produce containers. In a second step, we will investigate the possibility of dynamically manipulating the containers to change the configuration of non-functional aspects.
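To make the interceptor view concrete, the following minimal Java sketch shows how non-functional aspects modelled as interceptors can be chained in front of a functional component. All names here are hypothetical illustrations; they are not part of the OpenCCM container API.

```java
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical illustration: non-functional aspects as interceptors chained in front
// of a functional component. None of these types belong to OpenCCM.
interface Invocation {
    String operation();
    Object proceed() throws Exception;      // continue down the chain
}

interface Interceptor {
    Object invoke(Invocation inv) throws Exception;
}

class LoggingInterceptor implements Interceptor {
    public Object invoke(Invocation inv) throws Exception {
        System.out.println("entering " + inv.operation());
        Object result = inv.proceed();
        System.out.println("leaving " + inv.operation());
        return result;
    }
}

class TransactionInterceptor implements Interceptor {
    public Object invoke(Invocation inv) throws Exception {
        System.out.println("begin transaction for " + inv.operation());
        try {
            Object result = inv.proceed();
            System.out.println("commit");
            return result;
        } catch (Exception e) {
            System.out.println("rollback");
            throw e;
        }
    }
}

// A "container" is obtained by stacking interceptors in front of the component.
class InterceptorChain {
    static Object invoke(List<Interceptor> chain, String op, Callable<Object> component)
            throws Exception {
        Invocation inv = new Invocation() {
            private int index = 0;
            public String operation() { return op; }
            public Object proceed() throws Exception {
                return index < chain.size()
                        ? chain.get(index++).invoke(this)
                        : component.call();              // finally reach the functional component
            }
        };
        return inv.proceed();
    }
}
```

Composing a different set of interceptors yields a differently specialized container, which is the intuition behind the composition step described above.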
3.2.3. Open Environment
An open environment for component-based applications has to deal with several problems. For instance, we have to allow assemblies and deployment on demand. In this part we pursue three goals: a virtual machine for programming distributed deployments, a trader of components to realize assemblies from off-the-shelf components, and a repository to manipulate and drive assemblies of components.
Current middleware propose fixed deployment strategies which are not adaptable to specific needs. These deployment tools are mainly 'black boxes', ad hoc to a particular environment. In the CCM context we can exploit the XML-based OSD language, which is used to describe assemblies. This is a good basis to describe deployments. However, the CCM does not define an API to control the deployment, and the associated tools have not yet been realized in an open manner. We are currently working on a set of operations to deploy OSD assemblies. We investigate several useful properties (such as optimised deployment, parallel deployment, fault-tolerant deployment, transactional deployment) implemented by these operations. This will lead to an open API for adaptable deployment strategies [65]. We plan to use IDLscript to specify the strategies.
Assemblies can be constructed on demand with ‘Components Off The Shelves’. We work on this point with our TORBA environment [61]. Within TORBA we can instantiate components for trading from trading contracts (specified in our TDL - Trading Description Language). This is the basis for an open infrastructure for components brokering that we plan to investigate here.
In an open framework for components, we have to manipulate assemblies in all the phases of the design work and also during execution. Assemblies have to be manipulated by various users, each with its own concern (e.g., assembly, deployment, distribution, non-functional aspect set-up, monitoring). We plan to construct a global repository for all these activities. Moreover, this repository has to be open to new activities. To this end, we want to define an environment which allows one to define, at a meta level, the different concerns that should exist on the repository [64]. The environment will then be able to automatically generate a new view on the repository to capture the specified activity [59]. This work will be facilitated by the work on the following topics of the project.
3.3. Aspects Oriented Design of Dynamic Components Assemblies
The behaviour of a complex application in an open environment is difficult to specify and to implement because it has to evolve in accordance with the context. Changes can occur in an asynchronous manner and the behaviour has to be adapted without human actions and without stopping the application. A language to specify an assembly of components has to capture these dynamic aspects. A platform which supports the assembly at run-time also has to be able to respond to the needed changes. In this part of the project we plan to investigate three directions.
The first one deals with the study of separation of concerns from the first steps of analysis down to the implementation, so as to be able to trace the evolution of these concerns across the various stages. The second one is related to the dynamic features of Architecture Description Languages (ADL) [55]. The last one focuses on Aspect Oriented Programming [57] [1], in which one can capture a specific concern of a behaviour.
Finally, this part of the project enhances specifications of component assemblies with the goal of designing adaptable applications. We introduce integration contracts for specifying the impact of components on the application and its context. Our approach is based on AOP to specify connection and integration schemas. We also work on the JAC (Java Aspect Components) platform, which provides dynamic weaving of aspects.
3.3.1. Early aspects
Business applications face two main challenges. On the one hand, they are mostly developed with an iterative process where business functionalities are added to the core application as the project requirements evolve. On the other hand, the non-functional requirements (in terms of security, remote communication and transaction, data persistence, etc.) are also high and need to be incorporated as seamlessly as possible. Both the component-based and the aspect-oriented approaches separately provide directions for these challenges. However, no integrated software process exists to take both into account. The goal of this work is thus to propose such a process and some tools to support it from the early stages of analysis onward, and to provide features to trace the evolution of concerns from user requirements to deployment and run-time. This work is done in the context of Dolorès Diaz's PhD thesis.
3.3.2. Aspects at design-time
Software architects and designers need a reasoned framework to iteratively integrate functional and non-functional concerns into their projects, and to adapt them to unforeseen functional or non-functional requirements. To help them, analysis methods provide modelling and verification tools from the functional to the technical architecture. Their main advantages are the capacity to model large-scale distributed systems that require interoperability between system parts, and the separation of concerns between business functionality and communication mechanisms.
However, no standard, universal definition of software architecture has been accepted by the whole community. Different points of view in different studies lead to several approaches. These approaches focus on only one or two concerns, such as component interface specification, behavioural analysis or software reconfiguration. We therefore argue that, in order to increase the benefits of software architecture approaches, one needs an architecture-centric approach with global reasoning: from software architecture design to software architecture management to software architecture building, deployment and refinement. However, these different concerns of a software architecture definition must be kept consistent.
Our first goal is to propose enhancements of a component model for specifying the dynamic evolution of a software architecture. It concerns three points of view: structural, functional and behavioural. We use the Model Driven Architecture approach with Context Independent Models and Context Specific Models. Our second goal is to introduce non-functional aspects, and then connections between components and containers, into languages for software architectures. We extend contracts between components to contracts between components and non-functional components.
3.3.3. Aspects at run-time
In distributed environments, applications run in an open context. They use networks and their associated services where quality of service is not always guaranteed and may change quickly. In these environments, several concerns must be considered, including fault tolerance, data consistency, remote version update, runtime maintenance, dynamic lookup, scalability, lack of rate. Addressing these issues may require dynamic and fast reconfiguration of distributed applications.
We have defined the Java Aspect Components (JAC) framework for building aspect-oriented distributed applications in Java [72] [9][8]. Unlike languages such as AspectJ, JAC allows dynamic weaving of aspects (aspects can be woven or unwoven at run-time) and proposes a modular solution to specify the composition of aspects. We defined an aspect-oriented programming model and the architectural details of the framework implementation. The framework enables extension of application semantics for handling well-separated concerns. This is achieved with a software entity called an aspect component (AC). ACs provide distributed pointcuts, dynamic wrappers and metamodel annotations. Distributed pointcuts are a key feature of our framework. They enable the definition of crosscutting structures that do not need to be located on a single host. ACs are dynamic. They can be added, removed, and controlled at runtime. This enables our framework to be used in highly dynamic environments where adaptable software is needed.
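Dynamic wrapping, one of the mechanisms mentioned above, can be illustrated independently of JAC with a JDK dynamic proxy. The sketch below is only an analogy for run-time weaving; it does not use the JAC API, and the Account types are invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Illustration of run-time wrapping with a JDK dynamic proxy. JAC's aspect components
// are richer (distributed pointcuts, metamodel annotations), but the underlying idea of
// intercepting calls at run time without touching the base code is similar.
interface Account {
    void credit(int amount);
}

class SimpleAccount implements Account {
    private int balance;
    public void credit(int amount) { balance += amount; }
}

public class DynamicWrappingDemo {
    // Wraps any Account so that each call is traced; the "aspect" is attached at run time.
    static Account wrapWithTrace(Account target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, args);   // delegate to the wrapped object
            System.out.println("after " + method.getName());
            return result;
        };
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(), new Class<?>[] { Account.class }, handler);
    }

    public static void main(String[] args) {
        Account account = wrapWithTrace(new SimpleAccount());  // "weave" the tracing concern
        account.credit(100);                                    // traced call
    }
}
```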
3.4. Functional Aspects for Components Applications
Software engineering improves productivity through reusability. Component-oriented design is a recent step towards that productivity: it allows the composition of "off the shelf" software entities while preserving good properties of the software. The composition mechanisms are mainly used in the construction and deployment phases, but the modelling phases are often not addressed by these ideas about composition.
After having long been considered only as documentation elements, models are gaining more and more importance in the software development lifecycle as full software artefacts. The UML [70] standard contributes greatly to this shift, with the identification and structuring of model space dimensions and constructs. Models can nowadays be explicitly manipulated through metamodeling techniques, dedicated tools or processes such as the MDA [69] transformation chains. This is “Model Driven Engineering” [56].
The main motivation is the reduction of delays and costs by the capitalization of design efforts (models) at each stage, and the automation, as far as possible, of transitions between these stages. So it would be possible to separate high level business oriented models from low level architectural and technological ones, but also to reuse these models from one application to another. Indeed, once it is clear that models are full software ingredients, we are faced with new problems (needs!) such as the possibility of their reusability and composability. As a consequence, models stand more and more as good candidates for the “design for reuse” quest and specific constructs are introduced to make them generic.
We want to investigate the idea that functional decomposition of models is a way for increased re-usability. Our interest takes place in the use of functional aspects which represent the various dimensions of a tackled domain. It is related to aspect oriented structuring, and design plans like the Views, SOP [46] and Catalysis [48] approaches. We think that the scope of functional aspects can be a basis for structuring system modelling.
Our goal is to ‘disconnect’ functional views from a specific domain in order to obtain functional components which will be adaptable to various contexts. This is the way to functional re-usability. Such a functional component has to capture a functional dimension with a high level of abstraction. Our idea is to introduce the notion of ‘model components’ parameterized by a ‘required model’ and that produce a ‘provided model’. Then the modelling phase can be seen as the assembly of such components by connecting provided model to required model. Note that component ports (specified by a model) can be more sophisticated than simple interfaces of objects or software components.
As a first step, we formalized such a component model [68] and its associated design and assembly rules as an extension of the UML meta-model. We obtain adaptable model components that can be targeted to the EJB platform and the CORBA component model. We realized an implementation of this work via a UML profile. The corresponding UML Objecteering module is available at http://www.lifl.fr/~mullera.
Model parameterization is related to template notions, such as those found in UML. We are exploring this notion in order to compare it to our component model. A first study shows that our model components can be expressed by UML template packages. We have also identified that the UML specification needs to be extended in order to make templates parameterizable by complex models. We are defining a set of OCL constraints which formalizes this extension.
We plan to use this extension to define a process where package templates are composed to build a system, in the same way our components are. This will lead us to define new operators for composing templates. This is related to the work of Clarke [46] on composition operations (e.g., override, merge).
A second dimension of our work concerns the preservation of the ‘functional-aspect-oriented design style’ from the modelling phase to the exploitation phase. We think that the functional aspects can be transformed into software components of the underlying platform. This gives several advantages: reusability at the modelling phase leads to reusability at the production phase, and designers can trace the design work into the exploitation of the application. Our work can thus contribute to a seamless integration of modelling tools and component-based platforms such as OpenCCM or EJB. This point, preserving functional aspects in applications, was already present in our earlier work on CROME [76].
We have identified some structural patterns which make it possible to map functional decomposition onto component platforms. In [45], we present a composition-oriented approach grounded on the splitting of entities according to view requirements. Two original design patterns are formulated and capture the main issues of the approach. The first one is concerned with the management of the split component and its conceptual identity. The second offers a solution for relationships among such components. These patterns improve the evolution and traceability of views and can be applied to different technological platforms.
At a practical stage, all this work is gradually integrated in Case Tools (Objecteering, Eclipse Plugin), as functional aspect oriented modelling and design facilities.
4. Application Domains
The Jacquard project addresses the broad problem of designing complex distributed applications composed of numerous cooperative and distributed software components. Our application domains are numerous. First, our component models and platforms target information systems; such systems require both functional and technical properties and must be able to evolve. Second, component models address specific domains that require adaptability to the execution context, such as mobility or ubiquitous computing; we apply them to the transportation and communication domains, for example in the MOSAIQUES project or the AOSD NoE. Finally, we participate in the definition of platforms for grid computing.
5. Software
5.1. Apollon
Keywords: Graphical Editor, Model Driven Software Framework, XML.
Participant: Christophe Contreras [correspondant].
Apollon is a model driven software framework to generate Java-based graphical editors for XML documents.
From an XML DTD given as input, Apollon's code generator produces a set of Java Data classes and Java Swing components implementing graphical editors for XML documents. The Java Data classes are a strongly typed reification of the XML DTD: each XML DTD element is reified as a Java class, and XML DTD children and attributes are reified as getter and setter Java methods. The Java Swing components implement the graphical representation of the Java Data classes. The graphical representation of any XML element and attribute can be customized at generation time according to users' graphical requirements.
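For illustration, a generated Java Data class might have roughly the following shape; the `Book` and `Chapter` names are invented for this example, and the classes actually produced by Apollon may differ.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shape of a Java Data class that a generator such as Apollon could
// produce for a DTD element <!ELEMENT book (chapter*)> with a "title" attribute.
public class Book {
    private String title;                                       // reifies the "title" XML attribute
    private final List<Chapter> chapters = new ArrayList<>();   // reifies the "chapter" children

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public List<Chapter> getChapters() { return chapters; }
    public void addChapter(Chapter chapter) { chapters.add(chapter); }
}

class Chapter {
    private String heading;
    public String getHeading() { return heading; }
    public void setHeading(String heading) { this.heading = heading; }
}
```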
Apollon's code generator is built as an extension of the open source Zeus software. Apollon's runtime is based on the Fractal Explorer software framework described below. Apollon is already used in OpenCCM to automatically generate graphical editors for the XML DTDs defined in the OMG's CORBA Components Specification.
Apollon is a LGPL open source software available at http://forge.objectweb.org/projects/apollon.
5.2. Fractal Explorer
Keywords: Fractal Component-Based Software Framework, Graphical User Interface, Management Console.
Participant: Jérôme Moroy [correspondant].
Fractal Explorer is a generic Fractal component-based software framework to build Java-based graphical explorer and management consoles.
Fractal Explorer is composed of the Explorer Description Language, the plug-in programming interface, and the Fractal component-based explorer framework. The Explorer Description Language (an XML DTD) allows users to describe, at a high level, the configuration of the graphical explorer consoles to build, i.e. the icons, menu items and panels associated with the resources to explore/manage, according to end-user roles. Reactions associated with these graphical elements can be implemented by Java classes which must conform to the plug-in programming interface. Finally, the explorer framework interprets explorer configurations and executes plug-in classes according to users' interactions. This framework is implemented as an extensible set of software components conforming to the ObjectWeb Fractal component model defined by Inria and France Telecom. Moreover, a set of plug-ins is already provided to explore and manage Java objects and Fractal components.
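As a rough illustration of the plug-in idea, a reaction class could look like the sketch below; the interface and class names are invented and do not reproduce the actual Fractal Explorer programming interface.

```java
// Illustrative only: a minimal plug-in contract in the spirit of the one described above.
// The real Fractal Explorer plug-in programming interface is likely different.
interface MenuItemAction {
    // Invoked when the end user selects the menu item attached to the explored resource.
    void execute(Object selectedResource) throws Exception;
}

class StartComponentAction implements MenuItemAction {
    public void execute(Object selectedResource) {
        // A real plug-in would call the management interface of the selected resource here.
        System.out.println("Starting component " + selectedResource);
    }
}
```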
Fractal Explorer is already reused and customized by our Apollon, FAC, GoTM, and OpenCCM software to provide respectively explorer consoles for XML documents, Fractal aspect components, component-based transaction services and CORBA objects/components.
Fractal Explorer is a LGPL open source software available at http://fractal.objectweb.org.
5.3. GoTM
Keywords: Component-Based Software Framework, Middleware Transaction Services.
Participant: Romain Rouvoy [correspondant].
GoTM is a Fractal component-based software framework to build middleware transaction services.
GoTM is composed of an extensible set of Fractal components providing basic building blocks (Transaction, Resource, Coordination, Concurrency, etc.) to build various transaction models and services (OMG OTS, JTA, etc.). A JTA personalisation of this framework is already implemented.
The GoTM component-based software framework is designed on top of the ObjectWeb Fractal component model and is implemented on top of the ObjectWeb Julia reference implementation.
GoTM is a LGPL open source software available at http://gotm.objectweb.org.
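As a hedged illustration of the building-block style described above (and not GoTM's actual component interfaces), a coordination block performing a two-phase commit over resource blocks might look like this:

```java
import java.util.List;

// Illustrative two-phase-commit coordination over resource building blocks.
// These interfaces are hypothetical and are not the GoTM APIs.
interface Resource {
    boolean prepare();   // vote: true means ready to commit
    void commit();
    void rollback();
}

class TwoPhaseCoordinator {
    // Returns true if the transaction committed, false if it was rolled back.
    // For simplicity, every resource is rolled back on abort (rollback is assumed idempotent).
    boolean complete(List<Resource> resources) {
        for (Resource r : resources) {
            if (!r.prepare()) {                      // any negative vote aborts the transaction
                resources.forEach(Resource::rollback);
                return false;
            }
        }
        resources.forEach(Resource::commit);
        return true;
    }
}
```

Swapping the coordination block for another variant without touching the resource blocks is the kind of recomposition such a framework aims to support.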
5.4. OpenCCM
Keywords: CORBA Component Model, Component-Based Middleware.
Participant: Philippe Merle [correspondant].
OpenCCM is a middleware platform for distributed applications based on CORBA components.
OpenCCM stands for the Open CORBA Component Model Platform: the first publicly available, open source implementation of the CORBA Component Model (CCM) specification defined by the Object Management Group (OMG). The CORBA Component Model (CCM) is the first vendor-neutral open standard for distributed component computing, seamlessly supporting various programming languages, operating systems, networks, CORBA products and vendors. The CCM is an OMG specification for creating distributed, server-side scalable, component-based, language-neutral, transactional, multi-user and secure applications. Moreover, a CCM application can be deployed and run on several distributed nodes simultaneously.
OpenCCM allows users to design, implement, compile, package, assemble, deploy, install, instantiate, configure, execute, and manage distributed CORBA component-based applications. For these purposes, OpenCCM is composed of a set of tools, i.e. UML and OMG IDL model repositories, compilers, code generators, a graphical packaging and assembling tool, a distributed deployment infrastructure, extensible containers integrating various services (communication, monitoring, transaction, persistency, security, etc.), and a graphical management console.
OpenCCM is a LGPL open source software available at http://openccm.objectweb.org.
5.5. Java Aspect Components
Keywords: Java Aspect Components, dynamic weaving.
Participant: Renaud Pawlak [correspondant].
JAC (Java Aspect Components) is a project developing an aspect-oriented middleware layer. The current version of JAC is 0.12.1. Current application servers do not always provide satisfactory means to separate technical concerns from the application code. Since JAC uses aspect orientation, complex components are replaced by POJOs (Plain Old Java Objects), and the technical concern implementations that are usually wired deep into container implementations are replaced by loosely coupled, dynamically pluggable aspect components. JAC aspect components provide: seamless persistence (CMP) that fully handles collections and references, flexible clustering features (customisable broadcast, load-balancing, data-consistency, caching), instantaneously defined users, profile management, access rights checking, and authentication features. See http://jac.objectweb.org.
5.6. UML Profile
Keywords: CCM Specification, UML Profile.
Participant: Olivier Caron [correspondant].
6. New Results
6.1. Open Middleware for the CCM
6.1.1. Extensible Containers
The definition of a common component middleware which can be specialized with some technical services is a major stake in the endeavour to capitalize operational systems’ functions. However, modern middleware do not provide such a specialization function, i.e. a way to build extensible containers.
In [3] we define a unified approach to build specialized component middleware by assembling software services. The analysis of Sun Microsystems' Java 2 Enterprise Edition (J2EE) and the Object Management Group (OMG)'s CORBA Component Model (CCM) standard middleware has led us to a characterization of the specialization function. We then apply the software component concept to the services themselves, in order to cover the services' integration, composition and use needs. We also document a system of patterns targeted at the services' architectural needs; the latter meets the quality attributes of the specialization function's architecture. This approach was implemented in the CCM, and we delivered a prototype in the OpenCCM platform in partnership with the European IST COACH project.
The use of software components and patterns, combined with an empirical and incremental method, rationally divides the inherent complexity of the specialization between the middleware provider, the service provider and the end-user. In practice, we observe a notable benefit in terms of reuse and efficiency.
6.1.2. Middleware Infrastructures to Deploy Distributed Component-Based Applications
Deployment of software components for building distributed applications consists of the coordination of a set of basic tasks, such as uploading component binaries to the execution sites, loading them in memory, instantiating components, interconnecting their ports, and setting their business and technical attributes. Automating the deployment process then requires a software infrastructure that is itself distributed over the different execution sites.
[18] presents the specification of such an infrastructure for the deployment of CORBA component-based applications. The latter is designed and implemented in the context of our OpenCCM platform, an open source implementation of the CORBA Component Model. Its main characteristic lies in the fact that this infrastructure is itself designed as a set of CORBA component assemblies, which allows its dynamic assembly during its deployment over the execution sites.
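The task sequence described above can be sketched as plain Java over a hypothetical deployment API; none of the interfaces below come from OpenCCM or the CCM specification.

```java
// Hypothetical deployment driver illustrating the basic task order described in the text:
// upload binaries, instantiate, connect ports, configure attributes, start.
interface ExecutionSite {
    void upload(String componentArchive);              // copy the binaries to the site
    ComponentRef instantiate(String componentName);    // load and create the component
}

interface ComponentRef {
    void connect(String portName, ComponentRef server);   // wire a receptacle to a facet
    void setAttribute(String name, Object value);
    void start();
}

class Deployer {
    // Deploys a client/server pair over two execution sites.
    static void deployPair(ExecutionSite siteA, ExecutionSite siteB) {
        siteA.upload("client.car");
        siteB.upload("server.car");
        ComponentRef server = siteB.instantiate("Server");
        ComponentRef client = siteA.instantiate("Client");
        client.connect("serverPort", server);
        server.setAttribute("poolSize", 10);
        server.start();
        client.start();
    }
}
```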
Component middleware allows the automation of the application deployment process. This function, called the deployment machine, instantiates applications from their architectural descriptions. Unfortunately, each middleware currently implements its own deployment machine, so no capitalization is achieved, either conceptually or in the implementation.
To promote this capitalization, in [24] we propose a model driven approach to build component middleware deployment machines. This approach introduces a UML profile of workflow which allows us to define deployment models independently of any targeted component middleware. Such a model is then refined for each targeted middleware and the obtained model is mapped to different execution platforms. The models and transformations of this approach are illustrated on a CORBA Components deployment machine implemented using the Fractal component model.
The multiplication of architecture description languages, component models and platforms implies a serious dilemma for component based software architects. On the one hand, they have to choose a language to describe
concrete configurations which will be automatically deployed on execution platforms. On the other hand, they wish to capitalize their software architectures independently of any description languages or platforms.
To solve this problem, we propose a multi personalities environment for the configuration and the deployment of component based applications. This environment is composed of a core capturing a canonical model of configuration and deployment, and a set of personalities tailored to languages and platforms. [23] details the architecture of such an environment and describes the personalities for the CORBA and Fractal component models.
6.1.3. Component-Based Software Framework for Building Transaction Services
Transactions have been involved in a wide range of applications ever since they were introduced in databases. Many transaction services have been developed to address various transaction standards and various transaction models. Furthermore, these transaction services are more and more difficult to build, since the complexity of the transaction standards is constantly increasing. Each transaction service implements pieces of code that have already been written in other transaction services. As a consequence, there is no code factorization between transaction services, and the added value of each transaction service, such as extensibility or performance, is never reused in another transaction service.
In [34] and [35], we present GoTM, a Component-Based Adaptive Middleware (CBAM) software framework. This framework makes it possible to build various transaction services compliant with existing transaction standards. GoTM provides adaptive properties to support different transaction models and standards within the same transaction service. GoTM also supports the definition of new transaction models and standards as new components of the framework. Finally, GoTM provides (re)configurability, extensibility and adaptability as added values. The implementation of the GoTM framework is based on the Fractal component model.
6.1.4. Middleware Benchmarking
Nowadays, distributed Java-based applications can be built on top of a plethora of middleware technologies such as Object Request Brokers (ORB) like CORBA and Java RMI, Web Services, and component-oriented platforms like Enterprise Java Beans (EJB) or the CORBA Component Model (CCM). Choosing the right middleware technology for the application requirements is a complex activity driven by various criteria such as economic costs (e.g. commercial or open source availability, engineer training and skills), conformance to standards, advanced proprietary features, performance, scalability, etc. Regarding performance, many basic metrics can be evaluated, such as round-trip latency, jitter, or the throughput of two-way interactions, according to various parameter types and sizes.
Many projects have already evaluated these middleware performance metrics. Unfortunately, they have not compared different kinds of middleware platforms simultaneously. Such a comparison would be helpful for application designers who need to select both the kind of middleware technology to apply and the best implementation to use.
In [22], we present an experience report on the design and implementation of a simple benchmark to evaluate the round-trip latency of various Java-based middleware platforms, i.e. measuring only the response time of two-way interactions without parameters. Although simple, this benchmark is relevant because it allows users to evaluate the minimal mean response time and the maximal number of interactions per second provided by a middleware platform. Empirical results and analysis are discussed on a large set of widely available implementations including various ORBs (Java RMI, Java IDL, ORBacus, JacORB, OpenORB, and Ice), Web Services projects (Apache XML-RPC and Axis), and component-oriented platforms (JBoss, JOnAS, OpenCCM, Fractal, ProActive). This evaluation shows that our OpenCCM platform already provides better performance results than most of the other evaluated middleware platforms.
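The metric itself is easy to reproduce. The sketch below shows a minimal round-trip measurement loop over any parameterless two-way operation; the `Ping` interface is a hypothetical stand-in for whatever stub the middleware under test provides, and this is not the benchmark code of [22].

```java
// Minimal round-trip latency measurement for a two-way operation without parameters.
// Ping stands for the stub of the middleware under test; here it is a local no-op.
interface Ping {
    void ping();   // synchronous, no parameters, no result
}

public class LatencyBench {
    public static void main(String[] args) {
        Ping stub = () -> { };          // replace with a real remote stub to measure a middleware
        int warmup = 10_000;
        int iterations = 100_000;

        for (int i = 0; i < warmup; i++) stub.ping();   // let the JIT compile the call path

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) stub.ping();
        long elapsed = System.nanoTime() - start;

        double meanMicros = elapsed / 1_000.0 / iterations;
        System.out.printf("mean round-trip latency: %.2f us over %d calls%n", meanMicros, iterations);
        System.out.printf("throughput: %.0f calls/s%n", iterations * 1e9 / elapsed);
    }
}
```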
6.2. Aspect Oriented design of dynamic components assemblies
6.2.1. Aspects at design-time
First, with new component platforms, architects create distributed applications by assembling components. In all these platforms, software architecture defines the application organization as a collection of components
plus a set of constraints on the interactions between components. Facing the difficulties of building correct software architecture, abstract software architecture models were built. They are powerful methods in the specification and analysis of high-level designs. Lots of architecture description models have been defined to describe, design, check, and implement software architectures. Many of these models support sophisticated analysis and reasoning or support architecture-centric development.
Nevertheless, these models are often static or deal only with component composition. Under these conditions, it is difficult to build a large software architecture, to integrate new components into an existing software architecture, or to add a forgotten concern such as security or persistency. We therefore propose SafArchie and TranSAT (Transform Software Architecture Technologies), an abstract component model for designing software architectures [36]. With SafArchie, we base our approach on architecture types, which are points of reference at each step of our reasoning [36][12]. We have developed SafArchie Studio [13], an architecture-centric tool based on a three-view perspective and driven by the component life cycle. In TranSAT [11][12], we extend the classical concepts of software architecture models to describe technical concerns independently from a design model and to integrate them step by step. These refinement steps are highly inspired by Aspect Oriented Programming (AOP), where the designer defines all the facets of an application (business and technical). To ensure a correct component composition and a correct weaving of technical components, we add behavioural information to components. This information is used to specify temporal communication protocols and to check properties such as deadlock freedom or synchronization between components.
6.2.2. Architectures and points of view
The ODP (Open Distributed Processing) model provides five viewpoints, each representing one concern: the Enterprise viewpoint (client needs and organisation policy), the Information viewpoint (system data), the Computational viewpoint (business functionality), the Engineering viewpoint (mechanisms supporting distributed communication) and the Technology viewpoint (technology usage). However, ODP is limited: it provides rich concepts for modelling distributed systems according to the different viewpoints, but it provides neither languages and tools for modelling and analysing software qualities nor guidance for an architect moving from one viewpoint to another.
We propose a methodology to support modelling and analysis in the transition from the Computational viewpoint to the Engineering viewpoint. First, we create a component model to build functional architectures in the Computational viewpoint. This component model is enhanced with assembly and composition concerns. Second, the Engineering viewpoint needs concepts such as hosts and channels to represent technical architectures.
Finally, one of the difficult tasks is to define non-functional properties in technical architectures. These properties, such as transaction, persistence or security, come from non-functional requirements. The transformation from functional to technical architecture must take these properties into consideration in order to specify the complete execution platform architecture. We have developed an architectural figure concept that represents solutions to address these non-functional requirements [28]. These figures can be reused and help in the transformation from functional architecture to technical architecture. Moreover, we require efficient methods to analyse the qualities of a technical architecture with respect to three analysis requirements: behaviour, deployment and non-functional properties [27]. Our models for constructing and analysing software architecture can be applied not only to ODP but also at the architectural level of any model following the IEEE 1471 viewpoint approach. Within our approach, an architecture is evaluated according to three principal criteria: system behaviour, deployment correctness, and non-functional qualities. The capacity to analyse a software architecture against these three criteria makes it possible to build efficient, quality software.
6.2.3. Aspects and Components for Software Architectures
Software architectures need to accommodate both functional and non-functional concerns. Functional concerns encompass pieces of code coming from the application domain being analyzed, and the non functional ones relate to the services provided by the environment (OS, network, application server, ...) that need to be integrated.
Component-based approaches and their associated architecture description languages are good at addressing functional software assemblies but do not provide dedicated concepts for integrating non-functional needs. Conversely, aspect-oriented programming is well suited for adding non-functional crosscutting concerns to an application but does not address the issue of assembling functional parts. As both facets are almost always present in applications, we believe that there is a strong need for an approach that addresses both. This is the topic of the FAC action that we began in 2004.
The goal of the FAC (Fractal Aspect Component) action [44][31][32] is to define a model for components and aspects. We envision a symmetric model in which there is only one kind of entity, the component, and in which aspects are themselves components. Treating aspects as components leads to a model that can be better and more easily integrated into the various stages (building, packaging, deploying, managing, debugging, ...) of the software life cycle: e.g. packaging an aspect is no different from packaging a component. The question of whether AOP should be symmetric or not (see the debate AspectJ vs Hyper/J) is open in the international research community. We believe that the asymmetric approach taken by AspectJ and others eased the acceptance of AOP in the early stages, but that in the longer term the seamless integration of AOP requires coming back to a symmetric approach.
The second main feature of the FAC model deals with bindings. In existing ADLs and component models, a binding links a client component requiring a service to a server component providing that service. Bindings are most of the time dynamic, in the sense that a component can be unbound and rebound to another one at runtime. This kind of binding is well suited for representing functional dependencies within an application. The FAC model supports a second kind of binding that we call a crosscut binding. It links an aspect component with the components being aspectized (for instance, it links the transaction aspect with all components requiring transactions). This binding is associated, as in any other AOP tool, with a crosscut regular expression that defines the components aspectized. Hence, an application with FAC has two kinds of bindings: functional ones and crosscut ones. All these bindings can be dynamically introspected and modified. The advantage of such an approach is that the software architecture is made clearer and that the dependencies, both functional and crosscutting, are made explicit, which is not the case with other approaches.
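A hedged sketch of the two kinds of bindings is given below; the classes are purely illustrative toy structures and do not reflect the actual FAC or Fractal interfaces.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative only: functional bindings vs. crosscut bindings in a toy architecture.
// The real FAC model is defined on top of Fractal and is not reproduced here.
class Component {
    final String name;
    final List<Component> functionalBindings = new ArrayList<>();  // client -> server links
    final List<Component> crosscutBindings = new ArrayList<>();    // aspect components woven in
    Component(String name) { this.name = name; }
}

class Architecture {
    final List<Component> components = new ArrayList<>();

    // Functional binding: one client component bound to one server component.
    void bind(Component client, Component server) {
        client.functionalBindings.add(server);
    }

    // Crosscut binding: an aspect component bound to every component whose name
    // matches the pointcut expression (e.g. ".*Order.*" for a transaction aspect).
    void crosscut(Component aspect, String pointcutRegex) {
        Pattern pointcut = Pattern.compile(pointcutRegex);
        for (Component c : components) {
            if (pointcut.matcher(c.name).matches()) {
                c.crosscutBindings.add(aspect);
            }
        }
    }
}
```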
The FAC model is designed and developed in the context of a contract with France Télécom R&D.
6.3. Functional Aspect and MDE
6.3.1. Functional Aspects
The design of information systems remains, at the present time, a difficult task which requires that a great number of entities and concerns be taken into account, whether they are functional or not. Consequently, approaches supporting a decomposition of systems according to their functional dimensions have been proposed at the programming level with AOP (Aspect-Oriented Programming) [57], but also at the design level with SOD (Subject-Oriented Design) [47] or with view approaches [49].
The problem of the reuse of these functional aspects now arises. Indeed, this reuse must make it possible to improve productivity and reliability in the field of information systems design. Various approaches propose the reuse of functional aspects in different forms, such as the design of reusable frameworks [48] or UML templates [46].
In [29] we compare techniques for composing and parameterizing models and retain the advantages of the latter to specify reusable functional aspects.
Parameterization capabilities are offered by the UML Template notion. Its applications are numerous and varied, with the result that its initial introduction in UML 1.3 was deeply revisited and strengthened in the UML 2 standard. However, its specification remains mostly structural and verbal in [70]. In particular, constraints are lacking for the precise definition of the related "binding" relation, which allows models to be obtained from templates. These constraints are needed to verify the correctness of the resulting models. That is why we propose in [20] a set of OCL constraints [77] which could strengthen the notion of model templates and facilitate the above verification. An Eclipse plugin was realized. This plugin makes it possible to build new
metamodels with OCL constraints and then generate Java implementations of these metamodels which both support the creation and verification of models based on these OCL constraints.
At the design level, we have defined a platform-independent model of view-components which makes it possible to describe complex information systems involving numerous functional dimensions. To target this model at specific platforms, we propose an approach based on several design patterns. We start from our patterns supporting views through a split representation of entities [45]. A first experiment with the Fractal component model is presented in [14]; it makes it possible to envision the use of Fractal controllers to manage split components. In [21] we focus on the reuse of functional aspects or views at the implementation level using adaptation techniques. The reuse of views is ensured by applying the adapter pattern [50]. We show how to compose the views pattern with the adaptation one. The result provides an implementation of reusable functional aspects that can be composed at the exploitation stage.
6.3.2. Model Driven Engineering
In the context of model driven engineering (MDE), models are the cornerstone of software engineering processes. Then, models have to be properly defined in order to be useful and to be the basis of software production. In order to assist the designer in defining a model as well as to control the result, we have studied the chaining and composition of model transformations.
A modeling process is made of several steps (for example identifying components, then their relations, and finally the services used and provided by each component). The Delta-trans framework has been defined to support such well-defined modeling processes. Defining a model is then seen as a set of transformations (such as creating model elements or defining relations between existing model elements) applied to an empty model. Based upon the definition of a modeling process, tools are automatically produced to support the process. The modeling tool is dedicated to a process [42].
In order to easily support software processes in a particular domain (such as telecoms or health care), product lines factorize the common concerns of applications while supporting their adaptation to particular needs using configurable concerns. The goal of our study on the composition of model transformations is to support the building of software production lines. Transformations are either primitive or composite (compositions of transformations). Transformations are provided as executable components, and are thus easily composable to define MDE-based product lines. Our experimental framework picotin provides modeling means for defining and composing model transformations. A set of tools relies on these definitions to generate transformation component implementations. Finally, a prototype environment supports the execution of these transformations [39].
Both studies are complementary in that the first one provides means for modeling a system respecting domain constraints, and the second provides means for actually building the system from its definition. This approach is currently studied together with Alicante (a company working in the field of health care) in order to build tools for GPs based upon their basic data items (person, medical act, temperature, etc) and specific constraints (36°C < temperature < 42°C).
7. Contracts and Grants with Industry
7.1. RNTL ACCORD
The RNTL project ACCORD has 8 participants (EDF, CNAM, ENST, ENST-Bretagne, France Télécom, Inria, LIFL, Softeam). The goal is to produce a design framework using explicit contracts to specify and assemble components. We want to facilitate understanding of complex systems, enhance the flexibility and the reusability of components, and allow strong validation of assemblies. This work has to be done independently of technical infrastructures.
7.2. France Telecom
This contract is a CRE ("Contrat de Recherche Externe") that takes place in the context of the "accord-cadre" between Inria and France Telecom R&D. This is a 3-year contract that began in October 2004. The scientific teams involved in the project are, for Inria, the Jacquard project-team, and for France Telecom R&D, the ASR/Polair department. The contract goal is to study and construct component- and aspect-based software architectures. The Fractal component model from France Telecom R&D and the JAC AOP framework from Jacquard form the background of this work. The expected result is a model (FAC, for Fractal Aspect Component) that merges and unifies aspects and components. Nicolas Pessemier's PhD thesis work is directly related to this contract.
7.3. NorSys
This contract is associated with a CIFRE PhD thesis between the Jacquard project-team and the NorSys service company. The goal of the contract is to study aspect orientation in the early stages of software development. AOP emerged as a programming technique, but the question is now open in the international research community whether it can also bring innovations to the early stages of requirements engineering, analysis and design. This contract began in January 2004. Dolores Diaz's PhD thesis work is directly related to this contract.
8. Other Grants and Activities
8.1. Regional Initiatives
8.1.1. IRCICA
The ’Region Nord Pas de Calais’ has initiated a large research plan around the new technologies for communications. We lead the software section of this plan. Beyond this plan the ’Region Nord Pas de Calais’ has facilitated the creation of a new research institute called IRCICA to promote new collaborative research projects between software and hardware laboratories. The Jacquard project is one of the first projects supported by this institute.
8.1.2. MOSAIQUES
The MOSAIQUES project ("MOdèles et infraStructures pour Applications ubIQUitairES", or models and middleware for ubiquitous applications) defines a design and programming framework for applications that run in ubiquitous environments. The project includes the University of Lille with the LIFL laboratory (STC and SMAC teams) and the Inria projects Jacquard and POPS, the TRIGONE laboratory, INRETS, Ecole des Mines de Douai, and the University of Valenciennes and Hainaut-Cambrésis. Application domains are transportation and e-learning systems. Laurence Duchien is the head of this project.
8.2. National Initiatives
8.2.1. AS MDA
The specific action (AS) MDA, created in June 2003 and funded by CNRS, studies the interest of the Model Driven Architecture approach, of which the standard promoted by the OMG is only one example. The aim of this AS is to organize the research community in this domain in order to understand, and to help the industrial community with, an approach that can be a significant evolution in the middle and long term. This AS includes the IRIN, LIFL, LSR, IMAG, I3S, PRIM and CEA laboratories.
8.3. European Initiatives
8.3.1. ObjectWeb
ObjectWeb is a European initiative to promote high-quality open source middleware. The vision of ObjectWeb is that of a set of components which can be assembled to offer high-quality middleware. We are members of this consortium, and Fractal Explorer, GoTM, JAC, and OpenCCM are projects hosted by the consortium.
8.3.2. IST COACH
The 'Component based ArCHitecture for distributed telecom applications' project is a PCRDT project in the IST program. The project groups 9 academic and industrial labs. The goal is a component-oriented CORBA platform for the telecom domain. Our OpenCCM platform forms the basis of this architecture.
8.3.3. ITEA OSMOSE
OSMOSE stands for 'Open Source Middleware for Open Systems in Europe'. It is an ITEA project. The project groups 16 European industrial partners and 7 public labs. The goal is to give a European dimension to the ObjectWeb consortium. The OSMOSE project wants to federate high-quality components from European labs, and to produce applications for the major European industrial domains.
8.3.4. AOSD-Europe
AOSD-Europe is an ongoing proposal to set up a Network of Excellence (NoE) on aspect-oriented software
development within IST-FP6. The proposal brings together 11 research groups and among them members of
the Jacquard project and other members from OBASCO, Pop-Art and Triskell Inria projects. The proposal
is led by Lancaster University, Darmstadt University and University of Twente. The goal of the NoE is to
harmonise, integrate and strengthen European research activities on all issues related to aspect orientation:
analysis, design, development, formalization, applications, empirical studies.
8.4. International Initiative
8.4.1. OMG
We work in the international consortium OMG (Object Management Group) since 1997. OMG defines well-
known standards: CORBA, UML, MOF, MDA. We can quote our contributions to the OMG standardization
work with the CorbaScript language (proposed to the Scripting Language for CORBA RFP, and accepted as
the CORBA Scripting Language chapter of CORBA 3.x) and with the CCM (CORBA Component Model)
chapter for which we lead the response group and the revision task force. We also participate in the definition
of a UML profile for CORBA Components.
Philippe Merle is:
- Chair of the OMG Components 1.2 Revision Task Force (RTF).
- Member of the OMG Deployment Revision Task Force (RTF).
- Member of the OMG UML Profile for CCM Finalization Task Force (FTF).
- Member of submission team for the OMG UML Profile for CCM RFP.
- Member of the voting list for the OMG MOF 2.0 IDL RFP.
- Member of the voting list for the OMG MOF 2.0 Query/View/Transf. RFP.
- Member of the voting list for the OMG MOF 2.0 Versioning RFP.
- Member of the voting list for the OMG QoS for CORBA Components RFP.
- Member of the voting list for the OMG Streams for CCM RFP.
8.4.2. AOP Alliance
AOP Alliance is an international open-source initiative to provide a common API for building aspect weavers <http://aopalliance.sourceforge.net>. This initiative has been launched and is led by Jacquard members (R. Pawlak and L. Seinturier). The goal is to bring together several aspect framework creators, to analyse the functional requirements that are shared by all these frameworks, and to set up a common API. This API makes it possible to modularise the writing of aspect weavers, and promotes the reuse of common building blocks between AOP frameworks. AOP Alliance brings together leaders of internationally recognized AOP projects such as JAC, Spring, and PROSE. The API is in beta stage but is already implemented in JAC and PROSE.
9. Dissemination
9.1. Scientific community animation
9.1.1. Examination Committees
- Jean-Marc Geib was in the examination committee of the following PhD thesis:
- M. Vadet, November 2004, University of Lille (adviser)
- E. Renaux, December 2004, University of Lille (adviser)
- D. Touzet, March 2004, University of Rennes (referee)
- C. Nebut, November 2004, University of Rennes (referee)
- H. Zheng, November 2004, University of Lille (chair)
- M. Figeac, December 2004, University of Lille (chair)
- K. Drira (HDR), December 2004, University of Toulouse (referee)
- Laurence Duchien was in the examination committee of the following PhD thesis:
- A.-T. Le, January 2004, University of Grenoble (referee)
- K. Macedo, March 2004, University of Rennes (referee)
- D. Fauthoux, June 2004, University of Toulouse (referee)
- L. Quintian, July 2004, University of Nice (referee)
- R. Lenglet, November 2004, University of Grenoble (referee)
- O. Nano, December 2004, University of Nice (referee)
- Philippe Merle was in the examination committee of the following PhD thesis:
- M. Vadet, November 2004, University of Lille (co-adviser)
- H Cervantes, March 2004, University of Grenoble (referee)
- A. Ribes, December 2004, University of Rennes (referee)
- Olivier Caron was in the examination committee of the following PhD thesis:
- E. Renaux, December 2004, University of Lille (co-adviser)
9.1.2. Journals, Conferences, Workshop
- **Jean-Marc Geib** has been a member of the following programme committees:
- DECOR 2004, 1st Francophone conference on software Deployment and (Re)configuration, October 2004, Grenoble
- NOTERE 2004, Les Nouvelles technologies de la répartition, June 2004, Sadia, Morocco
- **Laurence Duchien** has been a member of the following programme committees:
- JC 2004, Journées composants, March 2004, Lille
- JFPLA 2004, Journées Francophones sur le développement de logiciels par aspects, September 2004, Paris
- Workshop ECOOP Abstract Communications on Distributed Systems, Oslo, June 2004
- ADVICE Journal
- **Philippe Merle** has been a member of the following programme committees:
- DECOR 2004, 1st Francophone conference on software Deployment and (Re)configuration, October 2004, Grenoble
- JC 2004, Journées Composants, March 2004, Lille
- IJCA, special issue, June 2004
- **Bernard Carré** has been a member of the following programme committees:
- **Lionel Seinturier** has been a member of the following programme committees:
- JC 2004, Journées Composants, March 2004, Lille
- JFPLA 2004, Journées Francophones sur le développement de logiciels par aspect, September 2004, Paris
- DECOR 2004, 1st Francophone conference on software Deployment and (Re)configuration, October 2004, Grenoble
- ADVICE Journal
- **Renaud Pawlak** has been a member of the following programme committees:
- Workshop ACP4IS, AOSD Conference, March 2004, Lancaster, UK
- ADVICE Journal
9.1.3. Miscellaneous
- The Jacquard team organized the LMO Conference and the JC Workshop in Lille (15-18 March 2004).
- Bernard Carré and Jérome Euzenat (INRIA Rhône-Alpes) are co-editors of a special volume of the journal 'L'Objet' (Vol 10/2-3, 2004).
- Renaud Pawlak gave an invited seminar on recombining programming at the Demeter research unit of Northeastern University, Boston, US.
9.2. Teaching
Jean-Marc Geib teaches Object Oriented Design and Programming and Distributed Application Design in L3 and M1 at USTL, UFR IEEA.
Laurence Duchien teaches Distributed Applications Design - Master Professionnel Sciences et Technologies Mention Informatique - M1 and Master Professionnel Sciences et Technologies Mention Informatique - M2 - Spécialité IAGL et TIIR at USTL, UFR IEEA. She is in charge of the Master Professionnel Sciences et Technologies Mention Informatique - M2 - Spécialité IAGL at USTL, UFR IEEA.
Raphaël Marvie teaches Object Oriented Design, Distributed System Design and C++ programming in Master GMI, Distributed System Design in Master MIAGE-FC, and Advanced Technologies for Distribution in Master Pro IPI NT at USTL, UFR IEEA.
Anne-Françoise Le Meur teaches Databases and the Internet, Design of distributed application, and C programming at USTL, UFR IEEA.
Bernard Carré teaches OO design and programming at Polytech'Lille (USTL Engineer school). He is in charge of the Computer Sciences and Statistics Department of Polytech'Lille.
At Polytech'Lille Engineering school, Olivier Caron is in charge of the following modules: Data Bases and Distributed Software Components. He also teaches Object-Oriented Programming and Operating Systems.
Gilles Vanwormhoudt teaches Algorithms and Programming in C (1st year of the engineering program), Design and Programming of Distributed Applications (3rd year) and Technologies for Web Development (5th year) at ENIC Telecom Lille 1. He is in charge of the "Multimedia computing and engineering" specialization (5th year) and of the Project-Conferences-Report module for this specialization at ENIC Telecom Lille 1.
Jacquard team members participate in several Research Masters in Computer Science (University of Lille, University of Paris 6, University of Montpellier, University of Valenciennes, and RPI University, Hartford, US), teaching the CCM, MDE and AOP.
9.3. Miscellaneous
Jean-Marc Geib is the head of the LIFL laboratory (CNRS 8022), the head of the CIM axis, a member of the UFR board at USTL, and a member of the CSE 27th section of the Universities of Lille 1, Lille 2 and Littoral. He is a member of the management committee TACT (Technologies Avancées pour la Communication et les Transports) of the Etat-Region Contract Nord-Pas-de-Calais and the coordinator of its communication program since 2004. He is a cofounder of IRCICA (Institut de Recherche sur les Composants matériels et logiciels pour l'information et la communication avancée), a research federation between LIFL (computer science), IEMN (electronics) and PhLAM (photonics). He is in charge of the RTP Distributed Systems of the CNRS STIC department.
Laurence Duchien is a member of the UFR board at USTL, a member of the LIFL scientific board, and a member of the CSE 27th section of the University of Lille 1, CNAM and Paris 6. She is a member of the scientific committee of the national ACI Security.
10. Bibliography
Understanding Parallel Applications
Xiaoxu Guan
High Performance Computing, LSU
May 31, 2018
Overview
- Parallel applications and programming on **shared**-memory and **distributed**-memory machines
- We follow the **parallelism** methodology from **top** to **bottom**
- Heterogeneous and homogeneous systems
- Models of parallel computing
- **Multi-node** level: MPI
- **Single-node** level: MPI/OpenMP
- **Hybrid** model: MPI + OpenMP
- **Compute-bound** and **memory-bound** applications
- **Socket** and **Processor** level: NUMA and **affinity**
- **Core** level: SIMD (pipeline and vectorization)
- **Summary**
Parallel computing
• Parallel computing means a lot;
• It almost covers everything in the HPC community;
• Many programming languages support parallel computing:
◦ Fortran, C, and C++;
◦ Matlab, Mathematica;
◦ Python, R, Java, Hadoop, . . .;
◦ Parallel tools: GNU parallel, parallel shells, . . .;
• They support parallel computing at very different levels through a variety of mechanisms;
• From embarrassingly parallel computing to parallel computing that needs extensive data communication;
• Beyond the language level: parallel filesystems: Lustre, and the fabric network: Ethernet and Infiniband;
Parallel computing
- Why parallel or concurrency computing?
- Goes beyond the single-core capability (memory and flops per unit time), and therefore increases performance;
- Reduces wall-clock time, and saves energy;
- Finishes those impossible tasks in my lifetime;
- Handles larger and larger-scale problems;
Consider a production MPI job:
(a) Runs on 2,500 CPU cores
(b) Finishes in $\approx 40$ hours (wall-clock time)
(c) Charged CPU hours are $2,500 \times 40 = 0.1$ M SUs
(d) It is about $100,000/24/365 \approx 11$ years on 1 CPU core!
- Is parallel computing really necessary?
Parallel computing
• Why parallel or concurrency computing?
• Goes beyond the single-core capability (memory and flops per unit time), and therefore increases performance;
• Reduces wall-clock time, and saves energy;
• Finishes those impossible tasks in my lifetime;
• Handles larger and larger-scale problems;
• There is no free lunch, however!
• Different techniques other than serial coding are needed;
• Effective parallel algorithms in terms of performance;
• Increasing flops per unit time or throughput is one of our endless goals in the HPC community;
• Think in parallel;
• Start parallel programming as soon as possible;
Parallel computing
- **Our goal** here is to “Understanding Parallel Applications”;
- There is no simple and easy way to master parallel computing;
- Evolving software stack and architecture complexity;
- HPC is one of the essential tools in my research;
- And **my goal** is to advance scientific progress;
- I’m not the code developer, **what can I do?**
- I have been a programmer for years, **is there anything else I should be concerned?**
- Besides, “Understanding Parallel Applications” requires basic knowledge of the **hardware**;
- Provide you a concrete introduction to **parallel computing** and **parallel architecture**;
- Focus on **performance** and **efficiency** analysis;
Parallel computing
- Parallel computing can be viewed from different ways;
- Flynn’s taxonomy: *execution* models to achieve parallelism
- SISD: single instruction, single data;
- MISD: multiple instruction, single data;
- SIMD: single instruction, multiple data;
- MIMD: multiple instructions, multiple data (or tasks);
- SPMD: single program, multiple data;
- Memory access and *programming* model:
- *Shared memory*: a set of cores that can access the common and shared physical memory space;
- *Distributed memory*: No direct and remote access to the memory assigned to other processes;
- *Hybrid*: they are not exclusive;
Parallel computing
- Parallel computing can be viewed from different ways;
- Flynn’s taxonomy: execution models to achieve parallelism
- SISD: single instruction, single data;
- MISD: multiple instruction, single data;
- SIMD: single instruction, multiple data;
- MIMD: multiple instructions, multiple data (or tasks);
- SPMD: single program, multiple data;
- Model of workload breakup: data and task parallelism (both are sketched in C after this list)
```
for i from imin to imax, do
    c(i) = a(i) + b(i)
end do
```
Data parallelism
```
{ for c(i) = a(i) + b(i) }
{ for d(j) = sin(a(j)) }
```
Task parallelism
- All the levels of parallelism found on a production cluster;
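As a minimal sketch in C (assuming a compiler with OpenMP support; the array names simply mirror the pseudo-code above, they are not from the workshop material):

```c
#include <math.h>
#define N 1000000
double a[N], b[N], c[N], d[N];

void data_parallel(void) {
    /* Data parallelism: the same operation, different chunks of the
     * index range handled by different threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}

void task_parallel(void) {
    /* Task parallelism: two independent loops executed concurrently. */
    #pragma omp parallel sections
    {
        #pragma omp section
        for (int i = 0; i < N; i++) c[i] = a[i] + b[i];
        #pragma omp section
        for (int j = 0; j < N; j++) d[j] = sin(a[j]);
    }
}
```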
Parallel computing
- SISD (Single Instruction, Single Data)
- SIMD (Single Instruction, Multiple Data)
- MISD (Multiple Instruction, Single Data)
- MIMD (Multiple Instruction, Multiple Data)
Multi-node level parallelism
MPI applications on distributed-memory systems
Multi-node level parallelism
- On a **distributed-memory** system:
- Each node has its own **local** memory;
- There is **no** physically **global** memory;
- **Message passing**: send/receive message through network;
- **MPI** (Message Passing Interface) is a default programming model on DM systems in HPC user community;
- **MPI-1** started in 1992. The current standard is **MPI 3.x**.
- MPI standard is **not** an IEEE or ISO standard, but a **de facto** standard in HPC world;
- Don’t be confused between MPI implementations and MPI standard;
- **MPICH, MVAPICH2, OpenMPI, Intel MPI, ...**;
Multi-node level parallelism
- Requirements for parallel computing;
- How does MPI meet these requirements?
- **Specify parallel execution** – single program on multiple data (SPMD) and tasks;
- **Data communication** – two- and one-side communication (explicit or implicit message passing);
- **Synchronization** – synchronization functions;
1. Expose and then express parallelism;
2. Must exactly know the data that need to be transferred;
3. Management of data transfer;
4. Manually partition and decompose;
5. Difficult to program and debug (deadlocks, ...);
Multi-node level parallelism
- Requirements for parallel computing;
- How does MPI meet these requirements?
- Specify parallel execution – single program on multiple data (SPMD) and tasks;
- Data communication – two- and one-side communication (explicit or implicit message passing);
- Synchronization – synchronization functions;
(6) SPMD: All processes (MPI tasks) run the same program. They can store different data in variables with the same names, because each process has its own, distinct memory space (see the sketch below);
(7) Less data communication, more computation;
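A minimal sketch of this SPMD behaviour (the block partitioning and the names `istart`/`iend` are illustrative assumptions, not taken from any particular code):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long N = 1000;                        /* total work items (assumption) */
    long chunk = N / size, rem = N % size;
    /* Every rank executes the same statements, but istart/iend end up
     * holding different values because each process has its own memory. */
    long istart = rank * chunk + (rank < rem ? rank : rem) + 1;
    long iend   = istart + chunk - 1 + (rank < rem ? 1 : 0);
    printf("rank %d of %d owns [%ld, %ld]\n", rank, size, istart, iend);

    MPI_Finalize();
    return 0;
}
```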
MPI collective communication
- **Collective** communications: synchronization, data movement, and collective computation (a minimal example follows below);
- **Broadcast**: rank 0 sends the same buffer to every rank;
- **Scatter**: rank 0 hands a distinct piece of the data to each rank;
- **Gather**: the pieces held by all ranks are collected on rank 0;
- **Reduction**: the contributions of all ranks are combined (e.g., summed) on one rank;
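As a hedged sketch of two of these collectives in C (`MPI_Bcast` for data movement and `MPI_Reduce` for collective computation; the values computed here are purely illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double x = 0.0;
    if (rank == 0) x = 1.5;                           /* only the root has it */
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);  /* now every rank does  */

    double local = x * rank, total = 0.0;             /* rank-local value     */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum over %d ranks = %g\n", size, total);

    MPI_Finalize();
    return 0;
}
```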
MPI examples on multiple nodes
- Use Intel **MPI** (*impi*), MVAPICH2, and OpenMPI on Mike-II;
- **impi**: better performance on Intel architecture;
- It also supports diagnostic tools to report **MPI** cost;
**Example 1**: the open source *miniFE* code
1. It is a part of the *miniapps* package;
2. It is written in **C++**;
3. It mimics the unstructured finite element generation, assembly, and solution of a 3D physical domain;
4. It can be thought of as the **kernel** part of many science and engineering problems;
5. Output the performance in **FLOPS**, **walltime**, and **MFLOP/s**;
MPI examples on multiple nodes
- **Benchmark** your parallel applications;
- The baseline info is important for further **tuning**;
- It also allows us to determine the **optimal** settings to run the application more efficiently;
- Have a better understanding of your **target machine**;
- Set up a **non-trivial** case (or maybe an artificial test case, if multiple production runs are not feasible);
- Know how large your workload is in the test case and make it **measurable**;
- Set up the correct **MPI** run-time environment, if necessary;
- Be aware of the issues with **high load**, memory usage, and **intensive swapping**;
- Any computational “experiments” should be **reproducible**;
- Tune only **one** of the multiple control knobs at a given time;
MPI examples on multiple nodes
- Load **Intel MPI (+impi-4.1.3.048-Intel-13.0.0)**;
- Run the pre-built **miniFE.x** on 1 or 2 nodes;

- The base info with 1 MPI task is **not** always available;
- On 2 nodes, the max FP perf. is 23.8 GFLOP/s (3.6%);
- Is it a compute-bound or memory-bound application?
MPI examples on multiple nodes
- Load Intel MPI (+impi-4.1.3.048-Intel-13.0.0);
- Run the pre-built miniFE.x on 2 nodes;
```bash
$ mpirun -np 32 ./miniFE.x nx=500
```
```
Starting CG solver ...
Initial Residual = 501.001
...
Final Resid Norm: 0.00397271
```
- Check the yaml log:
```yaml
# 32 cores on Mike-II regular nodes.
Total:
Total CG Time: 77.6081
Total CG Flops: 1.68522e+12
Total CG Mflops: 21714.4
Time per iteration: 0.38804
Total Program Time: 110.087
```
MPI examples on multiple nodes
- Load MVAPICH2 (+mvapich2-1.9-Intel-13.0.0);
- Run the pre-built miniFE.x on 2 nodes;
```bash
$ mpirun -np 32 ./miniFE.x nx=500
```
```
Starting CG solver ...
Initial Residual = 501.001
...
Final Resid Norm: 0.00393607
```
- Check the yaml log:
```yaml
# 32 cores on Mike-II regular nodes.
Total:
Total CG Time: 79.0407
Total CG Flops: 1.68522e+12
Total CG Mflops: 21320.9
Time per iteration: 0.395203
Total Program Time: 104.769
```
MPI examples on multiple nodes
- Load OpenMPI-1.6.2 (+openmpi-1.6.2-Intel-13.0.0);
- Run the pre-built miniFE.x on 2 nodes;
```bash
$ mpirun -np 32 ./miniFE.x nx=500
```
```
Starting CG solver ...      mpicxx/mpicc/mpif90
Initial Residual = 501.001
...
Final Resid Norm: 0.00393607
```
- Check the yaml log:
```plaintext
# 32 cores on 2 Mike-II regular nodes.
Total:
Total CG Time: 221.005
Total CG Flops: 1.68522e+12
Total CG Mflops: 7625.23
Time per iteration: 1.10503
Total Program Time: 324.937
```
MPI examples on multiple nodes
- The same performance with Intel MPI and MVAPICH2;
- OpenMPI-1.6.2 seems much slower than the ones above;
1. **High** average load > 100 per node;
2. **Control** the number of OpenMP threads;
```bash
$ OMP_NUM_THREADS=1 \
mpirun -np 32 ./miniFE.x nx=500
```
```
# 32 cores on 2 Mike-II regular nodes.
Total:
Total CG Time: 104.758
Total CG Flops: 1.68522e+12
Total CG Mflops: 16086.7
Time per iteration: 0.523792
Total Program Time: 182.978
```
(3) After that, the performance difference is $\sim 1.33 \times$;
MPI examples on multiple nodes
- Use OpenMPI-1.6.2, but reduce MPI tasks to 23;
```bash
$ OMP_NUM_THREADS=1 \
  mpirun -np 23 ./miniFE.x nx=500
```
```
# 23 cores on 2 Mike-II regular nodes.
Total:
Total CG Time: 2194.6
Total CG Flops: 1.68522e+12
Total CG Mflops: 767.89
Time per iteration: 10.973
Total Program Time: 2365.55
```
- That's too bad: 20× slower! What happened with -np 23?
- Memory footprint is ~46 GB with nx=500;
- Load imbalance: (1) wrt process or MPI task, (2) wrt node;
- Intense swapping and large swap space in use (≫10 GB);
**MPI examples on multiple nodes**
- Use OpenMPI-1.6.2, but reduce MPI tasks to 23;
- There are 16 MPI tasks on the 1st node, while the remaining 7 tasks are on the 2nd node: **load imbalance wrt nodes**;
- Swapping mechanism was triggered differently;
```bash
$ OMP_NUM_THREADS=1 \
  mpirun -np 23 -npernode 12 ./miniFE.x nx=500
```
```
# 23 cores on 2 Mike-II regular nodes.
# 12 on 1st node, 11 on 2nd node.
Total:
Total CG Time: 104.151
Total CG Flops: 1.68522e+12
Total CG Mflops: 16180.6
Time per iteration: 0.520753
Total Program Time: 179.608
```
- Note that it is fine to have a little swapping (∼20 MB here);
Latency and throughput matter
- Latency grows by orders of magnitude as data sits farther from the core: L1 and L2/L3 caches (nanoseconds), main memory (~100 ns), Infiniband (microseconds), Gigabit Ethernet (tens of microseconds), hard drives (milliseconds);
- Throughput shrinks in the same direction: on the order of 100 GB/s out of the caches, tens of GB/s from main memory, a few GB/s over Infiniband, and roughly 100 MB/s from Gigabit Ethernet or a spinning hard drive;

[Figure: latency and throughput of the memory, network, and storage hierarchy]
MPI examples on multiple nodes
- No need to specify a machine file explicitly in the 3 cases;
- Try OpenMPI-1.6.5 (+openmpi-1.6.5-Intel-13.0.0);
```bash
$ OMP_NUM_THREADS=1 \
  mpirun -np 32 ./miniFE.x nx=500
```
```
# 32 cores on 2 Mike-II regular nodes.
Total:
Total CG Time: >> 74 minutes
Total CG Flops: 1.68522e+12
Total CG Mflops: ???
Time per iteration: ???
Total Program Time: >> 74 minutes
```
- Too bad, again: all tasks piled up on 1st node and 2nd is idle;
- Load imbalance wrt node;
- Intense swapping and large swap space in use (≫ 23 GB);
MPI examples on multiple nodes
- Use `OpenMPI-1.6.5 (+openmpi-1.6.5-Intel-13.0.0)`;
- Specify a machine file explicitly;
```
$ OMP_NUM_THREADS=1 \\
mpirun -np 32 -machinefile $PBS_NODEFILE \\
./miniFE.x nx=500
```
```
# 32 cores on 2 Mike-II regular nodes.
Total:
Total CG Time: 213.942
Total CG Flops: 1.68522e+12
Total CG Mflops: 7876.99
Time per iteration: 1.06971
Total Program Time: 280.768
```
- After that, the MPI tasks were properly mapped on 2 nodes;
- Still $1.6 \times$ slower than `OpenMPI-1.6.2-Intel-13.0.0` (Total CG Mflops: 12659.4);
MPI examples on multiple nodes
- Load Intel MPI (+impi-4.1.3.048-Intel-13.0.0);
- Diagnostic facilities (the log stats.ipm);
```
$ I_MPI_STATS=ipm mpirun -np 32 ./miniFE.x nx=500
```
| call | time | calls | %mpi | %wall |
|------|------|-------|------|-------|
| MPI_Allreduce | 324.365 | 13024 | 77.06 | 9.13 |
| MPI_Send | 38.2421 | 75072 | 9.09 | 1.08 |
| MPI_Init | 29.3108 | 32 | 6.96 | 0.83 |
| MPI_Wait | 28.3825 | 75072 | 6.74 | 0.80 |
| MPI_Bcast | 0.363768 | 64 | 0.09 | 0.01 |
| MPI_Allgatherv | 0.163873 | 96 | 0.04 | 0.00 |
| MPI_Irecv | 0.0918336 | 75072 | 0.02 | 0.00 |
| MPI_Comm_size | 0.0051572 | 6720 | 0.00 | 0.00 |
| MPI_TOTAL | 420.925 | 245536 | 100.00 | 11.85 |
- Overhead of MPI communication;
MPI examples on multiple nodes
- Number of MPI tasks needs to match the nodes’ capacity;
- Pinning MPI tasks (ranks) to CPU cores;
- Properly distribute MPI tasks on multiple nodes;
- Run-time control:
- **Intel MPI:**
  - `-hostfile <filename>`: specifies the host names on which the MPI job runs (same as `-f`);
  - `-ppn <number>`: specifies the number of tasks per node;
- **MVAPICH2:**
  - `-hostfile <filename>` (`-f`): same as impi;
  - `-ppn <number>`: same as impi;
- **Open MPI:**
  - `-hostfile <filename>` (`-machinefile`): see the above;
  - `-npernode <number>`: specifies the number of tasks per node;
  - `-npersocket <number>`: specifies the number of tasks per socket;
Hybrid model
distributed-memory plus shared-memory systems
Hybrid model
- Apart from **inter-node** MPI communication, there is no essential difference between single-node and multi-node MPI jobs;
- Faster **intra-node** data communication within a node;
- More examples on shared-memory systems;
- Here we focus on **MPI+OpenMP**:
**Example 2**: calculation of $\pi$
- **MPI** takes care of **inter-node** communication, while **intra-node** parallelism is achieved by **OpenMP**;
- **MPI**: coarse-grained parl.; **OpenMP**: fine-grained parl.;
- Each MPI process can spawn multiple threads;
- May reduce the memory usage on node level;
- Good for accelerators or coprocessors;
- It is hard to outperform a pure MPI job;
Hybrid model
\[ \pi = \int_0^1 \frac{4}{1 + x^2} \, dx \]
\[ \pi \simeq \frac{1}{N} \sum_{i=1}^{N} \frac{4}{1 + x_i^2}, \quad x_i = \frac{1}{N} \left( i - \frac{1}{2} \right), \quad i = 1, \ldots, N \]
- **Pure MPI:** the sample points $x_1, x_2, x_3, \ldots$ are distributed block-wise across MPI ranks $0, 1, \ldots, n-1$, and each rank accumulates a partial sum;
Hybrid model
\[ \pi = \int_{0}^{1} \frac{4}{1 + x^2} dx \]
\[ \pi \simeq \frac{1}{N} \sum_{i=1}^{N} \frac{4}{1 + x_i^2}, \quad x_i = \frac{1}{N} \left( i - \frac{1}{2} \right), \quad i = 1, \ldots, N \]
- **Pure MPI:** sample points $x_1, x_2, \ldots$ are distributed across MPI ranks $0$ to $n-1$; `MPI_REDUCE(...)` combines the partial sums into the final result.
- **Hybrid MPI+OpenMP:** each MPI rank spawns OpenMP threads over its block of sample points; an OpenMP reduction forms the rank-local sum, and `MPI_REDUCE(...)` then combines these into the final result.
Hybrid model
```fortran
do i = istart, iend   ! same var., diff. values
   xi = h * (dble(i) - 0.5_idp)
   tmp = 1.0_idp + xi * xi
   fsum = fsum + 1.0_idp / tmp
end do
fsum = 4.0_idp * h * fsum
call MPI_REDUCE(fsum, pi, 1, ..., &
                MPI_SUM, 0, MPI_COMM_WORLD, ierr)
```
- **SPMD**: Each MPI task runs the **same** program and holds the **same** variable names;
- Due to the **distinct** memory space, the **same** variable (**istart** and **iend**) may hold **different** values;
Hybrid model
```fortran
!$omp parallel do private(i, xi, tmp) &
!$omp reduction(+:fsum)
do i = istart, iend   ! same var., diff. values
   xi = h * (dble(i) - 0.5_idp)
   tmp = 1.0_idp + xi * xi
   fsum = fsum + 1.0_idp / tmp
end do
fsum = 4.0_idp * h * fsum
call MPI_REDUCE(fsum, pi, 1, ..., &
                MPI_SUM, 0, MPI_COMM_WORLD, ierr)
```
- Add the OpenMP directive/pragma to parallelize the loop;
- Make the partial sum (fsum) a reduction variable with plus operation;
- The MPI_REDUCE is the same as before at the outer level;
Hybrid model
- Hybrid MPI+OpenMP:
```fortran
!$omp parallel do private(i, xi, tmp) &
!$omp reduction(+:fsum)
do i = istart, iend   ! same var., diff. values
   xi = h * (dble(i) - 0.5_idp)
   ...
```
- On Mike-II using impi-4.1.3.048, $N = 2 \times 10^9$:
| No. of MPI tasks | No. of threads | Wall time (sec) |
|---|---|---|
| 16 | 1 | 0.45986 |
| 8 | 2 | 0.46088 |
| 4 | 4 | 0.46389 |
| 2 | 8 | 0.46021 |
| 1 | 16 | 0.45919 |
Hybrid model
• How many OpenMP threads and MPI tasks are needed?
• What happens if `OMP_NUM_THREADS=16 mpirun -np 16` ...
```
top - ... 1 user, load average: 186.71, 84.11, 32.83
Tasks: 813 total, 88 running, 725 sleeping, 0 stopped, 0 zombie
Cpu(s): 95.2%us, 2.1%sy, 0.0%ni, 2.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32815036k total, 16993228k used, 15821808k free, 48676k buffers
Swap: 100663292k total, 45556k used, 100617736k free, 13629192k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
64761 xiaoxu 20 0 203m 3472 2868 R 1.3 0.0 0:00.04 mpi_openmp_pi_f
64762 xiaoxu 20 0 203m 3388 2800 R 1.3 0.0 0:00.04 mpi_openmp_pi_f
64763 xiaoxu 20 0 203m 3392 2804 R 1.3 0.0 0:00.04 mpi_openmp_pi_f
64764 xiaoxu 20 0 203m 5436 2804 R 1.3 0.0 0:00.04 mpi_openmp_pi_f
```
• Again, **high load** issues per node, which we should prevent;
• Don’t oversubscribe the node resources;
• **MPI+OpenMP** turns out to be **MPI × OpenMP**;
Compute-bound and memory-bound applications
Where are the bottlenecks?
- A lot of factors can slow down your applications;
- In terms of execution units and a variety of bandwidths, we have:
1. Compute-bound (aka. “CPU”-bound);
2. Cache-bound;
3. Memory-bound;
4. I/O-bound;
- For a given application, how do we know it is compute-bound or memory-bound?
- Why do we need to know this and what is the benefit of it?
1. you’re the developer of the application;
2. you’re the user of the application;
Where are the bottlenecks?
- A lot of factors can **slow** down your applications;
- Parallel algorithms, bandwidths, overhead, . . . ;
- Once a datum is fetched from the **memory**, on average how many **arithmetic** operations do we need to perform on that datum to keep the execution units busy?
**FP Performance** (GFLOP/s) = *Memory BW* (GB/s) × *Operation Intensity* (FLOP/byte), i.e. \( y_{\text{FP perf.}} = k_{\text{BW}} \times \text{OI} \)
- However, the **max** performance **cannot** go beyond the theoretical **peak** performance;
Where are the bottlenecks?
- A lot of factors can **slow** down your applications;
- Parallel algorithms, bandwidths, overhead, . . .
---
**Roofline model**
Where are the bottlenecks?
- A lot of factors can **slow** down your applications;
- Parallel algorithms, bandwidths, overhead, ...;
---
[Roofline plot: double-precision performance (GFLOP/s) vs. operation intensity (FLOP/byte)]
**On Mike-II**
- **Memory bound:** $OI \ll 3.2$ FLOP/byte
- **Compute bound:** $OI \gg 3.2$ FLOP/byte
---
Peak perf. of **332.8** GFLOP/s
Bandwidth of **106.6 GB/s**
$OI \approx 3.2$ FLOP/byte
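A small numerical sketch of this cap, plugging in the Mike-II numbers quoted above (the helper function is illustrative, not part of any library):

```c
#include <stdio.h>

/* Roofline: attainable performance is the smaller of the memory-bound
 * ceiling (bandwidth x operational intensity) and the peak FP rate. */
static double roofline(double peak_gflops, double bw_gbs, double oi) {
    double mem_ceiling = bw_gbs * oi;
    return (mem_ceiling < peak_gflops) ? mem_ceiling : peak_gflops;
}

int main(void) {
    double peak = 332.8;   /* GFLOP/s, Mike-II node (from the slide) */
    double bw   = 106.6;   /* GB/s memory bandwidth (from the slide) */
    printf("ridge point: OI = %.1f FLOP/byte\n", peak / bw);
    for (double oi = 0.5; oi <= 16.0; oi *= 2.0)
        printf("OI = %5.1f FLOP/byte -> %6.1f GFLOP/s\n",
               oi, roofline(peak, bw, oi));
    return 0;
}
```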
Where are the bottlenecks?
- A lot of factors can **slow** down your applications;
- Parallel algorithms, bandwidths, overhead, . . . ;
- On average, for each DP FP number an application needs at least **25 FLOPs** to be **compute bound**;
- What can we learn from the **roofline** model?
- It is **not uncommon** to see that there are many applications performing at a level of much less than **30 GFLOP/s (10%)**;
- These applications are typically **memory** bound;
- We need to **increase** the **OI**. per data fetching;
- **Reuse** the data in **caches** as much as possible;
- Use well developed and optimized libraries: **MKL** routines on Intel CPUs and **ACML** on AMD CPUs;
- Link your **top-level** applications to the optimized libraries;
Compute bound
- On SuperMIC (Ivy Bridge at 2.8 GHz), the theoretical peak performance is 22.4 GFLOP/s per core;
- Benchmark MKL DGEMM routine (matrix-matrix products);
```c
const int nsize = 10000;
const int iteration = 20;
// allocate the matrices.
// initialize the matrices.
for (k = 0; k < iteration; k++) {        // C = A x B
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                nsize, nsize, nsize,
                alpha, matrix_a, nsize,
                matrix_b, nsize,
                beta, matrix_c, nsize);
}
// 2*n^3 FLOPs per matrix-matrix product
perf = 2.0 * nsize * nsize * (double) nsize * (double) (iteration)
       / elapsed_time / 1.e+6;
```
**Compute bound**
- On SuperMIC (Ivy Bridge at 2.8 GHz), the theoretical peak performance is *22.4 GFLOP/s* per core;
- Benchmark **MKL DGEMM** routine (matrix-matrix products);

Compute bound
- How does the attainable performance improve with respect to the **matrix size**?
- How does the attainable performance improve with respect to the **thread count**?
- What happens around the matrix size of $1,000 \times 1,000$?
(matrix size $10^4 \times 10^4$)

| No. of threads | Attainable perf. (GFLOP/s) | Peak perf. (GFLOP/s) |
|---|---|---|
| 1 | 27.16 | 22.4 |
| 2 | 52.41 | 44.8 |
| 4 | 98.46 | 89.6 |
| 10 | 220.3 | 224.0 |
| 20 | 209.0 | 448.0 |
- Turbo **boost** mode at **higher** frequency;
Memory bound
- Does the **roofline** model tell us the whole story?
- The **MKL DGEMM** routine is **compute bound**;
- Consider the other scenario: what happens if my code does **not** have too many **FP** operations?
- We need a quantity like the **memory bandwidth** (MB/s or GB/s) to benchmark the code, instead of FLOP/s;
- Consider the out-of-place **matrix transposition**:
```fortran
do i = 1, nsize
   do j = 1, nsize
      matrix_out(i,j) = matrix_inp(j,i)
   end do
end do
```
- **Throughput** (GB/s) = \(2N^2/(2^{30}T_{walltime})\);
Memory bound
- Intel Xeon processors on SuperMIC, Mike-II, QB2, and Philip;
| Machine | CPU Family | CPU Freq. | LLC | DDR Freq. |
|---|---|---|---|---|
| SuperMIC | E5 v2 2680 | 2.8 GHz | 25 MB | 1866 MHz |
| SuperMIC† | E5 v4 2690 | 2.6 GHz | 35 MB | 2400 MHz |
| QB2 | E5 v2 2680 | 2.8 GHz | 25 MB | 1866 MHz |
| QB2† | E7 v2 4860 | 2.6 GHz | 30 MB | 1066 MHz |
| Mike-II | E5 v1 2670 | 2.6 GHz | 20 MB | 1600 MHz |
| Mike-II† | E7 4870 | 2.4 GHz | 30 MB | 1066 MHz |
| Philip | X5570 | 2.93 GHz | 8 MB | 1333 MHz |
† on SuperMIC’s and QB2’s bigmem nodes, or Mike-II’s bigmemtb nodes.
- Different Xeon processors on bigmem or bigmemtb nodes to support large memory;
Memory bound
- Matrix transposition: MKL routine mkl_domatcopy;
```c
for (k=0; k<iteration; k++)
mkl_domatcopy('R', 'T', nsize, nsize, \
alpha, matrix_a, nsize, matrix_b, nsize);
```
- Benchmark the throughput (GB/s): 10 threads with `numactl`
| Machine | N = 4,000 | N = 20,000 | N = 40,000 |
|---|---|---|---|
| SuperMIC | 23.93 | 21.22 | 18.68 |
| SuperMIC† bigmem | 17.96 | 18.01 | 18.08 |
| QB2† k40 | 20.96 | 18.05 | 15.45 |
\(^\dagger\)k40 configured at 1600 MHz.
- Both memory bandwidth and latency contribute to the throughput;
Memory and compute bound
- **Memory-bound** by *nature*: increase throughput;
- **Memory-bound** due to *implementation*:
1. Optimize the algorithm and code to *reuse* the data in caches: *spatial* and *temporal* reuse (see the tiling sketch after this list);
2. It is possible to convert memory-bound to compute-bound code;
3. Mixed heavy *arithmetic* parts and *non-FP* operations;
4. Why most applications fall in the *memory-bound* category?
5. Know memory architecture better;
6. Changing *compiler* may be helpful;
7. Prior to optimizing the “*hotspot*”, identify if it is *compute-bound* or *memory-bound*;
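As a sketch of item 1, here is a cache-blocked (tiled) version of the out-of-place transpose discussed earlier; the tile size `BLK = 64` is an assumption that would normally be tuned to the cache sizes:

```c
#include <stdlib.h>

#define BLK 64   /* tile edge; an assumption, tune per cache level */

/* Work tile by tile so the touched parts of 'in' and 'out' stay
 * resident in cache (spatial and temporal reuse). */
void transpose_tiled(long n, const double *in, double *out) {
    for (long ii = 0; ii < n; ii += BLK)
        for (long jj = 0; jj < n; jj += BLK)
            for (long i = ii; i < ii + BLK && i < n; i++)
                for (long j = jj; j < jj + BLK && j < n; j++)
                    out[j * n + i] = in[i * n + j];
}

int main(void) {
    long n = 4000;
    double *a = malloc(n * n * sizeof(double));
    double *b = malloc(n * n * sizeof(double));
    for (long k = 0; k < n * n; k++) a[k] = (double)k;
    transpose_tiled(n, a, b);
    free(a); free(b);
    return 0;
}
```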
Socket and processor level
within a socket or a processor
Socket and processor level
- Within a node, several processors can be connected together to form a **multi-processor** system;
- This is called a **socket**: two-socket or four-socket systems;
- The Intel Xeon processors **Sandy Bridge** (v1), **Ivy Bridge** (v2), and **Broadwell** (v4) on SuperMIC, Mike-II, and QB2;
- Connection through the Intel **QPI** (QuickPath Interconnect), while AMD uses **HyperTransport** technology;
- It can be thought of as a **point-to-point** interconnection between multiple processors;
- Not only implemented as **links** between processors, but also used to connect a processor and the **I/O hub**;
- How does this affect **parallelism** at the application or code execution level?
Socket and processor level
- The **NUMA** (non-uniform memory access) architecture;
- The key point in **NUMA** is about **shared memory**;
- Furthermore, it has been implemented as **ccNUMA** (cache coherent NUMA);
Socket and processor level
- The **NUMA** (non-uniform memory access) architecture;
- The key point in **NUMA** is about *shared* memory;
- Furthermore, it has been implemented as **ccNUMA** (cache coherent NUMA);

[Figure: two Intel Xeon E5 sockets (NUMA nodes 0 and 1), each with its own memory controller and local DDR3 RAM (57.6 GB/s per socket), connected by QPI links (32 GB/s)]
Socket and processor level
- Each processor is connected to its own RAM via the memory controller;
- Due to the QPI links, CPU cores in a processor (node 0) can access the RAM connected to the other processor (node 1);
![Diagram of QPI links and memory configuration]
Socket and processor level
- Why the **NUMA** matters?
- Focus on how an array was allocated and **initialized** on shared-memory system;
- **“First Touch”** policy – memory **binding** or **affinity** (see the sketch at the end of this slide);
- Bandwidth differences in **local** and **remote** memory access;
- It may have significant impact on code performance;
- If it plays a role in application’s **performance**, are there any ways to **control** it?
- Linux provides a wonderful tool `numactl` that allows us to
(1) run processes with a memory **placement policy** or specified scheduling;
(2) set the processor **affinity** and memory **affinity** of a process;
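A minimal sketch of the first-touch idea with OpenMP in C (the array size and the static schedule are illustrative assumptions):

```c
#include <stdlib.h>

/* Pages are physically placed on the NUMA node of the thread that first
 * writes ("touches") them, so initialize data with the same thread/loop
 * distribution that the compute loops will use later. */
void first_touch_init(double *a, long n) {
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < n; i++)
        a[i] = 0.0;                /* each thread touches its own pages */
}

int main(void) {
    long n = 1L << 26;             /* ~0.5 GB of doubles (assumption)   */
    double *a = malloc(n * sizeof(double));   /* virtual pages only     */
    first_touch_init(a, n);
    /* ... subsequent parallel loops should reuse schedule(static) ...  */
    free(a);
    return 0;
}
```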
Socket and processor level
- With `numactl` we can
1. run processes with a memory `placement policy` or specified scheduling;
2. set the processor `affinity` and memory `affinity` of a process;
```bash
# Lists the available cores and NUMA nodes: same as -H.
$ numactl --hardware
# Ensures memory is allocated only on specific nodes.
$ numactl --membind=<nodes> <command>
# Ensures the specified command and its child processes
# execute only on the CPUs of the specified node.
$ numactl --cpunodebind=<nodes> <command>
# Ensures the specified command and its child processes
# execute only on the specified cores.
$ numactl --physcpubind=<cpus> <command>
```
Socket and processor level
- Memory latency between **UMA** cores and **NUMA** cores;
- On **SuperMIC** 2-socket regular node and 2-socket bigmem node:
```
Measuring idle latencies (in ns)...
        Numa node
Numa node      0       1      # DDR3 1866 MHz
    0        72.3   123.0     # SuperMIC regular node
    1       123.5    72.9     # NUMA/UMA = 1.7

Bandwidths are in GB/sec
Using Read-only traffic type
        Numa node
Numa node      0       1      # DDR3 1866 MHz
    0       55.86   25.43     # SuperMIC regular node
    1       25.48   50.23     # UMA/NUMA = 2.2
```
Socket and processor level
- Memory latency between UMA cores and NUMA cores;
- On SuperMIC 2-socket regular node and 2-socket bigmem node:
```
1 Measuring idle latencies (in ns)...
2 Numa node
3 Numa node 0 1 # DDR4 2400 MHz
4 0 87.2 128.6 # SuperMIC bigmem node
5 1 129.8 87.9 # NUMA/UMA = 1.5
1 Bandwidths are in GB/sec
2 Using Read-only traffic type
3 Numa node
4 Numa node 0 1 # DDR4 2400 MHz
5 0 67.78 23.49 # SuperMIC bigmem node
6 1 23.41 67.94 # UMA/NUMA = 2.9
```
Socket and processor level
- Memory latency between **UMA** cores and **NUMA** cores;
- On **QB2** the 2-socket regular node and 4-socket bigmem node:
```
Measuring idle latencies (in ns)...
        Numa node
Numa node      0       1      # DDR3 1866/1600 MHz
    0        71.4   122.9     # QB2 regular node
    1       123.6    71.5     # NUMA/UMA = 1.7

Bandwidths are in GB/sec
Using Read-only traffic type
        Numa node
Numa node      0       1      # DDR3 1866/1600 MHz
    0       53.46   25.02     # QB2 regular node
    1       25.03   46.82     # UMA/NUMA = 2.2
```
Socket and processor level
- Memory latency and bandwidth on the QB2 4-socket bigmem node:
```
Measuring idle latencies (in ns)...          # QB2 bigmem node, NUMA/UMA = 1.6
Numa node       0       1       2       3
    0       129.4   202.1   192.0   200.9
    1       202.2   130.4   199.6   194.2
    2       196.4   196.0   129.0   193.4
    3       201.4   195.9   191.4   128.2

Bandwidths are in GB/sec
Using Read-only traffic type                 # DDR3 1600/1066 MHz
Numa node       0       1       2       3    # QB2 bigmem node, UMA/NUMA = 4.2
    0       53.52   12.65   12.68   12.44
    1       12.70   54.39   12.65   12.65
    2       12.48   12.50   53.71   12.66
    3       12.63   12.52   12.71   54.37
```
Core level parallelism
| Instruction set | Register width | Processor |
|---|---|---|
| SSE | 128-bit | Pentium III (1999) |
| SSE2 | 128-bit | Pentium 4 (2001) |
| AVX | 256-bit | Xeon Sandy Bridge (2011) |
| AVX | 256-bit | AMD Bulldozer (2011) |
| AVX2 | 256-bit | Xeon Haswell (2013) |
| AVX2 | 256-bit | Xeon Broadwell (2014) |
| AVX2 | 256-bit | AMD Carrizo (2015) |
- Compiler and assembler support of **AVX**:
1. GCC higher than **v4.6**;
2. Intel compiler suite higher than **v11.1**;
3. PGI compilers since **2012**;
- Linux kernel version higher than **2.6.30** to support **AVX**;
Core level (vectorization)
- Why **vectorization** matters?
- Vector width keeps increasing from **128-bit** to **256-bit**, even to **512-bit** on KNC and KNL;
- Take the advantage of **longer** vector register width;
- Each register in the **256-bit** AVX can hold up to **four** 64-bit (8-byte) DP floating point numbers, or **eight** SP numbers;
**(1)** For additions or products, it is preferable to operate **four** pairs of DP numbers, or **eight** pairs of SP numbers with a single instruction;
**(2)** By comparison, vectorization with AVX can deliver a maximum speedup of **4** for DP or **8** for SP (see the intrinsics sketch below);
**(3)** Improvement for SP operations is always **doubled** compared to DP;
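As a hedged illustration with AVX intrinsics (assuming an AVX-capable compiler, e.g. built with `-mavx`; this snippet is not from the workshop material):

```c
#include <immintrin.h>

/* One 256-bit AVX instruction operates on four packed DP numbers. */
void add4(const double *a, const double *b, double *c) {
    __m256d va = _mm256_loadu_pd(a);            /* load 4 doubles       */
    __m256d vb = _mm256_loadu_pd(b);
    _mm256_storeu_pd(c, _mm256_add_pd(va, vb)); /* 4 additions at once  */
}
```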
Core level (vectorization)
- **Vectorization** works in such a way so that the execution units execute a *single* instruction on multiple data *simultaneously* (in parallel) on a *single* CPU core (SIMD);
- Enabling vectorization in your applications will “potentially” improve performance;
- Typically vectorization can be attributed to *data* parallelism;
Core level (vectorization)
- Intel compilers support **auto-vectorization** for `-O2` or higher;
- Compile the following code with `-vec` and `-no-vec` flags;
```c
// vectorized or non-vectorized loop
const int nosize = 20;
const int kitemax = 10000000;
// allocate and initialize vectors.
...
// sum over all vector elements
for (k = 0; k < kitemax; k++)
    for (i = 0; i < nosize; i++)
        vector_a[i] = vector_a[i] + vector_b[i] +
                      vector_c[i] + vector_d[i] + vector_e[i];
```
- Add `#pragma simd` or `#pragma vector` right above the inner loop, and see what happens;
Core level (vectorization)
- Intel compilers support **auto-vectorization** for `-O2` or higher;
- Compile the following code with `-vec` and `-no-vec` flags;
```c
// vectorized or non-vectorized loop
const int nosize = 20;
const int kitemax = 10000000;
// allocate and initialize vectors.
...
// sum over all vector elements
for (k=0; k<kitemax; k++)
for (i=0; i<nosize; i++)
vector_a[i] = vector_a[i] + vector_b[i] + vector_c[i] + vector_d[i] + vector_e[i];
```
- `-vec` (-O2): 0.113 sec; `-no-vec` (-O1): 0.226 sec with 1 thread;
- Does the speedup remain the **same** if we use more threads?
Core level (vectorization)
- Intel compilers support **auto-vectorization** for `-O2` or higher;
- Compile the following code with `-vec` and `-no-vec` flags;
```c
// vectorized or non-vectorized loop
const int nosize = 20;
const int kitemax = 10000000;
// allocate and initialize vectors.
...
// sum over all vector elements
for (k = 0; k < kitemax; k++)
    for (i = 0; i < nosize; i++)
        vector_a[i] = vector_a[i] + vector_b[i] +
                      vector_c[i] + vector_d[i] + vector_e[i];
```
- Record the speedup of `-vec`/`-no-vec` while varying `nosize`;
- `nosize` = 20, 200, 500, 1000, 3000, and 5000 (1 thread);
Core level (vectorization)
- Let's take a look at which loop is vectorized and which is not: turn `-vec-report3` on;
```
...v0.c(52): (col. 3) remark: LOOP WAS VECTORIZED
...v0.c(78): (col. 4) remark: LOOP WAS VECTORIZED
...v0.c(77): (col. 4) remark: loop was not vectorized: not inner loop
```
- Everything is as expected. We know that the *inner loop* is a good candidate for vectorization.
Core level (vectorization)
- Check the speedup and performance:
- A speedup of $\sim 2$ for small data and $\sim 1$ for large data;
- Significant improvement over the non-vectorized loops;
- The max performance is about 31% of the peak performance (22.4 GFLOP/s) with one thread on SuperMIC.

Core level (vectorization)
- Can we do better?
- Make `nosize` **unknown** at compilation time (**v1**), so the compiler may choose a different optimization technique;
```c
// vectorized or non-vectorized loop
int main (int argc, char *argv[])
...
nosize = atoi(argv[1]);
...
// sum over all vector elements
for (k = 0; k < kitemax; k++)
    for (i = 0; i < nosize; i++)
        vector_a[i] = vector_a[i] + vector_b[i]
                    + vector_c[i] + vector_d[i] + vector_e[i];
```
**v1 C/C++**
Core level (vectorization)
- Can we do **better**?
- Make **nosize unknown** at compilation time (**v1**), so the compiler may choose a different optimization technique;
```
..._v1.c(50): (col. 3) remark: LOOP WAS VECTORIZED
..._v1.c(75): (col. 4) remark: PERMUTED LOOP WAS VECTORIZED
..._v1.c(76): (col. 4) remark: loop was not vectorized: not inner loop
```
- Confused?!
- The compiler is smart enough to **permute** (swap) the **inner** and **outer** loops, and vectorize the “**inner**” (the ordinary **outer**) loop; the interchanged nest is sketched below;
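A conceptual sketch of the interchanged nest (an assumption about what the reported permutation amounts to, not the compiler's actual output); one reason this can pay off is that, for a fixed `i`, the element `vector_a[i]` and its four addends can stay in registers across the long, now-innermost `k` loop:
```c
/* Sketch: the i and k loops interchanged, as suggested by the
 * "PERMUTED LOOP WAS VECTORIZED" remark. */
void kernel_permuted(int nosize, int kitemax,
                     double *vector_a, const double *vector_b, const double *vector_c,
                     const double *vector_d, const double *vector_e) {
    int i, k;
    for (i = 0; i < nosize; i++)        /* original inner loop, now outermost */
        for (k = 0; k < kitemax; k++)   /* original outer loop, now innermost */
            vector_a[i] = vector_a[i] + vector_b[i]
                        + vector_c[i] + vector_d[i] + vector_e[i];
}
```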
Core level (vectorization)
- Again, the speedup and performance:
- Significant improvement for the large data size;
- The relative performance (`-vec`/`-no-vec`) may be lower (small data);
- The performance of \(-\text{no-vec}\) is also improved;
Core level (vectorization)
- On SuperMIC (Ivy Bridge at 2.8 GHz), a simple estimate shows we achieved $\sim 2.5$ DP FLOP/cycle ($v1$);
- Both Sandy Bridge and Ivy Bridge support up to 8 DP FLOP/cycle (4 add and 4 mul);
- Thus, $2.5/8 \approx 31\%$ of the peak performance;
- Can we improve it?
- Loop was already vectorized;
- Contiguous memory access;
- Memory affinity?
- Reuse the data in cache?
- FP execution units are not saturated;
- ...
Summary
- Performance scales on different levels:
- **MPI**: $\sim 10-1000 \times$;
- **OpenMP**: $\sim 10-40 \times$;
- **Memory affinity** on multiple-socket: $\sim 2-4 \times$;
- **Vectorization**: $\sim 4-8 \times$;
- **Compute-bound** and **memory-bound** applications;
- Bottlenecks in most parallel applications;
- Memory **hierarchy** and **throughput**;
- Performance killers: **high** load, load **imbalance**, and **intensive** swapping, ...;
Questions?
sys-help@loni.org
Summary of New Features in Magma V2.13
July 2006
1 Introduction
This document provides a terse summary of the new features installed in Magma for release version V2.13 (July 2006).
Previous releases of Magma were: V2.12 (June 2005), V2.11 (May 2004), V2.10 (April 2003), V2.9 (May 2002), V2.8 (July 2001), V2.7 (June 2000), V2.6 (November 1999), V2.5 (July 1999), V2.4 (December 1998), V2.3 (January 1998), V2.2 (April 1997), V2.1 (October 1996), V2.01 (June 1996) and V1.3 (March 1996).
2 Summary
Groups
- *Finitely-Presented Groups*: An algorithm due to Derek Holt for testing whether two finitely presented groups are isomorphic has been implemented in Magma by Derek. The algorithm uses the Knuth-Bendix procedure to enumerate elements.
- *Finitely-Presented Groups*: Machinery for classifying metacyclic $p$-groups developed by Eamonn O’Brien and Michael Vaughan-Lee has been included.
- *Matrix Groups over Finite Fields*: Constructive recognition of a matrix group as $SL(3,q)$ has been provided by Eamonn O’Brien. The corresponding code for $SL(2,q)$ has been improved.
- *Matrix Groups over Finite Fields*: It is now possible to recognise the Suzuki and Ree groups in various matrix representations using a package developed by Henrik Bäärnhielm. This makes use of the code for recognising $SL(2,q)$. The package also contains functions to compute Sylow $p$-subgroups for these two families of groups.
- *Matrix Groups over Finite Fields*: It is now possible to construct a Sylow $p$-subgroup of any classical group using a package developed by Mark Stather. Functions are also included for computing normalisers and solving the conjugacy problem for Sylow subgroups.
Basic Rings
- **Real and Complex Numbers**: Support for real numbers has been improved in this release by integrating in the latest version of the MPFR library. This includes new implementations of many functions (such as the gamma function) which are faster and more stable than the previous implementations.
Linear Algebra and Module Theory
- **Lattice Reduction**: A new implementation of LLL reduction of integer lattices has been undertaken by Damien Stehlé and is based on the Nguyen–Stehlé floating-point algorithm. The LLL and LLLGram algorithms are now guaranteed both to complete and to produce LLL-reduced bases. The new algorithm is more efficient than the previous one, sometimes dramatically so.
- Very great speedups have been achieved for the fundamental matrix algorithms over small and moderately-sized finite fields, and new fast modular algorithms for Hermite and Smith normal form computation have been introduced.
Extensions of Rings
- **Series Rings**: A uniform interface for series rings and local rings has been defined. Factorization of polynomials over local rings has been generalised for polynomials over series rings. Extensions of series rings can be constructed.
- **Number Fields**: A new algorithm for the computation of Galois groups of extensions of the rationals and of square-free polynomials over the integers has been implemented. This is the first ever degree-independent implementation. It has already been used in degrees up to 64. Support for infinite places of relative extensions has been added as well.
- **Algebraic Function Fields**: Functions have been added to perform Weil Descent on Artin-Schreier extensions of the rational function field in characteristic $p$. This generalises the Gaudry-Heß-Smart explicit descent method in characteristic 2. The code was contributed by Florian Heß.
- **Algebraically Closed Fields**: Algebraically closed fields may now be defined over finite fields and rational function fields over finite fields or the rational field, as well as the rational field.
Algebras
- **Orders of Associative Algebras**: These orders have been reimplemented and expanded. Functionality for orders defined over orders of number fields has been provided by representing basis information as a combination of a matrix and coefficient ideals. Ideals of these orders can now be constructed.
• **Orders of Quaternion Algebras:** These orders now inherit from the orders of associative algebras instead of the associative algebras themselves.
• **Quaternion Algebras:** It is now possible to work with quaternion algebras over number fields. It can be determined whether an associative algebra is quaternion. Matrix rings can be constructed from quaternion algebras.
Lie Theory
• **Coxeter groups:** Weight orbits and the dominant weight in an orbit can now be computed – these tools are useful for understanding representations. Several other functions have been speeded up.
• **Root data and root systems:** Non-reduced root data and systems, such as type $BC_n$, can now be constructed. Extended root data are also available – these contain the combinatorial data necessary to construct twisted groups of Lie type. A new category, RootDtmSprs, provides a sparse representation for classical root data. This requires much less memory and makes it possible to work with root data of very large rank. Morphisms between root data, including isomorphisms, fractional morphisms and dual morphisms have been implemented.
• **Groups of Lie type:** Twisted groups of Lie type are now available (these include unitary groups). A large speed up in element operations has been achieved by implementing a new algorithm called collection from outside. A new method for element multiplication in classical groups of Lie type, based on sparse root data, has been implemented, resulting in big memory savings.
Algebraic Geometry
• **General Schemes:** Point searching using the Elkies ANTS-IV $p$-adic method is now implemented for general schemes defined over the rationals.
• **Algebraic Curves:** Functionality has been added for ordinary plane curves and to produce random curves with genus $\leq 13$. The ordinary curve functionality includes much faster computation of canonical maps/images and much faster parametrization of rational curves.
• **Surfaces:** Functions for the parametrization of degree 6, 8 and 9 Del Pezzo surfaces over $\mathbb{Q}$ have been added. The degree 8 and 9 packages were contributed by Jana Pilnikova.
Arithmetic Geometry
• **Elliptic Curves over the Rationals:** A new package has been included for performing a full 3-descent on any elliptic curve over $\mathbb{Q}$. This involves computing the 3-Selmer group, and then representing the elements as plane cubic curves.
• **Elliptic Curves over Function Fields:** A new package has been included for elliptic curves defined over algebraic function fields (function fields of curves). The package includes an implementation of Tate’s algorithm, and height machinery, in this generality. It also includes, in lesser generality, routines for computing the $L$-function, the 2-Selmer group, and the Mordell-Weil group.
• **Elliptic Curves over finite fields:** Functions to perform Gaudry-Heß-Smart Weil descent on elliptic curves in characteristic 2 have been added. The core of the code was contributed by Florian Heß.
• **Genus One Models:** A package has been contributed by Tom Fisher dealing with invariant theory of genus one normal curves of degrees 2, 3, 4 and 5, and arithmetic applications of this.
• **Hyperelliptic Curves over Finite Fields:** A package has been contributed by Hendrik Hubrechts for counting points/ computing zeta functions of Jacobians of hyperelliptic curves lying in parametrized families, following the deformation method of Lauder.
• **Modular Abelian Varieties:** A package has been contributed by Jordi Quer for determining the endomorphism algebra, and the fields of definition, of a building block of a modular abelian variety.
Coding Theory
• **LDPC Codes:** A module for constructing, decoding and analyzing Low Density Parity Check codes has been developed. As well as iterative decoding, the module includes simulation tools such as density evolution.
• **Algebraic-geometric Codes:** Machinery for decoding algebraic-geometric codes up to the Goppa designated distance has been added.
• **McEliece Cryptosystem:** The best published decoding attacks on the McEliece cryptosystem together with improved attacks have been implemented. These include the attacks developed by McEliece, Lee & Brickell, Leon, Stern and Canteaut & Chabaud as well as generalized combinations of attacks.
3 Removals and Changes
This section lists the most important changes in Version 2.13. Other minor changes are listed in the relevant sections.
- The **LLL** and **LLLGram** functions have been changed so that they now produce a guaranteed LLL-reduced basis by default. (In particular, the output is no longer sorted.) Consequently the results may be slower on some problems; the old behaviour may still be obtained via the intrinsics **BasisReduction** and **GramReduction**. Additionally, the possible parameters have also changed.
4 Documentation
New chapters in the Handbook for V2.13 (with their chapter numbers) are:
- Matrix Groups Over General Rings (replaces Matrix Groups) (19)
- Matrix Groups Over Finite Fields (20)
- Orders of Associative Algebras (72)
- Elliptic Curves over Finite Fields (102)
- Elliptic Curves over Function Fields (103)
- Models of Curves of Genus 1 (104)
- Algebraic-geometric Codes (124)
- Low Density Parity Check Codes (125)
5 Language and System Features
New Features:
- Magma now offers facilities to read and write binary data. In previous versions, reading and writing was restricted to string types, which have trouble handling non-printable characters. A new type has been introduced, with semantics similar to a sequence of integers, but the implementation of which is tuned for character arrays.
- Functions, procedures, and intrinsics, can now be variadic; that is, they can take in a variable number of arguments.
- The maximum number of return values for an intrinsic package function has been increased from 5 to 256.
- Exception handling has been implemented, using the familiar notion of try/catch statements found in many other languages, such as Java, C++, and Python. Currently, Magma supports the catching of two kinds of errors: system errors and user errors. The user has the ability to attach arbitrary data to error objects through the use of attributes.
- Magma now has an “eval” keyword, similar to that found in languages like Python and Perl. This allows the evaluation of a string as a piece of Magma code. For instance, if \( s \) is a string with value “1+1”, then \( x := \text{eval } s \) assigns to \( x \) the value 2. This feature allows a great deal of runtime flexibility, and is useful, for instance, in the implementation of databases of objects.
6 Aggregates
Bug Fixes:
- Removal of multiset elements no longer causes the multiplicities of other elements to change.
7 Groups
7.1 Matrix Groups Over General Rings
New Features:
- `IsAbelian`, `IsElementaryAbelian`, and `IsCyclic` now work for infinite matrix groups.
Bug Fixes:
- A bug was fixed in the test for finiteness of a group defined over a large degree extension of \( \mathbb{Q} \).
7.2 Matrix Groups over Finite Fields
Changes:
- Constructive recognition of a matrix group as \( SL(3, q) \) has been provided by Eamonn O’Brien. The corresponding code for \( SL(2, q) \) has been improved.
- The original code of Alice Niemeyer for finding a form fixed by a classical group has been replaced by code written by Derek Holt. This code overcomes a number of errors and omissions in the Niemeyer version.
- A revised version of code for recognising classical groups has been provided by Alice Niemeyer. It fixes known problems and uses the Holt code for forms.
- A more efficient version of the Monte-Carlo program for recognising quasi-simple groups developed by G Malle and E O’Brien has been installed.
New Features:
- Code developed by Henrik Bäärnhielm for recognising the Suzuki and Ree groups in various matrix representations is now included. The package also has functions to compute the Sylow subgroups for these two families of groups.
- A package developed by Mark Stather for computing the Sylow subgroups of the classical groups (given as matrix groups) has been included. This can also compute normalisers and solve the conjugacy problem for Sylow subgroups.
7.3 Finite Soluble Groups
Changes:
- The use of the obsolete Stackhandler Memory Manager has been completely removed from the soluble groups module.
New Features:
- The Leedham-Green algorithm for computing the center of a soluble group defined by a pc-presentation has been implemented.
- A more efficient algorithm for the calculation of centralisers has been installed.
7.4 Finitely Presented Groups
New Features:
– An algorithm due to Derek Holt for testing whether two finitely presented groups are isomorphic has been implemented in Magma by Derek. The algorithm uses the Knuth-Bendix procedure to enumerate elements.
8 Basic Rings
8.1 Integer Ring
New Features:
– The GMP 4.2.1 multiprecision integer library is now linked in.
– New function Normalize for residue ring elements.
8.2 Real and Complex Fields
New Features:
– Support for real numbers has been improved in this release by integrating in the latest version of the MPFR library. This includes new implementations of many functions (such as the gamma function) which are faster and more stable than the previous implementations.
9 Linear Algebra and Module Theory
9.1 Matrices
New Features:
– Very great speedups have been achieved for the fundamental matrix algorithms over small and moderately-sized finite fields. In particular, matrix multiplication over GF(q) for $q = 3, 4, 5, 7, 8, 16$ uses a new fast packed representation.
– New fast modular algorithms for the computation of Hermite or Smith normal forms have been developed. See http://tinyurl.com/z68bu for a webpage which gives timings involving the new Hermite algorithm.
– New function RandomMatrix for convenient construction of random matrices over finite rings.
9.2 Modules over Dedekind domains
The pseudo matrix structure underlying the modules has been made available.
New Features:
- A pseudo matrix can be constructed from a sequence of ideals and a matrix. These coefficient ideals and the matrix can be returned from the pseudo matrix as well as the order the pseudo matrix is over and the length and dimension of the pseudo matrix. Pseudo matrices can be compared for equality.
- Pseudo matrices can be transposed. A \texttt{HermiteForm} of a pseudo matrix can be computed and 2 pseudo matrices can be vertically joined.
10 Commutative Algebra
New Features:
- For homogeneous ideals, the Radical and Equidimensional Decomposition computations have been reimplemented using a more homological approach. These are now generally faster — much more so in some cases.
- New function \texttt{MonomialBasis} for quotient rings (affine algebras).
11 Extensions of Rings
11.1 Algebraic Number Fields
New Features:
- Support for infinite places of (relative) extensions has been added.
- Computation of Galois groups for absolute extensions and square-free integer polynomials is now possible without any degree limitations. This replaces the old method by Geißler.
- Functionality to compute arbitrary fields in the normal closure of number fields has been rewritten to complement the new Galois group computations.
- A places/divisor based interface to ray class groups has been added. In particular, defining modules including infinite places becomes much cleaner.
Bug Fixes:
- The fields returned by \texttt{SubfieldLattice} are now identical to those reported by \texttt{Subfields}. In the case of non-monic defining polynomials this was not the case previously.
11.2 Algebraically Closed Fields
New Features:
- Algebraically closed fields may now be defined over finite fields and rational function fields over finite fields or the rational field, as well as the rational field.
11.3 Quadratic Fields
Bug Fixes:
- The map between quadratic forms and ideals in non-maximal orders has been fixed.
11.4 Abelian Extensions
New Features:
- A new interface that allows places and divisors to be used to define class fields and ray class groups has been added.
- It is now possible to use subset, meet and ‘*’ on abelian extensions of the same base field.
11.5 Algebraic Function Fields
Removals and Changes:
- The old intrinsic IsIsomorphic which finds isomorphisms of function fields $E$ and $F$ over a common $\mathbb{Q}(t)$ base field has been renamed IsIsomorphicOverQt. This prevents the clash with the version of IsIsomorphic which finds isomorphisms between more general fields $E$ and $F$ extending a given isomorphism of their constant fields.
New Features:
- A function WeilDescent which performs Weil Descent on Artin-Schreier extensions of the rational function field over finite fields has been included. This generalises the explicit descent method of Gaudry, Heß and Smart in characteristic 2. If the constant field of the function field $E$ is $K$ and $k$ is the subfield of $K$ to be descended to, the result is a function field $F$ with constant field $k$ together with a divisor map from places/divisors of $E$ to divisors of $F$.
- There are related helper functions: ArtinSchreierExtension to generate the field $E$ from parameters; WeilDescentGenus and WeilDescentDegree to compute the genus and degree (over its base field) of $F$ before performing the descent.
- It is now possible to form completions in global function fields at places of degree greater than 1.
Bug Fixes:
- The calculation of two generators of an ideal has been improved by the increased use of the algorithm of Belabas, especially when random elements of the bottom coefficient ring are not available.
11.6 Newton Polygons
Bug Fixes:
- A bug in IsInterior when the polygon was a line and defined by all its faces has been fixed.
11.7 Series Rings
New Features:
- A few intrinsics have been added in order to make the series rings compatible with the $p$-adic rings and fields. These are ResidueClassField, ChangePrecision and UniformizingElement. It is now also possible to use HenselLift with polynomials over series rings.
- Now that series rings share a uniform interface with $p$-adic rings the Factorization algorithm implemented for $p$-adic rings and their extensions can be used to factor polynomials over series rings over finite fields also.
- Series rings over finite fields can now also be extended. Extensions must be either unramified or totally ramified and can be made using UnramifiedExtension and TotallyRamifiedExtension.
- Extensions of series rings over finite fields have type RngSerExt. They support the uniform interface of $p$-adic rings and series rings. This interface includes Precision, UniformizingElement, ChangePrecision, ResidueClassField, equality of rings, and Valuation, arithmetic and predicates on elements. For extensions it also includes InertiaDegree and RamificationIndex. Additionally CoefficientRing and DefiningPolynomial are supported.
- Polynomials over extensions of series rings can be factored using the Factorization algorithm available for $p$-adic and series rings.
- Unramified extensions of series rings can be converted using OptimizedRepresentation to an isomorphic series ring.
12 Differential Rings
12.1 Differential Rings
Removals and Changes:
- Infinite precision is no longer denoted using -1; Infinity() is used instead. This also affects intrinsics taking Precision parameters.
13 Lattices and Quadratic Forms
13.1 Lattice Reduction
The LLL and LLLGram routines have been almost completely rewritten.
New Features:
- Correctness. The default output of these routines is always LLL-reduced for the input pair of factors. To achieve this, the FinalSort option has been turned off by default.
- Termination. The calls to the LLL and LLLGram routines should always terminate.
- A new LLL factor, Eta, has been introduced. It is used for the size-reduction property: the Gram-Schmidt coefficients $\mu_{i,j}$ of the output basis satisfy $|\mu_{i,j}| \leq \mathit{Eta}$ for all $i > j$. By default, Eta is set to 0.501.
- To avoid any ambiguity, the Sort option has been renamed InitialSort.
- An EarlyReduction option has been added: when a new vector is visited, the other vectors are sometimes size-reduced with respect to the already reduced vectors.
- The traditional Lovász swapping condition may now be replaced by the so-called Siegel swapping condition, with the SwapCondition option.
- A Fast option has been added: the system will try to choose the best user parameters in order to terminate as fast as possible.
- The LLLGram routine can take as input any symmetric matrix, since it makes use of Simon’s variant of the Lovász swapping condition.
- The following parameters are now deprecated: InitialDelta, DeltaSteps, FPBlock, Large and UnderflowCheck.
- The zero lattice is now supported.
14 Algebras
14.1 Quaternion Algebras
Orders of quaternion algebras now inherit from orders of general associative algebras (AlgAssVOrd) instead of from the structure constant associative algebras. Ideals of orders of quaternion algebras inherit from the ideals of orders of general associative algebras (AlgAssVOrdIdl).
Orders of quaternion algebras can now be defined over orders of number fields. These orders and ideals however use the general order and ideal types AlgAssVOrd rather than the specific quaternion order and ideal types.
Removals and Changes:
- Orders and ideals of orders of quaternion algebras no longer inherit from associative algebras so will no longer have all the functionality of associative algebras. They will instead have all the functionality of AlgAssVOrd and AlgAssVOrdIdl.
- Ideals of orders of quaternion algebras are no longer structures but are elements of a power ideal. They now have their own type AlgQuatOrdIdl. Bases of ideals are now returned as sequences of elements of an algebra rather than of ideals (ideals no longer have elements). Some output may appear different because it is with respect to a different basis. Basis and BasisMatrix can accept a second argument of a structure the output will be with respect to.
- Functionality for the old AlgQuatOrd has been split between AlgQuatOrd and AlgQuatOrdIdl as appropriate.
- The Composite intrinsic for ideals has been removed.
- IsRamified for integer primes in quaternion algebras over the rational field has been replaced. The order of the arguments has been swapped, the prime now being the first argument.
New Features:
- It can be tested whether a general associative algebra is a quaternion algebra using IsQuaternionAlgebra. A standard representation is returned.
– Given a zero divisor in a quaternion algebra, one can compute an isomorphism to the matrix ring by the command MatrixRing.
– The IsDefinite intrinsic now works for quaternion algebras over any number field and for orders over number rings.
– Full functionality for the Hilbert symbol has been added. One can compute the set of ramified places of a quaternion algebra.
– A quaternion algebra can be constructed by specifying any even set of noncomplex places of a number field.
– Embeddings of quadratic fields into quaternion algebras and of quadratic orders into quaternion orders have been added in the intrinsics IsSplittingField, HasEmbedding, and Embed.
– The new intrinsic pMatrixRing computes a local splitting of a quaternion algebra over a number field at an unramified prime.
– A tame order and a maximal order of a quaternion algebra over a number field (or containing a given order) can be computed, as well as p-maximal orders. Orders can be tested for maximality and p-maximality.
– For definite quaternion algebras (over totally real number fields), the intrinsic Enumerate now lists elements in an ideal or order with bounded norm (or bounded in a box with respect to a Minkowski embedding). See also LatticeVectorsInBox which works for a general lattice.
– One can compute an OptimizedRepresentation of either a quaternion algebra or order; the result will tend to have much “smaller coefficients”.
– For orders of quaternion algebras over totally real number fields, one can compute a reduced basis via ReducedBasis. This basis is either the canonical one if the algebra is totally definite, and is a Minkowski-like embedding otherwise.
– For a definite quaternion order (now also over an order of a number field), the intrinsic UnitGroup returns the group of units of an order modulo the group of units of the base ring; an abstract group is returned, together with a map to the order.
– One can test if two (left or right) ideals of a quaternion algebra (now also of a number field) are isomorphic, as well as a complete set of (left or right) ideal classes. In particular, one can test if an ideal is principal and, if so, compute a generator.
Bug Fixes:
– Zero input on the right hand side of the QuaternionAlgebra constructor is checked for and an error occurs.
14.2 Orders of Associative Algebras
Orders of general associative algebras have been rewritten and extended. Some specific functionality is provided for orders defined over orders of number fields.
New Features:
– Orders of associative algebras can now be created over orders of number fields by specifying a pseudo-basis. This pseudo-basis consists of a matrix and a sequence of coefficient ideals.
– Orders can be constructed by specifying any generating set of algebra elements. One can adjoin an algebra element to an order or compute the sum of two orders.
– Maximal orders can be computed as well as $p$-maximal orders over number rings.
– Information regarding the basis of the order can be returned as a PseudoMatrix or PseudoBasis.
– Orders can now be compared to determine whether they are equal. It can also be determined whether an algebra element is in the order.
– Basic functionality for orders, including computing degree, trace-zero subspace, and discriminant.
– Orders now have their own elements of type AlgAssVOrdElt. Elements can be added, subtracted, negated, multiplied, multiplied by scalars, divided and powered as well as tested for equality and for equality with zero. The intrinsics LeftRepresentationMatrix, RightRepresentationMatrix, MinimalPolynomial, Norm, Trace, Conjugate and Eltseq are also supported.
– Ideals of orders of associative algebras can be formed. These have type AlgAssVOrdIdl. Ideals can be left, right or two-sided. A basis of an ideal can be retrieved and left and right orders (multiplicator rings) can be computed, as well as the colon of two ideals. Addition and multiplication of compatible ideals is also available.
– The Algebra an ideal is contained in is available as well as the Order the ideal was created as an ideal of.
– Basis and BasisMatrix of an ideal can be retrieved with respect to a given order or algebra. Basis information can also be accessed as a PseudoMatrix or PseudoBasis.
– Ideals can be tested for being left or right ideals. They can be created by multiplying an order by an element and compared to determine equality. One can test for containment of an element or inclusion of ideals as well as compute the (reduced) norm of an ideal.
15 Lie Theory
15.1 Root Systems and Root Data
Removals and Changes:
- The extraspecial signs are now assigned to a root datum upon creation and cannot be changed afterwards.
- Some changes have been made to intrinsics which return roots as vectors. Previously, the returned vectors were over integers in some cases, over rationals in other cases. Now the returned roots are always vectors over the field of rational numbers.
- Optional parameter Basis has been added to all intrinsics returning roots as vectors or taking roots as vectors as arguments, to indicate the basis with respect to which the vectors are built.
- Some changes have been made to internal handling of (co)root lattices and (co)root spaces. The full root lattice $X$ associated with a root datum is now returned by FullRootLattice, the sublattice spanned by simple roots is returned by RootLattice, and the root space $X \otimes \mathbb{Q}$ is returned by RootSpace. The coroot lattices and coroot space can be obtained in a similar way.
- Creation of root subdata and root subsystems has been improved and the constructor sub<..> can now be used to create them.
New Features:
- Non-reduced root data and systems of type $BC_n$ are now supported. Reducedness of a root datum or system can now be checked by using the intrinsic `IsReduced`.
- The subsystem or subdatum consisting of indivisible roots of any root system or datum can be constructed by `IndivisibleSubsystem` or `IndivisibleSubdatum`, respectively.
- Indivisibility of roots can be checked by using `IsIndivisibleRoot`.
- A new category for sparse root data `RootDtmSprs` has been implemented. This requires much less memory, as neither roots nor constants associated with a root datum are stored in memory, but always computed when required. Due to repeated computation of constants and roots, this is slower than the standard, dense, representation of root data. But the sparse representation is unavoidable for large ranks. For example, the root datum of type $B_{1500}$ requires about 122MB of memory using the sparse representation and can’t be created on a machine with 2GB of memory using the dense representation.
- Extended (twisted) root data can now be constructed. This corresponds to Tits indices and carries information associated with the root datum of a twisted group of Lie type.
- The relative root datum can be computed from an extended root datum.
- Morphisms between root data can be constructed. This includes dual morphisms, isomorphisms and fractional morphisms.
15.2 Coxeter Groups as Permutation Groups
New Features:
- It is now possible to compute dominant weights and weight orbits.
15.3 Groups of Lie Type
Changes:
- Element operations in groups of Lie type have been dramatically improved. Collection algorithms have been implemented in the C kernel replacing the previous package code, resulting in large speed-ups.
- New algorithms for element operations have been implemented. The available algorithms in the current release include:
- Collection To Left.
- Collection From Left.
- Collection From Outside (new).
- Symbolic Collection From Left.
- Symbolic Collection From Outside (new).
- Symbolic Collection using direct formulas for classical types (new).
- The algorithm used for element operations can be specified by the user. By default, the fastest algorithm for the given group is chosen.
- Properties and constants of groups of Lie type, that do not depend on the base ring of the group, are now stored in the associated root datum. This speeds up subsequent creation of groups of Lie type having the same root datum over different base rings.
New Features:
- Twisted groups of Lie type can now be constructed.
16 Algebraic Geometry
16.1 Schemes
Removals and Changes:
- The Check parameter on the creation of maps between schemes now controls whether the defining polynomials define a map into the codomain of the scheme. The CheckInverse parameter now controls whether the inverse is checked to be an inverse. Both are true by default. If Check is set to false then the checking of the inverse will only be done if CheckInverse is set to true.
- Some operations involving function fields of schemes may be faster due to function fields knowing they are fields and not having to determine this.
New Features:
- A function HeightOnAmbient has been added, for calculating the height of a point on a general variety in affine or projective space. The point may be defined over the rationals, a number field, or a function field.
- A nontrivial algorithm to search for points on general schemes over the rationals has been implemented under the name PointSearch. The scheme can be in any affine or non-weighted projective space. This uses a $p$-adic algorithm: first find points locally modulo a small prime (or two small primes), then lift these $p$-adically, and then see if these give global solutions. Lattice reduction is used at this stage, and this makes the method far more efficient than a naive search. In fact, it becomes more efficient as the dimension of the ambient space increases; for instance, the asymptotic time to find points up to absolute height $H$ on a curve in $\mathbb{P}^d$ is $O(H^{2/d})$.
- Functions to determine the existence of and explicitly construct parametrisations over $\mathbb{Q}$ of Del Pezzo surfaces of degrees 6, 8 and 9. The main functions are ParametrizeDegree9DelPezzo, ParametrizeDegree8DelPezzo and ParametrizeDegree6DelPezzo. The algorithms used are based on working with the Lie algebra of the automorphism group of the surface to reduce the explicit construction of a parametrisation to at most solving a norm equation over $\mathbb{Q}$. The third of these functions has an option to determine only the existence of a parametrisation using local solubility which is much faster.
- Functions for generating Degree 6 Del Pezzo surfaces having a given 2-dimensional torus as automorphism group. These are Degree6DelPezzoTypeX where $X$ is $2_1$, $2_2$, $2_3$, $3$, $4$ or $6$ depending on the torus type.
- Function RationalPointsByFibration for finding all points of a general scheme $X$ over a finite field. This is much more efficient than the old RationalPoints which now calls the new function by default in most cases. The method is to sum up the points in the zero-dimensional fibres of a finite map of $X$ to a hyperplane or, more generally, a well chosen hypersurface.
Bug Fixes:
- Coercion into function fields of schemes has been improved (although possibly restricted) by delaying the attempt to coerce into the base ring of the scheme.
16.2 Algebraic Curves
Removals and Changes:
- The functions `CanonicalLinearSystem` and `AdjointLinearSystem` which return the linear system of polynomials giving the canonical map or the more general degree $d$ adjoint linear system for a projective plane curve $C$, now use a faster, more efficient computational method when $C$ is determined to be an ordinary curve (see below). This relies on a direct computation of the full adjoint ideal.
New Features:
- There are new functions for computing the automorphism group of a general curve and isomorphisms between curves. This transfers over the existing functionality for algebraic function fields.
- `Automorphisms` returns the sequence of all, or up to a given number of, automorphisms of a curve.
- `AutomorphismGroup` returns the full automorphism group of a curve as a generic group along with a map from the group to the actual isomorphisms given as scheme maps.
- `IsIsomorphic` determines whether two curves are isomorphic and finds an explicit isomorphism between them. `Isomorphisms` returns a sequence of all, or up to a given number of, isomorphisms between two curves.
- Functions have been added to generate random curves of given genus $\leq 13$ or with specified singularities over finite fields and $\mathbb{Q}$.
- The main function is `RandomCurveByGenus` which takes a field $K$ and genus $g$ as arguments. If $g \leq 10$, it returns a random plane curve with only nodes as singularities. For $g > 10$, it returns a curve in $\mathbb{P}^3$.
- `RandomNodalCurve` returns a random plane curve with only nodes as singularities. The degree and number of nodes is specified by the caller.
- `RandomOrdinaryPlaneCurve` returns a random plane curve with only ordinary singularities. The degree of the curve, and the number of singularities of multiplicity 2,3,4,... are specified by the caller.
- There are a number of new functions that deal with ordinary plane curves (ones with only ordinary singularities). These rely on the computation of the adjoint ideal for ordinary curves more quickly and efficiently than by generic function field methods or by the general resolution of singularities for plane curves. The $d$-th graded part of this ideal gives the degree $d$ adjoint linear system of the curve. For a plane curve of degree $d$, the degree $d - 3$ adjoint linear system is the canonical linear system of polynomials that give the canonical map.
- There are functions to determine whether a plane curve is ordinary or nodal, compute the adjoint ideal of such a curve and to return the adjoint linear system of a specified degree given the adjoint ideal.
- A `CanonicalImage` function has been added. This gives the canonical embedding of a plane curve and is much faster than using the general image machinery for the canonical map.
17 Arithmetic Geometry
17.1 Rational Curves and Conics
Removals and Changes:
- The \texttt{Parametrization} functions for a genus 0 curve \(C\) use an improved method when \(C\) is determined to be an ordinary plane curve. This utilises the functionality for ordinary curves described earlier and the new parametrization for rational normal curves described below.
New Features:
- A routine for solving diagonal conics over number fields, using a version of Legendre’s method is provided under the name \texttt{LegendresMethod}. This is an alternative to the standard approach of solving such problems using \texttt{NormEquation}, and the solutions obtained are often far simpler. This routine will be improved in future releases.
- \texttt{ParametrizeRationalNormalCurve} is a new function that gives a fast method of parametrizing a rational normal curve \(C\) in projective space over any field. If \(C\) lies in \(\mathbb{P}^d\) for \(d\) odd then the function returns a scheme isomorphism from the projective line \(\mathbb{P}^1\) to \(C\). If \(d\) is even the isomorphism is from a plane conic to \(C\). The function uses the geometric method of adjoint maps rather than the function field machinery.
17.2 Elliptic Curves
New Features:
- The database of elliptic curves of small conductor constructed by John Cremona has been updated to include all curves having conductor up to 130,000.
17.2.1 Mordell–Weil groups
New Features:
- A routine for computing the \texttt{Saturation} (at primes up to some bound) of the group generated by a given list of points has been added.
17.2.2 Heegner Points
A considerable amount of information about Heegner points can now be obtained. Previously in Magma, Heegner points were seen primarily as a tool for computing points on elliptic curves.
New Features:
- For a given elliptic curve \(E\) and discriminant \(D\) \texttt{Heegner Forms(E,D)} computes a set of points in the upper half plane which represent a Galois orbit of CM points on \(X_0(N)\) where \(N\) is the conductor of \(E\). The images of these points on \(E\) under the modular parametrisation are computed (over their field of definition, which is a subfield of the class field of \(\mathbb{Q}(\sqrt{D})\)) by \texttt{CMPoints(E,D)}. The discriminant is not required to be a fundamental discriminant. (For a fundamental \(D\), the sum of these points on the elliptic curve is a standard Heegner point in \(E(\mathbb{Q})\).)
17.2.3 Three-Descent
This is a new package for performing a full 3-descent for any elliptic curve over $\mathbb{Q}$. There are two stages to the descent process: first computing the 3-Selmer group, and then representing its elements as plane cubic curves with maps to the given elliptic curve. There is functionality for producing nice models of these curves. The package also contains a completely separate implementation of “descent by 3-isogeny”. Large parts of the code were written by Tom Fisher and Michael Stoll.
Features:
- The $\text{ThreeSelmerGroup}$ of any elliptic curve over $\mathbb{Q}$ can be computed. An abstract group, together with a map to the relevant affine algebra, is returned. This is an implementation of the algorithm given by Schaefer and Stoll in *How to do a $p$-descent on an elliptic curve*, Transactions of the AMS, *356* No. 3, 2004.
- Any element in the $\text{ThreeSelmerGroup}$ of an elliptic curve $E$, or more generally any suitable element of the relevant affine algebra, can be represented as a plane cubic curve with covering map to $E$ (defined over $\mathbb{Q}$). The intrinsic that does this for a given Selmer element is $\text{ThreeDescentCubic}$. Alternatively, the whole process may be performed together by calling $\text{ThreeDescent}(E)$. The algorithm for this is joint work by Cremona, Fisher, O’Neil, Simon and Stoll (to appear).
- The reverse process, of starting with a plane cubic curve $C$ and obtaining its Jacobian $E$ and the element that $C$ represents in the 3-Selmer group of $E$, can also be carried out. The intrinsics are $\text{Jacobian}$ and $\text{ThreeSelmerElement}$. Also, given two plane cubics with the same Jacobian, one can form their sum in the Weil-Chatelet group using $\text{AddCubics}$ (the computation goes via their $\text{ThreeSelmerElement}$’s).
- There are intrinsics $\text{ThreeTorsionType}$, $\text{ThreeTorsionPoints}$ and $\text{ThreeTorsionMatrices}$ (which gives the translation action of each 3-torsion point on $E$ as a linear transformation on $\mathbb{P}^2$).
17.2.4 Integral Points
Bug Fixes:
- The reliability of the $\text{IntegralPoints}$ function has been improved (previously it had failed to return all the integral points in some examples). By default, both the new and old routines are run in parallel, as a check. When the option $\text{Fast}$ is set to true, only the new version of the routine is used (in many cases it is much faster).
17.2.5 Elliptic Curves over Number Fields
New Features:
- Local $\text{RootNumbers}$ can now be computed at all primes (including primes above 2).
17.3 Elliptic Curves over Finite Fields
New Features:
- An implementation of Gaudry, Heß and Smart’s explicit Weil descent for an elliptic curve $E$ in characteristic 2 has been included. This allows the reduction of problems in large subgroups of $E(K)$, like the Discrete Logarithm Problem, to corresponding ones in the Jacobian of $C$ over $k$ where $C$ is the (higher genus) descent curve and $k$ is a proper subfield of the original base field $K$.
- The main function for the above is `ECWeilDescent`, taking $E$ and $k$ as arguments as well as an additional parameter $c$ which determines the curve $C$. In addition to $C$, the divisor map from points of $E(K)$ (considered as degree 1 divisors) to the corresponding divisor on $C$ (as a `DivCrvElt`) is returned in the general case. If $C$ is hyperelliptic, then the divisor map returned is the actual divisor class map from $E$ to the Jacobian of $C$.
- There are additional helper functions `ECWeilDescentDegree` and `ECWeilDescentGenus` to quickly compute the degree and genus of the (plane) curve $C$ for a given $E,k,c$ input without actually performing the descent.
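A hedged sketch, assuming the parameter $c$ is passed as a final argument; the field sizes and the value of $c$ are purely illustrative:

```
K<w> := GF(2, 155);                      // the large field K = GF(2^155)
k := GF(2, 31);                          // a proper subfield k = GF(2^31) of K
E := EllipticCurve([K| 1, 0, 0, 0, w]);  // an ordinary curve y^2 + xy = x^3 + w over K
ECWeilDescentDegree(E, k, 1);            // degree of the plane model of C, without performing the descent
ECWeilDescentGenus(E, k, 1);             // genus of C, without performing the descent
C, divmap := ECWeilDescent(E, k, 1);     // the descent curve C over k and the divisor map
```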
17.4 Elliptic Curves over Function Fields
This is a new package for elliptic curves with coefficients in a function field $k(C)$ where $C$ is a regular projective curve over some field $k$ (usually a number field or a finite field). The commands are largely parallel to those for elliptic curves over the rationals; one can compute local information (Tate’s algorithm and so forth), a minimal model, the $L$-function, the 2-Selmer group, and the Mordell–Weil group. This goes in order of decreasing generality: local information is available for curves over univariate function fields over any exact base field, while at the other extreme Mordell–Weil groups are available only for curves over rational function fields over finite fields for which the associated elliptic surface is a rational surface. The generality of many of the commands will be expanded in future releases.
Features:
- For an elliptic curve defined over the function field of an arbitrary curve, the conductor and places of bad reduction are computed. For places of bad reduction, `LocalInformation` carries out Tate’s algorithm, determining the Kodaira type and a minimal model.
- In the same generality, the Neron-Tate height of a point, and the height pairing for a given sequence of points, can be computed.
- For curves over function fields whose base field is finite (in other words, elliptic surfaces over finite fields) there is considerable functionality for counting points on the surface over finite field extensions. In this way, the `LFunction` of the elliptic curve is obtained. This in turn provides `AnalyticInformation` (conditional on the Birch–Swinnerton-Dyer or Artin-Tate conjectures), predicting the Mordell-Weil rank of the elliptic curve, the geometric rank, and the product of the regulator and the order of the Tate-Shafarevich group.
- The `TwoSelmerGroup` can be computed for an elliptic curve defined over a function field of odd characteristic whose base field is finite.
- The Mordell-Weil group can be computed for an elliptic curve defined over a rational function field whose base field is finite, and such that the elliptic curve, viewed as a surface over that finite field, is a rational surface. This is done by computing the Neron-Severi group of the surface, and the geometric Mordell-Weil group can also be obtained.
- The action of Frobenius on points or on fibres can be computed.
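A minimal sketch over a rational function field with finite base field; the curve coefficients are arbitrary, and it is assumed that LocalInformation may be called without specifying a place in order to obtain the data at all places of bad reduction:

```
F<t> := FunctionField(GF(7));  // the rational function field F_7(t)
E := EllipticCurve([t, t]);    // y^2 = x^3 + t*x + t, nonsingular over F
LocalInformation(E);           // Tate's algorithm at the places of bad reduction
L := LFunction(E);             // computed from point counts on the associated elliptic surface
AnalyticInformation(E);        // BSD/Artin-Tate predictions: rank, geometric rank, Reg * #Sha
```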
17.5 Genus One Models
This new package contributed by Tom Fisher deals with curves of genus 1 given by models of a special kind (genus one normal curves) of degree 2, 3, 4 and 5. The principal functionality involves invariant theory, and applications of this to arithmetic questions.
A genus one model in Magma has type ModelG1, and this is not a subtype of Crv or Sch; however the defining data for models of degree 2, 3 and 4 amounts to, respectively, a hyperelliptic curve, a plane cubic curve, and an intersection of two quadrics in $\mathbb{P}^3$. A model of degree 5 is given as a five-by-five matrix of linear forms in five variables.
Features:
- **Invariant theory**: The discriminant, and the $a$-, $b$-, and $c$-invariants of a model are computed. The Hessian, the CoveringCovariants, the HesseCovariants, and the Contravariants of a given model are also computed.
- As applications of the invariants, the Jacobian of a model can be computed, as well as the nCovering map to the Jacobian (a usage sketch is given after this list).
- Certain calculations in the Weil–Chatelet group of an elliptic curve can be performed: the sum of two models of degree 3, or multiplication by 2 for a model of degree 4 or 5.
- There is functionality to compute minimised or reduced models of a given model (except for models of degree 5).
- Families of elliptic curves that have the same Galois action on their $n$-torsion as a given elliptic curve (for $n = 2, 3, 4$ or 5) are computed by RubinSilverbergPolynomials. These are the same families defined by Rubin and Silverberg in *Families of elliptic curves with constant mod $p$ representation* (in *Elliptic curves, modular forms and Fermat’s last theorem*, Internat. Press, Cambridge MA, 1995).
- Conversion functions relating genus one models to the curves arising in the machinery for 2-descent and 4-descent are provided.
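A hedged sketch illustrating the Jacobian and Hessian intrinsics mentioned above, assuming a degree 3 genus one model can be constructed directly from a ternary cubic form; the cubic is an arbitrary smooth example:

```
R<x,y,z> := PolynomialRing(Rationals(), 3);
model := GenusOneModel(x^3 + y^3 + z^3 - 9*x*y*z);  // a genus one model of degree 3
E := Jacobian(model);                               // the Jacobian elliptic curve of the model
H := Hessian(model);                                // one of the covariants mentioned above
```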
17.6 Hyperelliptic Curves
17.6.1 Jacobians over Number Fields
Changes:
- **TwoSelmerGroup**: Previously there were two separate routines for computing the 2-Selmer group of the Jacobian of a hyperelliptic curve, namely TwoSelmerGroup and TwoSelmerGroupData. Now there is a single intrinsic TwoSelmerGroup. In cases where either of the previous intrinsics was applicable, the user may choose which one is used by setting the optional parameter Al to TwoSelmerGroupOld or to TwoSelmerGroupData.
The new intrinsic returns an abstract group together with a map to the relevant algebra. Other data previously available can still be obtained by setting optional parameters appropriately. The upper bound on the rank which was previously the first value returned by TwoSelmerGroupData is now obtained by calling RankBound (see below).
The recommended way to control the bounds used in class group computations is to use the new feature for “globally” setting the bounds (as functions on orders in number fields), by calling SetClassGroupBoundMaps or the simpler alternative SetClassGroupBounds. These bounds will then be used by default in all class group computations during the same Magma session. They should be set before TwoSelmerGroup is called. They may be reset at any time.
The required unit group computations are now done by a call to pSelmerGroup in all cases (this was not previously called in TwoSelmerGroupData).
New Features:
- Intrinsics RankBound and RankBounds have been provided, to collect the information available from various other functions about the rank of $J(\mathbb{Q})$ for the Jacobian $J$ of a hyperelliptic curve of genus 2. RankBound gives an upper bound, taking into account the 2-Selmer group of $J$, the 2-torsion, and whether the order of the Tate-Shafarevich group is a square or twice a square (this assumes that the order is finite). In addition, RankBounds returns a lower bound obtained by a certain amount of searching for points.
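A minimal sketch for a genus 2 Jacobian over $\mathbb{Q}$; the curve is an arbitrary example, and the order of the two values returned by RankBounds (lower bound, then upper bound) is assumed:

```
P<x> := PolynomialRing(Rationals());
C := HyperellipticCurve(x^5 - x + 1);  // a genus 2 curve
J := Jacobian(C);
S, StoAlg := TwoSelmerGroup(J);        // abstract group and map to the relevant algebra
rlow, rhigh := RankBounds(J);          // lower and upper bounds on the rank of J(Q)
RankBound(J);                          // the upper bound alone
```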
Bug Fixes:
- Several improvements and fixes have been made to the local point search phase in the routines for computing 2-Selmer groups. However, it is still best to provide minimal models for the curve (or nearly minimal models, particularly at 2), especially for curves over number fields. Further improvements are expected in future patch releases.
17.6.2 Jacobians over Finite Fields
New Features:
- Routines for point-counting and computing zeta-functions using deformation methods have been added for parametrized families of hyperelliptic curves and their Jacobians in small, odd characteristic. These are faster than the existing Kedlaya implementation for a single curve (once the ground field becomes moderately large) and also have the advantage of being able to compute results for multiple curves in the family in less time than for the individual curves. The functions are JacobianOrdersByDeformation, EulerFactorsByDeformation, ZetaFunctionsByDeformation.
17.7 Modular Abelian Varieties
New Features:
- A package for computing “building blocks” has been contributed by Jordi Quer. A modular abelian variety $A$ can be decomposed (up to isogeny) as a power $B^r$ for some abelian variety $B$ over $\mathbb{Q}$, which is called a building block. The package provides tools, in the non-CM case, for determining $r$, the endomorphism algebra of $B$ (which is either a number field or a quaternion algebra over some number field) and the fields of definition of $B$. In general the fields of definition are described by an element of the Brauer group of some number field.
The intrinsics take spaces of modular symbols (with sign +1), rather than modular abelian varieties. The main intrinsics are $\text{HasCM}$, $\text{InnerTwists}$, $\text{DegreeMap}$, $\text{BrauerClass}$, which computes the class of the endomorphism algebra of $B$, and $\text{ObstructionDescentBuildingBlock}$ which computes the Brauer element that describes the fields of definition of $B$.
A brief summary of the theory is provided in the handbook.
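A hedged sketch, assuming the intrinsics accept a Galois-conjugacy class of newforms realised as a space of modular symbols with sign +1; the level and the choice of factor are arbitrary:

```
M := ModularSymbols(389, 2, +1);  // level 389, weight 2, sign +1
S := CuspidalSubspace(M);
A := NewformDecomposition(S)[1];  // one Galois-conjugacy class of newforms
HasCM(A);
InnerTwists(A);
```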
Bug Fixes:
- In $\text{InnerTwistCharacters}$, the bound $15 + N \div 4$ has been replaced by $15 + N \div 2$, because with the former bound too many twists were detected (for example, for level 28 and quadratic character, and for level 52 and characters of orders 4 and 12).
- $\text{InnerTwistCharacters}$ has been changed so that the output does not contain any CM twists in the case of squarefree level and trivial Nebentypus.
18 Incidence Structures
18.1 Graphs
Bug Fixes:
- It is now possible to use the label 0 on edges and vertices of graphs.
18.2 Hadamard Matrices
Bug Fixes:
- Computation of canonical forms now works for matrices over rings other than $\mathbb{Z}$.
18.3 Finite Incidence Geometry
New Features:
- Code written by Dimitri Leemans for testing whether a coset geometry has the Intersection Property has been included.
19 Coding Theory
19.1 Linear Codes over Finite Fields
New Features:
- The best published decoding attacks on the McEliece cryptosystem together with improved attacks have been implemented. These include attacks developed by McEliece, Lee & Brickell, Leon, Stern and Canteaut & Chabaud as well as generalized combinations of attacks.
19.2 Algebraic-geometric Codes
New Features:
- An efficient algorithm has been implemented for decoding algebraic-geometric codes up to the Goppa designed distance.
19.3 Low Density Parity Check Codes
Basic machinery has been installed for constructing and analysing Low Density Parity Check codes (LDPC codes).
Features:
- Construction of LDPC codes from sparse matrices
- Deterministic LDPC constructions
- Random constructions from regular and irregular LDPC ensembles
- Iterative LDPC decoding
- Simulation of decoding performance on specified channels
- Density evolution on binary symmetric and Gaussian channels for given channel parameters, as well as threshold determination.
- Small database of good irregular LDPC ensembles.
|
{"Source-Url": "http://magma.maths.usyd.edu.au/magma/releasenotes/pdf/relv213.pdf", "len_cl100k_base": 12085, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 52707, "total-output-tokens": 13414, "length": "2e13", "weborganizer": {"__label__adult": 0.00034332275390625, "__label__art_design": 0.0005116462707519531, "__label__crime_law": 0.00055694580078125, "__label__education_jobs": 0.00148773193359375, "__label__entertainment": 0.0001926422119140625, "__label__fashion_beauty": 0.0001741647720336914, "__label__finance_business": 0.0004177093505859375, "__label__food_dining": 0.0004742145538330078, "__label__games": 0.0014324188232421875, "__label__hardware": 0.00141143798828125, "__label__health": 0.0007724761962890625, "__label__history": 0.0006232261657714844, "__label__home_hobbies": 0.00020992755889892575, "__label__industrial": 0.0009860992431640625, "__label__literature": 0.0003578662872314453, "__label__politics": 0.0004353523254394531, "__label__religion": 0.0009679794311523438, "__label__science_tech": 0.371337890625, "__label__social_life": 0.00018978118896484375, "__label__software": 0.040557861328125, "__label__software_dev": 0.5751953125, "__label__sports_fitness": 0.000446319580078125, "__label__transportation": 0.00042510032653808594, "__label__travel": 0.0002682209014892578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 53022, 0.02462]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 53022, 0.38187]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 53022, 0.90047]], "google_gemma-3-12b-it_contains_pii": [[0, 1689, false], [1689, 4056, null], [4056, 6479, null], [6479, 8933, null], [8933, 10850, null], [10850, 12695, null], [12695, 14031, null], [14031, 15970, null], [15970, 17915, null], [17915, 20172, null], [20172, 22720, null], [22720, 25575, null], [25575, 28368, null], [28368, 30867, null], [30867, 33821, null], [33821, 36634, null], [36634, 39085, null], [39085, 41745, null], [41745, 45235, null], [45235, 47075, null], [47075, 50028, null], [50028, 51942, null], [51942, 53022, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1689, true], [1689, 4056, null], [4056, 6479, null], [6479, 8933, null], [8933, 10850, null], [10850, 12695, null], [12695, 14031, null], [14031, 15970, null], [15970, 17915, null], [17915, 20172, null], [20172, 22720, null], [22720, 25575, null], [25575, 28368, null], [28368, 30867, null], [30867, 33821, null], [33821, 36634, null], [36634, 39085, null], [39085, 41745, null], [41745, 45235, null], [45235, 47075, null], [47075, 50028, null], [50028, 51942, null], [51942, 53022, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, 
false], [5000, 53022, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 53022, null]], "pdf_page_numbers": [[0, 1689, 1], [1689, 4056, 2], [4056, 6479, 3], [6479, 8933, 4], [8933, 10850, 5], [10850, 12695, 6], [12695, 14031, 7], [14031, 15970, 8], [15970, 17915, 9], [17915, 20172, 10], [20172, 22720, 11], [22720, 25575, 12], [25575, 28368, 13], [28368, 30867, 14], [30867, 33821, 15], [33821, 36634, 16], [36634, 39085, 17], [39085, 41745, 18], [41745, 45235, 19], [45235, 47075, 20], [47075, 50028, 21], [50028, 51942, 22], [51942, 53022, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 53022, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
215ebf1611ca56d99a72969c4f9be5f2ff3306ee
|
Cloud Customer Architecture for e-Commerce
Executive Overview
This architecture is a vendor-neutral, best-practices approach to describing the flows and relationships between business capabilities and architectural components for e-Commerce applications that use cloud computing infrastructure, platforms and/or services. The elements of this architecture are used to instantiate an e-Commerce system whether using private, public or hybrid cloud deployment models. Applications comprising the core components of the architecture may be delivered as a service, on-premises, or hosted.
This e-Commerce architecture explains how to support enhanced customer engagement as well as supplier and partner engagements. The customer engagement core components of Marketing, Customer Analytics and e-Commerce enable enriched engagement with customers on a deeper, human level, allowing them to be delighted with the right experience at the perfect moment to build lasting loyalty. The supplier and partner engagement core components of Payments, Procurement and B2B Integration enable enhanced supplier and partner engagements that move beyond responsiveness to a synchronized, predictive value chain that mitigates risk and reveals hidden value on a global scale.
The interfaces or dependencies between these and other systems are important considerations when designing the final system architecture. In many cases some of these core systems may remain on-premises, such as Warehouse Management or Point of Sale. One of the most important decisions to make when planning the e-Commerce system is deciding if on-premises components are candidates for deployment in an off-premises cloud service. Resilience and elasticity are among the considerations discussed when evaluating on-premises and “as a service” components. The intent of the evaluation is to ensure that a secure, reliable, high performance architecture is present across the e-Commerce solution. To ensure completeness a number of other components are required, such as firewalls, load balancers, databases, file repositories, content delivery networks, email and messaging.
The architecture described in this paper shows many system components that exist in a provider cloud environment. Yet it is important to understand that it is possible for some of these components to exist on-premises in the enterprise network and not in a cloud environment, particularly where there is an existing component in place which provides the required capabilities. Other considerations to make when evaluating as a service offerings, particularly for Software as a Service (SaaS) and Platform as a Service (PaaS), are the skillsets and number of personnel needed for ongoing operations and management of the component, as well as the capital expense of standing up hardware.
For the scenario where the cloud service is a PaaS offering, it is often the case that many elements of the architecture are available as part of the platform and only configuration and deployment is required. When a SaaS solution is selected the responsibilities for management are frequently reduced to configuration and user management.
The cloud deployment model affects the locations of many of the components in an e-Commerce architecture. In particular, for SaaS and public cloud deployment, the elements are instantiated in the public cloud. For private cloud deployment, the components are instantiated within the private cloud, either on-premises or within a privately managed environment made available by a cloud service provider. The likelihood that the final cloud architecture will be a hybrid IT design is high. For hybrid cloud architectures, the choice of where to locate each component, either in a public or dedicated external cloud environment or an on-premises private cloud service, is governed by security, compliance and performance considerations. The Cloud Deployment Considerations section describes options in more depth. Links to other CSCC resources are included at the end of the paper.
Holistic understanding of e-Commerce architectures is based on understanding the architectures for the mobile, web application hosting, big data and analytics and IoT capabilities that it composes. An appreciation of service provider SLAs is also helpful. Please refer to the CSCC’s Cloud Customer Reference Architecture papers for Web Application Hosting, Mobile, Big Data and Analytics, and IoT [1] [2] [3] [4] for a thorough discussion and best practices on each specific topic.
Cloud Customer Reference Architecture for e-Commerce
Figure 1 shows the elements that may be needed for any e-Commerce solution across three domains: public networks, provider clouds, and enterprise networks.
Figure 1: Elements of e-Commerce Solution
The public network domain contains commerce users and their e-Commerce channel for interaction with the enterprise. The public network also includes communication with peer clouds. The edge services handle traffic between the public network and the cloud. The provider cloud can host comprehensive e-Commerce capabilities—such as merchandising, location awareness, B2B2C commerce, payment processing, customer care, distributed order management, supply chain management and warehouse management. Marketing takes advantage of commerce analytics which helps with digital, cross channel, social and sentiment analytics. Using data cloud services, such as weather analytics, can help in adjusting the merchandise inventory and optimizing transportation in the provider cloud. Data services can be used to generate and aggregate insight reports from the other data cloud services, enterprise data and applications via business performance components in the provider cloud. These insights are used by users and enterprise applications and can also be used to trigger actions to be performed in the e-Commerce environment. All of this needs to be done in a secure and governed environment.
The enterprise network domain contains existing enterprise systems including enterprise applications, enterprise data stores and the enterprise user directory. Results are delivered to users and applications using transformation and connectivity components that provide secure messaging and translations to and from systems of engagement, enterprise data, and enterprise applications.
Figure 2 shows the relationships for supporting e-Commerce using cloud computing.
Figure 2: Cloud Component Relationships for e-Commerce
Components
Public network components
The public network contains elements that exist in the Internet: data sources and APIs, users, and the edge services needed to access the provider cloud or enterprise network.
e-Commerce User
An e-Commerce User is a customer who uses various channels to access the commerce solutions on the cloud provider platform or enterprise network.
Channel
Channel retailing solutions aim to provide a seamless, personalized brand experience whether the customer shops on the Web, over the phone, using a mobile device or all of the above. Not only can you create a next-generation Web channel, you can leverage the Web to improve revenues and customer service in all channels.
Key capabilities in this domain include:
- **Web Site**: Capabilities necessary for a direct to consumer online store. Web storefronts enhance the shopping experience with rich capabilities— from advanced faceted search and mini shopping carts, to integrated inventory availability and product comparisons. See the Cloud Customer Web Application Hosting Reference Architecture for more information. [1]
- **Mobile**: Supports commerce storefronts that take full advantage of mobile device browsers, touchscreens, and location based information to deliver an optimized and highly personalized mobile shopping experience. See the Cloud Customer Mobile Reference Architecture for more information. [2]
- **Connected Devices**: Provides the ability to have connected devices place orders for depleted products helping retailers drive an alternate channel of sales where connected devices are making buying decisions when needed. This option provides convenience to the customer and low touch sales to the retailer.
Edge Services
Services needed to allow data to flow safely from the internet into the provider cloud and into the enterprise. Edge services also support end user applications.
Key capabilities in this domain include:
- **Domain Name System Server**: Resolves the URL for a particular web resource to the IP address of the system or service that can deliver that resource.
- **Content Delivery Networks (CDN)**: Supports end user applications by providing geographically distributed systems of servers deployed to minimize the response time for serving resources to geographically distributed users, ensuring that content is highly available and provided to users with minimum latency. Which servers are engaged will depend on server proximity to the user, and where the content is stored or cached.
• **Firewall**: Controls communication access to or from a system permitting only traffic meeting a set of policies to proceed and blocking any traffic that does not meet the policies. Firewalls can be implemented as separate dedicated hardware, or as a component in other networking hardware such as a load-balancer or router or as integral software to an operating system.
• **Load Balancers**: Provides distribution of network or application traffic across many resources (such as computers, processors, storage, or network links) to maximize throughput, minimize response time, increase capacity and increase reliability of applications. Load balancers can balance loads locally and globally. Load balancers should be highly available without a single point of failure. Load balancers are sometimes integrated as part of the provider cloud analytical system components like stream processing, data integration, and repositories.
**Cloud Provider Components**
**e-Commerce Applications**
With the advent of social and mobile platforms and technologies, suppliers and retailers have started to collaborate in new ways providing capabilities to the end customer that were not possible just a few years ago. Having a retailer participate as a delivery channel for a supplier has not only provided convenience to customers by allowing direct ordering from a manufacturer, it has extended the supplier’s ability to tap into new markets and channels. For the retailer, in addition to providing convenience, it has allowed them to reach new customers and promote their brand.
Key capabilities in this domain include:
• **Mobile Digital & Store**: Enables the convergence of physical and digital stores to provide new ways of reaching and satisfying customer requirements for shopping, delivery and personalization.
• **Product Search and Personalization**: Enables customers to find products more effectively. A multi-channel search solution also supports keyword search, type-ahead and search suggestions. It also extends the scope of searchable content for business users for both structured and unstructured content. The search is based around the search index which must be built before it can be used for any searches. Site administrators and business users can work with search to fine-tune search merchandising to display preferred products to shoppers. Once a set of qualified products are found, further product selection and/or display order of products can be personalized based on customer and product attributes. Personalization decision rules and scores can be applied by real time analytical engines (see Marketing and Commerce Analytics).
• **Catalog**: Provides a consistent view of items offered by a retailer and allows a customer to search or place an order using a mobile device, application or other IoT connected channel. The catalog offered to customers can be controlled by sales offerings, contracts or many additional rules of entitlement including customer behavior and transactional insights. The catalog offering is driven by merchants and for B2B sites, is additionally driven by individual contracts with participating external B2B entities. Catalog can include the capability to provide correct prices for the product and services offered in the catalog. The pricing and promotions functions include calculations based on product, quantities, combinations or contents of the shopping cart and, in B2B sites, contracts. In a B2B commerce scenario, it is very common to have pricing and promotions being driven off a contract with the external B2B entity participating in commerce. Catalog, pricing and promotions can also be considered a common enterprise service that enables catalog, price and promotion calculations across all participating commerce channels within the enterprise. In cases where common services are not feasible, federation of catalog, pricing and promotion data from merchandizing applications or enterprise systems to individual systems can be used.
• **Order Capture**: Enables the creation of shopping carts, wish lists for future purchases, shipping information, payment information, and the conversion of shopping cart to an order. Order capture also allows orders started on a mobile device to be completed in a physical store or on a web application. It also provides the capabilities of bulk order placements - orders placed from a marketplace. Integration and update of the captured order information into the customer’s existing distributed order management processes and components is supported. It also provides order information to customer’s order inquiries and integrates with customer care for providing updates on the status of the order using customer preferences. It provides the customer service rep with functionality for assisted order capture and placements.
• **Marketplace**: Allows customers to shop across multiple sellers. The marketplaces are analogous to physical malls which provide customers multiple shopping experiences in one convenient location. The online marketplaces typically own customer data and control the shopping experience across the sellers within the marketplace. The marketplace drives the marketing, catalog, product placements, cart management, checkout services, payment handling, order brokering and orchestration and after sales customer services. The customers place the order within the marketplace, and the marketplace brokers the orders to individual sellers selling the product. Orders can include product from multiple sellers. In many cases, the marketplace can also provide fulfillment services that can help provide a consistent fulfillment experience for the end customer.
Digital Experience
A rich, meaningful digital experience is the key to engaging customers in today’s integrated digital world.
Key capabilities in this domain include:
- **Content:** Enables relevant personalized content to help attract and educate visitors on the benefits of the brand. Content related capabilities include content authoring tools, content management systems, content search and personalization tools (may require integration with marketing or analytical decision engines) and content servers/portals.
- **Federated Search:** Federated search enables customer and business users to find information about the products, catalog, product reviews and recommendations, marketing content, how-to, knowledge repository, customer and internal blogs in a consistent interface across multiple domain applications. The search can be used across customer facing applications within commerce along with internal applications like customer care, order management, merchandizing and supply chain. Search has come a long way from being key-word based to now becoming cognitive and capable of answering customer queries in natural language. Technology advances in search can now allow customer and business users to interact with systems through natural conversations. Search is also becoming a key platform to enable personalization of content for end users based on contextual learning of the user’s past and current interactions with the enterprise.
- **Social Engagement:** Enables the provision of reviews and rankings for engaging the company and other customers in meaningful dialogs, strengthens the relationship between customers and the brand, and turns customers into brand advocates.
- **Digital Messaging:** Improves the creation, delivery, storage and retrieval of outbound communications, including those for marketing, new product introductions, renewal notifications, claims correspondence and documentation, and bill and payment notifications. These interactions can happen through a widespread range of media and output, including documents, email, Short Message Service (SMS) and Web pages.
- **Email Service:** Keep customers informed of new products, clearances and special opportunities that are being offered at the store. Dynamic email allows the retailer to customize email impressions with tailored programs for each individual customer.
- **Notification:** Notifications help drive customer touch points by allowing notifications of store activities to be sent via email or cell phone.
**Gateway**
Allows smart devices to communicate with in-store networks to search or shop and pay. This can have the same capabilities and requirements for security and scalable messaging as a mobile gateway or IoT transformation and connectivity gateway as referenced in the CSCC’s IoT reference architecture [4].
**Customer Care**
Supports customer care across the entire transaction lifecycle and all commerce channels where customer care personnel supporting the user can see behaviors of a customer in more than one channel. Whether customer care is entirely self-service or provided by customer service personnel, delivering personalized care requires access to a range of data typically residing on multiple systems (including data from warehouse, logistics, web site and PoS, purchase/account history, etc.).
Customer care is frequently offered in real time based on user behaviors such as abandoning a shopping cart or clicking back and forth between pages multiple times.
Strides in cognitive computing and natural language processing have enhanced customer care functions. Customer care can now be provided anytime, anywhere by using cognitive customer care applications that provide natural language interactions from the mobile or web application. Understanding the customer, the history of interactions, transactions and current context can all be used to offer a personalized customer care experience.
Key capabilities in this domain include:
- **Customer Relationship Management (CRM)**: Provides broad and deep visibility into a consumer’s current and historic behavior. This is accomplished through the collection and aggregation of information from tools and sources that make up the Commerce Analytics domain.
- **Loyalty Management**: Allows the e-Commerce provider to build and track customer loyalty across the customer’s interactions. These systems have the capability to enable various kinds of loyalty programs tied to customer profiles. The systems provide capabilities to track loyalty either directly or through transactional feedback from the Order e-Commerce application. In addition to creating, tracking and managing loyalty programs, these systems manage various kinds of reward programs. Loyalty programs play a key role in defining the interactions between the customer and the e-Commerce provider, help to build brand awareness, and also contribute to customer retention, e.g., special offers for VIP customers, free services to platinum members, etc.
**Payment**
Payment processing and payment gateways are two different functions. A payment gateway is needed whenever payments are accepted over the internet; a business that does not accept payments online has no need for one. For internet merchants, both payment processing and a payment gateway are required. Many payment processors also offer payment gateway services – selecting a single provider can simplify issue resolution if there is an outage or dispute.
Key capabilities in this domain include:
- **Payment Processing**: supports payment transactions using credit cards or electronic fund transfers which includes at least these roles:
- Merchant
- Customer
- Merchant payment processing service provider
- Merchant bank if different than payment processor
- Customer’s bank or bank issuing credit or purchase card
Payment processing can be a for-fee service - merchants are charged per transaction processed, typically a percentage of the individual purchase. The cost of the service is offset by the benefits of a faster settlement process; the merchant has more cash on hand. For customers, the benefits include security and convenience compared to mailing checks or the additional cost of money orders.
Because of the complexity of the transaction process and the stringent data security requirements and regulations it can be more cost efficient and less risky for most companies to outsource this function to a payment processor who is an expert in the relevant standards and fraud detection. The payment processor is responsible for authenticating the validity of the eCheck, credit, debit or gift card with the card issuer, sending the confirmation to the merchant’s bank and ultimately carrying out the electronic funds transfer (EFT) from customer to merchant bank to complete the payment. The payment processor covers only part of the overall Security and Fraud Detection domain.
In the U.S., payment processors initiate the EFT through the US Federal Reserve Bank’s Automated Clearing House (ACH). There is no single counterpart to this system in other parts of the world. The global standard for how companies process, store and/or transmit credit or debit card information is known as PCI DSS – Payment Card Industry Data Security Standard. This standard covers payment processors, point of sales and the interchange systems operated by the card brands.
Tokenization and end-to-end card encryption are essential to protect cardholder data and to ensure the e-Commerce providers adhere to the PCI standards. End-to-end card encryption on the card scan device is used at POS and stores, as customers can physically present/scan their cards there. Tokenization (internal or external) is used on web and mobile channels where physical card scans are not possible. The payment processor often provides credit card machines or other equipment for processing cards at the POS.
There is also increasing adoption of newer forms of electronic payment, such as digital wallets like Apple Pay, Android Pay and PayPal. These digital wallets provide secure and convenient transaction options to the end customer and also significantly reduce the risk of fraud and associated chargebacks for e-Commerce providers. In addition, e-Commerce providers can have their own virtual currencies such as virtual coins, points and loyalty rewards, each of which requires special handling during payment processing.
- **Payment Gateway**: The Payment Gateway is the mediator between the e-Commerce transaction and the Payment Processing Service. Security requirements for purchase card transactions prohibit the direct transmission of information from the website or PoS system and the payment processor. Payment Gateways may be offered as a service by payment processing vendors or contracted from a vendor who only offers a gateway service.
**Distributed Order Management**
Supports inventory, order processing and order visibility. It orchestrates the workflow of orders from distribution centers/warehouses, suppliers, and third-party vendors for direct fulfillment and stores. Distributed Order Management can help deliver a superior customer experience when enabled to execute and coordinate order fulfillment processes across an extended supply chain network. It can provide flexible, process-based management of orders from multiple channels and enable customized fulfillment based upon user-defined business requirements. It also delivers the required visibility and event management across all fulfillment activity – allowing quick response to unexpected problems and helping to meet customer expectations.
Key capabilities in this domain include:
- **Order Management and Orchestration:** Manages, aggregates, and monitors orders from virtually all channels usually with an intelligent sourcing engine that coordinates fulfilment across the extended enterprise. Supporting a virtual single order repository gives customers, channels, suppliers, and trading partners access to modify, cancel, track, and monitor the order lifecycle in real-time. Flexible fulfilment gives the capability to check for inventory availability, provide rule-based dynamic allocation, enable transfers when a required item is out-of-stock, select locations based on inventory availability, split orders as needed, and source or drop-ship from a channel partner.
- **Global Inventory Visibility:** Provides a consolidated view of inventory in warehouses, stores, and third party vendors, helping to coordinate inventory across multiple sites, enterprises and sellers, allowing managers to track inventory anywhere at both internal and external ship nodes. Global Inventory Visibility solutions provide a synchronized real-time availability view of virtually all supply and demand from multiple systems and channels, including in store, in warehouses, at distributors, at suppliers, and in transit. Global Inventory Visibility solutions utilize an intelligent sourcing engine that optimizes inventory use across the extended enterprise to provide the best available-to-promise (ATP) dates and the most efficient fulfilment options available. It also identifies shortages and allows inventory planners to resolve problems by manipulating inventory balances through the allocation of sales orders and execution of purchases or movement of inventory. Data can be shared with external systems, customers, suppliers, and partners for demand and supply management.
- **Returns Management:** Provides all of the capabilities that are needed to manage the entire order returns process. This capability enables the buyer and the seller to effectively track items throughout the return and repair process and automates the procedures that return items to stock. Real-time status updates from service and repair organizations also enable sellers to leverage the returns processing cycle as a source of supply. Returns management links multiple returns or repair requests to the original sales orders, providing repair lifecycle tracking to track items throughout the returns and repair processes, including exchange orders, refurbishment and repair requests, and return disposition.
**Supply Chain and Logistics Management**
Enables systems to plan and manage product lifecycle, supply network, inventory including replenishments, distribution strategies, partner alliances and related analytics. Logistics management helps manage the internal logistics for purchasing, production, warehousing, and transportation within the enterprise to ensure products are available to end customers in the most efficient and cost effective way possible.
Key capabilities in this domain include:
• **Supply Chain Management:** Enables procurement of raw goods and materials to the effective delivery of the finished product to a customer. Traditionally, the Supply Chain operated independently from Marketing, Provisioning, or Inventory Management and often had conflicting goals. With the advent of digital marketing and focus on omni-channel, the supply chain has become an integral and critical part of service and delivery to a customer. For example, weather analytics can immensely affect Supply Chain Management and should be part of its key considerations.
• **Product Lifecycle Management (PLM) & Manufacturing:** Supports all aspects of a product from inception to the ultimate end of life. Phases include design, engineering, manufacturing, distribution, and marketing, including people, process and technology. Efficient PLM systems assist retailers with the ever increasing complexity realized by a global marketplace. It helps retailers effectively manage complex supply chains, ever-changing customer preferences, and design challenges.
• **Sourcing & Procurement:** Sourcing supports the component of the procurement process that deals with supplier selection and management. With a global economy and the ability to source and manufacture products across the globe, an efficient and intelligent supply chain has become critical for the survival and success of a retailer, as well as their ability to efficiently personalize and deliver the right product at the right price. Procurement systems have advanced significantly and form the pivotal link between commerce, order management, and ultimate delivery to the end customer.
• **Supplier and Partner Data Communications:** Enables retailers and other commerce constituents to securely communicate business documents on shipments, inventory, invoices, purchase orders, acknowledgments, contracts, etc. Such systems should have the ability to extract, classify, encrypt/decrypt, transform and transmit data with other commerce constituents that are external or internal to a retailer. Inherent in such data communications are security, high performance, audit, verification, automation, transport layer independence, and authentication for the retailer and their business and trading partners. The use of the blockchain technology for supply chain management will dramatically reduce time delays, added costs, and human error that plague transactions today. It allows more secure and transparent tracking of all types of transactions across the supply chain.
• **Transactional Event Ledgers:** Transactions across trading partners like suppliers, vendors, carriers, and B2B customers are driven through individual enterprise applications and the partner’s application. For example, in a vendor shipping situation, a purchase requisition is sent to the vendor/supplier as an electronic document through various communication channels and the transaction event log is maintained in the PO management system of the sender, the broker and the receiving vendor’s sales order management system. These systems then work on their own in isolation without any good way to ensure transaction integrity across partners or their applications. There is also no easy way to find the transactional events and logs through a trusted repository. In cases of discrepancies, this would normally mean some kind of audit/dispute between the involved parties and manual intervention to pull out records from individual systems to validate what was the correct set of events and transactions. Sometimes there is a need for intermediaries that both parties will trust - as in the case of certificate issuing authorities for digital certificates, stock exchanges for stocks, the US Federal Reserve for banking, etc. Intermediaries sometimes become a bottleneck and are not practical for many transactions within commerce. Distributed transactions and events managed through crypto-technologies like blockchain provide a secure and intermediary free ledger and validation of transactions across parties like trading partners. Blockchain based applications are evolving rapidly to provide distributed transactional ledgers to capture transactions and transactional events across commerce applications and partners involved in commerce.
• **Transportation Management & Optimization:** Transportation Management provides a retailer with the ability to source, transport, and deliver goods effectively while managing customer demand across multiple modes of transportation and providers. Transportation Optimization provides the capability to capitalize on the most efficient and cost effective manner needed to deliver products and services. It provides the retailer with options allowing them to make critical business decisions around profitability or cost, while providing them the ability to maintain delivery schedules in the event of a failure within their supply chain.
**Warehouse Management**
This domain enables efficient management of warehouse operations. Combining a warehouse management system with a wireless network, mobile computers, radio frequency identification (RFID) technology, voice picking applications, and barcoding can help fully extend your enterprise to the mobile worker, while increasing operational efficiencies and enhancing your customer service.
In omni-channel commerce, stores are increasingly being used as fulfilment centers to enable pickup in store, ship from store, and process returns at stores. In addition to their roles as centers to capture cash and carry sales, stores are participating extensively to enable commerce for other channels such as online, mobile, and call centers.
Key capabilities in this domain include:
• **Warehouse Inventory Management:** Supports systems to replenish stock, track costs of inventory, track profits, forecast inventory, forecast prices, forecast demand and more. The process interacts with systems to track orders, shipping, costs, stock, and sales and software that may be used to predict inventory status and track materials. Optimized inventory management will help keep costs in check, maintain a proper merchandise assortment, set targets, and monitor profits efficiently.
• **Inventory Optimization**: Optimizes capital investment constraints or objectives over a large assortment of stock-keeping units (SKUs) while managing demand and supply volatility, and providing the correct products on shelves.
• **Inventory**: Holds the complete list of merchandising items or goods in stock – on hand, in transit, or returned.
**Merchandising**
Merchandising planning involves marketing the right merchandise or service at the right place, at the right time, in the right quantities, and at the right price, with the goal of optimizing margins, gross revenue, or shelf life.
Key capabilities in this domain include:
• **Assortment Management**: Manages product variations and makes them available to the customer in a meaningful way. Enables customers to easily identify products based on familiar categories while shopping, helps merchants work with different customer profiles fitting their categories, and enables the merchants to make the right decisions of what product and assortments to sell in a particular channel.
• **Pricing Management and Optimization**: Involves ways to set and manage pricing and promotion of products and services based on pricing policies, strategies, goals and objectives. Pricing and promotion optimization allows merchants to set appropriate price points and promotions to achieve goals such as increasing market share, maintaining margins and profitability, responding to competitive pressures and volatile costs, and maintaining channel presence. Recent advancements allow commerce providers to enable dynamic pricing and promotions to provide personalized pricing and promotional offers to end customers. The pricing and promotion optimizers also enable the merchants to run “what-if” simulations to understand the impact of certain pricing strategies and tactics.
• **Product Placement**: Enables merchants to figure out where and how to display products and services across customer channels for maximum positive impact on sales. Product placements on the site tremendously influence sales. Merchant tools should allow merchants to do A/B testing and evaluate the impact of product placement and content selection within a site or partner channel to achieve their objectives. Statistical experimentation with various schemes and approaches is increasingly driven by cognitive systems that ease the burden of manual trial and error.
**Commerce Analytics**
Enables optimization of the shopper’s journey and improves the sales and revenue for the business. Various types of analytics are used to achieve this, such as digital analytics, cross channel analytics, social commerce and sentiments, and merchandise analytics. Commerce Analytics should drive the “next best action” solution delivering the most appropriate action at the right time across channels maximizing customer and business value. Personalized interactions are enabled by a comprehensive view of customers, real-time predictive analytics to anticipate customer behavior, preferences and attitudes, and cross-channel delivery of best action to address customer needs and enhance long-term business revenue.
Key capabilities in this domain include:
• **Digital Analytics:** Enables the monitoring of customer interactions online with different web pages including the time customers stay on a specific section or product page and click-throughs for a specific website. Digital analytics improves the customer’s web experience and directs their attention to specific products or services which they have liked in past visits. The use of digital analytics helps in rendering a dynamic offer for the customer on the website.
• **Cross Channel Analytics:** Enables predictive, cognitive and prescriptive analytics across all channels in which customers engage. Predictive analytics helps to deliver targeted customer messages by using historical data as well as future predictions in specific markets and regions. Cognitive and prescriptive analytics are based on historical data to personalize each shopper interaction.
• **Social Commerce & Sentiment Analytics:** Enables the feeding of information about specific products or services from social media sites, such as Twitter and Facebook, into commerce analytics. For example, due to a weather pattern in certain regions, specific items may be in greater demand. Areas experiencing bad weather may need extra inventory of generators, umbrellas, and emergency food items.
• **Merchandise Analytics & Optimization:** Ensures that insights can be drawn from the customer’s journey and further optimization can be achieved for maintaining inventories and improving sales and revenues.
**Marketing**
This domain supports customer experiences from product exploration to purchase decision to transaction completion with personalized offers, content, and product presentations via a variety of communication channels including traditional, direct mail, email, as well as emerging mobile and social media.
Digital marketing along with mobile and social channels has altered traditional means for marketing and campaign management. Understanding what drives consumption and shopping behavior is now key to maintaining and growing market share.
Key capabilities in this domain include:
• **Marketing Resource Management (MRM):** Supports planning to determine strategic marketing channels, messages, and initiatives and the allocation of appropriate budgets and resources. Execution of marketing activities involves managing timelines and costs, as well as internal and external resources for implementing initiatives according to the marketing plan.
• **Campaign Management:** Enables the presentation of personalized, timely and relevant marketing messaging across multiple channels by using dynamic micro-segmentation with richer consumer data and sophisticated marketing analytics. Execution of an outbound marketing campaign involves:
- Defining and segmenting the target audience
- Defining content and offers/messages for each segment
- Sending out communications and tracking responses
- Defining measurement strategy (control groups, A/B testing)
In addition, location awareness is becoming even more critical in a retailer’s ability to reach out and personalize a campaign to its customers. By combining the use of mobile apps and also location-based services, campaigns can be personalized and their audience targeted specifically with content that is relevant and in context of the customer’s situation.
• **Real Time Recommendations**: Supports defining customer/visitor experiences in interactive digital channels by defining rules for personalizing offer, content, and product recommendations. The Real Time Recommendation engine provides a centralized mechanism for selection and prioritization of marketing offers, content, and products across interactive digital channels. Product, content, and offer recommendations are based on the customer’s current shopping interests, search queries, wisdom of the crowd, predictive models, history of the visitor’s behavior, and data captured in the visitor’s profile.
**Data Service**
Provides the ability to access data and replicate and synchronize the data. Data services, such as a weather analytics service, will help in adjusting merchandise inventory and optimizing transportation. Other data services can be used to generate and aggregate the reports from the enterprise data and applications via business performance components in the provider cloud.
**Business Performance**
Enables describing and understanding the alerts, metrics, and key performance indicators (KPIs) an organization uses to monitor day-to-day commerce activity, keep track of progress against defined goals, and adjust offerings across commerce channels in response to market demand. To facilitate the output from multiple systems, the data is often combined in simple-to-view ‘dashboard’ formats tailored for a specific Line of Business or business role. Commerce Analytics and Data Services support highly granular and real-time visibility across overall customer activity, as well as the capability to drill down to individual transactions.
The advent of digital channels combining multiple touch points, technologies and platforms, plus the quest for optimal omni-channel retail, has complicated the landscape when it comes to accurately measuring a retailer’s business performance.
While there are many more consumer touch points and interactions to measure, retailers still rely on simple, fundamental metrics to provide an accurate view of their performance. These 5 areas are a common part of retail KPIs:
1. Number of customers in a store or traffic to their website
2. Conversion rates – of people that come into a store or visit a website, how many actually purchase
3. Average sales value of items purchased
4. The size of a shopping basket
5. Gross margin
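As a concrete illustration, the sketch below derives these KPIs from aggregate period totals. It is not part of the reference architecture; the structure fields and sample figures are assumptions chosen only to show the arithmetic behind conversion rate, average order value, basket size, and gross margin.

```c
/* Minimal sketch (illustrative names and sample values, not from the paper)
 * of how the five retail KPIs could be computed from figures reported by
 * the Business Performance component. */
#include <stdio.h>

struct period_totals {
    long visitors;        /* store traffic or unique site visits */
    long orders;          /* completed purchases                 */
    long items_sold;      /* total units across all orders       */
    double revenue;       /* total sales value                   */
    double cost_of_goods; /* cost of the items sold              */
};

int main(void) {
    struct period_totals t = { 12000, 480, 1150, 38400.0, 23040.0 };

    double conversion_rate = (double)t.orders / (double)t.visitors;
    double avg_order_value = t.revenue / (double)t.orders;
    double avg_basket_size = (double)t.items_sold / (double)t.orders;
    double gross_margin    = (t.revenue - t.cost_of_goods) / t.revenue;

    printf("traffic:         %ld\n", t.visitors);
    printf("conversion rate: %.1f%%\n", 100.0 * conversion_rate);
    printf("avg order value: %.2f\n", avg_order_value);
    printf("avg basket size: %.1f items\n", avg_basket_size);
    printf("gross margin:    %.1f%%\n", 100.0 * gross_margin);
    return 0;
}
```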
Transformation and Connectivity
The transformation and connectivity component enables secure connections to enterprise systems with the ability to filter, aggregate, modify, or reformat data as needed. Data transformation is often required when the data format does not match what enterprise applications expect.
Key capabilities in this domain include:
- **Enterprise Secure Connectivity**: Monitors usage and secures information as it is transferred between the cloud provider services domain and the enterprise network, enterprise applications, and enterprise data.
- **Transformations**: Transform data between analytical systems and enterprise systems. Data is improved and augmented as it moves through the processing chain.
- **Enterprise Data Connectivity**: Enables analytics system components to connect securely to enterprise data.
- **Extract Transform & Load**: Supports batch data loading, for example for digital catalog updates in B2B2C scenarios.
Enterprise Network Components
The enterprise network is where the on-premises systems and users are located.
**Business User**
A business user or merchant who accesses the commerce solutions on the cloud provider platform or enterprise network.
**Internal Channel**
Internal Channel Retailing solutions provide an interactive experience whether the customer shops in the store, over the phone with a CSR, or using a web-based call center. Not only can businesses create a next-generation Web channel, they can leverage the Web to improve revenues and customer service in all channels.
Key capabilities in this domain include:
- **In Store**: Integrates the online channel with the store’s channel, to offer the best of both scenarios and propel overall revenue growth. Retailers are working to differentiate themselves by delivering outstanding shopping choices and services in stores. By deploying innovative point-of-sale solutions, informational kiosks, Web tablets and wireless communications, they can gain a competitive advantage.
- **Call Center**: Provides customer service representatives (CSRs) with a single point of access, web-based call center solution to access commerce information. It enables more informed omni-channel interactions with customers to help increase sales.
**Enterprise Application**
Enterprise Applications are key data sources for a commerce solution. Enterprise applications leverage cloud services and host legacy applications. Three key capabilities are described here, and the list can be expanded to include other legacy applications.
Key capabilities in this domain include:
- **Finance**: Supports financial systems that are an integral part of the overall e-Commerce system. Whether a standalone application or a module within an Enterprise Resource Planning application, it is connected, at minimum, to payment processing and distributed order management. Assuring timely recognition of revenue and reconciliation of accounts and inventory are among the reasons integration of this back end system is a requirement.
- **Human Resources**: Supports the commerce workforce including employees and contractors and all aspects of human resources management such as hiring, training, safety, retention, payroll, benefits, etc. Effective HR management is an important part of any successful organization.
- **Contract Management**: Supports management of vendor and customer contracts for the enterprise. A typical contract on the “buy side” can cover negotiated terms for procurement, pricing, assortment, inventory, logistics, orders with items, quantity and times, legal terms, measurements and enforcements with external vendors and suppliers that provide product and services to the commerce provider. The typical contract on the “sell side” can cover negotiated terms for selling, pricing and promotions, assortment, inventory level, logistics terms, bulk orders, legal terms, measurements and enforcements with B2B buying organizations that purchase from the enterprise. The contract management system is also responsible for federating parts of the contract to individual domain applications for contract execution and enforcement. This includes federating approved B2B catalogs with pricing and promotions to e-Commerce sites, order management and other channel providers; sending warehousing and logistics contract portions to supply chain, warehouse management and logistics management solutions; and so on. Contract systems also have a feedback mechanism to evaluate metrics and enforcement for contracts across domain applications.
**Enterprise Data**
This component hosts a number of applications that deliver critical business solutions along with supporting infrastructure like data storage. Such applications are key sources of data that can be extracted and integrated with services provided by the analytics cloud solution.
Key capabilities in this domain include:
- **Reference Data**: Master data information about products, location, supplier, customer, store associates and employees. Master data management (MDM) provides a trusted view of critical entities typically stored and potentially duplicated in siloed applications - customers, suppliers, partners, products, materials, accounts, etc. Sometimes master data management is separated into customer data solutions and product data solutions. Master data management also helps determine whether a customer is new or existing.
- **Transactional Data**: This data describes how the business operates and includes transactional master data such as purchase orders (POs), advance shipping notices (ASN), sales orders, shipments, receipts and returns.
- **Activity/Big Data**: Includes customer transaction history, web activity, ratings and reviews, social chatter, market data, weather and events, and contextual location.
- **Operation Master Data**: Includes price, promotion, inventory, cost, carrier, digital content and service provider information. It also maintains the master digital product catalog.
Enterprise User Directory
This component provides access to the user profiles for both the cloud users and the enterprise users. A user profile provides a login account and lists the resources (data sets, APIs, and other services) that the individual is authorized to access. The security services and edge services use this directory to drive access to the enterprise network, enterprise services, or enterprise-specific cloud provider services.
Security
Supports rigorous security needed at each step in the lifecycle of commerce applications—from raw input sources to valuable insights to sharing of data among many users and application components. Security services enable identity and access management, protection of data and applications, and actionable security intelligence across cloud and enterprise environments. It uses the catalog and user directory to understand the location and classification of the data it is protecting.
Key capabilities in this domain include:
- **Identity and Access Management**: Enables authentication and authorization (access management), as well as privileged identity management. Access management ensures each user is authenticated and has the right access to the environment to perform their task based on their role (that is, customers, employees, partners, supply chains, and business users). Capabilities should include granular access control (giving users more precision for sharing data) and single sign-on facility across big data sources and repositories, data integration, data transformation, and analytics components. Privileged identity management capabilities protect, automate, and audit the use of privileged identities to ensure that the access rights are being used by the proper roles, to thwart insider threats, and to improve security across the extended enterprise, including cloud environments. This capability generally uses an enterprise user directory.
- **Application and Data Protection**: Services that enable and support data encryption, infrastructure and network protection, application security, data activity monitoring, and data lineage.
- **Data encryption**: Secures the data interchange between components to achieve confidentiality and integrity with robust encryption of data at rest as well as data in transit.
- **Infrastructure and network protection**: Supports the ability to monitor the traffic and communication between the different nodes (like distributed analytical processing nodes) as well as prevent man-in-the-middle and denial-of-service attacks. This service also sends alerts about the presence of any bad actors or nodes in the environment.
- **Application security**: Ensures security is part of the development, delivery, and execution of application components, including tools to secure and scan applications as part of the application development lifecycle. Application security identifies and remedies security vulnerabilities from components that access critical data before they are deployed into production.
- **Data activity monitoring**: Tracks all submitted queries and maintains an audit trail for all queries run by a job. The component provides reports on sensitive data access to understand who is accessing which objects in the data sources.
- **Data lineage**: Traces the origin, ownership, and accuracy of the data and complements audit logs for compliance requirements.
- **Security intelligence**: Enables security information event management, audit and compliance support for comprehensive visibility, and actionable intelligence to detect and defend against threats through the analysis of events and logs. High-risk threats that are detected can be integrated into enterprise incident management processes. This component enables auditing capability to demonstrate that the analytics delivered by the big data platform sufficiently protects Personally Identifiable Information (PII) and delivers anonymity. It also enables automated regulatory compliance reporting. Security/fraud detection is an important part of the payment processing steps, and if fraud is detected it is reported directly to the enterprise for immediate action.
The Complete Picture
Figure 3 provides a more detailed architectural view of components, subcomponents and relationships for a cloud-based e-Commerce solution.
Figure 3: Detailed Components Diagram
Runtime Flow
A customer wants to buy new garments to attend a wedding in 4-5 weeks. He searches for a specific retailer online. The retailer offers the merchandise online and in retail stores in the mall. The retailer also maintains new designer clothes for special occasions. Figure 4 illustrates the e-Commerce flow of this typical scenario enabled by cloud computing.
Figure 4: Flow for the Digital Transformation of Retailer’s Commerce
1. The customer browses information about the needed garment using a mobile phone as the channel. The customer finds that a new design of the garment will be available in a couple of weeks. The customer registers on the website to receive information about the availability of the new design. The customer’s presence on specific mobile pages and preferences entered as part of the registration process will also be captured via the commerce analytics and marketing components.
2. After a few days, the manufacturer/retail merchant introduces the new design in their product catalog. The product launch is done through a marketing campaign on various channels including an e-Campaign. The updated e-Commerce catalogs are available for access on various channels.
3. The customer gets an email from the merchant about the new garment design. The customer opens the email and clicks on the link to go to the website to learn more about the product. Digital experience components, such as digital messaging, are used to engage the customer.
4. Based on the customer’s profile, three different variations of the product are shown on the website. When the customer arrives on the site, dynamic marketing content can be rendered using the customer’s preferences (captured in steps 1 and 3).
5. The customer uses a special promotion offered to him as a preferred customer. This is achieved based on past purchases and the order capture component of e-Commerce Applications.
6. The customer places their order (payment processing occurs) using the e-Commerce Applications. The specific customer order capture information is forwarded to distributed order management.
7. The merchant fulfills the order, ships to the customer, and sends an email to the customer with tracking information. Supply chain management is called by distributed order management to fulfill the captured order.
8. The merchant also checks the inventory in order to replenish from their contract supplier - using warehouse management.
9. The merchant sends out appropriate purchase orders, drop ship requests, and service requests, and receives shipment notices, acknowledgements, and invoices. The supply chain and logistics management components enable these steps.
10. Information obtained from social analytics (including a survey from the customer) suggests that the new variation of the product is more popular than the original. The commerce analytics sub components (Social Commerce & Sentiment Analytics) are used for this purpose.
11. Information obtained from social analytics is passed to merchandise inventory for further analytics and optimization. The merchandising gets adjusted based on feedback from commerce analytics and warehouse management.
**Deployment Considerations**
Cloud computing is classified based on two models: service type and deployment type. The CSCC’s *Practical Guide to Cloud Computing* [5] provides thorough definitions of both models and prescriptive information for choosing among the various cloud computing deployment models. The CSCC’s *Practical Guide to Hybrid Cloud Computing* [7] provides guidance and strategies to navigate the intricacies of hybrid planning and governance. Because e-Commerce by definition uses payment processing and/or payment gateways, the typical deployment model will be hybrid since most e-Commerce merchants do not provide their own payment processing or gateway. *ISO/IEC 17788:2014 Cloud Computing Overview and Vocabulary* [6] and *ISO/IEC 17789:2014 Cloud Computing – Reference Architecture* [7] are the basis for the CSCC’s definitions of cloud computing characteristics, roles, deployment models and service models and can also be referred to for more details. The essential characteristics of cloud computing, as described in the CSCC’s *Practical Guide to Cloud Computing*, are useful for focusing discussions on whether to ‘buy or build’ and which e-Commerce components to deploy where. The following section offers an overview of some of the most important considerations for deployment decisions. The suggested criteria for choosing a cloud adoption pattern are meant as a starting point.
Organizations should define their own decision criteria for selecting public, private or hybrid cloud components based on the specific needs of their business. Cloud technology is a fast-changing space. As technologies evolve, cloud service providers (CSPs) will likely offer new services and more negotiable SLAs than are available today. Key to initial deployment, and to successfully sustaining a viable e-Commerce solution, is assurance that governance mechanisms are in place to keep up with cloud, technology, and business changes. These include regularly revisiting the decision criteria for CSP selection and whether commerce workloads belong in public, private or hybrid clouds.
The e-Commerce architecture defined in Figure 2 can be applied in most industries: banking, retail, manufacturing, communications, industrial products, etc. The deployment considerations will vary depending on the maturity and investments made by organizations in existing applications, customer expectations, the goods or services offered by the merchant and the elasticity or availability requirements needed to meet peak usage for the merchant. The kinds of systems integrated and the peak usage will vary considerably between a consumer-oriented seller of toys, a B2B site for drill bits, and the online booking tool for a 24x7 car towing services provider. If no data is available to define peak usage or performance requirements – perhaps because the business is a start-up or new to e-Commerce – then collecting this information is essential to long term viability. Include analytic components upfront in the design as the data collected will provide guidance for future architectural decisions.
Customer security concerns are universal. Whether a commerce application is B2C or B2B, users need to have confidence that their personal data and payment information is held securely and that their privacy is maintained. The cost and effort to implement PCI or data security compliance frameworks will guide decisions on what cloud deployment and services model to adopt.
The first step in the deployment consideration is the definition of business capabilities that are needed by an organization. This is the time to evaluate on-premises components and current cloud services providers against typical and peak usage requirements such as ‘Black Friday’ or other seasonal increases in traffic, planned sales events, or weather related traffic. Understanding the usage scenarios will help the organization decide on the selection of “as a service” options such as PaaS and SaaS or come up with a migration or hybrid strategy based on capacity and performance forecasts. B2B sites offering repair or other services covered by delivery SLAs need to assure that the e-Commerce architecture will support these customer facing SLAs. Businesses that are new to e-Commerce in general or are expanding commerce channels to include mobile apps will need to take into consideration social media rankings that encompass goods, services, and the shopping/app experience itself. A low rating of a consumer facing app can reduce shopper traffic irrespective of the quality of what is sold. B2B merchants are not immune from social media scrutiny and criticism, and need to make the same considerations based on their users’ expectations. Once the business capabilities are agreed upon they can be mapped to the architecture components defined in Figure 2 as well as used to review existing development or other IT governance to see if modifications are needed to support a new architectural model.
The second step in the deployment consideration is the evaluation of existing investments in e-Commerce. This is also the time to look closely at the responsiveness of those development practices and change management processes in place that support the e-Commerce system. Being able to modify catalogues, content, and applications rapidly is a consumer expectation for all but the smallest online merchants. Organizations may find that adopting DevOps is needed to meet customer expectations going forward. When the automation or infrastructure to adopt DevOps is not already in place, private/public cloud services can fill the void.
These new cloud capabilities then need to be integrated with existing investments in commerce running in the merchant’s data center creating a hybrid cloud.
Once there is an understanding of the known and potential peak usage scenarios and the viability of existing technology investments to support the desired future state, the third step, the selection of cloud adoption patterns for infrastructure, platform and software in the public, private or hybrid model, can be made. Among the common criteria for selection of cloud adoption patterns are service level agreements (SLAs) covering security, resiliency and scalability, the resources and skillsets of personnel, whether the organization budgets with a preference towards OPEX or CAPEX, and the operation and governance strategy in place. These concerns apply equally to all cloud service models: IaaS, PaaS and SaaS. Whether or not a commerce architecture component is available across cloud adoption patterns and on-premises will also dictate the choice and use of a public, private or hybrid cloud. The CSCC papers referenced in this guide provide guidance on SLAs as well as considerations for resiliency, interoperability and the like. Guidance on how to define cloud governance for public, private and hybrid architecture is also provided.
Organizations that are start-ups or have very little investment in e-Commerce may choose to use public or dedicated off-premises cloud to realize infrastructure components in support of self-managed commerce architecture components. Further, they have the option to use standardized SaaS commerce components. Organizations that have made investments in on-premises systems, such as ERPs, billing or warehouse/inventory management, have a different set of opportunities for their architecture. When adding new components, they have choices such as using standardized public cloud commerce SaaS offerings or using dedicated off-premises cloud solutions to support capacity bursts. Any organization needs to make regular assessments of both their technology and development processes that support e-Commerce to assure that these meet the business needs of agility and innovation. When there are needs for improvement, a decision can be made to move into either a private or public cloud component. Organizations that have made heavy investments in on-premises commerce assets, yet find their SLAs are not met by current public or dedicated cloud capabilities provided by CSP vendors, would likely benefit from a hybrid cloud adoption pattern.
Once a decision has been made on a cloud pattern, a final review of all system integration points is essential. Harmonizing SLAs, assuring end-to-end change notification, and updating security policies are among the areas deserving of thoughtful analysis. Integration points are also a potential source of bottlenecks. Assure that middleware, API appliances, PaaS, service buses, etc. are capable of meeting peak usage. Continual transaction monitoring of integration points, from both a security and performance perspective, is highly desirable.
Summary of Key Considerations
The architect of an e-Commerce system needs to match business requirements to tools and technologies capable of satisfying customers, merchants, compliance entities and financial services providers. The ubiquity of social media as a vehicle for criticism means that an unsatisfactory experience for any one of these constituents could turn into a viral, real-time public relations problem. Cloud services such as SaaS and PaaS are typical approaches to meeting the requirements of rapid updates. Cloud elasticity and resilience assures unanticipated bursts in traffic can be supported. The following are key considerations for architects aiming for an optimal user experience across all e-Commerce channels:
- Design to meet needs for rapid change and updates in customer facing components
- Assure high performance across all components
- Take care in analyzing system interfaces and dependencies
- Assure future interoperability by choosing open standards-based components wherever possible
- Make data security a focal point across the architecture
References
http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-web-application-hosting.htm
http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-mobile.htm
http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-big-data-and-analytics.htm
http://www.cloud-council.org/deliverables/cloud-customer-architecture-for-iot.htm
http://www.iso.org/iso/catalogue_detail?csnumber=60545
Acknowledgements
The major contributors to this whitepaper are: Gautham K. Acharya (IBM), Karl Cama (IBM), Glenn Daly (IBM), Sunil Dube (IBM), Mark Griner (IBM), Gopal Indurkhya (IBM), Heather Kreger (IBM), Karolyn Schalk (IBM), Michael Yesudas (IBM), Bob Balfe (IBM) and Raghvendra Gupta (Medullan Inc).
© 2016 Cloud Standards Customer Council.
All rights reserved. You may download, store, display on your computer, view, print, and link to the *Customer Cloud Architecture for e-Commerce* white paper at the Cloud Standards Customer Council Web site subject to the following: (a) the document may be used solely for your personal, informational, non-commercial use; (b) the document may not be modified or altered in any way; (c) the document may not be redistributed; and (d) the trademark, copyright or other notices may not be removed. You may quote portions of the document as permitted by the Fair Use provisions of the United States Copyright Act, provided that you attribute the portions to the Cloud Standards Customer Council’s *Customer Cloud Architecture for e-Commerce* (2016).
Kernel Support for the Wisconsin Wind Tunnel*
Steven K. Reinhardt, Babak Falsafi, and David A. Wood
Computer Sciences Department
University of Wisconsin--Madison
1210 West Dayton Street
Madison, WI 53706 USA
wwt@cs.wisc.edu
Abstract
This paper describes a kernel interface that provides an untrusted user-level process (an executive) with protected access to memory management functions, including the ability to create, manipulate, and execute within subservient contexts (address spaces). Page motion callbacks not only give the executive limited control over physical memory management, but also shift certain responsibilities out of the kernel, greatly reducing kernel state and complexity.
The executive interface was motivated by the requirements of the Wisconsin Wind Tunnel (WWT), a system for evaluating cache-coherent shared-memory parallel architectures. WWT uses the executive interface to implement a fine-grain user-level extension of Li's shared virtual memory on a Thinking Machines CM-5, a message-passing multicomputer. However, the interface is sufficiently general that an executive could act as a multiprogrammed operating system, exporting an alternative interface to the threads running in its subservient contexts.
The executive interface is currently implemented as an extension to CMOST, the standard operating system for the CM-5. In CMOST, policy decisions are made on a central, distinct control processor (CP) and broadcast to the processing nodes (PNs). The PNs execute a minimal kernel sufficient only to implement the CP's policy. While this structure efficiently supports some parallel application models, the lack of autonomy on the PNs restricts its generality. Adding the executive interface provides limited autonomy to the PNs, creating a structure that supports multiple models of application parallelism. This structure, with autonomy on top of centralization, is in stark contrast to most microkernel-based parallel operating systems in which the nodes are fundamentally autonomous.
*This work is supported in part by NSF PYI Award CCR-9157366, NSF Grant MIP-9225097, a Wisconsin Alumni Research Foundation Fellowship, an A.T.&T. Bell Laboratories Ph.D. Fellowship, and donations from Xerox Corporation, Thinking Machines Corporation, and Digital Equipment Corporation. Our Thinking Machines CM-5 was purchased through NSF Institutional Infrastructure Grant No. CDA-9024618 with matching funding from the Univ. of Wisconsin Graduate School.
© 1993 USENIX Association. Permission to copy without fee all or part of this material is granted, provided that the copies are not made or distributed for commercial advantage, the USENIX Association copyright notice and the title and date of publication appear, and that notice is given that copying is by permission of the USENIX Association. To copy or republish otherwise requires specific permission from the USENIX Association.
1 Introduction
This paper describes the kernel interface designed to support the Wisconsin Wind Tunnel (WWT) [13], a system for parallel simulation of parallel computers. WWT currently runs on the Thinking Machines CM-5 (a message-passing machine) and simulates cache-coherent shared-memory multiprocessors. Shared memory applications execute directly on the CM-5 node processors, with WWT simulating references to remote data. Shared memory functionality is provided using a fine-grain user-level extension of Li’s shared virtual memory [10], as described in Section 2.2. WWT uses a separate address space for each simulated (target) node and services all of its exceptions (e.g., MMU faults) and system call requests (e.g., file I/O). In order to study machines larger than the host, several target nodes timeshare a single CM-5 node. In many ways, WWT behaves like an operating system for shared-memory applications. Alternatively, WWT can be thought of as providing a virtual machine abstraction—with a shared-memory MIMD machine atop a message-passing pseudo-SIMD machine—for these applications [8].
WWT requires several unusual features from the underlying operating system. Specifically, the kernel must allow WWT to:
- Create subservient contexts (address spaces)
- Manipulate page mappings within sub-contexts
- Initiate execution in sub-contexts
- Handle traps generated during execution in sub-contexts
- Manage physical memory tags in sub-contexts
All of these features must coexist with traditional memory management functions, including paging and/or swapping.
We have defined and implemented an interface which provides these features and call any application that makes use of them an executive. While the interface is motivated by our specific application, it provides any untrusted user-level process with protected access to memory management functions, including the ability to create, manipulate, and execute within subservient contexts (address spaces). Page motion callbacks not only give the executive limited control over physical memory management, but also shift certain responsibilities out of the kernel, greatly reducing kernel state and complexity.
Because an executive creates contexts and controls them completely, it can act as the operating system for other applications, providing an execution model not available under the native operating system. For example, an executive can export various thread and memory abstractions without adding complexity to the kernel itself.
While this flexibility is useful in a uniprocessor context, it is particularly important on the CM-5, since standard CMOST is a centralized, synchronous operating system, allowing little autonomy for the node kernels. CMOST’s structure efficiently supports an important class of parallel applications, i.e. fine-grain data-parallel codes, but cannot take advantage of more autonomous execution models. By extending CMOST with the executive interface, we provide a kernel structure that can efficiently support both synchronous and asynchronous applications.
---
1 We have effectively extended the CM-5 architecture to support two bits of tag information for each 32-byte block of physical memory; see Section 2.3.
The combination of CMOST and executive interface results in a unique kernel structure that provides autonomy on top of synchrony, rather than the more traditional approach of coordinating fundamentally autonomous nodes. This new kernel structure may prove superior because centralized control appears to have advantages for supporting fine-grain synchronous codes and managing global hardware resources, while the executive interface provides flexible support for other execution models.
The next section provides background on the Thinking Machines CM-5 system, the Wisconsin Wind Tunnel, and our method of synthesizing memory tags on the CM-5. While the interface is not tied to any of these, this section provides context and motivation for the rest of the paper. Section 3 defines the interface, consisting of context manipulation calls, page motion callbacks, and execution management calls. Section 4 describes our implementation of the interface in CMOST and its performance. Section 5 discusses the implications of this work for the structure of multiprocessor operating systems. Finally, we discuss related work and our conclusions.
2 Background
2.1 Thinking Machines CM-5 and CMOST
The Thinking Machines CM-5 [16] is a distributed-memory message-passing multiprocessor. Each processing node consists of a 33 MHz SPARC microprocessor with a cache and memory management unit, up to 128 MB of memory, a custom network interface chip, and optional custom vector units. The processing nodes are grouped into partitions of 32 or more processors. Each partition is managed by a control processor (CP), distinct from the processing nodes (PNs).
The standard operating system for the CM-5 is CMOST. Under CMOST, all policy decisions, including scheduling, swapping, and memory allocation, are made on the control processor. The processing nodes execute a minimal microkernel (the PN kernel) which provides the bare mechanisms required to implement the CP's policy. Because all processors in the partition are managed as a synchronous unit, CMOST gives the CM-5 some SIMD-like qualities. For example, when the CP decides to context switch, all nodes simultaneously switch to the new context. Similarly, when a new process is created, the CP selects the physical pages which the process will occupy on the PNs and broadcasts that process's memory map.
CMOST and the CM-5 are optimized to run data-parallel applications, where all nodes synchronously apply similar operations to a local subset of a global data structure. In particular, the CM-5 contains a “control network”, distinct from the message-passing network, which provides hardware support for global operations such as barriers, reductions, and broadcasts [9]. To efficiently utilize this control network, all nodes in a partition must concurrently execute the same user process. The centralized CMOST structure automatically satisfies this condition.
2.2 The Wisconsin Wind Tunnel
The Wisconsin Wind Tunnel (WWT) provides a platform for evaluating parallel computer systems—specifically cache-coherent shared-memory computers—by accurately modeling the performance of real workloads on proposed hardware [13]. WWT helps computer engineers evaluate computer architectures much like a wind tunnel helps aeronautical engineers design aircraft. WWT uses the execution of a parallel shared-memory application to drive a distributed discrete-event simulation, accurately calculating the execution time of that application on a modeled hardware system (the target). Events generated by the simulation, such as lock acquisitions and memory reference completions, are used in scheduling the application, guaranteeing
that the application’s execution proceeds exactly as it would on the target system.
We call WWT a virtual prototype because it uses direct execution to leverage similarities between the target system and the system on which it executes (the host) [5]. This means that the target application executes directly on the host hardware as much as possible—for example, a target floating-point multiply runs as a host floating-point multiply. Software simulation is required only for those features of the target system not provided by the host.
Because WWT executes on a message-passing machine, the primary feature it must simulate is the shared memory abstraction. We do this using a fine-grain extension of Li’s shared virtual memory [10]. Shared virtual memory constructs a distributed shared memory using standard address translation hardware to control memory access on each node. If a node has a copy of a shared data page, it is mapped into the address space on that node; if a node has no copy, the virtual page is not mapped. Multiple read-only copies are easily supported using the page protection facilities. Program accesses that require a data transfer to acquire a valid or exclusive copy are signaled as page faults. Unfortunately, relying on address translation hardware alone restricts the granularity of coherence to at least the virtual memory page size.
The shared-memory machines we wish to model maintain coherence at a finer granularity, typically tens of bytes rather than thousands. We have synthesized the ability to tag each 32-byte block in physical memory as invalid, read-only, or writable (see Section 2.3). Using these tags in combination with the address translation hardware, we implement a distributed shared memory that maintains coherence at a 32-byte granularity. The first reference to a shared page causes a page fault, as with shared virtual memory. We allocate and map a physical page, but initially mark each cache block invalid. Cache blocks are marked valid (read-only or writable) only as they are referenced. Accesses to invalid blocks (and writes to read-only blocks) cause faults, and initiate software that fetches the data and marks the block valid. The distinction between read-only and writable tags allows read replication at the cache block granularity.
WWT uses this fine-grain shared virtual memory to directly execute shared-memory applications as they would execute on a cache-coherent target machine. A context is allocated for each target node; the shared data accessible from this context reflects the contents of the simulated cache on that target node. Page and tag faults correspond to target cache misses, which invoke WWT and are handled according to the target’s coherence protocol. Large target systems are studied by allocating several contexts per host node and multiplexing their execution.
### 2.3 Memory tags
To implement fine-grain shared virtual memory, blocks in memory must have three states: invalid, read-only, and writable. Any access to an invalid block and write accesses to read-only blocks must provide restartable exceptions. To achieve this functionality, we have logically extended the CM-5 architecture to support two additional bits of information—writable and invalid—per 32-byte physical memory block.
Although memory tags with access semantics have appeared in numerous machines, e.g., the Denelcor HEP [15], most contemporary commercial machines—including the CM-5—do not provide this capability. However, we are able to synthesize an invalid tag on the CM-5 by forcing uncorrectable errors in the memory’s error correcting code (ECC) via a diagnostic mode. Using the SPARC cache in write-back mode causes all SPARC cache misses to appear to the memory as cache block fills. A fill that encounters an uncorrectable ECC error generates a precise exception.
Synthesizing a read-only state is more convoluted, since it requires using the page tables to make entire pages read-only. On a write fault, we must distinguish between a write to a read-only block and a write to a writable block that resides on the same page as one or more
read-only blocks. We make this distinction by maintaining a bit vector—one bit per block—to indicate whether the block is writable. The write fault handler checks this bit; if set it performs the write and resumes the application, rather than signaling a fault.
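A minimal sketch of this check is shown below, assuming 4 KB pages and 32-byte blocks. The data structure and function names are illustrative rather than taken from the WWT source; only the decision logic (consult the per-block writable bit on a write fault, and otherwise invoke the coherence protocol) follows the description above.

```c
/* Sketch of the write-fault check (assumed names and sizes, not WWT code).
 * A page containing any read-only block is mapped read-only, so a write
 * fault may hit a block that is in fact writable; the per-block bit
 * vector resolves the ambiguity. */
#include <stdint.h>

#define PAGE_SIZE        4096                       /* assumed */
#define BLOCK_SIZE       32
#define BLOCKS_PER_PAGE  (PAGE_SIZE / BLOCK_SIZE)   /* 128 blocks */

struct page_tags {
    /* one bit per 32-byte block: 1 = writable, 0 = read-only or invalid */
    uint8_t writable[BLOCKS_PER_PAGE / 8];
};

static int block_is_writable(const struct page_tags *t, uintptr_t fault_va)
{
    unsigned blk = (unsigned)((fault_va & (PAGE_SIZE - 1)) / BLOCK_SIZE);
    return (t->writable[blk / 8] >> (blk % 8)) & 1;
}

/* Called on a protection fault in a target context. */
void handle_write_fault(struct page_tags *t, uintptr_t fault_va)
{
    unsigned blk = (unsigned)((fault_va & (PAGE_SIZE - 1)) / BLOCK_SIZE);

    if (block_is_writable(t, fault_va)) {
        /* False alarm: only the page is read-only.  Perform/emulate the
         * store and resume the application rather than signaling a fault. */
        /* ... emulate the faulting store, then restart after it ... */
    } else {
        /* Genuine write miss: hand the address to the simulated coherence
         * protocol (hypothetical hook), then mark the block writable. */
        /* protocol_write_miss(fault_va); */
        t->writable[blk / 8] |= (uint8_t)(1u << (blk % 8));
    }
}
```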
Memory tags introduce extra state—two additional bits per 32-byte block—making paging and swapping more complex to implement. The tag bits make the “extended” physical page no longer a power of two, causing a mismatch with typical disk block sizes, and requiring more bookkeeping and I/O operations. In addition, since memory tags are unused for many pages, e.g., text and non-shared data, any overhead maintaining them is wasted. The executive interface reduces the kernel’s complexity by shifting responsibility for maintaining memory tags to the executive.
3 The executive interface
The executive interface provides an executive—an untrusted user-level process—with protected access to memory management functions. An executive can use the interface’s memory management calls to create subservient contexts (address spaces), and exert complete control over them, including adding, modifying, and deleting page-level mappings. An executive can invoke execution within a subcontext, and regain control on all faults and exceptions. Page motion callbacks not only give the executive limited control over physical memory management, but also shift certain responsibilities out of the kernel, greatly reducing kernel state and complexity.
A primary goal of the executive interface is to minimize kernel state and complexity. Beside the aesthetic appeal, keeping most of the code and complexity at user level makes bugs less catastrophic and easier to eliminate. In addition, because we are modifying a continuously developing system, minimizing and isolating the kernel source changes makes it easier for us to keep up with vendor revisions.
A particular challenge is maintaining the address mappings and memory tag values installed by an executive in the face of paging/swapping activity by the kernel. The brute-force solution is to have the kernel remember all of the mapping requests made by the executive and transparently maintain them when a page is swapped out and back in at a different physical address, and to transparently swap tag information as well as page data. We have solved this problem with greatly reduced kernel state and complexity using page motion callbacks. These callbacks allow the kernel to notify an executive immediately before a page is to be swapped out and immediately after it is swapped back in so that the executive itself can maintain address mappings and tag values.
Another requirement is to avoid trusting the executive. A fully protected interface makes the system robust through even the earliest phases of executive development, and means that normal users have the ability to write or modify executives. A protected interface also makes other sites more willing to adopt our kernel so that we can distribute the Wisconsin Wind Tunnel. Two features of the interface contribute to this protection:
- The executive cannot even refer to resources not explicitly allocated to it by the kernel. The executive never sees physical addresses or hardware context numbers.
- The kernel guarantees that the executive never has an alias to a physical page it does not own by maintaining a count of aliases for each physical page. If the executive does not decrement this count to zero by deleting mappings before a page is removed from its control, the kernel will terminate it.
int create_ctx();
void *executive_brk(void *new_brk);
void *executive_sbrk(int incr);
void *executive_vbrk(void *new_brk);
void *executive_vsbrk(int incr);
int add_mapping(int cd, void *va, void *pp, int attr);
int change_pg_attr(int cd, void *va, int attr);
int delete_mapping(int cd, void *va);
void jump_to_ctx(int cd, struct regs *p, void *stackp);
(a) General kernel calls
int set_page_motion_cbs(void *(*page_going)(),
void *(*page_coming)(),
void *stackp);
int set_ctx_fault_cb(void (*ctx_fault_cb)());
(b) Callback registration calls
void *page_going(void *pp);
void *page_coming(void *pp);
void ctx_fault(int fault_code, struct regs *p, ...);
(c) Callbacks
Table 1: The executive interface. Parts (a) and (b) list the functions exported by the kernel. Part (c) describes the callbacks exported by the executive.
Table 1 lists the calls and callbacks that comprise the executive interface. The kernel exports the general calls and callback registration functions, while the executive exports the callbacks, which it registers during initialization. The bulk of the interface is directly related to managing virtual and physical memory. The remaining functions, jump_to_ctx() and the ctx_fault() callback, provide the ability to execute within subcontexts.
3.1 Memory management
The executive manages virtual and physical memory resources via the context management calls and page motion callbacks. Pages managed by the executive are distinct from the executive’s own text, data, and stack pages. This allows the kernel to easily distinguish the former for special handling while manipulating the latter as it would for any other user process. For example, the kernel can swap the executive’s text segment, share it among multiple instances of the same executive, or allow a debugger to attach to an executive without interfering with the executive’s memory management functions.
3.1.1 Context management calls
The context management calls allow an executive to create new contexts, allocate pages to map into them, and add, modify, and delete page-level mappings. Using these calls, an executive has complete control over these subservient address spaces.
The create_ctx() call allocates a new context and returns an integer context descriptor (similar
to a Unix file descriptor). We refer to these contexts as subcontexts when it is necessary to
distinguish them from the executive’s context. A new subcontext is completely empty, i.e., it
contains no valid address mappings (except for the kernel mappings required by the SPARC
architecture). Context descriptor 0 is never returned by create_ctx() and is used to indicate the
executive’s own address space. The notation cd:va refers to virtual address va in context cd.
To keep executive-managed pages distinct from kernel-managed pages, the executive allocates
pages from a special segment in the executive’s own context, the executive-managed heap. The
executive_brk() and executive_sbrk() calls allow the executive to change the size of this segment in
the same way that CMOST’s Unix-like brk() and sbrk() work with the standard heap. Allocation
on the executive-managed heap is always rounded up to the next multiple of the page size, so
that an integral number of pages are allocated. The virtual addresses of these pages in the
executive’s context are the primary mappings (i.e., handles) which the executive uses to refer to
these pages across the kernel interface. Only pages in the executive-managed heap—referred to
as executive-managed pages—may be aliased via add_mapping() or have their memory tag values
changed.
The executive virtual heap allows the executive to manage a region of its own address space
the same way that it manages subcontexts. The executive_vbrk() and executive_vsbrk() calls
simply reserve a contiguous collection of virtual pages, but do not allocate physical pages behind
them. The executive can then alias these virtual pages to pages in the executive-managed heap
(possibly with different attributes, e.g., read-only or non-cacheable) without fear that the kernel
will grow the standard heap or stack to conflict. This call is not necessary for subcontexts
because the executive automatically has complete control of those.
The add_mapping() call creates a secondary mapping from virtual address va in context cd to
the page at virtual address pp, i.e. it aliases cd:va and 0:pp, where pp must point to an executive-
managed page. The mapping attributes (protection and cacheability) are set according to attr.
To prevent interference between the kernel and executive, if cd is zero then va must be in the
executive virtual heap. The alias count for the corresponding physical page is incremented.
The change_pg_attr() and delete_mapping() calls allow the executive to change the attributes
of and delete mappings, respectively. Only mappings created via add_mapping() can be modified
or deleted. delete_mapping() also decrements the physical page’s alias count.
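The following sketch illustrates how an executive might use these calls together: create a subcontext, allocate a page on the executive-managed heap, and alias it into the subcontext. The page size, attribute encoding, and target virtual address are assumptions; the call signatures are those of Table 1.

```c
/* Usage sketch of the context-management calls (assumed constants). */
#include <stddef.h>

/* Declarations repeated from Table 1 (kernel-exported calls). */
int   create_ctx(void);
void *executive_sbrk(int incr);
int   add_mapping(int cd, void *va, void *pp, int attr);
int   delete_mapping(int cd, void *va);

#define PAGE_SIZE      4096                    /* assumed                  */
#define ATTR_RW        0x3                     /* hypothetical r/w encoding */
#define TARGET_DATA_VA ((void *)0x20000000)    /* arbitrary target address  */

int map_one_target_page(void)
{
    /* 1. Create an empty subcontext; descriptor 0 is the executive itself. */
    int cd = create_ctx();
    if (cd < 0)
        return -1;

    /* 2. Allocate one page on the executive-managed heap.  Its virtual
     *    address in the executive's context (pp) is the handle used to
     *    name the page across the interface. */
    void *pp = executive_sbrk(PAGE_SIZE);
    if (pp == (void *)-1)
        return -1;

    /* 3. Alias cd:TARGET_DATA_VA to 0:pp; the kernel increments the page's
     *    alias count.  The executive must delete_mapping() this alias
     *    before the page can be reclaimed (see page_going()). */
    if (add_mapping(cd, TARGET_DATA_VA, pp, ATTR_RW) != 0)
        return -1;

    return cd;
}
```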
3.1.2 Page motion callbacks
The two page-motion callbacks—page_going(), invoked when the kernel must reclaim an executive-managed page, and page_coming(), invoked when a page returns—serve a dual role. First, they allow the kernel and executive to cooperate in physical memory management, similar to the way scheduler activations allow management of physical processors in a shared-memory multiprocessor [2]. By explicitly saving and restoring data in response to page_going() and page_coming() calls, the executive can control exactly which data are resident in physical memory. Second, the callbacks significantly reduce the kernel’s bookkeeping requirements, by giving the executive
responsibility for maintaining secondary mappings and memory tags across physical page
movements.
The two page motion callbacks only affect executive-managed pages; the executive must
register the callbacks before allocating pages on the executive-managed heap. When the kernel
decides to reclaim an executive-managed page (e.g., to allocate it to a different process), it
notifies the appropriate executive using the page_going() callback. In general, the kernel will
call page_going() with the argument pp set to NULL, indicating that the executive can select any
executive-managed page for reclamation. The executive can apply its own replacement policy
and return the selected page-aligned pp as the return code. By default, the kernel discards the page contents; however, the executive may request that they be saved (i.e. moved to backing store) by overloading the return code (setting the least-significant bit to one). Occasionally, the kernel may need contiguous physical pages—e.g., for the CM-5 vector units—requiring it to reclaim a specific executive-managed page. In this case, the argument pp points to the selected page, and only the least-significant bit of the return value is meaningful.
The executive must use delete_mapping() to delete all secondary mappings for the selected page before returning, at which point the kernel removes the primary mapping from pp to the physical page. Note that virtual address pp is still “in use” as it uniquely identifies this page and will be provided as the argument to a future call to page_coming().
The kernel returns an executive-managed page via the page_coming() callback. The primary mapping is recreated, i.e. the virtual address pp again maps to a physical page, though not necessarily the same physical page as before the page_going() call. If the executive returned a zero in the LSB on the previous call to page_going(), the page has been zeroed; otherwise, the contents have been restored. In either case, the physical memory tags have been cleared. This call allows the executive to restore any secondary mappings and/or memory tags for the page. Secondary mappings could also be restored on demand.
These callbacks allow user-level management of physical memory: even when the kernel reclaims a specific physical page, the executive can choose the data that get replaced at the expense of additional copying and page-table manipulation. Alternatively, the executive can let the kernel save and restore data in evicted pages. The executive can always regain access to a “gone” page by dereferencing pp and causing a fault. The kernel will handle the fault by obtaining a free page (possibly by calling page_going() on this or another executive), restoring the old data (if necessary), and calling page_coming() to signal the return of the needed page.
Because the executive is untrusted, the kernel cannot rely on it to delete all secondary mappings on a page_going() callback. Conversely, it must guarantee that these mappings are removed—otherwise, the executive may retain an alias to a physical page re-allocated to a different process. A brute-force solution is for the kernel to automatically delete all secondary mappings. However, this approach requires that the kernel maintain all of the reverse translations, duplicating state already maintained by the executive. Instead, we require that the executive delete all secondary mappings to the selected page before returning from the page_going() call or face process termination. Process termination guarantees that all mappings are deleted, because the process and all its sub-contexts are destroyed. Thus the kernel need only know how many secondary mappings exist, but not their individual identities. A simple counter per physical page, incremented on each add_mapping() call and decremented on each delete_mapping() call, is sufficient to maintain this state.
In addition, to protect against deadlock or infinite loops in the executive, the kernel requires that all callbacks be completed within a fixed time. The kernel sets a virtual timer before invoking a callback; if the timer expires before the callback returns the executive’s process is terminated.
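A sketch of a conforming page_going() handler is shown below. The reverse-mapping bookkeeping (struct epage, lookup_epage(), choose_victim()) is hypothetical executive state; what it satisfies are the obligations described above: delete every secondary mapping before returning, and set the low bit of the returned handle to request that the page contents be saved.

```c
/* Sketch of a page_going() handler (hypothetical executive bookkeeping). */
#include <stdint.h>
#include <stddef.h>

int delete_mapping(int cd, void *va);              /* from Table 1 */

struct alias { int cd; void *va; struct alias *next; };

struct epage {                                      /* executive bookkeeping */
    void *pp;                /* handle: VA of the page in context 0 */
    struct alias *aliases;   /* secondary mappings installed for it */
    int dirty;               /* are the contents worth saving?      */
};

struct epage *lookup_epage(void *pp);               /* hypothetical */
struct epage *choose_victim(void);                  /* executive's own policy */

void *page_going(void *pp)
{
    /* pp == NULL means the executive may pick any executive-managed page. */
    struct epage *p = pp ? lookup_epage(pp) : choose_victim();

    /* Obligation: remove every secondary mapping, or be terminated. */
    for (struct alias *a = p->aliases; a != NULL; a = a->next)
        delete_mapping(a->cd, a->va);
    p->aliases = NULL;

    /* Overload the low bit: 1 asks the kernel to save the contents. */
    uintptr_t ret = (uintptr_t)p->pp;
    if (p->dirty)
        ret |= 1u;
    return (void *)ret;
}
```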
3.2 Execution management
Creating and manipulating address spaces is uninteresting without the ability to execute within them. Two calls suffice to provide this functionality. The kernel call jump_to_ctx() causes the current thread of execution to switch into the specified context. When the thread executing in the subordinate context encounters a fault (either an instruction fault or an explicit software trap), control resumes in the executive’s context via the ctx_fault() callback.
Because the `jump_to_ctx()` call continues the current thread in a different context rather than creating a new thread, the contents of the register file are largely unchanged across the switch. The `struct regs` structure passed as an argument to `jump_to_ctx()`, though implementation-dependent, conceptually consists of only the program counter and stack pointer in the new context and the original contents of the registers required to pass the three arguments of `jump_to_ctx()` into the kernel. The third argument, `stackp`, specifies a stack in the executive context to use when `ctx_fault()` is invoked. From the executive’s perspective, `jump_to_ctx()` does not return. After a thread passes through a `jump_to_ctx()` call, it is in a subordinate execution context where all faults are handled by the executive.
When the thread executes a trapping instruction (either due to a fault or an explicit software trap), control resumes in the executive’s context at the `ctx_fault()` entry point. The first argument indicates the type of fault and the second passes back a pointer to the same structure originally passed to `jump_to_ctx()`. The state saved is exactly the state required by `jump_to_ctx()`, so supplying this buffer unmodified as the second argument to `jump_to_ctx()` restarts execution in the other context at the faulting instruction.
Additional arguments are passed from the kernel to the executive depending on the type of the fault—for example, an MMU fault will also provide the virtual address of the access and the nature of the fault (invalid address, protection violation, etc.).
We have, to the greatest extent possible, separated thread management issues from this interface. However, with the addition of thread management code, this simple interface is sufficient for the executive to behave as a multiprogrammed operating system. Implementing a threads package on top of this interface simply requires code to allocate and manage multiple stacks, and to save and restore the registers not contained in the `struct regs` structure. For example, a context switch is as simple as having the `ctx_fault()` handler save and restore the CPU registers and call `jump_to_ctx()` with a different context descriptor and `struct regs` pointer. Separating the thread and context abstractions gives the executive flexibility, e.g. to support a kernel thread abstraction within its sub-contexts. If the underlying kernel provides a programmable timer interrupt, the interrupt can be made to appear through the `ctx_fault()` entry as well, making the multithreading preemptive.
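As an illustration, a minimal executive-side scheduler that round-robins between two sub-contexts might look like the sketch below. The `struct regs` layout, the fault code, and the kernel-call signature are assumptions made for the example; a real executive would also save and restore the registers that `struct regs` does not capture.

```cpp
// Illustrative only: round-robin between two subordinate contexts.
struct regs {
    void *pc;        // program counter in the subordinate context
    void *sp;        // stack pointer in the subordinate context
    long  arg[3];    // registers used to pass the jump_to_ctx() arguments
};

extern "C" void jump_to_ctx(int ctx, regs *r, void *stackp);  // kernel call (signature assumed)

enum { FAULT_TIMER = 1 };                 // placeholder fault code

static int  contexts[2];                  // descriptors obtained from create_ctx()
static regs saved[2];                     // per-thread register snapshots
static char exec_stack[8192];             // executive stack used while handling faults
static int  current = 0;

// Registered via set_ctx_fault_cb(); the kernel invokes it on every fault or
// trap taken by the subordinate context.
extern "C" void ctx_fault(int type, regs *r)
{
    saved[current] = *r;                  // remember where this thread stopped
    if (type == FAULT_TIMER)              // preempt on a timer interrupt
        current = 1 - current;
    // Resume the (possibly different) thread; this call does not return here.
    jump_to_ctx(contexts[current], &saved[current], exec_stack + sizeof exec_stack);
}
```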
The executive does not normally handle its own faults, except for those to the executive virtual heap. Having the kernel handle the executive’s faults facilitates demand paging of the executive’s text and data, and makes growing the executive’s stack the same as for any other user process. If the executive wants to handle its own faults, it can call `jump_to_ctx()` with the first argument set to zero (its own context). This creates a singular situation where the thread is in a subordinate execution context—so faults still invoke the `ctx_fault()` callback—but not a subordinate addressing context. If the executive decides that the kernel should handle a specific fault, it need only retry the faulting instruction from within its fault handler. There is no danger of a recursive call to `ctx_fault()` because any fault encountered by the executive’s handler will be handled directly by the kernel.
Subordinate execution contexts cannot be nested because `jump_to_ctx()` is a system call implemented as a software trap; “recursive” calls will show up in the executive via `ctx_fault()`. As with any other traps, system calls made in a subordinate execution context can be forwarded to the kernel simply by re-executing the call in the executive. The only complication occurs with pointer arguments, since the kernel will interpret these in the executive, rather than subordinate, context.
---
\(^2\)On the SPARC architecture, this structure also contains the NPC (to resume after faults in delayed branch slots) and the condition codes (since these cannot be saved and restored from user mode).
<table>
<thead>
<tr>
<th>Function</th>
<th>Lines of C (PN)</th>
<th>Lines of C (CP)</th>
</tr>
</thead>
<tbody>
<tr>
<td>executive_{v}{s}brk (total, four calls)</td>
<td>10</td>
<td>35</td>
</tr>
<tr>
<td>create_ctx</td>
<td>134</td>
<td>0</td>
</tr>
<tr>
<td>Page table initialization</td>
<td>17</td>
<td>11</td>
</tr>
<tr>
<td>Process termination</td>
<td>90</td>
<td>0</td>
</tr>
<tr>
<td>Page callback support</td>
<td>222</td>
<td>426</td>
</tr>
<tr>
<td>Total</td>
<td>473</td>
<td>472</td>
</tr>
</tbody>
</table>
(a) C language additions
<table>
<thead>
<tr>
<th>Function</th>
<th>Machine Instructions</th>
</tr>
</thead>
<tbody>
<tr>
<td>add_mapping</td>
<td>187</td>
</tr>
<tr>
<td>change_pg_attr</td>
<td>108</td>
</tr>
<tr>
<td>delete_mapping</td>
<td>123</td>
</tr>
<tr>
<td>jump_to_ctx</td>
<td>50</td>
</tr>
<tr>
<td>ctx_fault</td>
<td>40</td>
</tr>
<tr>
<td>set_page_motion_cbs</td>
<td>7</td>
</tr>
<tr>
<td>set_ctx_fault_cb</td>
<td>5</td>
</tr>
<tr>
<td>Total</td>
<td>520</td>
</tr>
</tbody>
</table>
(b) PN assembly additions
Table 2: Code added to CMOST to implement executive interface.
4 Implementation
We have implemented the executive interface in CMOST version 7.2 Beta 1. As shown in Table 2, the entire interface required less than one thousand lines of C and just over 500 machine instructions. Only `jump_to_ctx()` and `ctx_fault()` required assembly-level coding; the other functions were implemented in assembly only to improve performance. Although the majority of the additional code is in the PN kernel, most of the complexity lies in the CP portion. This follows from the centralized structure of CMOST: the PN kernel implements only mechanisms, while all policy decisions occur on the CP.
The executive interface requires very little additional kernel state. The PN kernel requires a few additions to the process control block (PCB) and an array with an entry per physical page. The PCB maintains the entry points and stack addresses for the callbacks and the `struct regs` and `stackp` pointers from the last `jump_to_ctx()` call (for use in the subsequent `ctx_fault()` callback). The array, indexed by physical page number, contains the alias count for each page and a pointer field. The pointer field is used to maintain a linked list, whose head is in the PCB, of all physical pages in the executive-managed heap to facilitate resetting the alias counts when the executive process terminates.
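In rough C-style declarations, the added state might look like the sketch below; all field and type names are invented for illustration.

```cpp
struct regs;                               // register-save area passed to jump_to_ctx()

// One entry per physical page, indexed by physical page number.
struct phys_page_entry {
    unsigned         alias_count;          // secondary mappings pointing at this page
    phys_page_entry *next;                 // link in the owning executive's heap list
};

// Fields added to the per-process control block (PCB).
struct pcb_additions {
    // Callback entry points and stack addresses registered by the executive.
    void *page_going_entry,  *page_going_stack;
    void *page_coming_entry, *page_coming_stack;
    void *ctx_fault_entry;

    // Saved from the most recent jump_to_ctx(), for the next ctx_fault().
    regs *last_regs;
    void *last_stackp;

    // Head of the list of physical pages in this executive's managed heap;
    // walked at process termination to reset the alias counts.
    phys_page_entry *managed_heap_pages;
};
```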
We also added two new memory segments to every process: the executive-managed heap and the executive virtual heap. Both segments have special semantics:
- Any time the CP decides to move a page in the executive-managed heap, it must first invoke the executive’s page motion callbacks.
- Allocations in the executive virtual heap segment are not backed by physical memory. This segment simply provides a region in the executive’s virtual address space that is guaranteed not to conflict with regions used by the control processor. Also, faults to this segment are always handled by the executive.
On the CP, two fields were added to the per-physical-page structure to record the process ID and virtual address of each physical page that is allocated to an executive-managed heap segment. This information is required to perform the page_going() callback when the physical page needs to be moved.
Table 3 summarizes the performance of the executive interface calls, as measured using the CM-5's cycle counter and averaging tens of iterations.\(^3\)
Table 3: Performance of executive interface calls on the CM-5.
<table>
<thead>
<tr>
<th>Function</th>
<th>Time\(^3\) in cycles (\(\mu s\))</th>
</tr>
</thead>
<tbody>
<tr>
<td>executive_sbrk, with CP communication, alloc 1 page</td>
<td>1.6M (48 ms)</td>
</tr>
<tr>
<td>executive_sbrk, with CP communication, alloc 100 pages</td>
<td>4.9M (148 ms)</td>
</tr>
<tr>
<td>executive_sbrk, w/o CP communication</td>
<td>20K (606)</td>
</tr>
<tr>
<td>executive_vsbrk, with CP communication</td>
<td>1.4M (42 ms)</td>
</tr>
<tr>
<td>executive_vsbrk, w/o CP communication</td>
<td>20K (606)</td>
</tr>
<tr>
<td>set_page_motion_cbs</td>
<td>117 (4)</td>
</tr>
<tr>
<td>set_ctx_fault_cb</td>
<td>108 (3)</td>
</tr>
<tr>
<td>create_ctx</td>
<td>19K (575)</td>
</tr>
<tr>
<td>add_mapping, no table allocation</td>
<td>359 (11)</td>
</tr>
<tr>
<td>add_mapping, alloc level 3 table</td>
<td>1166 (35)</td>
</tr>
<tr>
<td>add_mapping, alloc level 2 & 3 tables</td>
<td>2641 (80)</td>
</tr>
<tr>
<td>change_pg_attr</td>
<td>340 (10)</td>
</tr>
<tr>
<td>delete_mapping</td>
<td>855 (26)</td>
</tr>
<tr>
<td>jump_to_ctx</td>
<td>180 (5)</td>
</tr>
<tr>
<td>ctx_fault</td>
<td>154 (5)</td>
</tr>
</tbody>
</table>
The executive_{v}{s}brk() and create_ctx() calls are implemented as full traps, where the user's register windows are flushed, execution switches to the kernel stack, and interrupts are re-enabled. All other calls are implemented as "fast" traps, and execute without flushing any windows or re-enabling interrupts. The overheads for the two types of traps are approximately 300 \(\mu s\) and 3 \(\mu s\), respectively.
The callback registration functions simply store their arguments in fields in the process's PCB. No validation is required since illegal values can at worst cause an immediate fault on the invocation of a callback, which will terminate the executive process.
4.1 Memory management
4.1.1 Context management calls
The executive_{v}{s}brk() calls are implemented in the same fashion as CMOST's brk() and sbrk(). The first node requesting a page generates a system-wide interrupt, which causes the control processor to grow the appropriate segment on all nodes. Subsequent requests on other nodes can be satisfied locally by returning pages from the now-larger segment.
The create_ctx() call allocates a SPARC hardware context number and an empty level-one table (the SPARC has a three-level page table structure). A call to add_mapping() validates its arguments, performs the page table insertion (allocating level two and three tables as necessary), and increments the alias counter for the physical page. Both change_pg_attr() and delete_mapping() validate arguments and do a page table walk, with the latter also decrementing the appropriate alias counter. None of these calls require communication from the node to the control processor.
Because the CM-5 node has a virtually-tagged writeback cache, delete_mapping() must also flush data brought in using the deleted mapping. This requires iteratively flushing 128 cache lines, causing it to take significantly longer than change_pg_attr().
\(^3\)This information was derived using a pre-release version of CMOST (7.2 Beta 1 of Feb. 1993). Performance on released versions may be significantly different.
4.1.2 Page motion callbacks
The page motion callbacks account for most of the design complexity. Our current implementation focused on minimizing changes in the CP code. As a result, our implementation is influenced by several existing CMOST features:
- Swapping is supported, but not demand paging. All of a process's pages must be in memory before it is allowed to run.
- Memory management is performed entirely on the control processor, which assumes that the memory maps of all nodes are identical. When the CP reclaims a page, it must select a specific physical page and force all nodes to release it, even if there is no need for physical contiguity. In other words, in this implementation, page_going() is never called with a `NULL` argument because the CP cannot deal with different nodes freeing different pages.
- Both the PN kernel and the CP portion of CMOST are single-threaded, i.e. there is only one stack in each. In addition, the PN kernel does not maintain any kernel stack state across communications with the CP.
While this implementation satisfies our needs for WWT, and serves as a proof-of-concept for the executive interface, the CP code requires more radical changes for a clean and efficient implementation.
In CMOST, pages are freed for two reasons: i) to satisfy an allocation request for a currently executing process, or ii) to swap in an idle process. In either case, the CP performs the necessary memory management operations while the PNs are idle, waiting for instructions to resume. Page moves not requiring callbacks are performed immediately, but those requiring callbacks are recorded and deferred. Before resuming the user process, the CP performs the deferred callbacks, scheduling the affected processes (executives) as needed.
The callbacks are executed in the executive context by invoking the registered callback function using the registered stack pointer. The callback cannot be run on the current process stack because it could be in a different address space (i.e. if the process was suspended while running in a subcontext). A fault in the callback will result in the executive’s termination. When the callback returns, the PN kernel either executes another callback pending for this executive, or waits for the other nodes to complete. Once all callbacks are done for this executive (across all nodes), the scheduler is re-invoked to run the next executive with pending callbacks. When all deferred callbacks have executed, the originally scheduled process can run.
Because the executive is scheduled for callbacks the same way it is scheduled for normal execution, there are only a few constraints on callback execution. First, the callback cannot allocate memory of any kind since this could create a circular dependency. Second, each callback is provided at most one scheduling quantum; the executive is terminated if the quantum expires during a callback. Third, the callback also cannot call any blocking kernel function, since this would interfere with the callback timeout mechanism. Finally, the current implementation does not allow the callback to use the CM-5 network interface. This last restriction is not inherent to the architecture, but would add significant complexity and overhead for a feature our application would not use.
Our implementation is designed to support swapping, but we have not yet tested this portion of the code. However, the CM-5 vector unit architecture severely constrains physical memory allocation, causing CMOST to frequently reallocate specific pages, moving data from one physical page to another. By treating these page moves as a swap out followed immediately by a swap in, we have completely exercised the callback mechanisms.
Figure 1: Mixed-model parallelism using the executive interface.
4.2 Execution management
The subordinate execution context is simply the same CMOST process executing with a different trap vector and (perhaps) a different hardware MMU context. All trap vector entries, except hardware interrupts, jump to the `ctx_fault()` kernel stub.
The `jump_to_ctx()` and `ctx_fault()` functions are implemented as "fast" traps, i.e. they execute within a partial SPARC register window without re-enabling interrupts. In addition to loading state from the `struct regs` structure (or storing it, in the case of `ctx_fault()`), both calls must change the hardware context, manipulate the register window mask, and change the trap vector base address. `jump_to_ctx()` requires ten extra instructions because it must read the processor status register, mask in the desired condition codes, and write it back.
4.3 Other implementation issues
The executive needs to specify a stack for all interrupt or signal handlers, as it does for the page motion callbacks, since the current process stack may not exist in the executive’s own address space. We have not added this extension yet, but it would be simple to do so.
The cache controller used on the CM-5 node (Cypress 604) is a 64KB direct-mapped virtually-tagged cache. The hardware will handle aliases that map to the same cache block, but cannot guarantee consistency otherwise. To avoid cache flushing, aliases must be congruent modulo 64K. Rather than complicating the interface with this issue, we force the executive to deal with it on its own. We have added an additional call to the PN kernel, `int cache_flush_page(int cd, void *pp)`, which flushes the specified page from the SPARC cache. This allows the executive to make the tradeoff between keeping aliases congruent and performing cache flushes according to its own needs.
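For instance, an executive might test whether two virtual addresses are congruent before creating an alias, and call cache_flush_page() otherwise; the helper below is only a sketch, not part of the described implementation.

```cpp
#include <cstdint>

// The node's cache is 64 KB and virtually indexed, so two virtual aliases avoid
// conflicts only if they agree in their address modulo 64 KB.
enum { CACHE_SIZE = 64 * 1024 };

static bool aliases_are_congruent(const void *va1, const void *va2)
{
    return ((std::uintptr_t)va1 % CACHE_SIZE) == ((std::uintptr_t)va2 % CACHE_SIZE);
}

// If a non-congruent alias is used anyway, the executive must flush the page,
// e.g. cache_flush_page(cd, pp), before touching it through the other mapping.
```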
5 Discussion
While the executive interface is interesting in isolation, it is more striking when considered in the context of the CM-5 and CMOST. The resulting kernel structure is—we believe—unique. In most microkernels designed for parallel systems, nodes are fundamentally autonomous. Cooperation, e.g. for gang-scheduling, occurs as a policy at a higher level of abstraction. Our extended version of CMOST turns this structure on its head: the control processor maintains central, synchronous control of physical memory and scheduling. The control processor still forces all processors to context switch simultaneously; however, some of the processes may now be executives. Executives, because of the autonomy provided by the executive interface, can schedule execution in their subcontexts however they choose. This flexibility can be used to support applications or groups of applications which may benefit from more dynamic allocation and scheduling policies (see Figure 1). Thus our extended CMOST provides autonomy on top of synchrony, rather than the more traditional alternative of synchrony on top of autonomy.
The CMOST/executive structure was motivated by our implementation of the Wisconsin Wind Tunnel on the CM-5. However, the resulting structure is arguably the right way to structure an operating system for large-scale parallel machines. Efficiently supporting fine-grain parallel applications requires a global perspective for resource allocation, because a page fault or scheduling delay on one node can seriously impact the performance of the entire application. Centralizing control, as in CMOST, makes global resource allocation significantly easier. For example, CMOST's guarantee that one user process runs simultaneously on all nodes allows direct user access to the CM-5's network interface hardware, avoiding costly system calls for message operations.
Operating systems that fail to efficiently manage global resources will have a particularly difficult time exploiting hardware features such as the CM-5's control network [9], which performs a global barrier or reduction in a few microseconds. Because hardware barriers are both cheap and fast—they are essentially AND-gates—we expect them to appear in most future parallel machines. The operating systems for these machines must be able to exploit this hardware to efficiently execute fine-grain data-parallel codes. We believe this may prove easier with a synchronous microkernel structure, rather than a more traditional asynchronous kernel structure.
While the CMOST/executive structure supports timesharing among different execution models, a hierarchical control structure can integrate space-sharing as well. For example, a central scheduler on a 128-node machine can reserve some time-slices for 128-node synchronous applications and some for 128-node applications or sets of applications with more dynamic executive-managed scheduling. The remaining time-slices can be delegated to two other schedulers, each of which can recursively do identical centralized scheduling within disjoint 64-node processor groups.
In order for a single executive to manage multiple users' applications, the executive must be run with some additional privilege, e.g. as a Unix "setuid root" process, to access system resources with the effective permissions of the user on whose behalf the current application is being executed. Such an executive would also need a way to adjust its scheduling priority within the kernel so that processing resources can be fairly allocated across all user jobs, whether they are executing directly under the kernel or are one of several running under an executive.
6 Related Work
The interface described in this paper was motivated by the needs of the Wisconsin Wind Tunnel. The centralized structure of CMOST, with all policy enacted on the control processor, was insufficient to support the user-level fine-grain distributed shared memory needed by WWT. The executive interface extends CMOST to provide nodes with limited autonomy in the way they manage their virtual address spaces and physical memory. This interface is interesting from two different perspectives: on its own, as a means of exporting memory-management functions to the user of a uniprocessor; and in conjunction with CMOST, as a means of supporting multiple models of application parallelism on a single machine.
\(^4\)The Cray T3D also has hardware support for global barriers; barriers are actually faster than remote memory operations on this machine [14].
6.1 Uniprocessor aspects
The executive interface provides a complete set of low-level virtual memory functions. The interface is simpler and lower-level than the virtual memory interfaces of either Mach [12] or Chorus [1], which both impose significant semantics on the use of memory. To the first order, the executive interface merely exposes the underlying hardware mechanisms to the user in a protected manner.
The executive interface is similar to the “inferior spheres of protection”, described by Dennis and Van Horn [6]. Their execution model allowed processes to create subcontexts, initiate execution within them, and handle any resulting faults. The primary difference is our page orientation rather than their more general segments and capabilities.
More recently, Probert et al. proposed SPACE, an object-oriented operating system [11]. SPACE allows applications to create, manipulate, and execute within *spaces*, i.e., address spaces, thereby facilitating protected objects. However, SPACE is much more general than our interface, allowing different "executives" to manage different parts of a single address space.
Appel and Li surveyed the most common uses of user-level virtual memory, and identified the set of primitives needed by these applications [3]. The set includes primitives to modify protection on pages and create aliases within an address space; however, they did not include the ability to create new address spaces, nor get callbacks when pages are reclaimed by the kernel.
Using page motion callbacks to manage physical memory allocation is analogous to using scheduler activations to manage physical processor allocation [2]. Both provide the user with notification of kernel allocation decisions so that the application can adapt knowledgeably to its new circumstance. A key difference is that the page_going() callback notifies the user *before* the page is taken away, while the scheduler activation model notifies the user *after* a processor has been taken away. This adds some complexity to the page motion callbacks (i.e. the necessity for the kernel to enforce a finite completion time), but reflects a fundamental difference between memory and processors: it is reasonable to have the user allocate space to save a processor's state in case the kernel takes it away, but nonsensical to apply the same principle to memory pages.
The Mach external memory manager interface is similar to our page motion callbacks. However, if an external memory manager does not remove a page in a timely fashion, the Mach kernel can always write the page to backing store using the default memory manager. Our interface does not permit this, because the kernel cannot clean up secondary mappings itself nor can it permit them to point into another process.\(^5\)
6.2 Multiprocessor aspects
User control over multiprocessor scheduling with the intent of supporting multiple models of parallelism (including gang-scheduling) is provided in Mach by a processor allocation server [4]. In this model, an application requests a certain number of processors to create a “processor set” to which threads can be bound. The binding of actual processors to processor sets is performed by a privileged user-level server. The server can be modified to support site- or usage-specific policies, but there can only be one per platform. In our scheme, a process requiring a fixed number of processors can be handled simply by scheduling it at the appropriate level in the hierarchy. A process with more dynamic needs could be served by an appropriate executive that
\(^5\)Our kernel could write the page to backing store, so long as it also saved and restored the memory tags and guaranteed to return it to the *original* physical page before resuming the process.
balances its requirements by scheduling it in conjunction with other applications having similar dynamic parallelism. The fundamental differences are that our model provides gang-scheduling more as the rule than the exception, and allows for multiple executives running simultaneously to support multiple abstractions. The hierarchical integration of time- and space-sharing discussed in Section 5 is similar to Feitelson and Rudolph’s distributed hierarchical control [7], except that they assume a hardware hierarchy of control processors, while we believe a software hierarchy of control processes may be just as effective. Also, their model does not have the executive interface to provide different scheduling models underneath the global hierarchical structure.
7 Conclusion
The executive interface exports a complete, abstract model of virtual memory management to a user process, including the ability to create, manipulate, and execute in multiple address spaces. The interface allows the user process to participate in physical memory management using page motion callbacks. The callbacks also serve to minimize the kernel complexity of implementing the interface.
Giving a user-level executive the ability to define a complete virtual memory environment in a protected fashion allows multiple executives providing multiple process abstractions to coexist on a single system. Though interesting from a uniprocessor perspective, it is more significant in the context of large-scale multiprocessors. Instead of having the operating system view the machine as a set of autonomous nodes upon which coordination mechanisms must be imposed, it can start with a global perspective and selectively delegate nodes in both space and time to executives which allow increasing amounts of autonomy. The resulting structure combines the advantages of centralization and decentralization: the underlying global perspective simplifies efficient support of fine-grained synchronous (e.g., data-parallel) applications and management of global resources such as barrier hardware, while executives provide the flexible support for other application models that a completely centralized system lacks.
Acknowledgements
Mark Hill, Frans Kaashoek, Jim Larus, Bart Miller, Yannis Schoinas, and Marv Solomon provided helpful comments that greatly improved this paper.
References
---
CS 103 Lecture 2 Slides
C/C++ Basics
Mark Redekopp
Announcements
• Get your VMs installed.
– Do's and Don'ts with your VM
• Installing the 'Guest Additions' for the Linux VM
• Backing up files
• Not installing any updates to the VM
• HW 1
• Lab 1 review answers must be submitted on our website
– Attend lab to meet your TAs and mentors and get help with lab 1 or your VM
A quick high-level view before we dive into the details...
PROGRAM STRUCTURE AND COMPILATION PROCESS
C/C++ Program Format/Structure
• Comments
– Anywhere in the code
– C-Style => "/*" and "*/"
– C++ Style => "//"
• Compiler Directives
– #includes tell compiler what other library functions you plan on using
– 'using namespace std;' -- Just do it for now!
• main() function
– **Starting point of execution** for the program
– All code/statements in C must be inside a function
– Statements execute one after the next and end with a semicolon (;)
– Ends with a 'return 0;' statement
• Other functions
– printName() is a function that can be "called"/"invoked" from main or any other function
/* Anything between slash-star and star-slash is ignored even across multiple lines of text or code */
// Anything after "//" is ignored on a line
// #includes allow access to library functions
#include <iostream>
#include <cmath>
using namespace std;
void printName()
{
cout << "Tommy Trojan" << endl;
}
// Execution always starts at the main() function
int main()
{
cout << "Hello: " << endl;
printName();
printName();
return 0;
}
Hello:
Tommy Trojan
Tommy Trojan
# Software Process
1. **Edit & write code**
```
#include <iostream>
using namespace std;
int main()
{
int x = 5;
cout << "Hello"
<< endl;
cout << "x=" << x;
return 0;
}
```
2. **Compile & fix compiler errors**
```
$ g++ -g -Wall -o test test.cpp
or
$ make test
```
3. **Load & run the executable program**
```
$ ./test
```
Software Process
[Slide diagram: the edit/compile/run cycle shown as a flow. The C++ file (test.cpp) is edited (e.g. `gedit test.cpp &`), compiled with `g++ -g -Wall -o test test.cpp` (or `make test`) together with the standard C++ and other libraries into an executable binary image (test), and then loaded and executed with `./test`. Compile-time errors send you back to the editor; run-time errors are fixed with a debugger.]
- `-g` = Enable Debugging
- `-Wall` = Show all warnings
- `-o test` = Specify Output executable name
DATA REPRESENTATION
Memory
- Recall all information in a computer is stored in memory
- Memory consists of cells that each store a group of bits (usually, 1 byte = 8 bits)
- Unique address assigned to each cell
- Used to reference the value in that location
- We first need to understand the various ways our program can represent data and allocate memory
Starting With Numbers
• A single bit can only represent 1 and 0
• To represent more than just 2 values we need to use combinations/sequences of many bits
– A byte is defined as a group of 8 bits
– A word varies in size but is usually 32-bits
• So how do we interpret those sequences of bits?
– Let's learn about number systems
Binary Number System
• Humans use the decimal number system
– Based on number 10
– 10 digits: [0-9]
• Because computer hardware uses digital signals with 2 values, computers use the binary number system
– Based on number 2
– 2 binary digits (a.k.a bits): [0,1]
Number System Theory
• Let's understand how number systems work by examining decimal and then moving to binary
• The written digits have implied place values
• Place values are powers of the base (decimal = 10)
• Place value of digit to left of decimal point is $10^0$ and ascend from there, negative powers of 10 to the right of the decimal point
• The value of the number is the sum of each digit times its implied place value
\[(852.7)_{10} = 8 \times 10^2 + 5 \times 10^1 + 2 \times 10^0 + 7 \times 10^{-1}\]
Binary Number System
- Place values are powers of 2
- The value of the number is the sum of each bit times its implied place value (power of 2)
\[(110.1)_2 = 1 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 + 1 \times 2^{-1} = 4 + 2 + 0.5 = 6.5_{10}\]

\[(11010)_2 = 1 \times 2^4 + 1 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 0 \times 2^0 = 16 + 8 + 2 = 26_{10}\]
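A quick way to double-check the binary arithmetic above (this snippet is not from the slides):

```cpp
#include <iostream>
using namespace std;

int main() {
    // Evaluate (11010) in base 2 digit by digit, most-significant bit first.
    int bits[] = {1, 1, 0, 1, 0};
    int value = 0;
    for (int b : bits)
        value = value * 2 + b;   // shift in one bit at a time
    cout << value << endl;       // prints 26
    return 0;
}
```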
Unique Combinations
- Given \( n \) digits of base \( r \), how many unique numbers can be formed?
<table>
<thead>
<tr>
<th>Type</th>
<th>Number of Digits</th>
<th>Base</th>
<th>Combinations</th>
</tr>
</thead>
<tbody>
<tr>
<td>2-digit, decimal</td>
<td>2</td>
<td>0-9</td>
<td>___</td>
</tr>
<tr>
<td>3-digit, decimal</td>
<td>3</td>
<td>0-9</td>
<td>___</td>
</tr>
<tr>
<td>4-bit, binary</td>
<td>4</td>
<td>0-1</td>
<td>___</td>
</tr>
<tr>
<td>6-bit, binary</td>
<td>6</td>
<td>0-1</td>
<td>___</td>
</tr>
</tbody>
</table>
Main Point: Given \( n \) digits of base \( r \), \_\_\_ unique numbers can be made with the range [\_\_\_\_\_\_]
Sign
• Is there any limitation if we only use the powers of some base as our weights?
– Can't make negative numbers
• What if we change things
– How do humans represent negative numbers?
– Can we do something similar?
C Integer Data Types
- In C/C++ constants & variables can be of different types and sizes
- A Type indicates how to interpret the bits and how much memory to allocate
- Integer Types (signed by default... unsigned with optional leading keyword)
<table>
<thead>
<tr>
<th>C Type</th>
<th>Bytes</th>
<th>Bits</th>
<th>Signed Range</th>
<th>Unsigned Range</th>
</tr>
</thead>
<tbody>
<tr>
<td>[unsigned] char</td>
<td>1</td>
<td>8</td>
<td>-128 to +127</td>
<td>0 to 255</td>
</tr>
<tr>
<td>[unsigned] short</td>
<td>2</td>
<td>16</td>
<td>-32768 to +32767</td>
<td>0 to 65535</td>
</tr>
<tr>
<td>[unsigned] long int</td>
<td>4</td>
<td>32</td>
<td>-2 billion to +2 billion</td>
<td>0 to 4 billion</td>
</tr>
<tr>
<td>[unsigned] long long</td>
<td>8</td>
<td>64</td>
<td>-8×10^{18} to +8×10^{18}</td>
<td>0 to 16×10^{18}</td>
</tr>
</tbody>
</table>
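Sizes vary by compiler and platform; the quickest way to check your own system is to print them (this snippet is not from the slides):

```cpp
#include <iostream>
using namespace std;

int main() {
    cout << "char:      " << sizeof(char)      << " byte(s)" << endl;
    cout << "short:     " << sizeof(short)     << " byte(s)" << endl;
    cout << "int:       " << sizeof(int)       << " byte(s)" << endl;
    cout << "long:      " << sizeof(long)      << " byte(s)" << endl;
    cout << "long long: " << sizeof(long long) << " byte(s)" << endl;
    return 0;
}
```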
What About Rational/Real #'s
• Previous binary system assumed binary point was fixed at the far right of the number
– 10010. *(implied binary point)*
• Consider scientific notation:
– Avogadro’s Number: +6.0247 * 10^{23}
– Planck’s Constant: +6.6254 * 10^{-27}
• Can one representation scheme represent such a wide range?
– Yes! **Floating Point**
– Represents the sign, significant digits (fraction), exponent as separate bit fields
• Decimal: ±D.DDD * 10^{±exp}
• Binary: ±b.bbbb * 2^{±exp}
<table>
<thead>
<tr>
<th>S (overall sign of #)</th>
<th>Exp.</th>
<th>fraction</th>
</tr>
</thead>
</table>
C Floating Point Types
- **float and double types:**
Allow decimal representation (e.g. 6.125) as well as very large integers (+6.023E23)
<table>
<thead>
<tr>
<th>C Type</th>
<th>Bytes</th>
<th>Bits</th>
<th>Range</th>
</tr>
</thead>
<tbody>
<tr>
<td>float</td>
<td>4</td>
<td>32</td>
<td>±7 significant digits * 10^{+/-38}</td>
</tr>
<tr>
<td>double</td>
<td>8</td>
<td>64</td>
<td>±16 significant digits * 10^{+/-308}</td>
</tr>
</tbody>
</table>
Text
- Text characters are usually represented with some kind of binary code (mapping of character to a binary number such as 'a' = 01100001 bin = 97 dec)
- ASCII = Traditionally an 8-bit code
- How many combinations (i.e. characters)?
- English only
- UNICODE = 16-bit code
- How many combinations?
- Most languages w/ an alphabet
- In C/C++ a single printing/text character must appear between single-quotes ('')
- Example: 'a', '!', 'Z'
http://www.theasciicode.com.ar/
UniCode
- ASCII can represent only the English alphabet, decimal digits, and punctuation
- 7-bit code => $2^7 = 128$ characters
- It would be nice to have one code that represented more alphabets/characters for common languages used around the world
- Unicode
- 16-bit Code => 65,536 characters
- Represents many languages alphabets and characters
- Used by Java as standard character code
- Won't be used in our course
Unicode hex value (i.e. FB52 => 1111101101010010)
Interpreting Binary Strings
- Given a string of 1’s and 0’s, you need to know the representation system being used, before you can understand the value of those 1’s and 0’s.
- Information (value) = Bits + Context (System)
01000001 = ?
- 65, if interpreted as an unsigned binary number
- 41, if interpreted as BCD (binary-coded decimal)
- 'A', if interpreted as ASCII
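The same idea in code: the byte 01000001 (0x41) prints differently depending on the type the compiler is told to use (illustrative snippet, not from the slides):

```cpp
#include <iostream>
using namespace std;

int main() {
    unsigned char byte = 0x41;      // the bit pattern 01000001
    cout << (int)byte << endl;      // as an unsigned binary number: 65
    cout << (char)byte << endl;     // as an ASCII character: A
    // As BCD, the two nibbles 0100 and 0001 would read as the digits 4 and 1.
    return 0;
}
```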
C CONSTANTS & DATA TYPES
What's Your Type
- What am I storing?
  - A number: what kind of number is it?
    - Contains a decimal/fractional value: use a **double** (e.g. 3.0, 3.14159, 6.27e23)
    - Integer: what range of values might it use?
      - Positive only: use an **unsigned int** (e.g. 0, 2147682, ...)
      - Possibly negative: use an **int** (e.g. 0, -2147682, 2147682)
  - Text/character(s) for display: is it a single char or many (i.e. a string of chars)?
    - Single: use a **char** (e.g. 's', '1')
    - Many: use a **string** (e.g. "Hi", "2020")
  - A logical (true/false) value: use a **bool** (true, false)
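A short snippet declaring one variable of each kind from the decision tree above (illustrative values, not from the slides):

```cpp
#include <iostream>
#include <string>
using namespace std;

int main() {
    double gpa = 3.14159;           // number with a fractional part
    unsigned int numStudents = 250; // integer that can never be negative
    int balance = -2147682;         // integer that may be negative
    char grade = 'A';               // a single character
    string name = "Tommy";          // many characters (a string)
    bool enrolled = true;           // logical true/false value

    cout << name << " " << grade << " " << gpa << " "
         << numStudents << " " << balance << " " << enrolled << endl;
    return 0;
}
```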
Constants
• Integer: 496, 10005, -234
• Double: 12.0, -16., 0.23, -2.5E-1, 4e-2
• Float: 12.0F // F = float vs. double
• Characters appear in single quotes
– 'a', '5', 'B', '!', '\n', '\t', '"', '\'
– Non-printing special characters use "escape" sequence (i.e. preceded by a \)
– '\n' = newline/enter, '\t' = tab
• C-Strings
– Multiple characters between double quotes
"hi1\n", "12345\n", "b", "\tAns. is %d"
– Ends with a '\0' = NULL character added as the last byte/character
• Boolean (C++ only): true, false
– Physical representation: 0 = false, (!= 0) = true
You're Just My Type
- Indicate which constants are matched with the correct type.
<table>
<thead>
<tr>
<th>Constant</th>
<th>Type</th>
<th>Right / Wrong</th>
</tr>
</thead>
<tbody>
<tr>
<td>4.0</td>
<td>int</td>
<td>double (.0)</td>
</tr>
<tr>
<td>5</td>
<td>int</td>
<td>int</td>
</tr>
<tr>
<td>'a'</td>
<td>string</td>
<td>char</td>
</tr>
<tr>
<td>"abc"</td>
<td>string</td>
<td>string (char * or char [])</td>
</tr>
<tr>
<td>5.</td>
<td>double</td>
<td>float/double (. = non-integer)</td>
</tr>
<tr>
<td>5</td>
<td>char</td>
<td>Int...but if you store 5 in a char variable it'd be okay</td>
</tr>
<tr>
<td>"5.0"</td>
<td>double</td>
<td>string (char * or char [])</td>
</tr>
<tr>
<td>'5'</td>
<td>int</td>
<td>char</td>
</tr>
</tbody>
</table>
EXPRESSIONS & VARIABLES
Arithmetic Operators
- Addition, subtraction, multiplication work as expected for both integer and floating point types
- Division works ‘differently’ for integer vs. doubles/floats
- Modulus is only defined for integers
<table>
<thead>
<tr>
<th>Operator</th>
<th>Name</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>+</td>
<td>Addition</td>
<td>2 + 5</td>
</tr>
<tr>
<td>-</td>
<td>Subtraction</td>
<td>41 - 32</td>
</tr>
<tr>
<td>*</td>
<td>Multiplication</td>
<td>4.23 * 3.1e-2</td>
</tr>
<tr>
<td>/</td>
<td>Division (integer vs. double division)</td>
<td>10 / 3 (= 3), but 10.0 / 3 (= 3.3333)</td>
</tr>
<tr>
<td>%</td>
<td>Modulus (remainder) [for integers only]</td>
<td>17 % 5 (result is 2)</td>
</tr>
</tbody>
</table>
Precedence
- Order of operations/evaluation of an expression
- Top Priority = highest (done first)
- Notice operations with the same level of precedence usually are evaluated left to right (explained at bottom)
Evaluate:
- \(2 \times -4 - 3 + 5/2\);
Tips:
- Use parenthesis to add clarity
- Add a space between literals
(2 * -4) - 3 + (5 / 2)
Operators (grouped by precedence)
<table>
<thead>
<tr>
<th>Operator Type</th>
<th>Operator</th>
</tr>
</thead>
<tbody>
<tr>
<td>struct member operator</td>
<td>name.member</td>
</tr>
<tr>
<td>struct member through pointer</td>
<td>pointer->member</td>
</tr>
<tr>
<td>increment, decrement</td>
<td>++, --</td>
</tr>
<tr>
<td>plus, minus, logical not, bitwise not</td>
<td>+, -, !, ~</td>
</tr>
<tr>
<td>indirection via pointer, address of object</td>
<td>*pointer, &name</td>
</tr>
<tr>
<td>cast expression to type; size of an object</td>
<td>(type) expr; sizeof(expr)</td>
</tr>
<tr>
<td>multiply, divide, modulus (remainder)</td>
<td>*, /, %</td>
</tr>
<tr>
<td>add, subtract</td>
<td>+, -</td>
</tr>
<tr>
<td>left, right shift [bit ops]</td>
<td><<, >></td>
</tr>
<tr>
<td>relational comparisons</td>
<td>>, >=, <, <=</td>
</tr>
<tr>
<td>equality comparisons</td>
<td>==, !=</td>
</tr>
<tr>
<td>and [bit op]</td>
<td>&</td>
</tr>
<tr>
<td>exclusive or [bit op]</td>
<td>^</td>
</tr>
<tr>
<td>or (inclusive) [bit op]</td>
<td>|</td>
</tr>
<tr>
<td>logical and</td>
<td>&&</td>
</tr>
<tr>
<td>logical or</td>
<td>||</td>
</tr>
<tr>
<td>conditional expression</td>
<td>expr1 ? expr2 : expr3</td>
</tr>
<tr>
<td>assignment operators</td>
<td>+=, -=, *=, ...</td>
</tr>
<tr>
<td>expression evaluation separator</td>
<td>,</td>
</tr>
</tbody>
</table>
Unary operators, conditional expression and assignment operators group right to left; all others group left to right.
Exercise Review
• Evaluate the following:
– 25 / 3
– 33 % 7
– 17 + 5 % 2 - 3
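One way to check your answers is to let the computer evaluate the same expressions (snippet not from the slides; expected results shown in comments):

```cpp
#include <iostream>
using namespace std;

int main() {
    cout << 25 / 3 << endl;          // integer division: 8
    cout << 33 % 7 << endl;          // remainder: 5
    cout << 17 + 5 % 2 - 3 << endl;  // % binds tighter than + and -: 17 + 1 - 3 = 15
    return 0;
}
```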
C/C++ Variables
• A computer program needs to operate on and store data values (which are usually inputted from the user)
• Variables are just memory locations that are reserved to store a piece of data of specific size and type
• Programmer indicates what variables they want when they write their code
– Difference: C requires declaring all variables at the beginning of a function before any operations. C++ relaxes this requirement.
• The computer will allocate memory for those variables when the program reaches the declaration
```cpp
#include <iostream>
using namespace std;
int main(int argc, char *argv[]) {
char c;
int feet = 50;
...
int inches = 12 * feet;
}
```
Variables must be declared before being used.
Variables are actually allocated in RAM when the program is run.
[Memory diagrams: `char c;` reserves a single byte in RAM, while `int x = 1564983;` reserves four consecutive bytes. Each cell in the diagrams shows an address (0, 1, 2, ..., 1023) and the 8-bit value stored there.]
C/C++ Variables
- Variables have a:
- **type** `[int, char, unsigned int, float, double, etc.]`
- **name/identifier** that the programmer will use to reference the value in that memory location [e.g. `x`, `myVariable`, `num_dozens`, etc.]
- Identifiers must start with `[A-Z, a-z, or an underscore '_'`] and can then contain any alphanumeric character `[0-9A-Za-z]` (but no punctuation other than underscores)
- Use descriptive names (e.g. `numStudents`, `doneFlag`)
- Avoid cryptic names (myvar1, a_thing)
- **location** [the address in memory where it is allocated]
- **Value**
- Reminder: You must declare a variable before using it
```c++
int quantity = 4;
double cost = 5.75;
cout << quantity*cost << endl;
```
### What's in a name?
To give descriptive names we often need to use more than 1 word/term. But we can't use spaces in our identifier names. Thus, most programmers use either camel-case or snake-case to write compound names
**Camel case**: Capitalize the first letter of each word (with the possible exception of the first word)
- `myVariable`, `isHighEnough`
**Snake case**: Separate each word with an underscore `_`
- `my_variable`, `is_high_enough`
When To Introduce a Variable
• When a value will be supplied and/or change at run-time (as the program executes)
• When a value is computed/updated at one time and used (many times) later
• To make the code more readable by another human
```cpp
double area = (56+34) * (81*6.25);
// readability of above vs. below
double height = 56 + 34;
double width = 81 * 6.25;
double area = height * width;
```
Assignment operator '='
- Syntax: `variable = expression;`   (LHS = RHS)
- LHS = Left-Hand Side, RHS = Right-Hand Side
- Should be read: place the value of expression into the memory location of variable
- `z = x + y - (2*z);`
  - Evaluate the RHS first, then place the result into the variable on the LHS
  - If the variable appears on both sides, its old/current value is used on the RHS
- **Note**: Without assignment, values are computed and then forgotten
  - `x + 5;` // will take x's value and add 5 but NOT update x (just throws the result away)
  - `x = x + 5;` // will actually update x (i.e. requires an assignment)
- Shorthand assignment operators for updating a variable based on its current value: +=, -=, *=, /=, &=, ...
  - `x += 5;` (x = x+5)
  - `y *= x;` (y = y*x)
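A small snippet illustrating the points above, including the shorthand operators (not from the slides; the compiler may warn that the value of `x + 100;` is unused):

```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 10;
    x = x + 5;   // RHS evaluated with the old x (10), then stored: x is 15
    x += 5;      // shorthand for x = x + 5: x is 20
    x *= 2;      // shorthand for x = x * 2: x is 40
    x + 100;     // computed and then forgotten: x is still 40
    cout << x << endl;   // prints 40
    return 0;
}
```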
Evaluate 5 + 3/2
• The answer is 6.5?? (No: 3/2 is integer division in C/C++, so 5 + 3/2 evaluates to 6.)
Casting
• To achieve the correct answer for 5 + 3/2:
• Could make everything a double
  – Write 5.0 + 3.0/2.0 [explicitly use doubles]
• Could use **implicit** casting (mixed expression)
  – Could just write 5 + 3.0/2
  – If an operator is applied to mixed-type inputs, the less expressive type is automatically promoted to the more expressive one (int is promoted to double)
• Could use C or C++ syntax for **explicit** casting
  – `5 + (double)3 / (double)2` (C-style cast)
  – `5 + static_cast<double>(3) / static_cast<double>(2)` (C++-style)
  – `5 + static_cast<double>(3) / 2` (cast one & rely on implicit cast of the other)
  – This looks like a lot of typing compared to just writing 5 + 3.0/2... but what if instead of constants we have variables?
  – `int x=5, y=3, z=2;`  `x + y/z`
  – `x + static_cast<double>(y) / z`
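The same ideas in a runnable snippet (not from the slides):

```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 5, y = 3, z = 2;
    cout << x + y / z << endl;                        // integer division: prints 6
    cout << x + static_cast<double>(y) / z << endl;   // explicit C++ cast: prints 6.5
    cout << 5 + (double)3 / 2 << endl;                // C-style cast: prints 6.5
    cout << 5 + 3.0 / 2 << endl;                      // implicit promotion: prints 6.5
    return 0;
}
```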
I/O Streams
- I/O is placed in temporary buffers/streams by the OS/C++ libraries
- `cin` goes and gets data from the input stream (skipping over preceding whitespace then stopping at following whitespace)
- `cout` puts data into the output stream for display by the OS (a flush forces the OS to display the contents immediately)
```cpp
#include<iostream>
using namespace std;
int main()
{
int x;
cin >> x;
return 0;
}
```
```cpp
#include<iostream>
using namespace std;
int main()
{
cout << "It was the" << endl;
cout << "best of times."
}
```
C++ Output
- Include `<iostream>` (not `iostream.h`)
- Add `using namespace std;` at top of file
- `cout` (character output) object used to print to the monitor
- Use the `<<` operator to separate any number of variables or constants you want printed
- Compiler uses the implied type of the variable to determine how to print it out
- `endl` constant can be used for the newline character (`\n`) though you can still use `\n` as well.
- `endl` also ‘flushes’ the buffer/stream (forces the OS to show the text on the screen) which can be important in many contexts.
```cpp
#include<iostream>
using namespace std;
int main(int argc, char *argv[])
{
int x = 5;
char c = 'Y';
double y = 4.5;
cout << "Hello world" << endl;
cout << "x = " << x << " c = ";
cout << c << " ny is " << y << endl;
return 0;
}
```
Output from program:
```
Hello world
x = 5 c = Y
y is 4.5
```
C++ Input
- 'cin' (character input) object used to accept input from the user and write the value into a variable
- Use the '>>' operator to separate any number of variables or constants you want to read in
- Every '>>' will skip over any leading whitespace looking for text it can convert to the variable form, then stop at the trailing whitespace
```cpp
#include <iostream>
#include <string>
using namespace std;
int main(int argc, char *argv[])
{
int x;
char c;
string mystr;
double y;
cout << "Enter an integer, character, string, and double separated by spaces:" << endl;
cin >> x >> c >> mystr >> y;
cout << "x = " << x << " c = ";
cout << c << " mystr is " << mystr;
cout << "y is " << y << endl;
return 0;
}
```
Output from program:
```
Enter an integer, character, string, and double separated by spaces:
5 Y hi 4.5
x = 5 c = Y mystr is hi y is 4.5
```
cin
- Suppose the variables start as `myc = 0` and `y = 0.0`, and the user types in: `a\t3.5\n`
- After the first `>>`: myc contains 'a'; the remaining input (`\t3.5\n`) is still in the stream
- After the second `>>`: y contains 3.5
```cpp
#include<iostream>
using namespace std;
int main()
{
char myc = 0;
double y = 0.0;
cin >> myc >> y;
// use the variables somehow...
return 0;
}
```
Cin... skips leading whitespace; stops at trailing whitespace.
Function call statements
• C++ predefines a variety of functions for you. Here are a few of them:
– `sqrt(x)`: returns the square root of x (in `<cmath>`)
– `pow(x, y)`: returns $x^y$, or x to the power y (in `<cmath>`)
– `sin(x)`: returns the sine of x if x is in radians (in `<cmath>`)
– `abs(x)`: returns the absolute value of x (in `<cstdlib>`)
– `max(x, y)`: returns the maximum of x and y (in `<algorithm>`)
– `min(x, y)`: returns the minimum of x and y (in `<algorithm>`)
• You call these by writing them similarly to how you would use a function in mathematics:
```cpp
#include <iostream>
#include <cmath>
#include <algorithm>
using namespace std;
int main(int argc, char *argv[]) {
// can call functions
// in an assignment
double res = cos(0); // res = 1.0
// can call functions in an
// expression
res = sqrt(2) / 2; // res = 1.414/2
cout << max(34, 56) << endl; // outputs 56
return 0;
}
```
Statements
• C/C++ programs are composed of statements
• Most common kinds of statements end with a semicolon
• Assignment (use initial conditions of `int x=3; int y;`)
– `x = x * 5 / 9;` // compute the expression & place result in x
// x = (3*5)/9 = 15/9 = 1
• Function Call
– `sin(3.14);` // Beware of just calling a function w/o assignment
– `x = cos(0.0);`
• Mixture of assignments, expressions and/or function calls
– `x = x * y - 5 + max(5,9);`
• Return statement (immediately ends a function)
– `return x+y;`
Understanding ASCII and chars
- Characters can still be treated as numbers
```cpp
char c = 'a'; // same as char c = 97;
char d = 'a' + 1; // d now contains 'b' = 98;
cout << d << endl; // I will see 'b' on the screen
char c = '1'; // c contains decimal 49, not 1
// i.e. '1' not equal to 1
c >= 'a' && c <= 'z'; // && means AND
// here we are checking if c
// contains a lower case letter
```
### ASCII printable characters
<table>
<thead>
<tr>
<th>Character</th>
<th>ASCII Code</th>
</tr>
</thead>
<tbody>
<tr>
<td><code> </code></td>
<td>32</td>
</tr>
<tr>
<td><code>!</code></td>
<td>33</td>
</tr>
<tr>
<td><code>"</code></td>
<td>34</td>
</tr>
<tr>
<td><code>#</code></td>
<td>35</td>
</tr>
<tr>
<td><code>$</code></td>
<td>36</td>
</tr>
<tr>
<td><code>%</code></td>
<td>37</td>
</tr>
<tr>
<td><code>&</code></td>
<td>38</td>
</tr>
<tr>
<td><code>'</code></td>
<td>39</td>
</tr>
<tr>
<td><code>(</code></td>
<td>40</td>
</tr>
<tr>
<td><code>)</code></td>
<td>41</td>
</tr>
<tr>
<td><code>*</code></td>
<td>42</td>
</tr>
<tr>
<td><code>+</code></td>
<td>43</td>
</tr>
<tr>
<td><code>,</code></td>
<td>44</td>
</tr>
<tr>
<td><code>-</code></td>
<td>45</td>
</tr>
<tr>
<td><code>.</code></td>
<td>46</td>
</tr>
<tr>
<td><code>/</code></td>
<td>47</td>
</tr>
<tr>
<td><code>0</code></td>
<td>48</td>
</tr>
<tr>
<td><code>1</code></td>
<td>49</td>
</tr>
<tr>
<td><code>2</code></td>
<td>50</td>
</tr>
<tr>
<td><code>3</code></td>
<td>51</td>
</tr>
<tr>
<td><code>4</code></td>
<td>52</td>
</tr>
<tr>
<td><code>5</code></td>
<td>53</td>
</tr>
<tr>
<td><code>6</code></td>
<td>54</td>
</tr>
<tr>
<td><code>7</code></td>
<td>55</td>
</tr>
<tr>
<td><code>8</code></td>
<td>56</td>
</tr>
<tr>
<td><code>9</code></td>
<td>57</td>
</tr>
<tr>
<td><code>:</code></td>
<td>58</td>
</tr>
<tr>
<td><code>;</code></td>
<td>59</td>
</tr>
<tr>
<td><code><</code></td>
<td>60</td>
</tr>
<tr>
<td><code>=</code></td>
<td>61</td>
</tr>
<tr>
<td><code>></code></td>
<td>62</td>
</tr>
<tr>
<td><code>?</code></td>
<td>63</td>
</tr>
<tr>
<td><code>@</code></td>
<td>64</td>
</tr>
<tr>
<td><code>A</code></td>
<td>65</td>
</tr>
<tr>
<td><code>B</code></td>
<td>66</td>
</tr>
<tr>
<td><code>C</code></td>
<td>67</td>
</tr>
<tr>
<td><code>D</code></td>
<td>68</td>
</tr>
<tr>
<td><code>E</code></td>
<td>69</td>
</tr>
<tr>
<td><code>F</code></td>
<td>70</td>
</tr>
<tr>
<td><code>G</code></td>
<td>71</td>
</tr>
<tr>
<td><code>H</code></td>
<td>72</td>
</tr>
<tr>
<td><code>I</code></td>
<td>73</td>
</tr>
<tr>
<td><code>J</code></td>
<td>74</td>
</tr>
<tr>
<td><code>K</code></td>
<td>75</td>
</tr>
<tr>
<td><code>L</code></td>
<td>76</td>
</tr>
<tr>
<td><code>M</code></td>
<td>77</td>
</tr>
<tr>
<td><code>N</code></td>
<td>78</td>
</tr>
<tr>
<td><code>O</code></td>
<td>79</td>
</tr>
<tr>
<td><code>P</code></td>
<td>80</td>
</tr>
<tr>
<td><code>Q</code></td>
<td>81</td>
</tr>
<tr>
<td><code>R</code></td>
<td>82</td>
</tr>
<tr>
<td><code>S</code></td>
<td>83</td>
</tr>
<tr>
<td><code>T</code></td>
<td>84</td>
</tr>
<tr>
<td><code>U</code></td>
<td>85</td>
</tr>
<tr>
<td><code>V</code></td>
<td>86</td>
</tr>
<tr>
<td><code>W</code></td>
<td>87</td>
</tr>
<tr>
<td><code>X</code></td>
<td>88</td>
</tr>
<tr>
<td><code>Y</code></td>
<td>89</td>
</tr>
<tr>
<td><code>Z</code></td>
<td>90</td>
</tr>
<tr>
<td><code>[</code></td>
<td>91</td>
</tr>
<tr>
<td><code>\</code></td>
<td>92</td>
</tr>
<tr>
<td><code>]</code></td>
<td>93</td>
</tr>
<tr>
<td><code>^</code></td>
<td>94</td>
</tr>
<tr>
<td><code>_</code></td>
<td>95</td>
</tr>
<tr>
<td><code>`</code></td>
<td>96</td>
</tr>
<tr>
<td><code>a</code></td>
<td>97</td>
</tr>
<tr>
<td><code>b</code></td>
<td>98</td>
</tr>
<tr>
<td><code>c</code></td>
<td>99</td>
</tr>
<tr>
<td><code>d</code></td>
<td>100</td>
</tr>
<tr>
<td><code>e</code></td>
<td>101</td>
</tr>
<tr>
<td><code>f</code></td>
<td>102</td>
</tr>
<tr>
<td><code>g</code></td>
<td>103</td>
</tr>
<tr>
<td><code>h</code></td>
<td>104</td>
</tr>
<tr>
<td><code>i</code></td>
<td>105</td>
</tr>
<tr>
<td><code>j</code></td>
<td>106</td>
</tr>
<tr>
<td><code>k</code></td>
<td>107</td>
</tr>
<tr>
<td><code>l</code></td>
<td>108</td>
</tr>
<tr>
<td><code>m</code></td>
<td>109</td>
</tr>
<tr>
<td><code>n</code></td>
<td>110</td>
</tr>
<tr>
<td><code>o</code></td>
<td>111</td>
</tr>
<tr>
<td><code>p</code></td>
<td>112</td>
</tr>
<tr>
<td><code>q</code></td>
<td>113</td>
</tr>
<tr>
<td><code>r</code></td>
<td>114</td>
</tr>
<tr>
<td><code>s</code></td>
<td>115</td>
</tr>
<tr>
<td><code>t</code></td>
<td>116</td>
</tr>
<tr>
<td><code>u</code></td>
<td>117</td>
</tr>
<tr>
<td><code>v</code></td>
<td>118</td>
</tr>
<tr>
<td><code>w</code></td>
<td>119</td>
</tr>
<tr>
<td><code>x</code></td>
<td>120</td>
</tr>
<tr>
<td><code>y</code></td>
<td>121</td>
</tr>
<tr>
<td><code>z</code></td>
<td>122</td>
</tr>
<tr>
<td><code>{</code></td>
<td>123</td>
</tr>
<tr>
<td><code>|</code></td>
<td>124</td>
</tr>
<tr>
<td><code>}</code></td>
<td>125</td>
</tr>
<tr>
<td><code>~</code></td>
<td>126</td>
</tr>
</tbody>
</table>
In-Class Exercises
• Checkpoint 1
LECTURE 2 / LECTURE 3 END POINT
Assignment Means Copy
- Assigning a variable makes a copy
- Challenge: Swap the value of 2 variables
```c
int main()
{
int x = 5, y = 3;
x = y; // copy y into x
return 0;
}
```
```c
int main()
{
int a = 7, b = 9;
// now consider swapping
// the value of 2 variables
a = b; // a becomes 9
b = a; // b gets the new value of a (9); the original 7 is lost!
return 0;
}
```
More Assignments
• Assigning a variable makes a copy
• Challenge: Swap the value of 2 variables
– Easiest method: Use a 3rd temporary variable to save one value and then replace that variable
```c
int main()
{
int a = 7, b = 9, temp;
// let's try again
temp = a;
a = b;
b = temp;
return 0;
}
```
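For reference, the standard library already provides this operation as `std::swap`; a minimal sketch (our own addition, not from the slides):
```cpp
#include <iostream>
#include <utility>   // std::swap
using namespace std;

int main() {
    int a = 7, b = 9;
    swap(a, b);      // the library does the temporary-variable dance for us
    cout << a << " " << b << endl;  // prints: 9 7
    return 0;
}
```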
A Few Odds and Ends
• Variable Initialization
– When declared they will have "garbage" (random or unknown) values unless you initialize them
– Each variable must be initialized separately
• Scope
– Global variables are visible to all the code/functions in the program and are declared outside of any function
– Local variables are declared inside of a function and are only visible in that function and die when the function ends
```cpp
/*----Section 1: Compiler Directives ----*/
#include <iostream>
#include <cmath>
using namespace std;
// Global Variables
int x; // Anything after "//" is ignored
int add_1(int input)
{
// y and z not visible here, but x is
return (input + 1);
}
int main(int argc, char *argv[])
{
// y and z are "local" variables
int y, z=5; // y is garbage, z is five
z = add_1(z);
y = z+1; // an assignment stmt
cout << y << endl;
return 0;
}
```
Pre- and Post-Increment Operators
• ++ and -- operators can be used to "increment-by-1" or "decrement-by-1"
– If ++ comes before a variable it is called pre-increment; if after, it is called post-increment
– x++; // If x was 2 it will be updated to 3 (x = x + 1)
– ++x; // Same as above (no difference when not in a larger expression)
– x--; // If x was 2 it will be updated to 1 (x = x - 1)
– --x; // Same as above (no difference when not in a larger expression)
• Difference between pre- and post- is only evident when used in a larger expression
• Meaning:
– Pre: Update (inc./dec.) the variable before using it in the expression
– Post: Use the old value of the variable in the expression then update (inc./dec.) it
• Examples [suppose we start each example with: int y; int x = 3;]
– y = x++ + 5; // Post-inc.; Use x=3 in expr. then inc. [y=8, x=4]
– y = ++x + 5; // Pre-inc.; Inc. x=4 first, then use in expr. [y=9, x=4]
– y = x-- + 5; // Post-dec.; Use x=3 in expr. then dec. [y=8, x=2]
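A small runnable sketch (our own illustration, not from the slides) reproducing the first two examples above:
```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 3, y = 0;
    y = x++ + 5;   // post-increment: uses x=3 in the expression, then x becomes 4
    cout << "y=" << y << " x=" << x << endl;  // y=8 x=4

    x = 3;         // reset for the next example
    y = ++x + 5;   // pre-increment: x becomes 4 first, then is used
    cout << "y=" << y << " x=" << x << endl;  // y=9 x=4
    return 0;
}
```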
Exercise
• Consider the code below
– int x=5, y=7, z;
– z = x++ + 3*--y + 2*x;
• What is the value of x, y, and z after this code executes
In-Class Exercises
• Checkpoint 2
Not for lecture presentations
BACKUP
C PROGRAM STRUCTURE AND COMPILATION
C Program Format/Structure
• Comments
– Anywhere in the code
– C-style => /* and */
– C++ style => //
• Compiler Directives
– #includes tell compiler what other library functions you plan on using
– 'using namespace std;' -- Just do it for now!
• Global variables (more on this later)
• main() function
– Starting point of execution for the program
– Variable declarations often appear at the start of a function
– All code/statements in C must be inside a function
– Statements execute one after the next
– Ends with a ‘return’ statement
• Other functions
```c
/* Anything between slash-star and star-slash is ignored even across multiple lines of text or code */
/*-----Section 1: Compiler Directives ----*/
#include <iostream>
#include <cmath>
using namespace std;
/*---------------- Section 2 ----------------*/
/*Global variables & Function Prototypes */
int x; // Anything after "//" is ignored
void other_unused_function();
/*-----Section 3: Function Definitions ---*/
void other_unused_function()
{
cout << "No one uses me!" << endl;
}
int main(int argc, char *argv[])
{
// anything inside these brackets is part of the main function
int y; // a variable declaration stmt
y = 5+1; // an assignment stmt
cout << y << endl;
return 0;
}
```
Software Process
C++ file(s) (test.cpp) -> Compiler (together with the Std C++ & other libraries) -> Executable Binary Image ("test")
```cpp
#include <iostream>
using namespace std;
int main()
{
int x = 5;
cout << "Hello" << endl;
cout << "x=" << x;
return 0;
}
```
g++ options:
- -g = Enable debugging
- -Wall = Show all warnings
- -o test = Specify output executable name

Edit & write code:
$ gedit test.cpp &
Compile & fix compiler errors:
$ g++ -g -Wall -o test test.cpp
or
$ make test
Load & run the executable program:
$ ./test
Software Process
1. Edit & write code
- gedit test.cpp &
2. Compile & fix compiler errors
- $ g++ -g -Wall -o test test.cpp
or
- $ make test
- Fix compile-time errors (reported by the compiler)
3. Load & run the executable program
- $ gedit test.cpp &
- $ g++ -g -Wall -o test test.cpp
- $ ./test
- Fix run-time errors w/ a debugger
C++ file(s) (test.cpp)
```cpp
#include <iostream>
using namespace std;
int main()
{
int x = 5;
cout << "Hello" << endl;
cout << "x=" << x;
return 0;
}
```
Compiler -> Executable Binary Image (test), i.e. raw machine code such as:
1110 0010 0101 1001
0110 1011 0000 1100
0100 1101 0111 1111
1010 1100 0010 1011
0001 0110 0011 1000
-> Load & Execute (using the Std C++ & other libraries)
-g = Enable Debugging
-Wall = Show all warnings
-o test = Specify Output executable name
gdb / ddd / kdbg
• To debug your program you must have compiled with the '-g' flag in g++ (i.e. g++ -g -Wall -o test test.cpp).
• gdb is the main workhorse of Unix/Linux debuggers (but it is text-based while 'ddd' and 'kdbg' are graphical based debuggers)
– Run using: $ gdb ./test
• Allows you to...
– Set breakpoints (a point in the code where your program will be stopped so you can inspect something of interest)
• 'break 7' will cause the program to halt on line 7
– Run: Will start the program running until it hits a breakpoint or completes
– Step: Execute the next line of code, stepping into any function that is called
– Next: Like 'Step', but where 'Step' would enter a called function, 'Next' runs the whole function and stops at the next line of code
– Print variable values ('print x')
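A short example session (hypothetical program and line number, shown only to illustrate the commands above: break 7 sets a breakpoint at line 7, run starts the program, print x inspects a variable, next/step advance one line, continue resumes):
```
$ g++ -g -Wall -o test test.cpp
$ gdb ./test
(gdb) break 7
(gdb) run
(gdb) print x
(gdb) next
(gdb) step
(gdb) continue
(gdb) quit
```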
Memory Operations
• Memories perform 2 operations
– Read: retrieves data value in a particular location (specified using the address)
– Write: changes data in a location to a new value
• To perform these operations a set of address, data, and control inputs/outputs are used
– Note: A group of wires/signals is referred to as a ‘bus’
– Thus, we say that memories have an address, data, and control bus.
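A tiny C++ sketch (our own illustration, not from the slides): every variable occupies some memory location, and an assignment like `z = x + y - z` turns into reads of the operands followed by a write of the result. The addresses printed will vary from run to run.
```cpp
#include <iostream>
using namespace std;

int main() {
    int x = 5, y = 7, z = 1;
    // Each variable lives at an address chosen by the compiler/OS.
    cout << "x @ " << &x << ", y @ " << &y << ", z @ " << &z << endl;
    z = x + y - z;   // read x, read y, read z, then write the result to z
    cout << "z = " << z << endl;  // prints: z = 11
    return 0;
}
```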
Activity 1
• Consider the code below & memory layout
– `int x=5, y=7, z=1;`
– `z = x + y - z;`
• Order the memory activities & choose Read or Write
1. R/W value @ addr. 0x01008
2. Allocate & init. memory for x, y, & z
3. Read value @ addr. 0x01000
4. Write value @ addr. 0x01000
5. R/W value @ addr. 0x01004
• Answer: 2, 1(R), 5(R), 3, 4
Importing SMT and Connection proofs as expansion trees
Giselle Reis
INRIA-Saclay, France
giselle.reis@inria.fr
Different automated theorem provers reason in various deductive systems and, thus, produce proof objects which are in general not compatible. To understand and analyze these objects, one needs to study the corresponding proof theory, and then study the language used to represent proofs, on a prover by prover basis. In this work we present an implementation that takes SMT and Connection proof objects from two different provers and imports them both as expansion trees. By representing the proofs in the same framework, all the algorithms and tools available for expansion trees (compression, visualization, sequent calculus proof construction, proof checking, etc.) can be employed uniformly. The expansion proofs can also be used as a validation tool for the proof objects produced.
1 Introduction
The field of proof theory has evolved in such a way as to create the most varied proof abstractions. Natural deduction, sequent calculus, resolution, tableaux, and SAT are only a few of them, and even within the same formalism there might be many variations. As a result, automated theorem provers will generate different proof objects, usually corresponding to their internal proof representation. The use of distinct formats has some disadvantages: provers cannot recognize each other's proofs; proofs cannot be easily compared; all analyses and algorithms need to be developed on a prover by prover basis.
GAPT is a framework for proof theory that is able to represent, process and visualize proofs. Currently it implements the sequent calculus LK (with or without equality rules) for first and higher order classical logic, Robinson’s resolution calculus [11], the schematic calculus LKS [4] and expansion trees [8]. GAPT also provides algorithms for translating proofs between some of these formats, for cut-elimination (reductive methods à la Gentzen [5] and CERES [2]), and for cut-introduction (proof compression) [6], as well as an interactive proof visualization tool [3]. But all these tools depend on having proofs to operate on.
In this work we show how to parse and translate SMT and Connection proofs from veriT and leanCoP, respectively, into expansion proofs in GAPT. SMT are unsatisfiability proofs with respect to some theory and, in veriT, these are represented by resolution refutations of a set including (instances of) the axioms of the theory considered and the negation of the input formula. Connection proofs decide first-order logic formulas by connecting literals of opposite polarity in the clausal normal form of the input. These different conceptions of proofs will be unified under the form of expansion proofs, which can be considered a compact representation of sequent calculus proofs.
The advantages of this work are three-fold. First of all, the use of expansion proofs provides a compact representation for otherwise big and hard-to-grasp proof objects. Using this representation and GAPT’s visualization tool, it is easy to see the theorem that was proved and the instances of quantified formulas used. Second of all, the use of a common representation facilitates the comparison of proofs and makes it possible to run and analyse algorithms developed for this representation without the need to adapt them to different formats. In particular, we have been using the imported proofs for experimenting with proof compression via introduction of cuts [6]. Finally, it provides a simple sanity-check procedure and the possibility of building LK proofs.
This paper is organized as follows. Section 2 defines basic concepts and extends the usual definition of expansion trees to accommodate polarities. Section 3 explains how to extract the necessary information from both formats and how it is then used to build expansion trees. Section 4 presents the results of the transformation applied to a database of proofs in the considered formats. It also discusses the advantages of having the proofs as expansion trees. Section 5 discusses some related work and, finally, Section 6 concludes the paper pointing to future work.
2 Expansion proofs
We will work in the setting of first-order classical logic. We introduce now a few basic concepts.
Definition 1 (Polarity in a sequent). Let \( S = A_1, \ldots, A_n \vdash B_1, \ldots, B_m \) be a sequent. We will say that formulas on the left side of \( \vdash \), i.e., \( A_1, \ldots, A_n \) have negative polarity while formulas on the right, i.e., \( B_1, \ldots, B_m \) have positive polarity.
Definition 2 (Polarity). Let \( F \) be a formula and \( F' \) a sub-formula of \( F \). Then we can define the polarity of \( F' \) in \( F \), i.e., \( F' \) can be positive or negative in \( F \), according to the following criteria:
- If \( F \equiv F' \), then \( F' \) has the same polarity as \( F \).
- If \( F \equiv A \land B \), \( F \equiv A \lor B \), \( F \equiv \forall x . A \) or \( F \equiv \exists x . A \) and \( F \) is positive (negative), then \( A \) and \( B \) are positive (negative).
- If \( F \equiv A \rightarrow B \) and \( F \) is positive (negative), then \( A \) is negative (positive) and \( B \) is positive (negative).
- If \( F \equiv \neg A \) and \( F \) is positive (negative), then \( A \) is negative (positive).
Throughout this document we will use 0 for negative polarity, 1 for positive polarity and \( \overline{p} \) to denote the opposite polarity of \( p \), for \( p \in \{0,1\} \).
Definition 3 (Strong and weak quantifiers). Let \( F \) be a formula. If \( \forall x \) occurs positively (negatively) in \( F \), then \( \forall x \) is called a strong (weak) quantifier. If \( \exists x \) occurs positively (negatively) in \( F \), then \( \exists x \) is called a weak (strong) quantifier.
Strong quantifiers in a sequent will be those introduced by the inferences \( \forall_r \) and \( \exists_l \) in a sequent calculus proof.
Expansion proofs are a compact representation for first and higher order sequent calculus proofs. They can be seen as a generalization of Gentzen’s mid-sequent theorem to formulas which are not necessarily prenex [8]. Expansion proofs are composed of expansion trees. An expansion tree of a formula \( F \) has this formula as its root. Leaves are atoms occurring in \( F \) and inner nodes are connectives or a quantified sub-formula of \( F \). The edges from quantified nodes to their children are labelled with terms that were used to instantiate the outer-most quantifier. We extend the original definition with the notion of formula polarity and use \( \Pi \) and \( \Lambda \) for strong and weak quantifiers respectively in expansion trees.
Definition 4 (Expansion tree). Expansion trees and a function \( \text{Sh}(E,p) \) (for shallow), that maps an expansion tree \( E \) to a formula with polarity \( p \in \{0,1\} \), are defined inductively as follows:
- If \( A \) is an atom, then \( A \) is an expansion tree with top node \( A \) and \( \text{Sh}(A,p) = A \) for any choice of \( p \).
- If \( E_0 \) is an expansion tree, then \( E = \neg E_0 \) is an expansion tree with \( \text{Sh}(E,\overline{p}) = \neg \text{Sh}(E_0,p) \).
- If \( E_1 \) and \( E_2 \) are expansion trees and \( \circ \in \{\land,\lor\} \), then \( E = E_1 \circ E_2 \) is an expansion tree with \( \text{Sh}(E,p) = \text{Sh}(E_1,p) \circ \text{Sh}(E_2,p) \).
- If \( E_1 \) and \( E_2 \) are expansion trees, then \( E = E_1 \rightarrow E_2 \) is an expansion tree with \( \text{Sh}(E,p) = \text{Sh}(E_1,\overline{p}) \rightarrow \text{Sh}(E_2,p) \).
• If \( \{t_1, \ldots, t_n\} \) is a set of terms and \( E_1, \ldots, E_n \) are expansion trees with \( \text{Sh}(E_i, p) = A[x/t_i] \), then \( E = \Lambda x. A +^{t_1} E_1 \cdots +^{t_n} E_n \) (denoting a node with \( n \) children) is an expansion tree with \( \text{Sh}(E, 0) = \forall x. A \) and \( \text{Sh}(E, 1) = \exists x. A \).
• If \( E_0 \) is an expansion tree with \( \text{Sh}(E_0, p) = A[x/\alpha] \) for an eigenvariable \( \alpha \), then \( E = \Pi x. A +^{\alpha} E_0 \) is an expansion tree with \( \text{Sh}(E, 0) = \exists x. A \) and \( \text{Sh}(E, 1) = \forall x. A \).
Expansion trees can be mapped to a quantifier free formula via the \textit{deep} function, which we also redefine taking the polarities into account.
\textbf{Definition 5.} We define the function \( \text{Dp}(\cdot, p) \) (for deep), \( p \in \{0, 1\} \), that maps an expansion tree to a quantifier free formula of polarity \( p \) as:
\begin{itemize}
\item \( \text{Dp}(A, p) = A \) for an atom \( A \).
\item \( \text{Dp}(\neg A, p) = \neg \text{Dp}(A, \bar{p}) \)
\item \( \text{Dp}(A \circ B, p) = \text{Dp}(A, p) \circ \text{Dp}(B, p) \) for \( \circ \in \{\land, \lor\} \)
\item \( \text{Dp}(A \rightarrow B, p) = \text{Dp}(A, \bar{p}) \rightarrow \text{Dp}(B, p) \)
\item \( \text{Dp}(\Lambda x. A +^{t_1} E_1 \cdots +^{t_n} E_n, 0) = \bigwedge_{i=1}^{n} \text{Dp}(E_i, 0) \) and \( \text{Dp}(\Lambda x. A +^{t_1} E_1 \cdots +^{t_n} E_n, 1) = \bigvee_{i=1}^{n} \text{Dp}(E_i, 1) \)
\item \( \text{Dp}(\Pi x. A +^{\alpha} E, p) = \text{Dp}(E, p) \)
\end{itemize}
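As a small illustration (our own example, not taken from the paper), consider the expansion tree of \( \exists x. P(x) \) with positive polarity and instances \( t_1, t_2 \):
\[ E = \Lambda x. P(x) +^{t_1} P(t_1) +^{t_2} P(t_2), \qquad \text{Sh}(E,1) = \exists x. P(x), \qquad \text{Dp}(E,1) = P(t_1) \lor P(t_2). \]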
\textbf{Definition 6 (Expansion sequent).} An expansion sequent \( \varepsilon \) is denoted by \( E_1, \ldots, E_n \vdash F_1, \ldots, F_m \) where \( E_i \) and \( F_i \) are expansion trees. Its deep sequent is the sequent \( \text{Dp}(E_1, 0), \ldots, \text{Dp}(E_n, 0) \vdash \text{Dp}(F_1, 1), \ldots, \text{Dp}(F_m, 1) \) and its shallow sequent is \( \text{Sh}(E_1, 0), \ldots, \text{Sh}(E_n, 0) \vdash \text{Sh}(F_1, 1), \ldots, \text{Sh}(F_m, 1) \).
An expansion sequent may or may not represent a proof. To decide whether this is the case, we need to reason on the dependency relation in the sequent.
\textbf{Definition 7 (Domination).} A term \( t \) is said to dominate a node \( N \) in an expansion tree if it labels a parent node of \( N \).
\textbf{Definition 8 (Dependency relation).} Let \( \varepsilon \) be an expansion sequent and let \( <^0_\varepsilon \) be the binary relation on the occurrences of terms in \( \varepsilon \) defined as: \( t <^0_\varepsilon s \) if there is an \( x \) free in \( s \) that is an eigenvariable of a node dominated by \( t \). Then \( <_\varepsilon \), the transitive closure of \( <^0_\varepsilon \), is called the dependency relation of \( \varepsilon \).
\textbf{Definition 9 (Expansion proof).} An expansion sequent is considered an expansion proof if its deep sequent is a tautology and the dependency relation is acyclic.
Intuitively, the dependency relation gives an ordering of quantifier inferences in a sequent calculus proof of the shallow sequent of \( \varepsilon \). That is, \( t <_\varepsilon s \) means that the existential quantifiers instantiated with \( t \) must occur lower in the proof than those instantiated with \( s \). Using this relation it is possible to build an LK proof from an expansion proof [8].
\section{Importing}
GAPT is a framework for proof transformations implemented in the programming language Scala. It supports different proof formats, such as LK (with or without equality) for first and higher order logic, Robinson’s resolution calculus [11], the schematic calculus LKS [4] and, more recently, expansion trees. It provides various algorithms for proofs, such as reductive cut-elimination [5], cut-elimination by resolution [2], cut-introduction [6], Skolemization, and translations between the proof formats. GAPT also comes with \texttt{prooftool} [3], an interactive proof visualization tool supporting all these formats.
VeriT and leanCoP are automated theorem provers that produce unsatisfiability (in the shape of a resolution refutation) and connection proofs respectively. Both output the proof objects to a structured text file, having in common the fact that all inferences are listed with the operands and the conclusion. We have implemented parsers (using Scala’s parser combinators) for both formats in GAPT (https://github.com/gapt/gapt). By taking the necessary information of each proof file and processing it accordingly, we can build expansion proofs. We explain the kind of processing needed for each format in Sections 3.1 and 3.2.
The expansion tree of a formula with associated substitutions to its bound variables can be defined as follows:
**Definition 10.** Let $F$ be a formula in which all bound variables have pairwise distinct names, $\Sigma$ a set of substitutions for these variables and $p \in \{0, 1\}$ a polarity. Assume that each strong quantifier in $F$ is bound to exactly one term in $\Sigma$. We define the function $\text{ET}(F, \Sigma, p)$ that translates a formula to an expansion tree as follows:
- $\text{ET}(A, \Sigma, p) = A$, where $A$ is an atom.
- $\text{ET}(\neg A, \Sigma, p) = \neg \text{ET}(A, \Sigma, \overline{p})$.
- $\text{ET}(A \circ B, \Sigma, p) = \text{ET}(A, \Sigma, p) \circ \text{ET}(B, \Sigma, p)$, for $\circ \in \{\land, \lor\}$.
- $\text{ET}(A \rightarrow B, \Sigma, p) = \text{ET}(A, \Sigma, \overline{p}) \rightarrow \text{ET}(B, \Sigma, p)$.
- $\text{ET}(\forall x . A, \Sigma, 0) = \Lambda x . A +^{t_1} \text{ET}(A\sigma_1, \Sigma, 0) \ldots +^{t_n} \text{ET}(A\sigma_n, \Sigma, 0)$, where $\sigma_i$ is the substitution in $\Sigma$ mapping $x$ to $t_i$ ($n$ is the number of times the weak quantifier was instantiated).
- $\text{ET}(\forall x . A, \Sigma, 1) = \Pi x . A +^{\alpha} \text{ET}(A\sigma', \Sigma, 1)$, where $\sigma'$ is the substitution in $\Sigma$ mapping $x$ to $\alpha$.
- $\text{ET}(\exists x . A, \Sigma, 0) = \Pi x . A +^{\alpha} \text{ET}(A\sigma', \Sigma, 0)$, where $\sigma'$ is the substitution in $\Sigma$ mapping $x$ to $\alpha$.
- $\text{ET}(\exists x . A, \Sigma, 1) = \Lambda x . A +^{t_1} \text{ET}(A\sigma_1, \Sigma, 1) \ldots +^{t_n} \text{ET}(A\sigma_n, \Sigma, 1)$, where $\sigma_i$ is the substitution in $\Sigma$ mapping $x$ to $t_i$ ($n$ is the number of times the weak quantifier was instantiated).
Note that the term $\alpha$ used for the strong quantifiers is determined by the substitution set $\Sigma$. If the eigenvariable condition is not satisfied in these substitutions, then the resulting expansion tree will not be a proof of the formula.
Using the $\text{ET}(F, \sigma, p)$ transformation, it is also possible to define the expansion sequent $\varepsilon$ from a sequent $S$.
**Definition 11.** Let $S : A_1, \ldots, A_n \vdash B_1, \ldots, B_m$ be a sequent with pairwise distinct bound variables and $\sigma$ a set of substitutions for those variables such that each strongly quantified variable is bound to exactly one term. Then we define $\text{ET}(S, \sigma)$ as the expansion sequent $\text{ET}(A_1, \sigma, 0), \ldots, \text{ET}(A_n, \sigma, 0) \vdash \text{ET}(B_1, \sigma, 1), \ldots, \text{ET}(B_m, \sigma, 1)$.
Definitions 10 and 11 show how to build an expansion sequent from a sequent and a set of substitutions. The requirement of pairwise distinct variables can easily be satisfied by a variable renaming. The second requirement, that each variable of a strong quantifier is bound only once, might not be true for arbitrary proofs. Fortunately, it holds for the proofs we are dealing with, either because the input problem contains no strong quantifiers, or because the end-sequent is skolemized. In the second case, it is possible to deduce unique eigenvariables for each strong quantifier and obtain the expansion tree of the un-skolemized formula.
**Lemma 1.** $\text{Sh}(\text{ET}(F, \sigma, p), p) = F$
**Proof.** Follows from the definition of $\text{ET}(F, \sigma, p)$ and $\text{Sh}(E, p)$.
**Theorem 1.** A sequent $S$ with substitutions $\sigma$, such that each strongly quantified variable in $S$ is bound exactly once, is valid if the expansion sequent $\text{ET}(S, \sigma)$ is an expansion proof.
Proof. By the soundness and completeness of expansion sequents [8], we know that an expansion sequent \( \varepsilon \) is an expansion proof iff its shallow sequent is valid. From Lemma 1 we have that the shallow sequent of \( \text{ET}(S, \sigma) \) is \( S \). Therefore, \( S \) is valid iff \( \text{ET}(S, \sigma) \) is an expansion proof.
This theorem provides a “sanity-check” for the expansion sequents extracted from proof objects. If it is an expansion proof, we know that, at least, the end-sequent with the given substitutions is a tautology. Note that this does not provide a check for the proof, as it is not validating each inference applied, but only if the claimed instantiations can actually lead to a proof.
3.1 SMT proofs
SMT (Satisfiability Modulo Theory) is a decision procedure for first-order formulas with respect to a background theory. It can be seen as a generalization of SAT problems. VeriT is an open-source SMT-solver which is complete for quantifier-free formulas with uninterpreted functions and difference logic on reals and integers. For this work we have used the proof objects produced by VeriT on the QF_UF (quantifier-free formulas with uninterpreted function symbols) problems of the SMT-LIB [3]. The background theory in this case was the equality theory composed by the axioms (symmetry and reflexivity are implicit):
\[
\forall x_0 \ldots \forall x_n. (x_0 = x_1 \land \ldots \land x_{n-1} = x_n \rightarrow x_0 = x_n)
\]
\[
\forall x_0 \ldots \forall x_n \forall y_0 \ldots \forall y_n. (x_0 = y_0 \land \ldots \land x_n = y_n \rightarrow f(x_0, \ldots, x_n) = f(y_0, \ldots, y_n))
\]
\[
\forall x_0 \ldots \forall x_n \forall y_0 \ldots \forall y_n. (x_0 = y_0 \land \ldots \land x_n = y_n \land p(x_0, \ldots, x_n) \rightarrow p(y_0, \ldots, y_n))
\]
The proofs generated are composed of CNF transformations and a resolution refutation, whose leaves are either one of the quantifier-free formulas from the input problem or an instance of an equality axiom. The proof object consists of a comprehensive list of labelled clauses used in the resolution proof and their origin. They are either an input clause, without ancestors, or the result of an inference rule on other clauses, which is specified via the labels. VeriT’s proof is purely propositional and no substitutions are involved, since the axioms are quantifier-free and contain no free-variables.
The input problem is propositional, therefore the only substitutions needed were the ones instantiating the (weak) quantifiers of the equality axioms. These are found by collecting the ground instances of these axioms occurring on the leaves of the resolution proof and using a first-order matching algorithm. By matching the instances with the appropriate axiom (without the quantifiers), we can obtain the substitutions for the quantified variables. Given those substitutions and the quantified axioms, we can build the expansion trees. It is worth noting that the quantified equality axioms (i.e., transitivity, symmetry, reflexivity, etc.) are built internally in GAPT, since these are not part of the proof object. Also, the reflexivity instances needed are computed separately, since these are implicit in veriT. The expansion tree of the (propositional) input formula can be built with an empty set of substitutions. Since these are unsatisfiability proofs, all expansion trees will be on the left side of the expansion sequent.
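For instance (an illustrative example of the matching step, not taken from the paper), if a leaf of the refutation is the ground instance \( a = b \land b = c \rightarrow a = c \), matching it against the transitivity axiom above (with \( n = 2 \)) yields the substitution \( \{x_0 \mapsto a, x_1 \mapsto b, x_2 \mapsto c\} \), and these terms become the labels of the corresponding weak quantifiers in the axiom’s expansion tree.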
3.2 Connection proofs
Connection calculi are a family of formalisms for deciding first-order classical formulas which consist in connecting unifiable literals of opposite polarities from the input. Proof search in these calculi is characterized as goal-oriented and, in general, non-confluent. LeanCoP is a connection-based theorem prover that implements a series of techniques for reducing the search space and making proof search feasible.
---
[4] Observe that we do not need any information from the inference steps.
Although its strategy is incomplete, it achieves very good performance in practice. For this work, leanCoP 2.2 was used. It can be obtained from the CASC24 competition website or, alternatively, executed online at SystemOnTPTP.
Given an input problem (a set of axioms and conjectures in the language of first-order logic), leanCoP will negate the axioms, skolemize the formulas and translate them into a disjunctive normal form (DNF). It works with a positive representation of the problem and uses a special DNF transformation that is more suitable for connection proof search. The prover also adds equality axioms when necessary. LeanCoP is able to produce proof objects in four different formats. For this work, we have used leantptp, which is closer to the TPTP (thousands of problems for theorem provers) specification. The output file is divided in three parts: (1) input formulas; (2) clauses generated from the DNF transformation of the input and equality axioms; and (3) proof description. Each part is described using a set of predicates with the relevant information.
In part (1), the formulas from the input file are listed and named. Their variables are renamed such that they are pairwise distinct. Moreover, formulas are annotated with respect to their role, e.g., axiom or conjecture. Part (2) contains the clauses, in the form of a list of literals, that resulted from the disjunctive normal form transformation. This can either be the regular naive DNF translation or a definitional clausal form transformation, which assigns new predicates to some formulas. Each clause is numbered and associated with the name of the formula that generated it. Equality axioms are labelled with a special keyword, since they do not come from any transformation on the input formulas. The proof per se is in part (3), where each line is an inference rule. It contains the number of the clause to which the inference was applied, the bindings used (if any) and the resulting clause.
For building the expansion trees of the input formulas we need the substitutions used in the proof and the Skolem terms introduced during Skolemization. The substitutions will be the terms of the expansion tree’s weak quantifiers and the Skolem terms, translated to variables, will be the expansion tree’s strong quantifier terms. In the leanCoP proofs, Skolem terms have a specific syntax, so they can be identified and parsed as “Eigenvariables”. We use this approach to get an expansion proof of the original problem, instead of the skolemized problem. Since each strong quantifier is replaced by exactly one Skolem term, the condition on the set of substitutions (each strongly quantified variable bound to exactly one term) is satisfied.
The collection of terms used for the weak quantifiers is a bit more involved due to variable renaming. The quantified variables in the input formula are renamed during the clausal normal form transformation. This means that the sets of variables occurring in the original problem and in the clauses are disjoint. The substitutions used in the proof are given with respect to the clauses’ variables, but we are interested in building expansion trees of the input formulas. We need therefore to find a way to map the variables in the clauses to the variables in the input formulas.
The solution found was to implement in GAPT the definitional clausal form transformation, trying to remain as faithful as possible to the one leanCoP uses, but without the variable renaming. After applying our transformation to the input formulas, we try to match the clauses obtained to the clauses from the proof object. The first-order matching algorithm returns a substitution if a match is found. Such substitution maps strongly quantified variables to “Eigenvariables” (the result of parsing Skolem terms), and weakly quantified variables to their renamed versions used in the clauses. By composing this substitution with the ones obtained from the bindings in the proof, we are able to correctly identify the terms used for each quantified variable in the input formulas.
http://pages.cs.miami.edu/~tptp/CASC/24/Systems.tgz
http://pages.cs.miami.edu/~tptp/cgi-bin/SystemOnTPTP
4 Results
We were able to import as expansion trees all the 142 proof objects provided to us by the veriT team, and all but one of them in under one minute. The expansion sequents generated have been used as input for the cut-introduction algorithm [6] and some of their features (e.g. high number of instances) have motivated improvements to the algorithm. As for leanCoP, our database consists of 3043 proofs of problems from the TPTP library [12]. Of those, we can successfully import 1224 as expansion sequents. Some errors still occur while parsing and matching (e.g. our generated clauses do not have the same literal ordering as the clauses in the proof file), but we are working to increase the success rate.
Getting proofs from various theorem provers in the shape of expansion sequents allows us to do a number of interesting things. First of all, one can visualize the end-sequent and the instances used of each quantified formula. This is much more comfortable and easier to grasp than a raw text file. It is also possible to check whether the instances used lead indeed to a proof of the end-sequent. This is reduced to checking if the deep sequent of the expansion sequent is a tautology (which can be done, as this sequent is propositional) and if the dependency relation is acyclic. In case the expansion sequent is a proof, we can build an LK proof from it, using the dependency relation to decide the order in which quantifiers are introduced [8]. Finally, one can attempt proof compression and discovery of lemmas using the cut-introduction algorithm [6].
All of these functionalities are implemented in GAPT. The system comes with an interactive command line where commands for loading proofs, opening prooftool, introducing cuts, eliminating cuts, building an LK proof from an expansion sequent, among others, can be issued. Some examples of proofs imported and their visualizations can be found at https://www.logic.at/staff/giselle/examples.pdf.
5 Related Work
Other projects and tools also address the issues of proof visualization and checking. For proofs in the TPTP language in particular, there is IDV [13], which provides an interactive interface for manipulating the DAG representing a derivation. This tool focuses solely on visualization of proofs in the TPTP format. Our work aims at a more general framework, of which visualization is only a small part. We are also able to import different proof objects, not only those in the TPTP language.
As for proof checking, [7] proposes a check of leanCoP proofs in HOL Light while [1] shows how to check SAT and SMT proofs using Coq. The former paper involved re-implementing leanCoP’s kernel in HOL Light, which differs a lot from our approach of simply parsing the outputs of theorem provers. In the latter, proofs produced by SAT/SMT theorem provers are certified by Coq. We must clarify that, given the information needed to produce expansion proofs, it is not fair to claim we are checking proof objects; we merely have a sanity check that the instances used by the theorem prover actually lead to a proof of the proposed theorem. Such a compromise makes sense if we want a framework general enough to deal with different proof objects, without asking for any change on the side of the theorem provers.
Finally, it is worth mentioning ProofCert [9], a research project with the aim of developing a theoretical framework for proof representation. In order not to make such a compromise, and actually check each step of each proof for various different proof objects, a solid foundation of proof specification needs to be developed. Until this happens, this work shows how it is still possible to combine existing proof objects into one representation.
6 Conclusion
We have shown how SMT and Connection proofs can be both imported as expansion sequents. The information needed from the proof objects is just the end-sequent being proven and a set of instances used for the quantified formulas. For both cases presented we relied on a first-order matching algorithm, but this requirement can be lifted if all substitutions are provided directly in the proof object.
The representation using expansion sequents serves various purposes. It provides an easy proof visualization, a simple checking procedure, LK proof construction and introduction of cuts.
This is an ongoing work, and we hope to have many developments in the near future. In particular, the difficulties in importing leanCoP proofs remain to be resolved. This procedure also offers a lot of room for optimization. Once we have a big enough set of parsed leanCoP proofs, we will add those to the benchmark used in the cut-introduction algorithm. As for veriT proofs, we plan to test bigger examples, as the ones provided are only a small subset from the SMT-LIB.
Another future goal is importing other formats from other provers and comparing the different proofs for the same input problem. We also aim at integrating into the import function a check of whether the obtained expansion sequent is an expansion proof.
References
Importing SMT and Connection proofs as expansion trees:
examples
Giselle Reis
INRIA-Saclay, France
giselle.reis@inria.fr
This report contains some examples of proofs from the automated theorem provers leanCoP and veriT and shows how they can be imported in GAPT. All files are available in the examples directory of the software.
The versions of the software used were:
- GAPT: master branch as of 30/07/2015
- LeanCoP 2.2
- VeriT 201410
We show here how these proofs are visualized in prooftool, but by typing help in GAPT’s command line, one can see a list of available functions for other purposes.
1 LeanCoP
The following file represents the problem of determining whether there exist two irrational numbers \(x\) and \(y\) such that \(x\) to the power of \(y\) is rational.
```prolog
fof(a, axiom, i(sr2)).
fof(b, axiom, ~i(two)).
fof(c, axiom, times(sr2,sr2) = two).
fof(d, axiom, ![X,Y,Z] : exp(exp(X, Y), Z) = exp(X, times(Y,Z))).
fof(e, axiom, ![X] : exp(X, two) = times(X,X)).
fof(f, conjecture, ?[X,Y] : (~i(exp(X,Y)) & i(X) & i(Y))).
```
LeanCoP’s leantptp proof (with extra line breaks to fit the width) of this problem is:
```prolog
fof(f, conjecture, ?[_13459, _13462] : (~i(exp(_13459, _13462)) & i(_13459) & i(_13462)), file('samples/irrationals.p', f)).
fof(a, axiom, i(sr2), file('samples/irrationals.p', a)).
fof(b, axiom, ~i(two), file('samples/irrationals.p', b)).
fof(c, axiom, times(sr2, sr2) = two, file('samples/irrationals.p', c)).
fof(d, axiom, ![_13784, _13787, _13790] : exp(exp(_13784, _13787), _13790) = exp(_13784, times(_13787, _13790)), file('samples/irrationals.p', d)).
fof(e, axiom, ![_13973] : exp(_13973, two) = times(_13973, _13973), file('samples/irrationals.p', e)).
```
1 If you are using the system after this date, all functionality described here should work. If this is not the case, please file a bug report.
One can load it in GAPT using the command:
gapt> val es = loadLeanCoPProof("examples/import/irrationals.leancopt")
Running prooftool on this object (prooftool(es.get)) will open a window with the visualization of the expansion proof, as shown in Figure 1. Note that the succedent is already expanded (clicking on a quantified formula will expand it to the instances used) and we can see the two pairs used: \((\sqrt{2}, \sqrt{2})\) and \((\sqrt{2}^{\sqrt{2}}, \sqrt{2})\).
Figure 1: Visualization of the expansion tree for the proof of irrational numbers.
2 VeriT
The following is a simple proof which needs the equality axiom of congruence on predicates:
```
(set-logic QF_UF)
(set-info :smt-lib-version 2.0)
(declare-sort U 0)
(declare-fun f (U U) U)
(declare-fun a () U)
(declare-fun b () U)
(declare-fun p (U) Bool)
(assert (p a))
(assert (and (= (f a b) (f (f a b) b))
(= (p (f (f a b) b)) (p a))))
(assert (not (p (f a b))))
(check-sat)
(exit)
```
Running veriT on this problem with the option --proof-version=1 generates the proof object
(with extra line breaks):
```
veriT 201410 - the SMT-solver veriT (UFRN/LORIA).
success
success
success
success
success
success
success
success
success
unsat
success
(set .c1 (input :conclusion ((p a))))
(set .c2 (input :conclusion ((and (= (f a b) (f (f a b) b)) (= (p (f (f a b) b)) (p a))))))
(set .c3 (input :conclusion ((not (p (f a b))))))
(set .c4 (and :clauses (.c2) :conclusion ((= (f a b) (f (f a b) b))))
(set .c5 (and :clauses (.c2) :conclusion ((= (p (f (f a b) b)) (p a))))))
(set .c6 (equiv1 :clauses (.c5) :conclusion ((not (p (f (f a b) b)))) (p a))))
(set .c7 (equiv2 :clauses (.c5) :conclusion ((p (f (f a b) b)) (not (p a))))))
(set .c8 (resolution :clauses (.c7 .c1) :conclusion ((p (f (f a b) b)))))
(set .c9 (eq_congruent_pred :conclusion ((not (= (f a b) (f (f a b) b))))
(not (p (f (f a b) b)))) (p (f a b))))
(set .c10 (resolution :clauses (.c9 .c4 .c8 .c3) :conclusion ()))
```
Analogous to the leanCoP case, we can load the proof in GAPT and open the corresponding expansion
proof in prooftool:
gapt> val p = loadVeriTProof("examples/import/predcong.verit.s")
p: Option[at.logic.gapt.proofs.expansionTrees.ExpansionSequent] = ...
gapt> prooftool(p.get)
The result is shown in Figure 2. In this case, the instance of the predicate congruence axiom that was used is expanded.
Acknowledgments The author would like to thank Pascal Fontaine and Jens Otten for clarifications about the tools used and fruitful discussions; Pascal Fontaine and Geoff Sutcliffe for providing the dataset of proofs; Sonia Marin for comments on an early draft; and the reviewers for very useful remarks and for taking the time to try the system.
|
{"Source-Url": "http://www.gisellereis.com/papers/smt-conn-exp-trees.pdf", "len_cl100k_base": 8735, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 38571, "total-output-tokens": 10705, "length": "2e13", "weborganizer": {"__label__adult": 0.0004367828369140625, "__label__art_design": 0.0005812644958496094, "__label__crime_law": 0.0007224082946777344, "__label__education_jobs": 0.001430511474609375, "__label__entertainment": 0.00016629695892333984, "__label__fashion_beauty": 0.0002262592315673828, "__label__finance_business": 0.0004346370697021485, "__label__food_dining": 0.0007352828979492188, "__label__games": 0.001491546630859375, "__label__hardware": 0.0010309219360351562, "__label__health": 0.000911712646484375, "__label__history": 0.00044345855712890625, "__label__home_hobbies": 0.0001806020736694336, "__label__industrial": 0.0010023117065429688, "__label__literature": 0.0006337165832519531, "__label__politics": 0.0005574226379394531, "__label__religion": 0.000843048095703125, "__label__science_tech": 0.264892578125, "__label__social_life": 0.00015866756439208984, "__label__software": 0.01273345947265625, "__label__software_dev": 0.708984375, "__label__sports_fitness": 0.00044155120849609375, "__label__transportation": 0.0008592605590820312, "__label__travel": 0.00023925304412841797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36070, 0.03008]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36070, 0.63321]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36070, 0.85278]], "google_gemma-3-12b-it_contains_pii": [[0, 3595, false], [3595, 7619, null], [7619, 11756, null], [11756, 15714, null], [15714, 19763, null], [19763, 23913, null], [23913, 27645, null], [27645, 31434, null], [31434, 33303, null], [33303, 34038, null], [34038, 35618, null], [35618, 36070, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3595, true], [3595, 7619, null], [7619, 11756, null], [11756, 15714, null], [15714, 19763, null], [19763, 23913, null], [23913, 27645, null], [27645, 31434, null], [31434, 33303, null], [33303, 34038, null], [34038, 35618, null], [35618, 36070, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36070, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36070, null]], "pdf_page_numbers": [[0, 3595, 1], [3595, 7619, 2], [7619, 11756, 3], [11756, 15714, 4], [15714, 19763, 5], [19763, 23913, 6], [23913, 27645, 7], [27645, 31434, 8], [31434, 33303, 9], [33303, 34038, 10], [34038, 35618, 11], [35618, 36070, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36070, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
60fc5ed5e1b1abbbe04c10548db423198ff02b7b
|
Constraint based implementation of a PDDL-like language with static causal laws and time fluents
Agostino Dovier and Jacopo Mauro
1 Dipartimento di Matematica e Informatica, Università di Udine
dovier@dimi.uniud.it
2 Dipartimento di Scienze dell’Informazione, Università di Bologna
jmauro@cs.unibo.it
Abstract. Planning Domain Definition Language (PDDL) is the most widely used language to encode and solve planning problems. In this paper we propose two PDDL-like languages that extend PDDL with new constructs such as static causal laws and time fluents, with the aim of improving the expressivity of the PDDL language. We study the complexity of the main computational problems related to the planning problem in the new languages. Finally, we implement a planning solver using constraint programming in GECODE that outperforms the existing solvers for similar languages.
1 Introduction
In the context of knowledge representation and reasoning, a very important application of logic programming within artificial intelligence is that of developing languages and tools for reasoning about actions and change and, more specifically, for the problem of planning [2]. The proposals on representing and reasoning about actions and change have relied on the use of concise and high-level languages, commonly referred to as action description languages. Some well-known examples include the languages $A$ and $B$ [11] and extensions like $K$ [7]. Action languages allow one to write propositions that describe the effects of actions on states, and to create queries to infer properties of the underlying transition system. An action description is a specification of a planning problem using the action language.
Since 1998 a declarative language for planning has been defined, both to establish a common syntax and to allow different research groups to participate in the planning competitions. This language is known as PDDL and its last release is 3.1 (see [16, 10, 12] for information on planning competitions and PDDL).
The goal of this work is to build two languages on top of PDDL, called $APDDL$ and $BPDDL$, allowing new constructs, and then to explore the relevance of constraint solving for handling them. The main ideas of the constraint encoding come from [6]. However, here we employed the C++ constraint solver platform GECODE [1], which is faster than the constraint solver of SICStus Prolog used in [6], and we solve the frame problem more precisely. Moreover, we define a front-end from the PDDL-like languages to GECODE. We always outperform the running time of [6]; in some cases the improvements are really significant.
The presentation is organized as follows. In Section 2 we introduce the language APDDL and in Section 3 we formally define its semantics. In Section 4 we define the language BPDDL. In Section 5 we report the complexities of some interesting problems related to planning in these languages. The solver implementation and the tests are then discussed in Sections 6 and 7. Some proofs are reported in the Appendix.
2 The language APDDL
APDDL is an extension of the well-known language PDDL and every APDDL program consists of two parts: the domain definition used to model the planning problem, and the instance definition used to define the instance of the problem to solve. We need to define a set \( \mathcal{F} \) of fluent names. Each \( f \in \mathcal{F} \) is assigned a domain \( \text{dom}(f) \). We also need to define a set \( \mathcal{A} \) of action names. Each action \( a \) is associated with a precondition \( \text{pre}(a) \) and an effect \( \text{eff}(a) \), both expressed as Boolean combinations of arithmetic constraints on fluents (see Table 1 for a simplified syntax\(^3\)).
C           ::= 0 | 1 | (not C) | (and C+) | (or C+) | (AOP AC AC)
AC          ::= n | TIME_FLUENT | (OP AC AC)
AOP         ::= > | ≥ | < | ≤ | = | ≠
OP          ::= + | - | * | / | mod | rem
TIME_FLUENT ::= f | (at n f)

Table 1. Abstract syntax of constraints (C), where \( n \in \mathbb{Z} \) and \( f \in \mathcal{F} \)
The concrete syntax of the language is described by an EBNF grammar available at [15]. We just give here a taste of the syntax using a simple example: the famous Sam Loyd's \( n \)-puzzle, which consists of a frame of numbered square tiles in random order with one tile missing. The tiles should be arranged in increasing order, with the hole in the bottom right corner. Types can be used to differentiate objects. In this example we can define a type for the position of a tile, one for the direction of the move to do, and one for the numbers on the tiles. This can be done in the following way:
(:types positions directions tiles_numbers)
\(^3\) where eqv, imp, xor, mod, rem are respectively the equivalence, implication, exclusive or, modulo and remainder operators.
We can associate names (also known, with a slight abuse of terminology, as constants) with constant and function symbols (of arity 0 and greater than 0, respectively). For example, we can define the function symbol `near` with arity 2, used to determine which position a tile moved from a position `pos` should occupy according to a particular direction `dir`.
(near ?pos - positions ?dir - directions) - positions
In the example we can consider a missing tile as a tile numbered 0. We thus define the constant `empty_tile_number` for representing it.
`empty_tile_number - tiles_numbers`
We use these preliminary definitions to encode the relevant properties of the objects that we want to consider. These properties are the *fluents* and they are represented by (multivalued) functions. Boolean functions are called predicates. In our example we are interested in knowing the number in a particular position. This multi-valued fluent can be defined in the following way:
(has ?position - positions) - tiles_numbers
The last ingredient of the domain definition is the definition of the possible effects of the actions given their preconditions. In this example there is only one action: moving a tile. This action can be defined as follows\(^4\):
(:action move
 :parameters (?from ?to - positions)
 :precondition (and
   (exists ?direction - directions
     (== (near ?from ?direction) ?to))
   (== (has ?to) empty_tile_number))
 :effect (and
   (== (has ?from) empty_tile_number)
   (== (has ?to) (at -1 (has ?from)))))
Let us suppose that we want to solve this problem on a $3 \times 3$ board where the missing tile is at the bottom right corner. This can be encoded in the problem definition. First, all the objects involved are listed. In this case we have the following three types of objects\(^5\):
(:objects
  (set 1 9) - positions
  Left Right Up Down - directions
  (set 0 8) - tiles_numbers
)
---
\(^4\) The term \((at \ -1 \ (has \ ?from))\) is a time fluent and it will be described later.
\(^5\) set is an APDDL operator for defining set of integers in a concise way.
Second, all the constants should be instantiated.
(:constants
(== empty_tile_number 0)
(== (near 1 Right) 2) (== (near 1 Down) 4)
(== (near 2 Right) 3) ...
)
The problem definition includes also the definition of the values of the fluents in the initial state and the goal. In the problem we are considering this can be done in the following way:
(:init (== (has 1) 2) (== (has 2) 5) ... )
(:goal (== (has 1) 1) (== (has 2) 2) ... )
We impose that the fluents in the initial state are completely defined.
If the goal of the problem is to obtain an optimal plan it is possible to define a cost function. This function, referred to as the metric, has the following syntax:
\[
M ::= (:metric MOP MC) \\
MOP ::= minimize | maximize \\
MC ::= AC | (is_violated C) | (OP MC MC)
\]
where AC and C are defined as in Table 1. For example, suppose that a state has cost 0 if the number 1 is in the first row and 1 otherwise. If we want to minimize the cost, the metric can be defined in the following way:
(:metric minimize
(is_violated (or (== (has 1) 1) (== (has 2) 1) (== (has 3) 1))))
Finally, at the end of the problem definition, it is necessary to specify the length of the plan we want to obtain. This can be done using the length primitive:
(:length 18)
The language APDDL has a few more features, like the possibility of introducing a metric function to maximize or minimize, and additional constraints called plan constraints. For example, in the n-puzzle problem it is possible to state that the tile with the number 1 should be in the last row at least once:
(:constraints (sometimes (or (== (has 7) 1)
(== (has 8) 1)
(== (has 9) 1))))
The main differences between the PDDL language and APDDL are the possibility of using more operators in action preconditions and effects (division, remainder, exclusive or, ...) and the notion of time fluent.
A time fluent is an expression of the form (at i f) where i ∈ Z is an integer and f is a fluent. This construct is used in actions to refer to the value of a fluent f at time instant i. If i = 0 then (at 0 f) (or, in short, f) is called a present fluent because it refers to the value of the fluent \( f \) in the current state. If \( i < 0 \) (resp. \( i > 0 \)) the term \((at \ i \ f)\) is instead called a past fluent (resp. a future fluent). Let us observe that if a time fluent is used in an action precondition it refers to the state in which the action is executed. If it is used in the effect it refers to the state produced by the execution of the action.
An example of the use of time fluents is the following action, which decreases the number of objects in a barrel in the next two states if, during each of the last two state transitions, at least one object was added to the barrel.
(:action empty
 :parameters (?barrel - barrel)
 :precondition (and
   (> (contains ?barrel) (at -1 (contains ?barrel)))
   (> (at -1 (contains ?barrel)) (at -2 (contains ?barrel))))
 :effect (and
   (== (contains ?barrel) (- (at -1 (contains ?barrel)) 1))
   (== (at 1 (contains ?barrel)) (- (contains ?barrel) 1))))
In goal constraints, plan constraints and metrics it is not possible to use past or present fluents, while in the init constraint it is not possible to use past fluents.
3 APDDL Semantics
Given an APDDL program \( P \) it is possible to obtain an equivalent ground instance \( \text{ground}(P) \) by grounding all variables with all constants satisfying the types. In \( \text{ground}(P) \), action preconditions and effects, goal conditions and all the information on the initial state are (Boolean combinations of) finite-domain constraints on time fluents.
A state is characterized by the values of all the fluents involved. We will use the term \( \text{val}(s, f) \) to denote the value of the fluent \( f \) in the state \( s \). Any action \( a \) is characterized by its preconditions \( \text{pre}(a) \) and its effects \( \text{eff}(a) \). We allow parallel executions of different actions \( a_1 \) and \( a_2 \) provided their effects are independent. We impose a strong syntactic requirement: the sets of time fluents occurring in \( \text{eff}(a_1) \) and \( \text{eff}(a_2) \) must be disjoint.
Let us consider a sequence of states \( s_0, s_1, \ldots, s_n \) and a constraint \( c \). With \( \text{shift}_i(c) \) we denote the constraint obtained by replacing each time fluent \((at \ t \ f)\) with the value \( \text{val}(s_{t+i}, f) \). If \( t + i < 0 \) or \( t + i > n \) then the value is \( \perp \) (undefined). Let us observe that \( c' = \text{shift}_i(c) \) is a Boolean combination of ground arithmetic constraints (or \( \perp \)). If \( \perp \) occurs in it, then its value is \text{false}. Otherwise, its value is determined by the usual semantics of arithmetic and Boolean operators on ground formulas. If the value of \( c' \) is \text{true}, we say that \( s_0, s_1, \ldots, s_n \models c' \); otherwise \( s_0, s_1, \ldots, s_n \not\models c' \).
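As a concrete illustration (our example, not from the original text): for the constraint \( c = ((at\ {-1}\ f) < f) \) over a sequence \( s_0, s_1, s_2 \) we have
\[
\text{shift}_2(c) = \big(\text{val}(s_1, f) < \text{val}(s_2, f)\big), \qquad
\text{shift}_0(c) = \big(\perp < \text{val}(s_0, f)\big) = \text{false},
\]
since in the second case the past reference falls before \( s_0 \).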
Let \( G \) be the set of goal constraints and \( I \) be the set of initial constraints. Then a plan of length \( n \) \((n \geq 0)\) is a sequence \( s_0, A_1, s_1, \ldots, A_n, s_n \) where
1. \( s_0, \ldots, s_n \) are states
2. $A_1, \ldots, A_n$ are (possibly empty) sets of actions
3. $\forall i \in \{1, \ldots, n\} \forall a \in A_i. s_0, \ldots, s_n \models \text{shift}_{i-1}(\text{pre}(a))$
4. $\forall i \in \{1, \ldots, n\} \forall a \in A_i. s_0, \ldots, s_n \models \text{shift}_i(\text{eff}(a))$
5. $\forall c \in G. s_0, \ldots, s_n \models \text{shift}_n(c)$
6. $\forall c \in I. s_0, \ldots, s_n \models \text{shift}_0(c)$
7. $\forall i \in \{1, \ldots, n\}\ \forall a_1, a_2 \in A_i$. if $a_1 \neq a_2$ then $\text{eff}(a_1)$ and $\text{eff}(a_2)$ do not share future or present fluents
8. if no action executed refers to a fluent $f$ in $s_i$ ($i > 0$) then $\text{val}(s_i, f) = \text{val}(s_{i-1}, f)$ (inertia condition)
When further plan constraints are used, the plan definition must entail more constraints. For instance, if the constraint (sometimes $c$) is added, then there must be $i$ such that $s_0, \ldots, s_n \models \text{shift}_i(c)$.
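To make the definition above concrete, the following minimal Python sketch (ours, not the authors' GECODE implementation) checks conditions 3, 4 and 8 for a ground plan; goal, initial and plan constraints, as well as fluent domains, are omitted, and time fluents are handled through the state index passed to each precondition and effect.

```python
# Minimal sketch: checking conditions 3, 4 and 8 of the plan definition.
# A state is a dict fluent -> value; an action has pre/eff given as callables
# taking (states, i), where i is the index of the state they refer to.

def val(states, i, f):
    """Value of fluent f in state i, or None (undefined) outside the plan."""
    return states[i].get(f) if 0 <= i < len(states) else None

def check_plan(states, action_sets, actions, affected):
    """actions[a] = (pre, eff); affected[a] = fluents that a may change."""
    n = len(action_sets)                      # transitions s_{i-1} -> s_i
    for i in range(1, n + 1):
        touched = set()
        for a in action_sets[i - 1]:
            pre, eff = actions[a]
            if not pre(states, i - 1):        # condition 3: precondition in s_{i-1}
                return False
            if not eff(states, i):            # condition 4: effect in s_i
                return False
            touched |= affected[a]
        for f in states[i]:                   # condition 8: inertia
            if f not in touched and states[i][f] != states[i - 1][f]:
                return False
    return True

# Toy domain: one counter fluent "c" and an action "inc" that increments it.
inc = (lambda st, i: True,
       lambda st, i: val(st, i, "c") == val(st, i - 1, "c") + 1)
states = [{"c": 0}, {"c": 1}, {"c": 1}]
print(check_plan(states, [{"inc"}, set()], {"inc": inc}, {"inc": {"c"}}))  # True
```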
4 BPDDL
Starting from the language APDDL, we add the possibility of using a construct like the static causal laws (briefly denoted here as rules) introduced in the language $B$ [11], obtaining the new language BPDDL. A rule has a precondition and an effect similar to action preconditions and effects, but without future fluents. Informally, the semantics of a rule is that in every state of the plan, if the precondition is true then the effect must also be true.
Rules are more powerful than PDDL axioms [18]. As a matter of fact, unlike axioms, rules do not require predicates to be defined using only stratified programs (a strong constraint for knowledge representation); moreover, they are allowed to change the values of fluents that can also be used in action effects.
The possibility of using rules increases the expressiveness of the language. For instance, it is possible to change a fluent value after a transition even if no action has been executed. A simple example is the implementation of a clock using just a fluent and a rule:
(:rule
 :parameters (?time - time)
 :effect (== ?time (+ (at -1 time) 1)))
Another interesting use of rules is the propagation of an action effect. Consider, for example, a colored directed graph where we want all the nodes connected by edges in a set $E_{1}$ to have the same color. This property can be encoded in the following way:
(:rule
 :parameters (?edge - edge)
 :precondition (is_edge_in_E_1 ?edge)
 :effect (== (node_colour (head ?edge))
             (node_colour (tail ?edge))))
When clear from the context, we will use the abstract notation $c_1 \rightarrow c_2$ for rules.
4.1 BPDDL Semantics and Inertia
Dealing with inertia in the presence of rules is obviously more difficult than in a language that does not allow them. This is particularly true if the implementation of the language is based on the notion of constraint. Two rules stating the implications $p \rightarrow q$ and $q \rightarrow p$ are satisfied either by $p = q = 0$ or by $p = q = 1$. However, an arbitrary change of the values from 0 to 1, or vice versa, cannot be justified simply by these rules.
A first attempted solution is to choose the states with a minimum change of fluents. Unfortunately, this definition cuts off a lot of solutions, as already pointed out in [3].
We instead use a solution based on the following principle: “given some action effects, if something can be left unchanged then it must be left unchanged”.
Let us define with $\text{Act}(s_0, A_1, s_1, \ldots, A_n, s_n)$ the fluents of $s_n$ that can be modified as a direct effect of an action in $\bigcup A_i$. Suppose that $\Delta F(s, s')$ is the set of fluents that have different values between the state $s$ and $s'$. Now, in a plan $s_0, A_1, s_1, \ldots, A_n, s_n$ we say that there is a critical situation between $s_{i-1}$ and $s_i$ if there is a sequence $s_0, A_1, s_1, \ldots, s_{i-1}, A_i, s'$ where the state $s'$:
- entails all the rules
- $\Delta F(s_{i-1}, s') \subseteq \Delta F(s_{i-1}, s_i)$
- $\forall f \in \text{Act}(s_0, A_1, s_1, \ldots, A_i, s_i). \text{val}(s_i, f) = \text{val}(s', f)$
Intuitively, when there is a critical situation there is at least one fluent that could remain the same but has instead been changed by a rule. Therefore, when there is a critical situation in a plan, the above-mentioned principle is violated (a brute-force check of this condition is sketched at the end of this section).
A comment on past references in rules: if a rule refers to the value of a fluent prior to the initial state $s_0$ of a plan, the rule is trivially satisfied.
The semantics of the language BPDDL is similar to the semantics of APDDL with only two further requirements:
- the inertia condition is now the absence of critical conditions between two consecutive states in the plan
- the states must entail the applicable rules
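The following brute-force Python sketch (ours, intended only for small finite domains) spells out the critical-situation test: it looks for an alternative state that agrees with \(s_i\) on the fluents directly affected by actions, entails every rule, and changes strictly fewer fluents with respect to \(s_{i-1}\) (proper inclusion, following the intuition above).

```python
from itertools import product

def delta(s, t):
    """Fluents on which states s and t (dicts fluent -> value) differ."""
    return {f for f in s if s[f] != t[f]}

def has_critical_situation(s_prev, s_cur, rules, act_fluents, domains):
    """Brute-force check for a critical situation between s_prev and s_cur.

    rules       : callables taking (s_prev, s) and returning True/False
    act_fluents : fluents of s_cur that are direct effects of executed actions
    domains     : dict fluent -> iterable of admissible values
    """
    fluents = sorted(s_cur)
    for values in product(*(domains[f] for f in fluents)):
        s_alt = dict(zip(fluents, values))
        if any(s_alt[f] != s_cur[f] for f in act_fluents):
            continue                      # must agree on action-affected fluents
        if not all(rule(s_prev, s_alt) for rule in rules):
            continue                      # must entail every rule
        # proper inclusion: s_alt changes strictly fewer fluents than s_cur
        if delta(s_prev, s_alt) < delta(s_prev, s_cur):
            return True
    return False

# Toy example: rule p -> q; an unjustified flip of both p and q is critical,
# because leaving both fluents unchanged would also satisfy the rule.
rule = lambda prev, s: (not s["p"]) or s["q"]
print(has_critical_situation({"p": 0, "q": 0}, {"p": 1, "q": 1},
                             [rule], act_fluents=set(),
                             domains={"p": [0, 1], "q": [0, 1]}))  # True
```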
5 Complexity
In this section we study the complexity of the main computational problems related to planning expressed within the APDDL and BPDDL languages. In particular, we focus our attention on ground APDDL/BPDDL programs. For APDDL programs some of the problems are equivalent, others are simpler or not meaningful. We assume moreover that no plan constraints are used in the program. Their inclusion would make the proofs more complicated but would not affect the results. We studied the complexity of the following decision problems:
1. has_critical_situation (APDDL: not meaningful; BPDDL: NP-complete)
**input:** a program, a sequence of states and actions \( s_0, A_1, s_1, \ldots, A_n, s_n \) and two consecutive states \( s_i, s_{i+1} \) that entail all the conditions in the plan definition except the inertia
**output:** 1 iff there is a critical situation between \( s_i, s_{i+1} \), otherwise 0
2. validity (APDDL: P; BPDDL: co-NP-complete)
**input:** a program and a sequence of states and actions \( s_0, A_1, s_1, \ldots, A_n, s_n \)
**output:** 1 iff \( s_0, A_1, s_1, \ldots, A_n, s_n \) is a plan, otherwise 0
3. \( k \)-plan (APDDL: NP-complete; BPDDL: \( \Sigma_2^p \)-complete)
**input:** a program
**output:** 1 iff there is a plan of length \( k \) that solves the problem encoded into the BPDDL program, otherwise 0
4. plan (APDDL and BPDDL: PSPACE-complete\(^6\))
**input:** a program
**output:** 1 iff there is a plan that solves the problem encoded into the BPDDL program, otherwise 0
We give here only the main ideas used. The complete proofs of the results are reported in the Appendix.
The proof of NP-hardness of has\_critical\_situation is based on a reduction from a variant of SAT in which all false and all true assignments are forbidden. Let us consider the Boolean formula \( \varphi = (X \lor Z) \land (\neg X \lor \neg Y \lor \neg Z) \) and consider the following program based on three rules with fluents \( f_X, f_Y, f_Z \) (\( \oplus \) stands for exclusive or while \( f_W^{-1} \) for the past fluent (at \(-1 f_W\))):
\[
\begin{align*}
\text{true} & \rightarrow f_X \lor f_Z \lor (f_X \land f_Y \land f_Z) \lor (\neg f_X \land \neg f_Y \land \neg f_Z) \\
\text{true} & \rightarrow \neg f_X \lor \neg f_Y \lor \neg f_Z \lor (f_X \land f_Y \land f_Z) \lor (\neg f_X \land \neg f_Y \land \neg f_Z) \\
\text{true} & \rightarrow (f_X^{-1} \oplus f_X) \lor (f_Y^{-1} \oplus f_Y) \lor (f_Z^{-1} \oplus f_Z)
\end{align*}
\]
Let us consider now two states \( s_0 \) and \( s_1 \), where for every fluent \( f \) in \( \{ f_X, f_Y, f_Z \} \) it holds that \( \text{val}(s_0, f) = 0 \) and \( \text{val}(s_1, f) = 1 \). And, let us analyze the problem: is there a critical situation between \( s_0, s_1 \)? The first two rules are satisfied if all the fluents are true or all are false or if there is an assignment that satisfies the formula \( \varphi \). The last rule, instead, forces at least one fluent to change.
Let us observe that \( \varphi \) is not satisfiable by a trivial assignment\(^7\). One of the possible non-trivial assignments that satisfy \( \varphi \) is \( \{ X/\text{true}, Y/\text{false}, Z/\text{false} \} \). Using this assignment we can define a state \( s' \) such that \( \text{val}(s', f_X) = 1, \text{val}(s', f_Y) = 0, \text{val}(s', f_Z) = 0 \), which satisfies all the rules and whose fluent variations are included in those between \( s_0 \) and \( s_1 \). Therefore there is a critical situation.
\(^6\) APDDL and BPDDL are PSPACE complete if the maximum temporal reference used is polynomially bounded on the length of the program encoding. See details in proofs.
\(^7\) An assignment is trivial if all the variables are assigned to true or all the variables are assigned to false.
As far as the validity problem is concerned, checking whether all the plan conditions except inertia hold can be done in polynomial time (and this is all that is needed in APDDL). Verifying that there are no critical situations in BPDDL can be done in polynomial time using an oracle machine that solves the complement of has_critical_situation (a co-NP problem).
Membership of the $k$-plan problem in $\Sigma_2^p$ derives directly from the NP-completeness of has_critical_situation. To prove the $\Sigma_2^p$-hardness of $k$-plan we reduce to it the problem of finding an answer set of an extended disjunctive logic program (EDLP) [4]. An EDLP program is a set of rules of the form
$$l_1|\ldots|l_p \leftarrow l_{p+1},\ldots,l_m, not\ l_{m+1},\ldots, not\ l_n$$
where $n \geq m \geq p \geq 0$ and each $l_i$ is a literal, i.e. an atom $a$ or the classical negation $\neg a$ of an atom in a first-order language, and not is a negation-as-failure operator. The symbol $|$ is used to distinguish disjunction in the head of a rule from disjunction $\lor$ used in classical logic.
The problem of deciding if a propositional (i.e. ground) EDLP has an answer set is $\Sigma_2^p$ complete [8].
We introduce the reduction with an example. Consider the following propositional EDLP from [17] which states that everyone is pronounced not guilty unless proven otherwise:
```
innocent|guilty ← charged
¬guilty ← not proven
charged ←
```
From this program we can generate a program based on the following rules with fluents innocent, guilty, charged, proven, $w$.
$$
\begin{align*}
(w^{-1} = 2 \land charged = 1) &\rightarrow (innocent = 1 \lor guilty = 1) \\
(w^{-1} = 2 \land \neg(proven = 1)) &\rightarrow guilty = 0 \\
w^{-1} = 2 &\rightarrow charged = 1
\end{align*}
$$
If the fluents innocent, guilty, charged, proven, $w$ can take values in \{0, 1, 2\} and in the initial state all fluents have value 2, then there is a plan of length 1 iff the EDLP program has an answer set. Intuitively, the answer set contains an atom $a$ (resp. $\neg a$) if in the final state $a = 1$ (resp. $a = 0$); if $a = 2$ then neither literal is in the answer set. In the previous case the single answer set is \{¬guilty, innocent, charged\} and the plan that solves the problem is
$$\{guilty/2, innocent/2, charged/2, proven/2, w/2\}, \emptyset, \{guilty/0, innocent/1, charged/1, proven/2, w/2\}$$
PSPACE membership can be proven by viewing the planning problem as a reachability problem on a graph where the nodes are states and the arcs are sets of actions. Encoding a state and checking if there is an arc between two states is feasible in polynomial space, and therefore reachability can be decided in PSPACE.
The plan problem is PSPACE-complete because APDDL/BPDDL is more expressive than STRIPS [9]. A STRIPS program can be mapped into an APDDL or BPDDL program straightforwardly, and thus, since plan in STRIPS is PSPACE-complete [5], the plan problem is PSPACE-complete also in APDDL and BPDDL.
6 Solver
The positive results of the approach of [6] encouraged us to write a constraint-based solver for the languages APDDL and BPDDL. The implementation of BPDDL subsumes that of APDDL. We decided to exploit the constraint solver GECODE, implemented in C++, which offers competitive performance w.r.t. both runtime and memory usage. Starting from the context-free grammar of BPDDL we defined a lexical analyzer and a parser using the standard tools flex (Fast Lexical Analyser) and Bison. The developed solver solves the $k$-plan problem.
The overall structure of the solver is similar to that developed in [6] and it deals with the following variables:
1. for every fluent in every state, one FD variable (a Boolean variable if the fluent is a predicate) represents the value of the fluent in that state
2. for every action in every transition, one Boolean variable represents whether the action is executed (a sketch of this variable layout is given below)
Constraints for checking action preconditions and imposing action effects are then added. In the case of the BPDDL-solver we also introduced a set of constraints to verify the closure of a state w.r.t. the rules and to solve the frame problem.
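Purely as an illustration of this variable layout, here is a small sketch using Google OR-tools CP-SAT instead of the authors' GECODE/C++ code, for an invented toy domain with one fluent and one action; reified constraints play the role of the precondition, effect and frame axioms.

```python
from ortools.sat.python import cp_model

# Sketch only (not the authors' GECODE model): k-plan variables for a toy
# domain with one fluent "c" in 0..3 and one action "inc" that adds 1 to it.
k = 3
model = cp_model.CpModel()

# 1. one FD variable per fluent per state
c = [model.NewIntVar(0, 3, f"c_{i}") for i in range(k + 1)]

# 2. one Boolean variable per action per transition
inc = [model.NewBoolVar(f"inc_{i}") for i in range(k)]

for i in range(k):
    # precondition (c < 3) must hold in s_i when the action is executed
    model.Add(c[i] < 3).OnlyEnforceIf(inc[i])
    # effect: c_{i+1} = c_i + 1 when the action is executed
    model.Add(c[i + 1] == c[i] + 1).OnlyEnforceIf(inc[i])
    # frame axiom: the fluent keeps its value when no action touches it
    model.Add(c[i + 1] == c[i]).OnlyEnforceIf(inc[i].Not())

model.Add(c[0] == 0)   # initial constraint
model.Add(c[k] == 3)   # goal constraint

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(v) for v in c])     # e.g. [0, 1, 2, 3]
    print([solver.Value(b) for b in inc])   # e.g. [1, 1, 1]
```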
Verifying that there are no critical situations can require the definition of an exponential number of constraints. As pointed out in [13] and [14] for Answer Set Programming, it is not possible to solve the frame problem by adding only a polynomial number of formulas of polynomial length unless $P = NP$.
Let us now define a function $\text{shiftRule}(F, r)$ that, given a set of fluents $F$ and a rule $r$, decreases by one the time reference of all the time fluents (at 0 $f$) in $r$ such that $f \in F$.
Let $\text{ruleModified}(f, s_0, A_1, s_1, \ldots, A_n, s_n)$ be the constraint that is true iff $f \notin \text{Act}(s_0, A_1, s_1, \ldots, A_n, s_n)$ and $\text{val}(s_n, f) \neq \text{val}(s_{n-1}, f)$.
There is no critical situation between two states $s_{i-1}, s_i$ in a plan $s_0, A_1, s_1, \ldots, s_n$ if for every non-empty subset of fluents $F$
$$s_0, \ldots, s_n \models \text{shift}_i \left( \bigwedge_{r \text{ rule}} \text{shiftRule}(F, r) \rightarrow \neg \bigwedge_{f \in F} \text{ruleModified}(f, s_0, A_1, s_1, \ldots, A_i, s_i) \right)$$
Intuitively, we check whether the rules are fulfilled even if the fluents in $F$ are left unchanged. When this happens we must ensure that at least one fluent in $F$ is not modified only by rules.
In the BPDDL solver we tried to minimize the number of constraints added to state the inertia. Suppose $P$ is a partition of the fluents such that, for every pair of present fluents (at 0 $f_1$), (at 0 $f_2$) occurring in the same rule, one of its elements contains both $f_1$ and $f_2$. To avoid critical situations it is possible to add one of the above-mentioned constraints for all the non-empty subsets of the elements of $P$.
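A small sketch (ours) of how many inertia constraints this scheme generates: one constraint per non-empty subset of each element of the partition \(P\); the constraint bodies themselves are represented only by placeholders here.

```python
from itertools import combinations

def nonempty_subsets(fluents):
    """All non-empty subsets of a set of fluents."""
    fluents = sorted(fluents)
    for r in range(1, len(fluents) + 1):
        for subset in combinations(fluents, r):
            yield set(subset)

# Fluents grouped by the rules that mention them together (the partition P).
partition = [{"f1", "f2", "f3"}, {"g1"}]

frame_constraints = []
for block in partition:
    for F in nonempty_subsets(block):
        # Placeholder for the real constraint:
        #   (rules shifted on F hold)  ->  not all fluents in F changed only by rules
        frame_constraints.append(("no_critical_situation", frozenset(F)))

print(len(frame_constraints))   # (2^3 - 1) + (2^1 - 1) = 8 constraints
```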
If a metric is used we employ the branch and bound algorithm for finding the optimum solution. Otherwise, we use the default algorithm provided by GECODE for exploring the search space (depth first search).
We also developed two optional heuristics for reducing the search space. The first one, called no_state_repetition, avoids the possibility of returning to an already visited state (drawback: if a $k$-plan exists only with multiple visits of a state, we do not find it).
The second heuristic, called confluent_actions, imposes a partial order on actions. We say that $a_1 < a_2$ when, for every plan of length 2, executing $a_1$ and then $a_2$ has the same effect as executing $a_2$ and then $a_1$. We notice that $a_1 < a_2$ always holds if the set of fluents in $a_1$'s effect is disjoint from the set of fluents in $a_2$'s precondition and vice versa. When this happens we impose that in a plan the action $a_1$ should be executed before the action $a_2$. This heuristic can reduce plan symmetries.
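A minimal sketch (ours, with invented action names) of the sufficient condition stated above for \(a_1 < a_2\): the fluents written by each action must be disjoint from the fluents read by the other.

```python
def confluent(a1, a2):
    """Sufficient condition from the text: the fluents in a1's effect are
    disjoint from those in a2's precondition, and vice versa.
    Each action is a dict with 'pre' and 'eff' sets of fluent names."""
    return (a1["eff"].isdisjoint(a2["pre"]) and
            a2["eff"].isdisjoint(a1["pre"]))

load = {"pre": {"at_depot"}, "eff": {"loaded"}}
move = {"pre": {"fuel"}, "eff": {"at_depot", "position"}}
print(confluent(load, move))   # False: move writes at_depot, which load reads
```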
Since sometimes we are interested in finding a sequential plan, we allow the programmer to require at most one or exactly one action per transition.
7 Tests
For the scope of this paper, we compared the performance of the GECODE-based APDDL solver with that of the solver for the language $\mathcal{B}^{MV}$ [6]. For the tests we used an AMD Opteron 2.2 GHz Linux machine. The APDDL solver uses GECODE 2.1.1 and was compiled with version 4.1.2 of g++. The $\mathcal{B}^{MV}$ solver, instead, is written and executed in SICStus Prolog 4.0.4. As benchmarks we chose some of the domains studied and presented in [6]. We are planning further tests with other systems (e.g., MIPS-XXL, SG-Plan5, SatPlan) where they are applicable (e.g. for domains without time fluents and rules). All the solver code and the example programs used for the tests are available at [15].
As explained in [6], due to an implementation choice, the treatment of inertia in $\mathcal{B}^{MV}$ can be incorrect for some programs where rules introduce loops. The BPDDL solver, instead, works correctly on those examples. Anyway, in our tests we chose to compare $\mathcal{B}^{MV}$ with APDDL on domains without rules. BPDDL has basically the same running time as APDDL (with about 5% overhead) on the tested domains.
For every instance of the problems we considered both the time needed by the solver to post the constraints and the time needed to find the first solution (if any). Timings are expressed in ms and are given as a sum of the posting time (first term) and the search time (second term). Even if the two languages used are different (PDDL-like vs Prolog-like), we encoded the domains basically in the same way (same actions, same preconditions, etc.) and we used both solvers with their default parameters (the A/BPDDL solver chooses the variable with the smallest domain size and the smallest value during the search process).
Since the $B^{MV}$ solver is designed for sequential plans, we impose the same constraint on the A/BPDDL solver. Table 2 contains the execution times for the $n$-puzzle problem, the peg solitaire game\(^8\) (a plan with 31 moves), the problem of finding a knight walk on a $4 \times 4$ chessboard, and the well-known three barrels problem with barrels of 20-11-9 liters.
Knight and peg have been run both with and without the no_state_repetition heuristic.
Table 2. Experimental results for the n-puzzle, knight, peg and barrels problems (rows marked * use the no_state_repetition heuristic)

| Prob. | Instance | Len. | Sol. | APDDL | $B^{MV}$ | $B^{MV}$/APDDL |
|---|---|---|---|---|---|---|
| puzzle | $I_1$ | 19 | No | 10 + 3660 | 150 + 12580 | 3.5 |
| puzzle | $I_1$ | 20 | Yes | 0 + 70 | 140 + 4210 | 62.1 |
| puzzle | $I_2$ | 24 | No | 10 + 93970 | 150 + 270080 | 2.9 |
| puzzle | $I_2$ | 25 | Yes | 10 + 38010 | 180 + 314930 | 8.3 |
| puzzle | $I_3$ | 20 | No | 10 + 8910 | 90 + 31140 | 3.5 |
| puzzle | $I_3$ | 25 | No | 10 + 129760 | 170 + 463500 | 3.6 |
| knight |  | 24 | Yes | 40 + 98610 | 670 + 2743660 | 27.8 |
| knight | * | 24 | Yes | 30 + 68120 | 1550 + 2620060 | 38.5 |
| peg |  | 11 | No | 10 + 13790 | 620 + 841340 | 61.0 |
| peg | * | 11 | No | 10 + 9510 | 640 + 849610 | 89.3 |
| peg |  | 31 | Yes | 50 + 41390 | 1850 + 47910 | 1.2 |
| peg | * | 31 | Yes | 50 + 16690 | 1780 + 46360 | 2.88 |
| barrels | 20-11-9 | 18 | No | 10 + 350 | 60 + 560 | 1.7 |
| barrels | 20-11-9 | 19 | Yes | 10 + 150 | 60 + 240 | 1.9 |
In Table 3 we compare the times for two multivalued problems: the gas diffusion problem, where Diabolik wishes to fill a room with a sufficient amount of gas in order to generate an explosion below the central bank\(^9\), and a community problem where, according to some rules, rich people wish to give money to poor people in order to reach an equilibrium.
The tests reveal that the APDDL solver is always the fastest. In particular, for the toughest instances the times can be decreased by an order of magnitude or more (see for example the results obtained for the gas diffusion and community problems).
---
\(^8\) This problem is one of the benchmarks of the 2008 planning competition.
\(^9\) This is a variant of the pipesworld domain of the 2006 planning competition.
Table 3. Experimental results for the gas diffusion problem and the community problem
| Prob. | Instance | Len. | Sol. | APDDL | $B^{MV}$ | $B^{MV}$/APDDL |
|---|---|---|---|---|---|---|
| gas | A_1 | 6 | Yes | 0 + 220 | 70 + 1348 | 6 |
| gas | A_1 | 7 | Yes | 0 + 10 | 100 + 5350 | 545 |
| gas | B_1 | 10 | No | 10 + 8500 | 170 + 3846200 | 452 |
| gas | B_1 | 11 | Yes | 0 + 10 | 140 + 1802760 | 180290 |
| gas | B_1 | 12 | Yes | 10 + 20 | 150 + 933350 | 31117 |
| gas | B_1 | 13 | Yes | 10 + 70 | 160 + 302340 | 3781 |
| gas | B_1 | 14 | Yes | 10 + 170 | 140 + 4600 | 26 |
| community | A_2 | 5 | No | 0 + 9500 | 50 + 264760 | 28 |
| community | A_2 | 6 | Yes | 0 + 100 | 30 + 200 | 2 |
| community | A_3 | 7 | Yes | 0 + 12610 | 40 + 930080 | 74 |
| community | B_5 | 5 | No | 0 + 5080 | 30 + 131630 | 26 |
| community | B_5 | 6 | Yes | 10 + 0 | 30 + 110 | 14 |
| community | B_5 | 7 | Yes | 0 + 10 | 40 + 170 | 21 |
8 Conclusion and Future Work
In this work we presented two extensions of PDDL-like languages. The first extension introduces new operators and the notion of time fluents, which allow planning problems to be expressed in a more concise way. For example, suppose that we have a board of lights and, when one of the lights is switched on or off, the status of the neighboring lights also changes. In BPDDL this situation can be easily modeled in the following way:
(:action press
 :parameters (?light - lights)
 :effect (and
   (xor (at -1 (is_on ?light)) (is_on ?light))
   (forall ?neighbor - lights
     (imp
       (== (neighbor ?light) ?neighbor)
       (xor (at -1 (is_on ?neighbor)) (is_on ?neighbor))))))
We then introduced static causal laws and provided a solver based on GECODE, which has proved to be effective and also represents a solid starting point for future extensions. We also characterized the complexity of the major problems related to planning within the proposed languages. With static causal laws, the planning-related problems become harder than in their absence.
Some of the possible next steps are:
- extend the language BPDDL with constructs to query and constrain the occurrences of actions directly in the language
- extend the language to allow more expressive metrics (e.g. metrics that assign a cost to every action)
- support in both languages other features like preferences or hierarchical types
- support multi-agent planning and forms of concurrent actions
- create a compiler from the language PDDL to APDDL
- port the solver code to the new GECODE 3.0 environment
- compare the solver with state-of-the-art PDDL planning solvers
Acknowledgments
We thank Andrea Formisano and Enrico Pontelli for the several useful discussions and technical help. The research is partially supported by PRIN and FIRB RBNE03B8KK projects.
References
A Complexity results and proofs
Let us start with some basic observations. Given a ground BPDDL program of length $n$, the rules, the actions, the goal constraints and the initial constraints all have length bounded by $n$. Similarly, since all the fluents in the initial state must be uniquely determined by the initial constraints, the number of fluents is bounded by $n$. Therefore, checking if a rule, an action precondition or an action effect is entailed by a state can be done in time polynomial in $n$. In a similar way, checking if two actions cannot be executed simultaneously is feasible in polynomial time.
Let us call an assignment $\{X_1/v_1, \ldots, X_n/v_n\}$ non-trivial if $\exists i, j \in \{1, \ldots, n\}$ s.t. $v_i = true$ and $v_j = false$. Let non-trivial-SAT be the problem of deciding if a formula is satisfied by a non-trivial assignment.
**Lemma 1.** non-trivial-SAT is NP-complete
**Proof.** NP membership derives from the fact that checking whether an assignment satisfies a Boolean formula and whether it is non-trivial are both polynomial problems.
NP-hardness can be proved via a reduction from SAT. Let $\varphi$ be a SAT formula and $X$ and $Y$ two fresh Boolean variables. Let $\phi = \varphi \land (X \lor Y) \land (\neg X \lor \neg Y)$. We have that $\varphi$ is satisfiable iff there is a non-trivial assignment that satisfies $\phi$. \hfill \square
**Theorem 1.** has_critical_situation is NP-complete
**Proof.** NP membership can be proved by noticing that a certificate of the problem is a state $s'$ that entails the rules and $\Delta F(s_i, s_{i+1}) \supset \Delta F(s_i, s')$. Checking such a certificate can be done in polynomial time.
NP-hardness can be proved via reduction from non-trivial-SAT.
Let us consider a formula $\varphi = \psi_1 \land \cdots \land \psi_k$ where the $\psi_i$ are clauses. Let $\mathcal{F}$ be the set of variables in $\varphi$. We build a BPDDL program as follows (we use an abstract syntax for the sake of clarity):
- for all $i = 1..k$ ($\psi_i \lor \bigwedge_{f \in \mathcal{F}} f = 0 \lor \bigwedge_{f \in \mathcal{F}} f = 1$)
- $\bigvee_{f \in \mathcal{F}} (at \ -1 \ f) \neq f$
Let us consider the plan $s_0, \emptyset, s_1$ where $val(s_0, f) = 0$ and $val(s_1, f) = 1$ for all $f \in \mathcal{F}$.
It is possible to prove that $\varphi$ is satisfiable by a non-trivial assignment iff there is a critical situation between $s_0$ and $s_1$. \hfill \square
**Theorem 2.** validity is co-NP-complete
**Proof.** Given a plan, checking whether all the plan conditions except inertia are respected can be done in polynomial time. Verifying that there are no critical situations can be done using a co-NP machine that solves the complement of has_critical_situation. Validity is therefore in co-NP.
Let no_validity be the complement of validity. In Theorem 1 we proved that has_critical_situation is NP-hard even under some restrictions (the plan has 2 states, there are no actions, \ldots). Under these restrictions no_validity coincides with has_critical_situation and is thus NP-hard.
Since no_validity is NP-hard, validity is co-NP-hard. \hfill \Box
In APDDL, instead, the validity problem is polynomial in the length of the program encoding. The difference between the two languages comes from the definition of inertia, which in the case of BPDDL leads to a search space of states that is potentially exponential in the number of fluents.
**Theorem 3.** k-plan is in $\Sigma^P_2$
**Proof.** Let $M$ be a non-deterministic Turing machine that guesses a sequence of states and actions $s_0, A_1, s_1, \ldots, A_k, s_k$ and checks whether all the plan conditions except inertia are satisfied. Then, to check inertia, $M$ calls $k$ times an oracle machine that solves has_critical_situation.
If all the checks are positive $M$ returns 1, otherwise 0. $M$ solves $k$-plan in polynomial time. Therefore $k$-plan is in $NP^{NP} = \Sigma^P_2$. \hfill \Box
Given a EDLP program $P$ we define as $T(P)$ the BPDDL program where:
- a multivalued fluent $a$ is defined for every atom $a$ in $P$; these fluents can take the values 0, 1 or 2
- $w$ is a new fluent
- for every rule $l_1 \mid \ldots \mid l_j \leftarrow l_{j+1}, \ldots, l_m, \text{not } l_{m+1}, \ldots, \text{not } l_n$ in $P$ there is a rule
\[
\left( ((at - 1 w) = 2) \land \bigwedge_{i=j+1}^m \sigma(l_i) \land \bigwedge_{i=m+1}^n \neg \sigma(l_i) \right) \rightarrow \bigvee_{i=1}^j \sigma(l_i)
\]
where
\[
\sigma(l_i) = \begin{cases} (a = 1) & \text{if } l_i = a \\ (a = 0) & \text{if } l_i = \neg a \\ \end{cases}
\]
- the initial constraint is $(w = 2) \land \bigwedge_{\text{atom}} (a = 2)$
- there are no actions and goal constraints
Given a set $S$ of $P$ literals we use $T(S)$ for the sequence of state and actions $s_0, A_1, s_1$ where
- $\text{val}(s_0, w) = \text{val}(s_1, w) = 2$
- for every atom $a$ in $P$, $\text{val}(s_0, a) = 2$ and
\[
\text{val}(s_1, a) = \begin{cases} 1 & \text{if } a \in S \\ 0 & \text{if } \neg a \in S \\ 2 & \text{otherwise} \\ \end{cases}
\]
- $A_1 = \emptyset$
A simple observation: given a program \( P \) and its transformation \( T(P) \), all initial states satisfy the rules, since no rule is applicable in the first state.
We extend the notion of the translation \( T \) to rules of EDLP. We use the term \( T(r) \) for the BPDDL rule obtained from the EDLP rule \( r \).
With \( P^S \) we denote the Gelfond-Lifschitz transformation of \( P \) w.r.t. \( S \) [4].
**Lemma 2.** if \( r \) is a rule of a disjunctive logic program then \( S \models r \iff T(S) \models T(r) \)
**Proof.** By definition of \( T(S) \), for every literal \( l \) we have \( S \models l \) iff \( T(S) \models \sigma(l) \), and \( S \not\models l \) iff \( T(S) \models \neg\sigma(l) \). Moreover, since \( \text{val}(s_0, w) = 2 \), the precondition \( (at\ {-1}\ w) = 2 \) of \( T(r) \) holds in the final state of \( T(S) \). Hence \( S \models r \) iff \( T(S) \models T(r) \). \( \square \)

**Lemma 3.** \( S \models P^S \) iff \( T(S) \models T(P) \).

**Proof.** By definition of \( P^S \), the rules in \( P \) can be divided into the following three disjoint sets:
1. rules that contain a term \( not\ l \) with \( l \in S \)
2. rules that do not contain \( not \) terms
3. the remaining rules
The lemma can be proven by induction on the number of rules in the EDLP program.
If \( P = \emptyset \) then \( P^S = \emptyset \), so \( S \models P^S \) and \( T(S) \models T(P) \) hold trivially.
Suppose now that \( P = P_1 \cup \{r\} \).
- If \( r \) is in the first set then \( r \notin P^S \), hence \( S \models P^S \) iff \( S \models P_1^S \). If \( S \) is not a model of \( P_1^S \) then, by the inductive hypothesis, \( T(S) \not\models T(P_1) \) and thus \( T(S) \not\models T(P) = T(P_1) \land T(r) \). Conversely, if \( T(S) \models T(P_1) \), observe that \( T(S) \not\models \neg\sigma(l) \) for the literal \( l \in S \) occurring under \( not \) in \( r \); since \( \neg\sigma(l) \) occurs in the precondition of \( T(r) \), that precondition is false, so \( T(S) \models T(r) \) and thus \( T(S) \models T(P_1) \land T(r) = T(P) \).
- If \( r \) is in the second set then, by Lemma 2, \( S \models \{r\} \leftrightarrow T(S) \models T(r) \). By the inductive hypothesis \( S \models P_1^S \leftrightarrow T(S) \models T(P_1) \), and thus \( S \models P^S = P_1^S \cup \{r\} \leftrightarrow S \models P_1^S \land S \models \{r\} \leftrightarrow T(S) \models T(P_1) \land T(S) \models T(r) \leftrightarrow T(S) \models T(P_1) \land T(r) = T(P) \).
- If \( r \) is in the third set then for every term \( not\ l \) in \( r \) we have \( l \notin S \). Therefore \( T(S) \models \neg\sigma(l) \), and thus \( T(S) \models T(r) \leftrightarrow T(S) \models T(r') \), where \( r' \) is the rule \( r \) without the \( not\ l \) terms. Now, since \( P^S = P_1^S \cup \{r'\} \), we can derive that \( S \models P^S \leftrightarrow T(S) \models T(P) \).
\( \square \)
**Theorem 4.** \( k \)-plan is \( \Sigma^P_2 \) complete
**Proof.** We reduce the existence of an answer set of a propositional EDLP program to 1-plan.
Suppose that \( P \) has an answer set \( S \), and assume by contradiction that \( T(S) = s_0, \emptyset, s_1 \) is not a plan. By Lemma 3, \( T(S) \models T(P) \). Since \( T(S) \) is not a plan there is a critical situation between \( s_0, s_1 \), and therefore there exists \( s' \) s.t. \( s_0, \emptyset, s' \models T(P) \) and \( \Delta F(s_0, s') \subset \Delta F(s_0, s_1) \). If \( S' \) is the set of literals s.t. \( T(S') = s_0, \emptyset, s' \), we have by Lemma 3 that \( S' \) is a model of \( P^{S'} \); but this is a contradiction, since \( S' \subset S \) and we assumed \( S \) to be an answer set.
Conversely, suppose that there exists a plan \( s_0, \emptyset, s_1 \) for \( T(P) \). Then there exists \( S \) such that \( T(S) = s_0, \emptyset, s_1 \). By Lemma 3, \( S \) is a model of \( P^S \); moreover, since the plan contains no critical situation, no proper subset of \( S \) is a model of its reduct, and therefore \( P \) has an answer set.
\[\square\]
**Theorem 5.** If $t_{max}, t_{min}$ are the maximum and minimum time references in time fluents and $t_{max}, |t_{min}|$ are polynomially bounded in the length of the encoding, then plan is PSPACE-complete.
**Proof.** Every fluent can assume $O(2^n)$ values, hence every state can be encoded in polynomial space and the number of possible states is at most exponential in the size of the encoding. Given two states $s, s'$, if $t_{max}, |t_{min}|$ are polynomially bounded it is possible to check in polynomial space whether there exists $A$ s.t. $s, A, s'$ is a subsequence of a plan. This can be done by generating non-deterministically a polynomial number of states and actions and then solving the validity problem without checking the entailment of the goal and initial constraints.
The plan problem can therefore be seen as a reachability problem. Since it is possible to encode a state and to check whether two states are connected in polynomial space using a non-deterministic Turing machine, the entire plan problem can be solved in polynomial space by a non-deterministic Turing machine. Since NPSPACE = PSPACE, plan is in PSPACE.
The plan problem is PSPACE-hard because A/BPDDL is more expressive than STRIPS [9]. A STRIPS program can be mapped into an A/BPDDL program straightforwardly, and thus, since plan in STRIPS is PSPACE-complete, the plan problem is PSPACE-complete also in A/BPDDL. \[\square\]
Even if metric functions are used, the derived optimization problem is in PSPACE. This is because the metric function depends on the values of fluents in the final state, and thus the metric value can be encoded in $O(n^2)$ space.
An HMM-based Method for Adapting Service-based Applications to Users’ Quality Preferences
Yousef Rastegari\(^a\), Afshin Salajegheh\(^b\)
\(^a\) Faculty of Computer Science and Engineering, Shahid Beheshti University, Iran.
\(^b\) Computer Science and Software Engineering Department, Islamic Azad University, South Tehran Branch.
1 Introduction
Service-oriented computing is increasingly adopted as a paradigm for building loosely coupled, distributed and adaptive software applications, called service-based applications (SBA). An SBA is composed of software services (i.e. constituent services), and those services may be owned by the developing organization or by third parties \cite{taylor2010service}. SBA adaptation is required to cope with runtime changes in functionalities and quality objectives. Therefore, it is desirable to modify an SBA's constituent services through (semi-)automatic adaptation mechanisms.
Adaptation mechanisms are the techniques and facilities provided by an SBA that enable adaptation strategies like service re-composition, service re-selection, or service re-negotiation \cite{richardson2000adaptation}. The realization of adaptation mechanisms may be done automatically or may require user involvement, that is, human-in-the-loop adaptation. Adaptation mechanisms are classified into Adaptive, Corrective, Preventive, and Extending according to the S-CUBE \cite{casati2002modeling} adaptation taxonomy.
Most of the existing approaches focus on Adaptive mechanisms [4–7] which modify the SBA in response to changes affecting its environment like contextual changes or the needs of a particular user. Corrective mechanisms [8–15] replace a faulty service with a new version that provides the same functionality and quality. Preventive mechanisms [16–18] use prediction techniques to detect the probable failures or SLA violations and also assess the accuracy of prediction. There are few approaches targeting Extending mechanisms [19–22] which aim to extend the SBA by adding new required functionalities.
In this paper we focus on SBA customization based on the user's preferences, which is a subset of Adaptive mechanisms. Consider an itinerary purchase scenario in which a travel agency is the owner of the SBA. The travel agency orchestrates the available services to fulfil the customers' requests. Each customer has specific preferences that are expected to be satisfied by the travel agency: one customer may prefer a cost-effective flight and a standard hotel, while another may prefer a high-quality flight and a luxury hotel. These quality concerns can be addressed by dynamically selecting web services based on the user's preferences. As shown in Figure 1, dynamic service selection includes the following steps [23]:
- Converting a user’s request to a machine understandable model
- Discovering candidate services for each task of a given process
- Selecting the best set of services among candidate services based on QoS constraints and user’s preferences
- Executing the solution (made in step 3) with Business Process Execution Language (BPEL) engine and producing the results
Finding the best services through ordinary exhaustive methods leads to an NP-hard problem. Therefore several heuristic and dynamic-programming approaches have been proposed that model service selection as an optimization problem. In this paper we present a dynamic service selection method with a strong mathematical basis. The proposed method is based on the Hidden Markov Model (HMM), a mathematical model derived from Markov chains. The method considers the user's preferences while selecting appropriate services for a given set of tasks. The process of applying HMM to the service selection problem includes the following steps: modelling, learning, and QoS-based selection. In the modelling step, an HMM is built for the service selection problem. The HMM produced by the previous step is initialized in the learning step by supervised or unsupervised learning methods. The Viterbi algorithm is used in the QoS-based selection step to select the most appropriate services in a reasonable time.
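Purely as an illustration of the selection step (the concrete model and its training are developed later in the paper), the following Python sketch runs Viterbi decoding where hidden states stand for candidate services, observations for the tasks of the process, and the emission matrix encodes how well each service's QoS matches the user's preferences; all names and numbers are invented.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely sequence of hidden states (services) for the observed tasks.
    pi: initial probs (n,), A: transition probs (n, n),
    B: emission probs (n, m) -- B[s, t] ~ how well service s fits task t."""
    n, T = len(pi), len(obs)
    delta = np.zeros((T, n))
    psi = np.zeros((T, n), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A        # scores[i, j] = delta[i] * A[i, j]
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack the best path
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy instance: 3 candidate services, 2 task types (all numbers invented).
pi = np.array([0.5, 0.3, 0.2])
A  = np.array([[0.6, 0.3, 0.1],
               [0.2, 0.6, 0.2],
               [0.3, 0.3, 0.4]])
B  = np.array([[0.9, 0.1],     # service 0 fits task type 0 well
               [0.2, 0.8],     # service 1 fits task type 1 well
               [0.5, 0.5]])
print(viterbi(pi, A, B, obs=[0, 1, 0]))   # [0, 1, 0]
```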
Execution time and fitness are the critical factors in comparing and proposing service selection methods. Execution time is the time taken to select the services and produce a composition model in a common business process language like WS-BPEL. Fitness shows how well the output model satisfies the user's preferences and has a direct relationship with user satisfaction. In comparison with GSA-based, PSO-based, and GA-based service selection methods, our method achieves the maximum fitness in a reasonable time.
The rest of this paper is organized as follows. The characteristics of this work are compared with related studies in Section 2. Section 3 describes HMM briefly. The proposed service selection method based on HMM is described in Section 4. Section 5 presents the experimental results. Finally the paper is concluded in Section 6.
2 State of the Art
To develop the related work, we have followed the principles and guidelines of Systematic Literature Reviews (SLRs) as defined by Kitchenham [24]. Nevertheless, the goal in this paper is not to develop an exhaustive SLR with all the work available in the literature, but to report in a systematic manner the list of relevant contributions similar to our work, focusing on quality-of-service adaptation mechanisms in service-based applications. We performed a manual search with the terms “adaptation” AND “service based application” AND “quality of service” on top-ranked journals and conferences from 2010 to 2015. The terms were applied to title, abstract and keywords. By applying this search protocol, we found 145 papers covering the search criteria. 80 papers were discarded by title, 38 by abstract, and 8 after a fast reading, leading to a total of 19 papers that present different approaches. We classified them into the following four classes based on the usage of the adaptation process: Adaptive, Corrective, Preventive and Extending.
2.1 Adaptive Adaptation
MOSES [4] is a QoS-based adaptation framework based on MAPE components. It is classified as an adaptive adaptation method. MOSES uses abstract composition to create new processes and service selection to dynamically bind the processes to different concrete web services. MOSES is applicable where a service-oriented system is architected as a composite service. RuCAS [5] is a rule-based service platform which helps clients manage their own context-aware web services via a Web API or a GUI-based interface. RuCAS together with an autonomic manager could shape a self-managing ecosystem. Beggas et al. [6] proposed a middleware that calculates an ideal QoS model using a fuzzy control system to fit context information and user preferences. Then, the middleware selects the best service among all variants, namely the one whose QoS value is nearest to the ideal. Chounief et al. [25] proposed a fuzzy framework for service selection. These types of approaches are classified as context-aware or perfective adaptation, in which the quality characteristics of the SBA are optimized, or the application is customized or personalized according to the needs and requirements of particular users. CHAMELEON [7] is an adaptive adaptation framework which personalizes/customizes the application according to the device and network contexts in B3G mobile networks. The authors enriched the standard Java syntax to specify adaptable classes, adaptable methods and adaptation alternatives that specify how one or more adaptable methods can actually be adapted. In [22], two hidden Markov models (HMM1 and HMM2) are used for context-aware service selection: HMM1 models context information and HMM2 models invoked services. Their HMM construction differs from this work, since they did not consider the quality of services in the selection phase; moreover, the efficiency and scalability of their model are not evaluated. Since service providers do not expose the details of the functional and quality characteristics of web services, it is hard for consumers to make an efficient service contract. Wang et al. [27] proposed incentive contracts to offer qualities based on consumer preferences. Canton-Puerto et al. [28] used the Baum-Welch algorithm to train an HMM. They considered QoS parameters like cost, performance, etc. Unlike our work, which relates web services (hidden states) to tasks (observed states), they mapped web services to different qualities.
2.2 Corrective Adaptation
VieCure [8] is a corrective adaptation method which extracts monitored misbehaviours, diagnoses them with self-healing algorithms and then repairs them in a non-intrusive manner. Since VieCure uses recovery mechanisms to avoid degraded or stalled systems, it is also a preventive approach. Psai et al. [9] proposed a corrective adaptation architecture which reconfigures local interactions among service-oriented collaborators or substitutes collaborators to maintain system functionalities. The adaptation mechanisms operate based on similarity and on socially inspired trust mirroring and trust teleportation. The authors integrate VieCure with GENESIS2 [29] (an SOA-based testbed generator framework) to realize a control-feedback loop and simulate adaptation scenarios in a collaborative service-oriented network. Ismail et al. [10] proposed an SLA violation handling architecture which performs incremental impact analysis, incrementing an impact region with additional information. To determine the impact region candidates, they defined Time inconsistency (direct dependency between services) and Time unsatisfactory (dependency between a service and the entire process) relationships. Then the recovery instance obtains the relevant information to identify the appropriate recovery plan. The proposed strategy would reduce the amount of change. Zisman et al. [11] proposed a reactive and proactive dynamic service discovery framework: in pull (reactive) mode, queries are executed on demand, while in push (proactive) mode, queries are subscribed to the framework to be executed proactively. They compute the distances between query and service specifications, using complex queries expressed in the XML-based query language SerDiQueL. In another work, by Mahbub et al. [12], the PROSDIN framework is proposed, which proactively performs SLA negotiation with candidate services; the goal is to reduce the lengthy negotiation process during service discovery and substitution. DRF4SOA [13] is built on the service component architecture (SCA) to model programs independently of technologies and encapsulates each MAPE phase in SCA composites, which allows exposing their business as a service. SEco [14] is a dynamic architecture for service-based mobile applications. It consists of a SEco agent and a SEco manager: SEco agents gather and send quality data of running applications to the SEco manager, which decides on quality improvements and sends adaptation actions back to the SEco agents. To support architectural dynamism, the SEco agent implements dynamic offloading or dynamic service deployment strategies. SAFDIS [15] is an OSGi-based framework which uses short-term and long-term reasoning to maintain the SBA quality above a minimum level. SAFDIS considers only the migration of services by registering and unregistering bundles of services.
2.3 Preventive Adaptation
Some works try to prevent service-based applications from future faults or SLA violations. Wang et al. \cite{16} make adaptation decisions through two-phase evaluations. In the estimation phase, they estimate a QoS attribute (e.g., execution time) in the future and compare the estimated value with the target value defined in the SLA. If a violation is likely to happen, a suspicion of SLA violation is reported to the decision phase. In the decision phase, they use static and adaptive decision strategies to evaluate the trustworthiness of the suspicion in order to decide whether to accept or to neglect it.
Unnecessary adaptations can be costly and even faulty, including in the proactive case. Metzger et al. \cite{17} propose a preventive approach for augmenting service monitoring with online testing to produce failure predictions with confidence. In a similar work, Metzger \cite{18} selected prediction techniques and defined metrics to assess the accuracy of predictions. Jingjing et al. proposed a proactive service selection method to prevent service provider overloads. The proactive method is based on analysing a time series of the requests received by services to forecast the overloads through a negotiation process.
2.4 Extending Adaptation
Auxo \cite{19} is an extending adaptation approach which realizes adaptation concerns by modifying the runtime software architecture (RSA) model. Auxo proposes an architecture style (interfaces, connectors and components) and a runtime infrastructure which maintains an explicit and modifiable RSA model. To fulfil modification requests, they modify the RSA model, evaluate the architecture constraints, and enact the changes in the real system. SALMon \cite{20} is a monitoring framework that supports different adaptation strategies in the SBA lifecycle by providing the knowledge base (accurate and complete QoS) to the following expert systems: WeSSoS (for service selection based on user requirements), FCM (for service deployment on a cloud federation system), SALMonADA (for identifying and reporting SLA violations), and MAESoS, PROSA, PROTEUS, and CASE (for adaptation purposes whenever malfunctions occur in the system). Daubert et al. proposed Kevoree \cite{21}, a reflective framework which provides a models@runtime approach to design adaptable SBAs. Models@runtime treats the reflection layer as a real model that can be uncoupled from the running architecture for reasoning, validation and simulation purposes and later automatically resynchronized with its running instance. CLAM is a cross-layer adaptation manager for SBAs. CLAM provides Application, Service and Infrastructure models. Each model element is associated with Analysers, Solvers and Enactors, and a cross-layer rule engine governs their coordination. For each adaptation need, CLAM produces a tree of the possible alternative adaptations, identifies the most convenient one, and applies it.
To classify our work, we defined its characteristics using the S-CUBE adaptation taxonomy. The taxonomy distinguishes approaches by the following three questions: 1) Why is adaptation needed (adaptation usage)? 2) What are the adaptation subject and aspect? 3) How does the adaptation strategy take place? As shown in Table 1, this research presents an adaptive method which customizes an SBA based on the user's preferences and quality constraints. The adaptation subjects are the SBA's constituent services and their composition model. We apply HMM to realize the service selection adaptation strategy.
3 Hidden Markov Model
An HMM has an underlying stochastic process that is not observable (it is hidden), but can only be observed through another set of stochastic processes that produce the sequence of observed symbols. In Figure 2, hidden states are represented by circles and observed symbols by rectangles. HMMs are applicable in speech processing, natural language processing, extracting target information from documents, etc.
HMM is formally defined in formula 1, where \( S \) is the set of states, and \( V \) is the set of possible observations.
\[
\lambda = (A, B, \pi) \tag{1}
\]
\[
S = (s_1, s_2, ..., s_N) \tag{2}
\]
\[
V = (v_1, v_2, ..., v_M) \tag{3}
\]
Table 1. Classification of adaptation approaches based on the S-CUBE taxonomy.
<table>
<thead>
<tr><th>Approach</th><th>Usage</th><th>Subject</th><th>Aspect</th><th>Strategy</th></tr>
</thead>
<tbody>
<tr><td>MOSES [4]</td><td>Adaptive</td><td>Constituent services; Composition instance</td><td>New/modified non-functional requirements</td><td>Service selection; Coordination pattern selection</td></tr>
<tr><td>Beggas, et al. [6]</td><td>Adaptive</td><td>Constituent services</td><td>QoS; User contextual changes</td><td></td></tr>
<tr><td>CHAMELEON [7]</td><td>Adaptive</td><td>Adaptable service class</td><td>QoS; User needs; Contextual changes</td><td></td></tr>
<tr><td>VieCure [8]</td><td>Corrective and Preventive</td><td>Constituent services</td><td>QoS; Misbehaviours</td><td></td></tr>
<tr><td>Psaier, et al. [9]</td><td>Corrective</td><td>Local interactions</td><td>Unexpected low performance</td><td></td></tr>
<tr><td>Ismail et al. [10]</td><td>Corrective</td><td>Process instance; Services</td><td>SLA violations</td><td></td></tr>
<tr><td>Zisman et al. [11]</td><td>Corrective</td><td>Constituent services</td><td>QoS</td><td></td></tr>
<tr><td>PROSDIN [12]</td><td>Corrective</td><td>Constituent services</td><td>QoS</td><td></td></tr>
<tr><td>DRF4SOA [13]</td><td>Corrective</td><td>Components; Services</td><td>Non-functional requirements changes</td><td></td></tr>
<tr><td>SEco [14]</td><td>Corrective</td><td>Constituent portable services</td><td>QoS; Manageability</td><td></td></tr>
<tr><td>SAFDIS [15]</td><td>Corrective</td><td>Constituent services</td><td>QoS</td><td></td></tr>
<tr><td>Wang [16]</td><td>Preventive</td><td>SBA instance; Constituent services</td><td>QoS; Prevent unnecessary adaptation</td><td></td></tr>
<tr><td>Metzger [17]</td><td>Preventive</td><td>Constituent services</td><td>QoS; Prevent unnecessary adaptation</td><td></td></tr>
<tr><td>Metzger [18]</td><td>Preventive</td><td>Constituent services; Third-party services</td><td>QoS; Failure prediction</td><td></td></tr>
<tr><td>Auxo [19]</td><td>Extending</td><td>Component; Connector; Interface</td><td>Unexpected environments</td><td></td></tr>
<tr><td>SALMon [20]</td><td>Extending</td><td>Constituent services</td><td>QoS</td><td></td></tr>
<tr><td>Kevoree [21]</td><td>Extending</td><td>Business process; Composition and coordination; Infrastructure</td><td>QoS-based cross-layer adaptation</td><td></td></tr>
<tr><td>CLAM [22]</td><td>Extending</td><td>Whole SBA model</td><td>Cross-layer adaptation</td><td></td></tr>
<tr><td>Current research</td><td>Adaptive (customization)</td><td>Constituent services; Composition model</td><td>QoS changes; User’s preferences</td><td>Service selection (HMM)</td></tr>
</tbody>
</table>
\( Q \) is a sequence of hidden states of length \( T \) and \( O \) is a sequence of observations; each observation in \( O \) is emitted by a hidden state in \( Q \). As presented in formula 6 and formula 7, \( A \) is the transition array, storing the time-independent probability of state \( s_j \) following state \( s_i \), and \( B \) is the observation array, storing the probability of observation \( v_m \) being produced by state \( s_i \), independently of \( t \).
\[ Q = (q_1, q_2, ..., q_T) \tag{4} \]
\[ O = (o_1, o_2, ..., o_T) \tag{5} \]
\[ A = [a_{ij}], \quad a_{ij} = P(q_t = s_j \mid q_{t-1} = s_i) \tag{6} \]
\[ B = [b_i(m)], \quad b_i(m) = P(o_t = v_m \mid q_t = s_i) \tag{7} \]
In formula 8, \( \pi \) is defined as the initial probability array:
\[ \pi = [\pi_i], \quad \pi_i = P(q_1 = s_i) \tag{8} \]
The model makes two assumptions: the Markov assumption and the independence assumption, presented in formula 9 and formula 10. The Markov assumption states that the current state depends only on the previous state. The independence assumption states that the output observation at time \( t \) depends only on the current state; it is independent of previous observations and states:
\[ P(q_t \mid q_{t-1}, q_{t-2}, \ldots, q_1) = P(q_t \mid q_{t-1}) \tag{9} \]
\[ P(o_t \mid o_{t-1}, \ldots, o_1, q_t, q_{t-1}, \ldots, q_1) = P(o_t \mid q_t) \tag{10} \]
Given an HMM \( \lambda \) and a sequence of observations \( O \), we would like to compute \( P(O \mid \lambda) \), i.e., the probability of observing sequence \( O \).
The probability of the observations \( O \) for a specific state sequence \( Q \) and the probability of the state sequence are shown in formula 11 and formula 12 respectively:
\[ P(O \mid Q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) \tag{11} \]
\[ P(Q \mid \lambda) = \pi_{q_1} \, a_{q_1 q_2} \, a_{q_2 q_3} \cdots a_{q_{T-1} q_T} \tag{12} \]
So we can calculate the probability of the observations given the model as:
\[ P(O \mid \lambda) = \sum_{Q} P(O \mid Q, \lambda) \times P(Q \mid \lambda) \tag{13} \]
The result shows the probability of observing sequence \( O \) by considering state sequence \( Q \).
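To make Eq. (13) concrete, the following is a minimal sketch (not part of the original formulation) that evaluates \( P(O \mid \lambda) \) by brute-force enumeration over all hidden-state sequences; the matrices `A`, `B` and the vector `pi` are assumed to be given as plain Python lists. In practice the forward algorithm computes the same quantity without the exponential enumeration.

```python
from itertools import product

def prob_observations(A, B, pi, obs):
    """Brute-force evaluation of Eq. (13): sum over every hidden-state
    sequence Q of P(O|Q,lambda) * P(Q|lambda).  A[i][j] is the transition
    probability, B[i][m] the emission probability, pi[i] the initial
    probability; obs is a list of observation indices."""
    n_states = len(pi)
    total = 0.0
    for q in product(range(n_states), repeat=len(obs)):   # every possible Q
        p_q = pi[q[0]]                                     # P(Q|lambda), Eq. (12)
        for t in range(1, len(obs)):
            p_q *= A[q[t - 1]][q[t]]
        p_o_given_q = 1.0                                  # P(O|Q,lambda), Eq. (11)
        for t, o in enumerate(obs):
            p_o_given_q *= B[q[t]][o]
        total += p_o_given_q * p_q
    return total

# Toy example: 2 hidden states, 2 observation symbols.
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.6, 0.4]
print(prob_observations(A, B, pi, [0, 1, 0]))
```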
4 Service Selection Based on HMM
In this section, we apply HMM to the service selection problem in the following three steps: Modelling, Learning, and QoS-based Selection.
4.1 Modelling
The formal definition of service selection problem is as follows:
\[ \lambda = (VisitRate, SelRate, \pi) \]
(14)
\( AWS \) is the set of available web services, and \( AT \) is the set of tasks of all processes.
\[ AWS = (ws_1, ws_2, ..., ws_N) \]
(15)
\[ AT = (t_1, t_2, ..., t_M) \]
(16)
We define \( WS \) as a sequence of web services with length \( T \). We also define \( T \) as a sequence of tasks. Each task in \( T \) consumes a corresponding web service in \( WS \).
\[ WS = (ws_1, ws_2, ..., ws_T) \]
(17)
\[ T = (t_1, t_2, ..., t_T) \]
(18)
As defined in Eq. 2 and Eq. 3, an HMM includes a set of hidden states and a set of possible observations. As defined in Eq. 7, hidden state \( s_i \) has the probability \( b_i(m) \) of producing the observation \( v_m \). Considering \( s_i \) as the \( i \)th candidate web service and \( v_m \) as the \( m \)th task of a process, \( b_i(m) \) indicates the probability of selecting the \( i \)th web service for the \( m \)th task. Figure 3 represents this assumption graphically: web services are modelled as hidden states and the process’s tasks as observations.
We defined \( VisitRate_{ij} \) to indicate the probability of visiting the \( j \)th web service just after the \( i \)th web service is visited. The visit rate is similar to Eq. 6, which defines the transition probabilities between hidden states. Visit rates are initialized in the learning step and their values can be updated from the log records of the service repository server (i.e., the UDDI server).
\[
\text{VisitRate}_{ij} = \frac{\text{no. of times } WS_j \text{ is visited just after } WS_i}{\text{no. of times } WS_i \text{ is visited}} \tag{19}
\]
We need to present rational definitions for the state transition probabilities (Eq. 6) and the output probabilities (Eq. 7). We defined \( \text{SelRate}_{i}(m) \) to indicate the probability of selecting the \( i \)th web service for the \( m \)th task. The selection rate is similar to Eq. 7, which defines the output (observation) probabilities.
\[
\text{SelRate}_{i}(m) = \frac{\text{no. of times } WS_i \text{ is selected for } t_m}{\text{no. of times } WS_i \text{ is visited}} \tag{20}
\]
### 4.2 Learning
After modelling the service selection problem by HMM, the output model should be trained by supervised learning methods or unsupervised learning methods. The unknown parameters of an HMM are the transition probabilities and the output (observation) probabilities.
In supervised learning, we use a database of sample HMM behaviours to estimate the transition probabilities and the output probabilities. The visit rate array (Eq. 19) can be estimated either from the log records of the service repository or from the log records of the BPEL engine. The selection rate array (Eq. 20) can be estimated from the history of service invocations. The selected web services are described in WS-BPEL format; the BPEL engine executes the given WS-BPEL process and invokes the selected web services, so the log of service invocations is kept by the BPEL engine.
In unsupervised learning, the HMM parameters have to be estimated from the observed sequences, and the parameters are updated as new samples arrive. If a database of samples is not available, standard unsupervised approaches like Maximum Likelihood Estimation or Viterbi Training can be applied [33].
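As an illustration of the supervised estimation described above, the following is a hedged sketch of how the visit rates (Eq. 19) and selection rates (Eq. 20) could be computed from an invocation log; the log format (a list of executed processes, each a list of (task, selected service) pairs in execution order) is an assumption, since the paper only states that such records are kept by the services repository and the BPEL engine.

```python
from collections import defaultdict

def estimate_hmm_parameters(log, services, tasks):
    """Supervised estimation of VisitRate (Eq. 19) and SelRate (Eq. 20).
    `log` is assumed to be a list of executed processes, each a list of
    (task, selected_service) pairs in execution order."""
    visit = defaultdict(lambda: defaultdict(int))    # successor counts
    select = defaultdict(lambda: defaultdict(int))   # (service, task) counts
    visited = defaultdict(int)                       # how often each service was visited

    for process in log:
        for idx, (task, service) in enumerate(process):
            visited[service] += 1
            select[service][task] += 1
            if idx + 1 < len(process):
                next_service = process[idx + 1][1]
                visit[service][next_service] += 1

    # Divide counts by the number of visits; unvisited services keep zero rates.
    visit_rate = {s: {s2: visit[s][s2] / max(visited[s], 1) for s2 in services}
                  for s in services}
    sel_rate = {s: {m: select[s][m] / max(visited[s], 1) for m in tasks}
                for s in services}
    return visit_rate, sel_rate
```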
### 4.3 QoS-Based Selection
It is necessary to consider the user’s quality preferences while selecting the best services for each task, so we define the fitness function based on the user’s preferences and the QoS parameters of the services. Some quality parameters, like execution time and cost, have an inverse relationship with their measurements (i.e., a higher value indicates a lower degree of quality), whereas some quality parameters, like reliability and availability, have a direct relationship with their measurements (i.e., a higher value indicates a higher degree of quality). Since we need a fitness function composed of the above measures, with a value in the range [0, 1], we use Eq. (21) for the quality parameters with a direct relationship and Eq. (22) for the quality parameters with an inverse relationship.
\[
V(Q^k_{ij}) = \begin{cases}
\dfrac{Q^k_{ij} - \min_j(Q^k_{ij})}{\max_j(Q^k_{ij}) - \min_j(Q^k_{ij})}, & \text{if } \max_j(Q^k_{ij}) \neq \min_j(Q^k_{ij}) \\
1, & \text{if } \max_j(Q^k_{ij}) = \min_j(Q^k_{ij})
\end{cases} \tag{21}
\]
\[
V(Q^k_{ij}) = \begin{cases}
\dfrac{\max_j(Q^k_{ij}) - Q^k_{ij}}{\max_j(Q^k_{ij}) - \min_j(Q^k_{ij})}, & \text{if } \max_j(Q^k_{ij}) \neq \min_j(Q^k_{ij}) \\
1, & \text{if } \max_j(Q^k_{ij}) = \min_j(Q^k_{ij})
\end{cases} \tag{22}
\]
Fitness function is defined in Eq. (23):
\[
F_{ij} = \sum_{k=1}^{K} V^k_{ij} W_k, \qquad 0 \leq W_k \leq 1, \quad \sum_{k=1}^{K} W_k = 1 \tag{23}
\]
where \( W_k \) is the weight of the \( k \)th quality parameter identified in the user’s preferences, \( V^k_{ij} \) is the standardized value of the \( k \)th QoS parameter of the \( j \)th candidate web service for the \( i \)th task, and \( F_{ij} \) is the resulting fitness value of the \( j \)th candidate web service for the \( i \)th task.
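The following is a small sketch of the normalization and fitness computation of Eqs. (21)-(23) for the candidate services of one task; the data layout (`qos`, `weights`, `directions`) is assumed for illustration and is not prescribed by the paper.

```python
def fitness(qos, weights, directions):
    """Compute F_ij (Eq. 23) for every candidate service j of one task i.
    `qos[j][k]` is the raw value of the k-th QoS parameter of candidate j,
    `weights[k]` is the user weight W_k (summing to 1), and `directions[k]`
    is 'direct' (e.g. availability) or 'inverse' (e.g. response time)."""
    candidates = list(qos)
    scores = {j: 0.0 for j in candidates}
    for k in range(len(weights)):
        values = [qos[j][k] for j in candidates]
        lo, hi = min(values), max(values)
        for j in candidates:
            if hi == lo:
                v = 1.0                                  # degenerate case in Eqs. (21)-(22)
            elif directions[k] == 'direct':
                v = (qos[j][k] - lo) / (hi - lo)         # Eq. (21)
            else:
                v = (hi - qos[j][k]) / (hi - lo)         # Eq. (22)
            scores[j] += weights[k] * v                  # Eq. (23)
    return scores

# Example: two candidates, parameters = (response time [inverse], availability [direct]).
print(fitness({'ws1': [120, 0.99], 'ws2': [80, 0.98]},
              weights=[0.5, 0.5], directions=['inverse', 'direct']))
```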
In order to consider the effects of user’s preferences in selecting the best services, Eq. 11 is modified to Eq. 24. The fitness function (Eq. 23) adjusts the output probabilities based on user’s preferences.
\[
P(T|WS) = \prod_{t=1}^{T} P(T_t|WS_t) \times F_{it} \tag{24}
\]
Finally, the Viterbi algorithm is used to find the best services for a given sequence of tasks. The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states that produces a given sequence of observations. Its time complexity is \( O(T \times |S|^2) \), where \( |S| \) is the number of hidden states (i.e., web services) and \( T \) is the length of the observation sequence (i.e., the number of tasks).
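A possible realization of this selection step is sketched below: a standard Viterbi pass over the services (hidden states) in which the emission term is scaled by the fitness values, in the spirit of Eq. (24). The function and parameter names are illustrative assumptions, not the paper's implementation.

```python
def select_services(tasks, services, visit_rate, sel_rate, pi, fitness):
    """Viterbi-style selection: hidden states are web services, observations are
    the tasks of the requested process.  The emission term sel_rate[s][task] is
    scaled by fitness[task][s].  Returns the most likely service sequence."""
    # delta[s] = best score of any service sequence ending in s; back pointers recover the path.
    delta = {s: pi[s] * sel_rate[s][tasks[0]] * fitness[tasks[0]][s] for s in services}
    back = []
    for task in tasks[1:]:
        new_delta, pointers = {}, {}
        for s in services:
            prev, score = max(((p, delta[p] * visit_rate[p][s]) for p in services),
                              key=lambda x: x[1])
            new_delta[s] = score * sel_rate[s][task] * fitness[task][s]
            pointers[s] = prev
        delta, back = new_delta, back + [pointers]
    # Backtrack from the best final state.
    best = max(delta, key=delta.get)
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```

Each task is processed once and, for every service, all predecessor services are scanned, which matches the \( O(T \times |S|^2) \) complexity stated above.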
### 4.4 Pseudo Code
The pseudo code of the proposed method is shown in Algorithm 1.
Algorithm 1 QoS-Based Service Selection
**input:**
all available web services (SERVICES_REPOSITORY),
all processes’ tasks (PROCESSES),
user’s preferences (QoS_WEIGHT),
requested process (REQUESTED_PROCESS),
log of service invocations (LOG)
**output:**
a sequence including the best web services for realizing REQUESTED_PROCESS (SOLUTION)
1: begin
2: if (HMM does not exist) then //build and train the HMM
3:   for each service i in SERVICES_REPOSITORY do
4:     \( AWS_i \leftarrow \) service i // Eq. 15
5:   end for
6:   for each task m in PROCESSES do
7:     \( AT_m \leftarrow \) task m // Eq. 16
8:   end for
9:   //Use LOG records and Eq. 19 to produce the state transition probabilities (VisitRate)
10:  //Use LOG records and Eq. 20 to produce the output probabilities (SelRate)
11: end if
12: for each task i in REQUESTED_PROCESS do
13:   \( T_i \leftarrow \) task i // Eq. 18
14:   for each service j in SERVICES_REPOSITORY do
15:     set \( F_{ij} \) using QoS_WEIGHT // Eq. 23
16:   end for
17: end for
18: //Use the Viterbi algorithm to produce sequence WS (Eq. 17) as SOLUTION, which includes the most appropriate web services for the given sequence of tasks \( T \) (Eq. 18)
19: return SOLUTION
20: end
The algorithm starts by checking whether the HMM structure already exists. If not, the structure is built using VisitRate to produce the state transition probabilities (refer to Eq. 19) and SelRate to produce the output probabilities (refer to Eq. 20). Next, vector \( T \) is defined as the sequence of tasks in the requested process; each task in \( T \) consumes a corresponding web service. Then, two nested loops build the fitness matrix (refer to Eq. 23), which is used to adjust the output probabilities based on the user’s preferences. Finally, the Viterbi algorithm is used to produce the sequence WS, which includes the most appropriate web services for the given sequence of tasks \( T \).
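A hypothetical end-to-end run of Algorithm 1, wiring together the sketches given earlier (parameter estimation, fitness computation, and the fitness-adjusted Viterbi pass); all service names, tasks, QoS values and weights below are made up for illustration.

```python
# Hypothetical end-to-end run of Algorithm 1, reusing the sketches above.
services = ['ws1', 'ws2', 'ws3']
tasks_of_request = ['book_flight', 'book_hotel']

# Step 1 (build/train the HMM, lines 2-11): estimate VisitRate / SelRate from a log.
log = [[('book_flight', 'ws1'), ('book_hotel', 'ws3')],
       [('book_flight', 'ws2'), ('book_hotel', 'ws3')]]
visit_rate, sel_rate = estimate_hmm_parameters(log, services, tasks_of_request)

# Step 2 (lines 12-17): per-task fitness of every candidate from QoS values and user weights.
qos = {'ws1': [120, 0.99], 'ws2': [80, 0.95], 'ws3': [60, 0.90]}
per_task_fitness = {t: fitness(qos, weights=[0.5, 0.5], directions=['inverse', 'direct'])
                    for t in tasks_of_request}

# Step 3 (lines 18-20): Viterbi over services with fitness-adjusted emissions.
pi = {s: 1.0 / len(services) for s in services}
solution = select_services(tasks_of_request, services, visit_rate, sel_rate,
                           pi, per_task_fitness)
print(solution)   # the selected web service for each task, in order
```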
5 Experimental Results
5.1 Hypothesis
Prior to defining the experimental hypotheses, we utilized the “Goal/Question/Metric” (GQM) template \(^{[34]}\) to explicitly define the experimentation goal \( G1 \), together with its corresponding evaluation questions and metrics, as follows.
**Goal G1:** “To analyse the efficiency of the proposed method for the purpose of selecting the most appropriate services among all candidate services based on user’s preferences”.
**Question Q1:** Does the method show any improvement in selecting the services that are mostly aligned with user’s preferences and improves user satisfaction?
- **Metric M1.1: Fitness.** Fitness shows how much the output model satisfies the user’s preferences. Fitness has a direct relationship with user satisfaction.
**Question Q2:** Does the method show any improvement in scalability?
- **Metric M2.1: Execution time.** Execution time is the length of time taken to select services and produce a composition model in a common business process language like WS-BPEL. As shown in Figure 4, we measure how the execution time changes as the number of tasks increases from 10 to 100 and as the number of candidate services increases from 5 to 50.
As shown in Table 2, we evaluated the efficiency and scalability of the proposed method for QoS-based service selection. Particularly, we aimed at evaluating the fitness and the execution time of the method.
This work was compared with GSA-based [23] (i.e., our previous work) and PSO-based service selection methods. We did not consider the genetic algorithm (GA) in our comparison, since the PSO algorithm is better at finding an optimized selection with higher fitness than the genetic algorithm [35]. The measurements were conducted on an Intel Celeron CPU 2.2 GHz PC with 1 GB of RAM running Ubuntu 12.04 LTS and JDK 1.7.0-17. As shown in Figure 4, the number of tasks changes from 10 to 100, and the number of candidate services changes from 5 to 50, depending on the experiment.
We used the web service QoS dataset released by Al-Masri et al. [36] to evaluate the service selection methods in performing users’ requests with different preferences. This dataset includes 5,000 web services with measurements of their quality of service. The QoS parameters used in our experiments are listed below:
- Response time (ms): Time taken to send a request and receive a response
- Availability (%): The ratio of the number of successful invocations to the number of total invocations
- Reliability (%): The ratio of the number of error messages to the number of total messages
The quality parameters are classified into three levels: Bronze (qos_weight: 0.2), Silver (qos_weight: 0.3) and Gold (qos_weight: 0.5). In our experiments, we generated the weights of the quality parameters randomly.
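For illustration, one simple way to draw such random weights so that they satisfy the constraints of Eq. (23) (non-negative and summing to 1) is to normalize uniform draws; the paper does not specify the exact sampling scheme, so this is only an assumption.

```python
import random

def random_qos_weights(n_params, seed=None):
    """Draw random user-preference weights W_k with 0 <= W_k <= 1 and sum 1,
    as required by Eq. (23).  Simple normalization of uniform draws."""
    rng = random.Random(seed)
    raw = [rng.random() for _ in range(n_params)]
    total = sum(raw)
    return [w / total for w in raw]

print(random_qos_weights(3, seed=42))  # e.g. weights for response time, availability, reliability
```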
5.2 User Satisfaction
Since heuristic algorithms (e.g., GSA, PSO, GA) depend on the initial population and the number of iterations, we measured the fitness value in the following scenarios:
A) Fitness changes, with increasing the number of iterations from 10 to 100
The first scenario was performed 10 times with different user’s preferences. We considered 10 tasks and 5 candidate services for each task. As shown in Figure 5, HMM produced the most optimized sequence of web services, resulting in the highest fitness value in each experiment. Since the Viterbi algorithm is a dynamic programming technique, its fitness value does not depend on the number of iterations. GSA moves much more quickly towards the convergence point (i.e., finding the fitter composite web service) than the PSO algorithm.
B) Fitness changes, with increasing the number of candidate services from 5 to 50
In the second scenario, we measured the changes of the fitness value as the number of candidate services increases. The results are shown in Figure 6. The number of candidate services increases the population of candidate solutions and affects the final result. This scenario was also measured in 10 experiments with different user’s preferences.
Table 2. The GQM Metrics for Evaluation.
<table>
<thead>
<tr>
<th>Goal</th>
<th>Question</th>
<th>Metric</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>G1</td>
<td>Q1: User satisfaction</td>
<td>M1.1: Fitness</td>
<td>Fitness changes, with increasing the number of iterations from 10 to 100. (No. of tasks=10; No. of candidate services=5)</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Fitness changes, with increasing the number of candidate services from 5 to 50. (No. of tasks=10; Iterations=100)</td>
</tr>
<tr>
<td></td>
<td>Q2: Scalability</td>
<td>M2.1: Execution time</td>
<td>Execution time changes, with increasing the number of tasks from 10 to 100. (Iterations=100; No. of candidate services=5)</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Execution time changes, with increasing the number of candidate services from 5 to 50. (No. of tasks=10; Iterations=100)</td>
</tr>
</tbody>
</table>
Figure 6. Fitness Changes, With Increasing the Number of Candidate Services (C.S.). (No. of Tasks=10; Iterations=100; No. of Candidate Services: From 5 to 50).
In each experiment, the GSA algorithm and the PSO algorithm were performed for 100 iterations. Although increasing the number of candidate services improves the fitness of both the GSA and PSO algorithms, HMM is still more effective in selecting web services and producing composite models that are more aligned with the user’s preferences.
5.3 Scalability
Figure 7 shows the changes of execution time as the number of tasks increases. In this experiment we considered 5 candidate services for each task. As shown in Figure 7, when a given process has 100 tasks, there is a negligible gap of 0.15 seconds between performing HMM and performing the least time-consuming heuristic algorithm, i.e., PSO. Furthermore, most business processes have fewer than 100 tasks. Therefore we can claim that our proposed method remains applicable.
Figure 8 shows the changes of execution time as the number of candidate services increases. In each experiment, the GSA algorithm and the PSO algorithm were performed for 100 iterations.
In our proposed method, web services are modelled as hidden states and tasks as observations. So, for each task, the candidate services are the hidden states that have an emission probability for the target task. Since the time complexity of the Viterbi algorithm grows with the square of the number of hidden states, our proposed method is most efficient when there are fewer than about 20 candidate services for each task (see Figure 8).
In GSA and PSO algorithms, the number of candidate services increases the population of candidate solutions and affects the execution time. As shown in Figure 8, these types of algorithms are able to consider hundreds of candidate services in performing user's requests.
This section can be concluded as follows. Our method achieves the maximum fitness in each experiment. Although our method is somewhat more time-consuming than the heuristic methods (e.g., GA, PSO, and GSA), it selects the most appropriate services in a reasonable time even when the number of web services increases. The comparison of HMM with heuristic algorithms in the service selection field shows that HMM is a useful method which overcomes the shortcomings of heuristic algorithms, such as lower fitness, while remaining reasonably fast.
6 Conclusion
In this paper, we applied the Hidden Markov Model to the QoS-based service selection problem. We presented the method in the following steps: Modelling, Learning, and QoS-based Selection. In the modelling step, the HMM definition of the service selection problem was described. The output HMM from the modelling step is initialized in the learning step by supervised or unsupervised learning methods. The Viterbi algorithm is used in the QoS-based selection step to find the most appropriate services in a reasonable time.
We compared this work with the GSA-based service selection method and the PSO-based service selection method. Our method achieves the maximum fitness in each experiment. Although our method is somewhat more time-consuming than the heuristic methods (e.g., GA, PSO, and GSA), it selects the most appropriate services in a reasonable time even when the number of web services increases.
In future work, we will use unsupervised approaches like Maximum Likelihood Estimation or Viterbi Training to cope with continuous modifications of the HMM, including changes to the available web services, the existing tasks, the transition probabilities, and the output probabilities.
References
[31] L. R. Rabiner and B.-H. Juang. An introduction...
Yousef Rastegari received his PhD from the Department of Computer Engineering and Science, Shahid Beheshti University. He is a member of two research groups, namely ASER (Automated Software Engineering Research) (aser.sbu.ac.ir) and ISA (Information Systems Architecture) (isa.sbu.ac.ir).
Afshin Salajegheh received his BS from Tehran University, and his MS in Artificial Intelligence and PhD in Software Engineering from the Islamic Azad University, Science and Research Branch. He has been a faculty member in software engineering and computer science at the Islamic Azad University, South Tehran Branch, since 1998. His major interests are software engineering, software architecture, data mining, and databases. He has also worked as a senior IT consultant, system analyst and designer, programmer, project manager, and data scientist for more than 23 years.
TIGHT LOWER AND UPPER BOUNDS
FOR SOME DISTRIBUTED ALGORITHMS
FOR A COMPLETE NETWORK OF PROCESSORS
by
E. Korach*, S. Moran and S. Zaks**
Technical Report #298
November 1983
* IBM-Scientific Center, Technion City, Haifa.
** Computer Science Dept., Technion-IIT, Haifa.
Tight Lower and Upper Bounds for Some Distributed Algorithms for a Complete Network of Processors
E. Korach
IBM Scientific Center
Technion City
Haifa, Israel 32000
S. Moran and S. Zaks
Computer Science Department
Technion - Israel Institute of Technology
Haifa, Israel 32000
ABSTRACT
The main result is $O(n \log n)$ lower and upper bounds for a class of distributed algorithms for a complete network of processors. This class includes algorithms for problems like finding a leader or constructing a spanning tree. This shows, in fact, that finding a spanning tree in a complete network is easier than finding a minimum weight spanning tree in such a network, which may require $O(n^2)$ messages. $O(n^2)$ bounds for other problems, like constructing a maximum matching or a Hamiltonian circuit, are also given. In the upper bounds, the length of any message is at most $\log_2(4m \log_2 n)$ bits, where $m$ is the maximum identity of a node in the network.
1. INTRODUCTION
The model under investigation is a network of \( n \) processors with distinct identities \( \text{identity}(1), \text{identity}(2), \ldots, \text{identity}(n) \). No processor knows any other processor's identity. Each processor has some communication lines, connecting him to some others. The processor knows the lines connected to himself, but not the identities of his neighbors. The communication is done by sending messages along the communication lines. The processors all perform the same algorithm, that includes operations of (1) sending a message to a neighbor, (2) receiving a message from a neighbor and (3) processing information in their (local) memory.
We assume that the messages arrive, with no error, in a finite time, and are kept in order in a list until processed (this list is not always treated as a queue). We also assume that any non-empty set of processors may start the algorithm; a processor that is not a starter remains asleep until a message reaches him.
The communication network is viewed as an undirected graph \( G = (V, E) \) with \( |V| = n \), and we assume that the graph \( G \) is connected. We refer to algorithms for a given network as algorithms acting on the underlying graph.
Working within this model, when no processor knows the value of \( n \), a spanning tree is found in [4] in \( O(n \log n + |E|) \) messages for a general graph. A leader in a network is found in [3], where \( n \) is known to every processor, in an expected number of messages which is \( O(n \log n) \) (independent of \( |E| \)), and the worst case is not analyzed (but is said to be \( O(n |E|) \)).
\( O(n \log n) \) lower and upper bounds for the problem of distributively finding a leader in a circular network of processors are known; see [1,7] for the lower bound and [2,4,5,8] for the upper bound.
We address two classes of algorithms for complete graphs: algorithms in the first class must use the edges of a connected spanning subgraph, and algorithms in the second class must use the edges of a maximum matching, in every possible execution. The problems of choosing a leader, finding a maximum and constructing a spanning tree clearly require algorithms that belong to the first class, while finding a complete matching or constructing a
Hamiltonian cycle clearly require algorithms that belong to the second class.
We prove a lower bound of $O(n \log n)$ for the number of edges (hence messages) used by any algorithm in the first class and a lower bound of $O(n^2)$ edges for the second class. An algorithm of $O(n^2)$ messages can easily be designed for the second class.
Next we present an algorithm that attains the bound of $O(n \log n)$ messages for the problem of choosing a leader in a complete graph. This algorithm can be used for optimally solving (up to a constant factor) other problems in this class, among which are the problems of finding the maximum (minimum) identity and constructing a spanning tree. The correctness of the algorithm is proved and its complexity is analyzed. This algorithm together with the lower bound of $O(n^2)$ for finding a minimum weight spanning tree presented in [8] show that in complete networks it is easier to find a spanning tree than to find a minimum weight spanning tree.
Our algorithms heavily use the fact that the underlying graph is complete, which enables us to use, in the worst case, a number of messages that is much smaller than the number of edges ($O(n \log n)$ vs. $O(n^2)$). This property is not shared by the algorithms discussed in [1] - [8]; in fact, we show that $|E| - 1$ messages are required for similar algorithms on a certain class of "almost complete" graphs, in which the ratio between the number of edges and $\binom{n}{2}$ tends to one as $n$ tends to infinity. This implies that almost $|E|$ messages may be required by any such algorithm, even when the underlying graph is known to be extremely dense (but not necessarily complete).
2. LOWER BOUNDS
2.1. Definitions and Axioms
In this section we study lower bounds for global algorithms and for matching algorithms (to be defined later). We first need some definitions.
Let $A$ be a distributed algorithm acting on a graph $G = (V,E)$. An execution of $A$ consists of events, each being either sending a message, receiving a message or doing some local computation. Without loss of generality, we may assume that during every execution no two messages are sent in exactly the same time. Therefore, with each execution we can associate a sequence $SEND = \langle send_1, send_2, \ldots, send_t \rangle$ that includes all the events of the first type in their order of occurrence (if there are no such events then $SEND$ is the empty sequence). Each event $send_i$ we identify with the pair $(v(send_i), e(send_i))$, where $v(send_i)$ is the node sending the message and $e(send_i)$ is the edge used by it.
Let $SEND(t)$ be the prefix of length $t$ of the sequence $SEND$, namely $SEND(t) = \langle send_1, \ldots, send_t \rangle$ ($SEND(0)$ is the empty sequence). If $t < t'$ then we say that $SEND(t')$ is an extension of $SEND(t)$, and we denote $SEND(t) < SEND(t')$. $SEND$ is called a completion of $SEND(t)$. Note that a completion of a sequence is not necessarily unique.
Let $NEW = NEW(SEND)$ be the subsequence $\langle new_1, new_2, \ldots, new_r \rangle$ of the sequence $SEND$ that consists of all the events in $SEND$ that use previously unused edges. (An edge is used if a message has already been sent along it from either side.) This means that the message $send_i = (v(send_i), e(send_i))$ belongs to $NEW$ if and only if $e(send_i) \neq e(send_j)$ for all $j < i$. $NEW(t)$ denotes the prefix of size $t$ of the sequence $NEW$.
Define the graph $G(NEW(t)) = (V,E(NEW(t)))$, where $E(NEW(t))$ is the set of edges used in $NEW(t)$, and call it the graph induced by the sequence $NEW(t)$. If for every execution of the algorithm $A$ the corresponding graph $G(NEW)$ is connected then we term this algorithm global. Note that all the graphs $G(NEW)$ above have a fixed set $V$ of vertices (some of which may be isolated).
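As an illustration of these definitions (not part of the original report), the following sketch extracts the subsequence $NEW$ from a given $SEND$ sequence and builds the induced graph $G(NEW)$; representing an event as a (node, edge) pair with the edge given as an unordered set of endpoints is an assumption.

```python
def new_subsequence(send):
    """Extract NEW(SEND): the events of SEND that use a previously unused edge.
    Each event is a pair (node, edge), the edge being a frozenset of its two
    endpoints, so direction does not matter."""
    used, new = set(), []
    for node, edge in send:
        if edge not in used:
            used.add(edge)
            new.append((node, edge))
    return new

def induced_graph(vertices, new):
    """Return G(NEW) = (V, E(NEW)) as an adjacency dictionary; vertices that
    never occur in NEW stay isolated."""
    adj = {v: set() for v in vertices}
    for _, edge in new:
        u, w = tuple(edge)
        adj[u].add(w)
        adj[w].add(u)
    return adj

send = [(1, frozenset({1, 2})), (2, frozenset({1, 2})), (2, frozenset({2, 3}))]
print(new_subsequence(send))          # the second event reuses edge {1,2}, so it is dropped
print(induced_graph([1, 2, 3, 4], new_subsequence(send)))   # vertex 4 stays isolated
```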
The edge complexity $e(A)$ of an algorithm $A$ acting on a graph $G$ is the maximal length of a sequence $NEW$ over all executions of $A$.
The message complexity $m(A)$ of an algorithm $A$ acting on a graph $G$ is the maximal length of a sequence $SEND$ over all executions of $A$. Clearly $m(A) \geq e(A)$.
For each algorithm $A$ and graph $G$ we define the exhaustive set of $A$ with respect to $G$, denoted by $EX(A,G)$ (or $EX(A)$ when $G$ is clear from the context), as the set of all the sequences $NEW(t)$ corresponding to possible executions of $A$.
By the properties of distributed algorithms the following facts - defined below as axioms - hold for every algorithm $A$ and every graph $G$:
**axiom 1:** the empty sequence is in $EX(A,G)$.
**axiom 2:** if two sequences $NEW_1$ and $NEW_2$ which do not interfere with each other, are in $EX(A,G)$, then so is also their concatenation $NEW_1 \circ NEW_2$. ($NEW_1$ and $NEW_2$ do not interfere if no two edges $e_1$ and $e_2$ that occur in $NEW_1$ and $NEW_2$ respectively have a common end point; this means that the corresponding partial executions of $A$ do not affect each other and can, in fact, be merged in any specified order.)
**axiom 3:** if $NEW(t)$ is a sequence in $EX(A,G)$ with a last element $(v,e)$, and if $e'$ is an unused edge adjacent to $v$, then the sequence obtained from $NEW(t)$ by replacing $e$ by $e'$ is also in $EX(A,G)$. (This reflects the fact that a node cannot distinguish between his unused edges.)
Note that these three facts do not imply that $EX(A,G)$ contains any non-empty sequence. However, if the algorithm $A$ is global then the following fact holds as well:
**axiom 4:** if $NEW(t)$ is in $EX(A,G)$ and $C$ is a proper subgraph of $G(NEW(t))$ which is a union of some connected components, then there is an extension of $NEW(t)$ in which the first new message $(v,e)$ satisfies $v \in C$. (This reflects the fact that some unused edge will eventually carry a message and that arbitrarily long delays can be imposed on the nodes not in $C$.)
(*) These axioms reflect only some properties of distributed algorithms which are needed here.
2.2. Lower Bound for Global Algorithms
The following lemma is needed in the sequel:
**Lemma 1:** Let $A$ be a global algorithm acting on a complete graph $G=(V,E)$, and let $U \subseteq V$. Then there exists a sequence of messages NEW in $EX(A,G)$ such that $G(NEW)$ has one connected component whose set of vertices is $U$ and the vertices in $V-U$ are isolated.
**Proof:** A desired sequence NEW can be constructed in the following way. Start with the empty sequence (using *axiom 1*). Then add a message along a new edge that starts in a vertex in $U$ (*axiom 4*) and that does not leave $U$ (*axiom 3* and the completeness of $G$): This is repeated until a graph having the desired properties is eventually constructed.
**Theorem 1:** Let $A$ be a global algorithm acting on a complete graph $G$ with $n$ nodes. Then the edge complexity $e(A)$ of $A$ is at least $O(n \log n)$.
**Proof:** For a subset $U$ of $V$ we define $e(U)$ to be the maximal length of a sequence NEW in $EX(A,G)$ which induces a graph that has a connected component whose set of vertices is $U$ and isolated vertices otherwise (such a sequence exists by lemma 1). Define $e(k), 1 \leq k \leq n$, by
$$e(k) = \min \{ e(U) \mid U \subseteq V, |U| = k \}$$
Note that $e(n)$ is the edge complexity of the algorithm $A$.
The Theorem will follow from the inequality
$$e(2k+1) \geq 2e(k) + k + 1 \quad (k < \frac{n}{2})$$
Let $U$ be a disjoint union of $U_1, U_2$ and $\{v\}$, such that $|U_1| = |U_2| = k$, and $e(U) = e(2k+1)$. We denote $C = U_1 \cup U_2$.
Let $NEW_1$ and $NEW_2$ be sequences in $EX(A,G)$ of lengths $e(U_1)$ and $e(U_2)$, inducing subgraphs $G_1$ and $G_2$ that have one connected component with vertex sets $U_1$ and $U_2$, respectively (and all other vertices isolated).$^1$ These two sequences do not interfere with each other, and therefore by axiom 2 their concatenation $NEW_1 \circ NEW_2$ is also in $EX(A, G)$. The proper subgraph $C$ of $G(NEW_1 \circ NEW_2)$ satisfies the assumptions of axiom 4. Note that each node in $C$ has at least $k$ adjacent unused edges within $C$. By axiom 4 there is an extension of $NEW_1 \circ NEW_2$ by a message $(u, e)$, where $u \in C$. By axiom 3 we may choose the edge $e$ to connect two vertices in $C$. This process can be repeated until at least one vertex in $C$ saturates all its edges to other vertices in $C$. This requires at least $k$ messages along previously unused edges. One more application of axiom 4 and axiom 3 results in a message from some node in $C$ to the vertex $v$. The resulting sequence $NEW$ induces a graph that contains one connected component on the set of vertices $U$ and isolated vertices otherwise. Thus we have
\[ e(2k+1) = e(U) \geq e(U_1) + e(U_2) + k + 1 \geq 2e(k) + k + 1. \]
The above inequality implies that for \( n = 2^t - 1 \) and the initial condition \( e(1) = 0 \) we have
\[ e(n) \geq \frac{n+1}{2} \log \left( \frac{n+1}{2} \right). \]
This implies the Theorem.
Q.E.D.

---

$^1$ In general, one expects $e(k) = e(U)$ for any subset $U$ of $k$ vertices. However, the reader may construct simple algorithms for which $e(U_1) \neq e(U_2)$ for two distinct subsets $U_1$ and $U_2$ of equal cardinality. It is clear that such an algorithm must use the actual identities of the processors in the network.
From this Theorem it follows that
**Theorem 2:** Let A be a global algorithm acting on a complete graph G with n nodes. Then the message complexity \( m(A) \) of A is at least \( O(n \log n) \).
**Note 1:** The lower bounds in Theorems 1 and 2 hold even in the case when every node knows the identities of all other nodes (but cannot tell which edge leads to which node).
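As a quick numeric sanity check (not part of the original report), the recurrence $e(2k+1) \geq 2e(k) + k + 1$ with $e(1) = 0$, used in the proof of Theorem 1, can be unfolded for $n = 2^t - 1$ and compared against the closed form $\frac{n+1}{2} \log_2\left(\frac{n+1}{2}\right)$:

```python
import math

def e_lower_bound(n):
    """Unfold the recurrence e(2k+1) >= 2 e(k) + k + 1 with e(1) = 0,
    for n of the form 2^t - 1 (so that n = 2k+1 keeps k of the same form)."""
    if n == 1:
        return 0
    k = (n - 1) // 2
    return 2 * e_lower_bound(k) + k + 1

for t in range(1, 8):
    n = 2 ** t - 1
    closed_form = (n + 1) / 2 * math.log2((n + 1) / 2)
    print(n, e_lower_bound(n), closed_form)   # the two columns agree exactly
```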
**Note 2:** In the example constructed in the proof of Theorem 1 the number of processors which initialize the algorithm is \( O(n) \) (it equals \( \frac{n+1}{2} \) for \( n = 2^t - 1 \)). In
2.3. Lower Bounds for Matching-Type Algorithms
The above theorems imply that algorithms for tasks like constructing a spanning tree, finding the maximum identity, finding a leader, constructing a Hamiltonian path or constructing a maximum matching* have a lower bound of $O(n \log n)$ edges (and messages); however, for the last two cases we show an even stronger result. Let a matching-type algorithm be an algorithm that is guaranteed to cover a maximum matching (that is, to induce a graph which contains a matching of size $\left\lfloor \frac{n}{2} \right\rfloor$, where $\lfloor z \rfloor$ is the largest integer not larger than $z$).
**Theorem 3:** Let $A$ be a matching-type algorithm acting on a complete graph $G$ with $n$ nodes. Then the edge complexity $e(A)$ of $A$ is at least $O(n^2)$.
**Proof:** Let $A$ be a matching-type algorithm. We construct a sequence in $EX(A, G)$ of length $O(n^2)$. Arbitrarily number the vertices from 1 to $n$. We construct the sequence $NEW$ in the following manner:
Let $NEW_0$ be the empty sequence. For $i \geq 0$ if $G(NEW_i)$ does not contain a maximum matching, then $NEW_{i+1}$ is an extension of $NEW_i$ by a message $(v, e)$ where $e = (v, j)$ is chosen with smallest possible $j$ (we use here axiom 1, axiom 3 and the appropriate variant of axiom 4 for matching-type algorithms).
Eventually we construct in this way a sequence $NEW$ in $EX(A, G)$ whose induced graph does contain a maximum matching. Let this matching be $\{(u_i, v_i) \mid u_i < v_i,\ 1 \leq i \leq \lfloor \frac{n}{2} \rfloor\}$, where we may assume $u_1 < u_2 < \cdots < u_{\lfloor n/2 \rfloor}$.
Let $n_i$ be the number of messages in $NEW$ which use an edge that connects $u_i$ or $v_i$ to some $j < u_i$. By the construction of $NEW$, $n_i \geq u_i - 1 \geq i - 1$. Thus the length of $NEW$ is greater than
\[ 0 + 1 + \cdots + \left( \left\lfloor \tfrac{n}{2} \right\rfloor - 1 \right) = \frac{n^2}{8} + O(n). \]
(Note that we did not count the edges \((u_i, v_i)\) of the matching). This completes the proof of Theorem 3.
Q.E.D.

---

*It is not hard to see that an algorithm that is guaranteed to construct a maximum matching must be global for complete graphs of $n$ vertices for even $n$, and to induce connected graphs of at least $n - 1$ vertices for odd $n$.
From this Theorem it follows that
**Theorem 4:** Let \( A \) be a matching-type algorithm acting on a complete graph \( G \) with \( n \) nodes. Then the message complexity \( m(A) \) of \( A \) is at least \( O(n^2) \).
Note that Theorems 3 and 4 are independent of the number of initiators, which is not the case for Theorems 1 and 2.
In [4] it was noted that global algorithms in general graphs require $|E|$ messages when the number of vertices is unknown. We conclude this section by observing that even when the numbers of nodes and edges are known - and in fact the graph is almost complete and known up to isomorphism - then $|E|-1$ messages may be required in the worst case. To see this, consider a complete graph of $n$ nodes in which a new vertex $v$ is added on some unknown edge (the resulting graph has $n+1$ vertices and $\binom{n}{2}+1$ edges). Apply the algorithm on such a graph with $v$ asleep, and as long as there are unused edges, assume that $v$ is on one of them. Thus $|E|-1$ edges must be used in order to wake the vertex $v$.
3. **UPPER BOUNDS**
3.1. **General Discussion**
We proved in the previous section a lower bound of $O(n^2)$ for the maximum matching problem. An algorithm of $O(n^2)$ messages for this problem can easily be designed (for example, let each node send messages to all his neighbors, and then form the matching by increasing order of identities, such that the node with
smallest identity matches the one with second smallest identity, etc.).
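A centralized sketch of this simple strategy (assuming identities are comparable integers and that the all-to-all message exchange has already taken place) is shown below; it only illustrates the pairing rule, not the distributed message flow.

```python
def match_by_identity(identities):
    """Centralized sketch of the simple matching strategy: after every node has
    heard from all neighbours (n*(n-1) messages in the distributed setting),
    nodes are paired in increasing order of identity: 1st with 2nd, 3rd with
    4th, and so on; with an odd number of nodes the largest stays unmatched."""
    ordered = sorted(identities)
    return [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered) - 1, 2)]

print(match_by_identity([17, 3, 42, 8, 25]))  # [(3, 8), (17, 25)]; 42 is unmatched
```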
We also proved in the previous section a lower bound of \( O(n \log n) \) for problems like finding a leader. We present now an algorithm of \( O(n \log n) \) messages for this task. This algorithm can be used to design global-algorithms of \( O(n \log n) \) messages for other problems (like constructing a spanning tree).
3.2. Informal Description of the Algorithm.
We now present and discuss an \( O(n \log n) \) distributed algorithm for choosing a leader in a complete network of processors.
Each node in the network has a state, that is either KING or CITIZEN. Initially every node is a king (i.e., state = KING), and, except for one, every node will eventually become a citizen (a citizen never becomes a king again). The algorithm starts by a WAKE message, received by any nonempty set of nodes.
During the algorithm, each king is a root of a directed tree which is his kingdom. All the other nodes of this tree are citizens of this kingdom, and each node knows his father and sons. Each node \( i \) also stores the identity \( k(i) \) and the phase \( \text{phase}(i) \) of his king, which are updated during the execution of the algorithm. \( \text{status}(i) = (\text{phase}(i), k(i)) \) is called the status of node \( i \). Before the algorithm starts \( k(i) = \text{identity}(i) \) and \( \text{phase}(i) = -1 \) for each \( i \).
A king is trying to increase his kingdom by sending messages towards other kings (possibly through their citizens), asking them to join, together with their kingdoms, his kingdom.
A citizen, upon receiving a message, can delay it, ignore it, or transfer it to (or from) his king along a tree edge, or an edge connecting it to another king (which was already used by that king).
When king \( i \) receives a message asking him to join the kingdom of king \( j \), he does so if \((\text{phase}(i), k(i)) < (\text{phase}(j), k(j))\) (lexicographically; namely, if either (a) \( \text{phase}(i) < \text{phase}(j) \) or (b) \( \text{phase}(i) = \text{phase}(j) \) and \( k(i) < k(j) \)).
The process of joining j's kingdom consists of two stages: first, king i sends a message to king j along the same path which transferred j's message to i, telling him he is willing to join his kingdom; during this stage the directions of the edges in this path are reversed. In the second stage, if \( \text{phase}(i) < \text{phase}(j) \) then king j announces to his new citizens that he is their new king, and if \( \text{phase}(i) = \text{phase}(j) \) then he first increases his phase by 1 and then sends an appropriate updating message towards all his citizens (new and old).
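The join decision and the phase update can be summarized by the following small sketch (an illustration only; the tuple representation of a status is an assumption):

```python
def should_join(own_status, asking_status):
    """A king with status (phase, identity) joins the asking king's kingdom
    exactly when his own status is lexicographically smaller (case (a) or (b)
    in the text)."""
    return own_status < asking_status   # Python compares tuples lexicographically

def phase_after_accept(own_phase, joining_phase):
    """Phase of the annexing king after receiving an ACCEPT: unchanged if the
    joining king had a strictly smaller phase, incremented by one if the two
    phases were equal."""
    return own_phase if joining_phase < own_phase else own_phase + 1

print(should_join((0, 5), (0, 9)))        # True: same phase, smaller identity
print(should_join((2, 5), (1, 9)))        # False: the higher phase wins
print(phase_after_accept(1, 1))           # 2: equal phases force a phase increase
```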
### 3.3. The Messages used by the Algorithm
Six kinds of messages are used in this algorithm:
1. \( \text{WAKE} \): this message, from some outside source, wakes a node and makes him start his algorithm. At most one such message can reach any node.
2. \( \text{ASK}(\text{phase}(i), k(i)) \): this message is sent by king i through an unused edge in an attempt to increase his kingdom, and might be transferred onwards by citizens. Each \( \text{ASK} \) message has a status, which is the status \( (\text{phase}(i), k(i)) \) of the king that originated it (in the time it was originated).
3. \( \text{ACCEPT}(\text{phase}(j)) \): this message is sent by king j in return to an \( \text{ASK} \) message from another king, telling him that he is willing to join his kingdom. (this message also might be transferred onwards by citizens.)
4. \( \text{UPDATE}(\text{phase}(i), k(i)) \): this message is sent by king i (after receiving an \( \text{ACCEPT} \) message from another king) updating his new (and in some cases also his old) citizens of his identity and phase.
5. \( \text{YOUR-CITIZEN} \): this message is returned by a citizen upon receiving an \( \text{ASK} \) message originated by his own king.
6. \( \text{LEADER} \): this message is sent by the leader to all other nodes, announcing his leadership and terminating the algorithm.
### 3.4. The Algorithm for a King
We now give the formal description of the algorithms to be performed by node i (as long as he is a king).
unused(i) denotes the set of all his unused edges, and initially contains all his n−1 adjacent edges. father_edge(i) denotes the edge connecting i to his father. sons(i) denotes the set of edges connecting i to his sons. receive(m) means that if the list of received messages is not empty, then m is the first message in it, and is taken out of the list (else the processor waits until he receives a message m). The algorithm for a king follows.
The Algorithm for a King
begin
phase(i) := −1; state(i) := king;
unused(i) := set of all adjacent edges; sons(i) := ∅;
receive(m); [m will be either WAKE or ASK]
if m = ASK(phase(j),k(j))
then state := CITIZEN
else phase(i) := 0;
while (unused(i) ≠ ∅ and state = KING) do
begin
choose e ∈ unused(i);
send an ASK message along e;
unused(i) := unused(i)−{e};
label: receive(m);
[m will be one of the following:
YOUR_CITIZEN, ACCEPT, ASK]
case m of
YOUR_CITIZEN :
[do nothing and enter the while loop again]
ACCEPT(phase(j)) :
[let e be the edge that delivered this message]
sons(i) := sons(i)∪{e}
if phase(i) > phase(j)
then
send $UPDATE(\text{phase}(i), k(i))$ along $e$;
else [i.e., $\text{phase}(i) = \text{phase}(j)$]
begin
$\text{phase}(i) := \text{phase}(i) + 1$;
send $UPDATE(\text{phase}(i), k(i))$ to all your sons [new and old]
end;
$\text{ASK}(\text{phase}(j), k(j))$:
if $(\text{phase}(i), k(i)) > (\text{phase}(j), k(j))$
then goto label
else $\text{state} := \text{CITIZEN}$
[Sorry, you are no more a king!]
end [of the case statement]
end; [of the while loop]
[now $\text{state} = \text{CITIZEN}$ or $\text{unused}(i) = \emptyset$]
if $\text{state} = \text{CITIZEN}$
then perform the procedure for a citizen
else send a message LEADER to all other nodes.
[Congratulations; you are the (only) leader!]
end.
3.5. The Algorithm for a Citizen
The algorithm for a citizen is basically simple, since the only task of a citizen is passing messages to, or from his king. However, doing it in the straightforward way might use $O(n^2)$ messages. We incorporate some control mechanism into the algorithm, and reduce the number of messages to $O(n \log n)$. Beside the procedures and variables used by the algorithm for a king, we also use here the function $\text{search}(x)$ that fetches the first message of type $x$ from the list and takes it
out of it (or waits for such a message otherwise). It can be used with several arguments; e.g., \textit{search} (\textit{ASK}, \textit{UPDATE}) will fetch the first message that is either an \textit{ASK} or an \textit{UPDATE} message (or will wait for such a message otherwise).
After receiving an \textit{ASK} message which he forwards towards his king (i.e., of status higher than his), a citizen $i$ has to remember - besides his status $(\textit{phase}(i), k(i))$ - the status of this \textit{ASK} message, which must be greater than $i$'s status. At this stage, $i$ waits for an \textit{UPDATE} or \textit{ACCEPT} message and he does not process any other \textit{ASK} message. The processing of the received messages is done by the procedure \textit{process\_ask}, as follows:
On receiving an \textit{UPDATE}$(\textit{phase}(j), k(j))$ message (from his father), node $i$
1. updates his own status,
2. sends this message to all his sons,
3. compares his (new) status $a$ with the status $b$ of the last \textit{ASK} message he had passed, and performs the following:
3.1 if $a < b$ he continues to wait for a response \textit{ACCEPT} for the \textit{ASK} message,
3.2 if $a = b$, and the \textit{ASK} message was not received along a tree edge (i.e., it was received directly from the sending king), he returns through this edge a \textit{YOUR\_CITIZEN} message and waits for a new \textit{ASK} message,
3.3 if $a > b$ he waits for a new \textit{ASK} message.
In 3.2 and 3.3 above $i$ discards the last \textit{ASK} message he had passed, and exits the procedure \textit{process\_ask}.
On receiving an \textit{ACCEPT} message from his father (in reply to the \textit{ASK} message), he delivers it back through the appropriate edge, exits the procedure \textit{process\_ask}, and then waits for the corresponding \textit{UPDATE} message (this part is done by the procedure \textit{process\_new\_accept}).
Note that a citizen may receive an \textit{ACCEPT} message along an edge which is not a tree edge (such a message must be a response to an \textit{ASK} message originated by this citizen in those good old days when he still was a king). In such a
case he adds this edge to his set of sons, and sends through it the last UPDATE message that he received. (this part is done by the procedure process_old_accept). The algorithm for a citizen follows.
The Algorithm for a Citizen.
procedure process_ask;
[you have just received a message m =
ASK(phase(j),k(j)) along edge e]
begin
if (phase(j),k(j)) > (phase(i),k(i))
then
begin
send m to your father;
while (phase(j),k(j)) > (phase(i),k(i)) do
begin
m1 := search (UPDATE,ACCEPT);
case m1 of
UPDATE:
begin
process_update;
if (phase(i),k(i)) = (phase(j),k(j)) and e
is not a tree edge
then send YOUR_CITIZEN along e
end;
ACCEPT:
[you have just received an ACCEPT(phase(j))
message along edge e']
end [of the case statement]
end [of the while loop]
end [of the if statement]
if \((\text{phase}(j), k(j)) = (\text{phase}(i), k(i))\) and \(e\) is not a tree edge
then send \text{YOUR\_CITIZEN} along \(e\);
[if \((\text{phase}(j), k(j)) < (\text{phase}(i), k(i))\) then the \text{ASK} message is ignored]
end [of process\_ask]
procedure process_old_accept;
[you have just received an ACCEPT(phase(j)) message along
edge e' which is not a tree edge]
begin
sons(i) := sons(i) ∪ {e'};
send UPDATE(phase(i),k(i)) along e'
end [of process_old_accept]
procedure process_new_accept;
[you have just received an ACCEPT(phase(j)) message along
edge e' which is your father_edge; this ACCEPT
must be a response to an ASK message you received along edge e]
begin
sons(i) := sons(i) ∪ {e'};
father_edge(i) := e;
m2 := search(UPDATE);
process_update;
end [of process_new_accept]
procedure process_update;
[you have just received an UPDATE(phase(j),k(j)) message along
edge e' which is your father_edge]
begin
phase(i) := phase(j); [phase(i) is increased by at least one]
k(i) := k(j);
send UPDATE(phase(i),k(i)) to all your sons
end [of process_update]
begin [of the main program for a citizen]
[you have just received an ASK(phase(j),k(j)) message
(which changed your status from king to citizen) along some edge e]
if e ∈ sons(i) then sons(i) := sons(i) - {e};
father_edge(i) := e;
send ACCEPT(phase(i)) along the edge e;
m := search(UPDATE);
[now m = UPDATE(phase(j),k(j)); m was sent along e]
phase(i) := phase(j); [phase(i) is increased by at least one]
k(i) := k(j);
send UPDATE(phase(i),k(i)) to all your sons;
repeat
receive(m);
[m, received along e', will be one of the following: ASK, UPDATE, ACCEPT, LEADER]
case m of
ASK(phase(j),k(j)): process_ask;
UPDATE(phase(j),k(j)): process_update;
ACCEPT(phase(j)):
if e' is not a tree edge
then process_old_accept
else process_new_accept;
end [of the case statement]
until m = LEADER;
end [of the main program for a citizen]
3.6. Correctness of the Algorithm
We prove in this section the correctness of the algorithm. The property which implies this correctness is given in the next Theorem.
Theorem 5: In any execution of the algorithm, eventually only one king remains.
Proof: Assume that the theorem is false. Since it is impossible to have no king, the following must hold:
\( (FA): \) In some execution of the algorithm, \( s > 1 \) nodes remain kings, forever. Denote them \( king_1, \ldots, king_s \), and suppose that \( status(king_i) < status(king_j) \) for \( i < j \).
The proof proceeds by three lemmas.
Lemma 2: Under the assumption \((FA)\), eventually every node in the network will have his status equal to \((x, king_i)\) for some \( 1 \leq i \leq s \).
Proof: Otherwise, some node $j$ has a different status $(\textit{phase}(j), k(j))$ forever. This must be a status that he received from some king $t$ that is now a citizen ($t$ may be equal to $j$). $t$'s status was changed when he became a citizen. At this point he sent an \textit{ACCEPT} message that eventually was answered by an \textit{UPDATE} message. This \textit{UPDATE} message contained a status with a new king, and eventually reached $j$. Clearly, $t$ never again became a king, which contradicts the assumption that $k(j) = t$.
Lemma 3: Under the assumption $(FA)$, if for some $1 \leq i \leq s$ $king_i$ is not asleep, then he eventually sends an \textit{ASK} message to a node not in his kingdom.
Proof: Suppose $king_i$ sends his first \textit{ASK} message at his final phase to a node $j$ in his kingdom. By Lemma 2, node $j$ eventually knows that his status is equal to $status(king_i)$ and will send him back a \textit{YOUR\_CITIZEN} message. $king_i$ will now send an \textit{ASK} message along some other unused edge. Since the underlying graph is complete, this process continues until an \textit{ASK} message is sent outside $king_i$'s kingdom.
Lemma 4: Suppose $king_s$ sends an ASK message to a node $a$ in $king_j$'s kingdom ($j < s$). Then $king_j$ will eventually become a citizen.
Proof: If $a = king_j$ then he will become a citizen of $king_s$ immediately after receiving the message. Otherwise, this ASK message either arrived at $king_j$ or was stopped somewhere on the way between $a$ and $king_j$ (it cannot be discarded, since its status is greater than that of $king_j$). In the second case some other ASK message of status higher than $status(king_j)$ was forwarded towards $king_j$ by the node that blocked $king_s$'s message. Applying this reasoning as long as needed, we conclude that an ASK message of a status higher than $status(king_j)$ will eventually reach $king_j$. At this point $king_j$ will become a citizen.
By Lemma 4 we get a contradiction to the assumption $(FA)$ that $s > 1$ kings remain kings forever, and this completes the proof of the Theorem.
Q.E.D.
Corollary: The unique remaining king eventually announces his leadership (and the algorithm stops).
Proof: By a proof similar to the one of Lemma 3, it can be shown that this king will eventually exhaust all his unused edges by sending \textit{ASK} messages and receiving \textit{YOUR\_CITIZEN} replies. When no unused edge remains, he will send the \textit{LEADER} message to all the nodes in the network, and each node, upon receiving this message, will stop his algorithm.
3.7. Complexity Analysis of the Algorithm
We conclude by giving a complexity analysis of the algorithm as follows.
Theorem 6: If \( k \) nodes start the algorithm by a \textit{WAKE} message, then the number of messages used by the algorithm is bounded by \( 5n\log_2 k + O(n) \).
We first need the following lemma:
Lemma 5: If \( k \) nodes start the algorithm by \textit{WAKE} messages and node \( i \) is the leader, then when the algorithm stops we have
\[
\text{phase}(i) \leq \left\lfloor \log_2 k \right\rfloor.
\]
Proof: Whenever a king at phase $t$ increases his phase, he annexes another king of phase $t$. Therefore, we have at most $\frac{k}{2}$ kings in phase 1, $\frac{k}{2^2}$ kings in phase 2, and in general at most $\frac{k}{2^i}$ kings in phase $i$, for every $1 \leq i \leq \left\lfloor \log_2 k \right\rfloor$. In particular, no king ever reaches a phase greater than $\left\lfloor \log_2 k \right\rfloor$, which proves the claimed bound on the leader's final phase.
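As an informal, executable illustration of this counting argument (our own sketch, not part of the paper), the following simulation lets kings annex one another at random and checks that the surviving king's phase never exceeds $\lfloor \log_2 k \rfloor$:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// A small sanity check of the phase bound (our own sketch, not part of the paper):
// kings annex each other at random; a king's phase grows only when he annexes a
// king of the same phase, so the surviving king's phase never exceeds floor(log2 k).
public class PhaseBoundCheck {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int k = 1; k <= 1000; k++) {
            List<Integer> phases = new ArrayList<>();
            for (int i = 0; i < k; i++) phases.add(0);       // k kings wake up at phase 0
            while (phases.size() > 1) {
                int a = rnd.nextInt(phases.size());
                int b = rnd.nextInt(phases.size());
                if (a == b) continue;
                int pa = phases.get(a), pb = phases.get(b);
                int winner = Math.max(pa, pb) + (pa == pb ? 1 : 0); // phase grows only on a tie
                phases.remove(Math.max(a, b));                // remove both merged kings
                phases.remove(Math.min(a, b));
                phases.add(winner);                           // and add the annexing king
            }
            int bound = 31 - Integer.numberOfLeadingZeros(k); // floor(log2 k), exactly
            if (phases.get(0) > bound)
                throw new AssertionError("phase bound violated for k = " + k);
        }
        System.out.println("phase(leader) <= floor(log2 k) held for k = 1..1000");
    }
}
```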
Proof of the Theorem: We give an upper bound for the number of messages of each kind.
1. \textit{WAKE}: exactly \( k \) messages.
2. \textit{LEADER}: exactly \( n-1 \) messages.
3. \textit{YOUR\_CITIZEN}: each node sends at most one such message - as a reply to an \textit{ASK} message - per phase. Therefore, the total number of such messages is bounded by $n\log_2 k$.
4. \textit{ASK}: at a given phase, a king with $m$ citizens can send at most $m+1$ such messages; therefore, all the kings in this phase send together at most $n$ messages, so the total number of such messages sent by kings is bounded by $n\log_2 k$. Every citizen transfers at most one \textit{ASK} message per phase, hence the total number of such messages sent by all citizens is also bounded by $n\log_2 k$.
5. \textit{ACCEPT}: the total number of such messages sent by kings (during the whole algorithm) is $k - 1$; the total number of such messages sent by all citizens is not larger than the total number of \textit{ASK} messages sent by all citizens, hence it is also bounded by $n \log_2 k$.
6. \textit{UPDATE}: each citizen receives at most one such message per phase, hence the total number of such messages is also bounded by $n \log_2 k$.
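Summing the per-type bounds of items 1-6 (and using $k \leq n$) makes the constant explicit:
\[
\underbrace{k}_{\textit{WAKE}} + \underbrace{(n-1)}_{\textit{LEADER}} + \underbrace{n\log_2 k}_{\textit{YOUR\_CITIZEN}} + \underbrace{2n\log_2 k}_{\textit{ASK}} + \underbrace{(k-1) + n\log_2 k}_{\textit{ACCEPT}} + \underbrace{n\log_2 k}_{\textit{UPDATE}} \;=\; 5n\log_2 k + 2k + n - 2.
\]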
To conclude, the total number of messages used by the algorithm does not exceed $5n \log_2 k + O(n)$.
Q.E.D.
Messages of type \textit{WAKE}, \textit{YOUR\_CITIZEN} and \textit{LEADER} require a constant number of bits each. A message of type \textit{ASK}, \textit{ACCEPT} or \textit{UPDATE} requires 2 bits for specifying its type, $\log_2 \log_2 n$ bits for the phase, and $\log_2 m$ bits for the identity (where $m$ is the largest identity). Therefore, the maximal number of bits per message is bounded by $\log_2[4m \log_2 n]$.
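The stated bound is simply the three contributions combined into a single logarithm:
\[
2 + \log_2\log_2 n + \log_2 m \;=\; \log_2 4 + \log_2\log_2 n + \log_2 m \;=\; \log_2\!\left(4\, m \log_2 n\right).
\]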
**Acknowledgement**: We would like to thank Doron Rotem for a discussion which initiated this research.
REFERENCES
|
{"Source-Url": "http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/1983/CS/CS0298.pdf", "len_cl100k_base": 9551, "olmocr-version": "0.1.49", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 62794, "total-output-tokens": 11229, "length": "2e13", "weborganizer": {"__label__adult": 0.0005650520324707031, "__label__art_design": 0.00041103363037109375, "__label__crime_law": 0.0006771087646484375, "__label__education_jobs": 0.0010919570922851562, "__label__entertainment": 0.0001652240753173828, "__label__fashion_beauty": 0.000240325927734375, "__label__finance_business": 0.0005102157592773438, "__label__food_dining": 0.0006833076477050781, "__label__games": 0.0014467239379882812, "__label__hardware": 0.003269195556640625, "__label__health": 0.0019102096557617188, "__label__history": 0.0005588531494140625, "__label__home_hobbies": 0.0001852512359619141, "__label__industrial": 0.0008711814880371094, "__label__literature": 0.0005102157592773438, "__label__politics": 0.0005102157592773438, "__label__religion": 0.0009679794311523438, "__label__science_tech": 0.2435302734375, "__label__social_life": 0.00012576580047607422, "__label__software": 0.006801605224609375, "__label__software_dev": 0.73291015625, "__label__sports_fitness": 0.0005450248718261719, "__label__transportation": 0.0012331008911132812, "__label__travel": 0.0003273487091064453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35927, 0.01297]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35927, 0.37037]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35927, 0.89481]], "google_gemma-3-12b-it_contains_pii": [[0, 271, false], [271, 1233, null], [1233, 3470, null], [3470, 5340, null], [5340, 7439, null], [7439, 9446, null], [9446, 11489, null], [11489, 13219, null], [13219, 15151, null], [15151, 16846, null], [16846, 18959, null], [18959, 21031, null], [21031, 22097, null], [22097, 23349, null], [23349, 25513, null], [25513, 26286, null], [26286, 27474, null], [27474, 28654, null], [28654, 30518, null], [30518, 32086, null], [32086, 33883, null], [33883, 34665, null], [34665, 35927, null]], "google_gemma-3-12b-it_is_public_document": [[0, 271, true], [271, 1233, null], [1233, 3470, null], [3470, 5340, null], [5340, 7439, null], [7439, 9446, null], [9446, 11489, null], [11489, 13219, null], [13219, 15151, null], [15151, 16846, null], [16846, 18959, null], [18959, 21031, null], [21031, 22097, null], [22097, 23349, null], [23349, 25513, null], [25513, 26286, null], [26286, 27474, null], [27474, 28654, null], [28654, 30518, null], [30518, 32086, null], [32086, 33883, null], [33883, 34665, null], [34665, 35927, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 
5000, false], [5000, 35927, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35927, null]], "pdf_page_numbers": [[0, 271, 1], [271, 1233, 2], [1233, 3470, 3], [3470, 5340, 4], [5340, 7439, 5], [7439, 9446, 6], [9446, 11489, 7], [11489, 13219, 8], [13219, 15151, 9], [15151, 16846, 10], [16846, 18959, 11], [18959, 21031, 12], [21031, 22097, 13], [22097, 23349, 14], [23349, 25513, 15], [25513, 26286, 16], [26286, 27474, 17], [27474, 28654, 18], [28654, 30518, 19], [30518, 32086, 20], [32086, 33883, 21], [33883, 34665, 22], [34665, 35927, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35927, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-27
|
2024-11-27
|
caea273bbf27b7dc0fbffeb397d2aad81661a449
|
[REMOVED]
|
{"Source-Url": "https://www.flyn.org/publications/2015-libtlssep.pdf", "len_cl100k_base": 10577, "olmocr-version": "0.1.50", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 53922, "total-output-tokens": 13542, "length": "2e13", "weborganizer": {"__label__adult": 0.0004019737243652344, "__label__art_design": 0.0003199577331542969, "__label__crime_law": 0.0011997222900390625, "__label__education_jobs": 0.0004122257232666016, "__label__entertainment": 7.593631744384766e-05, "__label__fashion_beauty": 0.00015652179718017578, "__label__finance_business": 0.00033354759216308594, "__label__food_dining": 0.00028204917907714844, "__label__games": 0.000690460205078125, "__label__hardware": 0.001422882080078125, "__label__health": 0.0004382133483886719, "__label__history": 0.000255584716796875, "__label__home_hobbies": 7.87973403930664e-05, "__label__industrial": 0.00046634674072265625, "__label__literature": 0.00021517276763916016, "__label__politics": 0.0004019737243652344, "__label__religion": 0.0003995895385742187, "__label__science_tech": 0.06298828125, "__label__social_life": 9.363889694213869e-05, "__label__software": 0.0174560546875, "__label__software_dev": 0.9111328125, "__label__sports_fitness": 0.0002624988555908203, "__label__transportation": 0.0005502700805664062, "__label__travel": 0.0001829862594604492}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 52734, 0.05689]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 52734, 0.15716]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 52734, 0.83042]], "google_gemma-3-12b-it_contains_pii": [[0, 2395, false], [2395, 5363, null], [5363, 8391, null], [8391, 10759, null], [10759, 13222, null], [13222, 16421, null], [16421, 19426, null], [19426, 20990, null], [20990, 23763, null], [23763, 26187, null], [26187, 27974, null], [27974, 30502, null], [30502, 31868, null], [31868, 35115, null], [35115, 38191, null], [38191, 41210, null], [41210, 43932, null], [43932, 46198, null], [46198, 49667, null], [49667, 52734, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2395, true], [2395, 5363, null], [5363, 8391, null], [8391, 10759, null], [10759, 13222, null], [13222, 16421, null], [16421, 19426, null], [19426, 20990, null], [20990, 23763, null], [23763, 26187, null], [26187, 27974, null], [27974, 30502, null], [30502, 31868, null], [31868, 35115, null], [35115, 38191, null], [38191, 41210, null], [41210, 43932, null], [43932, 46198, null], [46198, 49667, null], [49667, 52734, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 52734, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 52734, null]], "pdf_page_numbers": [[0, 2395, 
1], [2395, 5363, 2], [5363, 8391, 3], [8391, 10759, 4], [10759, 13222, 5], [13222, 16421, 6], [16421, 19426, 7], [19426, 20990, 8], [20990, 23763, 9], [23763, 26187, 10], [26187, 27974, 11], [27974, 30502, 12], [30502, 31868, 13], [31868, 35115, 14], [35115, 38191, 15], [38191, 41210, 16], [41210, 43932, 17], [43932, 46198, 18], [46198, 49667, 19], [49667, 52734, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 52734, 0.17439]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
9951c1f24add854e3e7c05b768f8d301c34ed368
|
Modular verification of higher-order methods with mandatory calls specified by model programs
Steve M. Shaner
Iowa State University
Recommended Citation
Shaner, Steve M., "Modular verification of higher-order methods with mandatory calls specified by model programs" (2008). Graduate Theses and Dissertations. 11193.
https://lib.dr.iastate.edu/etd/11193
Modular verification of
higher-order methods with mandatory calls
specified by model programs
by
Steve M. Shaner
A thesis submitted to the graduate faculty
in partial fulfillment of the requirements for the degree of
Master of Science
Major: Computer Science
Program of Study Committee:
Gary T. Leavens, Major Professor
Samik Basu
Leslie Hogben
Iowa State University
Ames, Iowa
2008
Copyright © Steve M. Shaner, 2008. All rights reserved.
DEDICATION
This thesis is dedicated to my wife Lisa, for tolerating and encouraging me through everything.
# TABLE OF CONTENTS
LIST OF FIGURES
ACKNOWLEDGMENTS
ABSTRACT
CHAPTER 1. OVERVIEW
  1.1 Introduction
  1.2 The Problem
  1.3 Our Solution
  1.4 Contributions & Outline
CHAPTER 2. RELATED WORK
  2.1 Solutions for Higher-order Methods
    2.1.1 Higher-order Logic
    2.1.2 Trace-based Semantics
    2.1.3 Contracts in Scheme
  2.2 Applications for Model Programs
    2.2.1 Monitoring Runtime Behavior
    2.2.2 Greybox Refinement
CHAPTER 3. SOLUTION APPROACH
  3.1 Verifying Implementations
  3.2 Client Reasoning
  3.3 Extracting Implicit Model Programs from Code
  3.4 Example Verifications
    3.4.1 Template Methods: Following a Recipe
    3.4.2 Chain of Responsibility: Testing Static Configurations
    3.4.3 Technical Limitations
CHAPTER 4. EXTENDING JML WITH MODEL PROGRAMS
  4.1 JML Background
  4.2 Our Extension
# LIST OF FIGURES
| Figure 1.1 | One possible ecology of software genres. | 1 |
| Figure 1.2 | A Java class with JML specifications. | 4 |
| Figure 1.3 | Specification of the Listener interface. | 5 |
| Figure 1.4 | Specification of the LastVal class. | 5 |
| Figure 1.5 | Java code that draws a strong conclusion about HOM call \texttt{bump}. | 5 |
| Figure 2.1 | Specification in the style of Ernst, \textit{et al.} \cite{9} for \texttt{bump}. | 7 |
| Figure 2.2 | Specification in the style of Soundarajan and Fridella \cite{22} for \texttt{bump}. | 8 |
| Figure 2.3 | Greybox model programs (bottom) synthesize blackbox (left) and whitebox (right) specification styles. | 9 |
| Figure 3.1 | Model program specifying the mandatory call to \texttt{actionPerformed}. | 10 |
| Figure 3.2 | Code matching the model program specification for Counter’s mandatory call. | 12 |
| Figure 3.3 | The result of substituting the model program’s body for the call \texttt{c.bump()} from Figure 1.5. | 13 |
| Figure 3.4 | Class CakeFactory with its template method \texttt{prepare}, and two hook methods. | 15 |
| Figure 3.5 | \texttt{prepare}'s extracted specification. | 15 |
| Figure 3.6 | Class StringyCake, a subclass of CakeFactory. | 16 |
| Figure 3.7 | Client code that calls \texttt{prepare}. | 16 |
| Figure 3.8 | Client code that calls \texttt{prepare}, after using the copy rule. | 17 |
| Figure 3.9 | The Mailer interface. | 18 |
| Figure 3.10 | An example mailing network connecting Alice to Bob. | 18 |
| Figure 3.11 | Client code that makes an assertion of guaranteed message delivery. | 19 |
| Figure 3.12 | Class Map implements a staple of functional programming in Java. | 19 |
| Figure 3.13 | Client code that calls \texttt{map} while asserting its desired effect. | 20 |
| Figure 3.14 | Code of Figure 3.13 after substituting a model program for \texttt{map}. | 20 |
ACKNOWLEDGMENTS
I would like to take this opportunity to give thanks to those who helped me with various aspects of conducting research and the writing of this thesis. First and foremost, Dr. Gary T. Leavens for his direction, patience and support throughout this research and the writing of this thesis. I would also like to thank my committee members for their comments on this work: Dr. Samik Basu and Dr. Leslie Hogben. I would additionally like to thank fellow grad students and friends Ryan Babbitt and David Niedergeses for cheering me up on those gloomiest of days.
ABSTRACT
Formal specification languages improve the flexibility and reliability of software. They capture program properties that can be verified against implementations of the specified program. By increasing the expressiveness of specification languages, we can strengthen the argument for adopting formal specification into standard programming practice.
The higher-order method (HOM) is a kind of method whose behavior critically depends on one or more mandatory calls in its body. Programmers using HOMs would like to reason about the HOM’s behavior, but revealing the entire code for such methods restricts writers of HOMs to a specific implementation.
This thesis presents a simple, intuitive extension of JML, a formal specification language for Java, that enables client reasoning about the behavior of HOMs in a sound and modular way. Furthermore, our particular technique is capable of fully automatic checking with lower specification overhead than previous solutions.
Supporting client reasoning about HOMs enables formal verification of some of the behavioral properties of HOM-using object-oriented design patterns, like Observer and Template Method. The technique also applies to specifying HOM behavior in any procedural language.
CHAPTER 1. OVERVIEW
This chapter introduces the reader to the ongoing project of formal software specification, exposes a current problem for client reasoning and develops an extension to specification languages that solves this problem. We close the chapter by identifying key contributions of this thesis and giving an outline of the content of subsequent chapters.
1.1 Introduction
All programs are written. As a collection of written artifacts, they form a body of literature for analysis. Classifying programs into genres of software is one way to study these writings. Depending on one’s choice of perspective, many possible taxonomies might be used for classifying programs. We prefer an ecological perspective, since programs often interact with, consume and produce other programs. They share and compete for resources while constant development and user evaluation allow software to co-evolve over time. If one were to group programs according to their ecological roles, one might arrive at a system resembling Figure 1.1.
Applications, systems and platforms are the most visible software genres in such a taxonomy. Applications consume resources provided by platforms, while systems communicate with each other and are often composed of smaller sub-systems, platforms and tools. All three of these genres evolve by adopting or deploying frameworks and libraries. These latter genres function as a basic functional unit of the software ecology, whose size and complexity can range from a single function, script or object that performs a single task to near-turnkey solutions for a particular domain. The final program category, the genre of tools, drives software development forward. Whether transforming between representations, editing source or interpreting bytecode, tools enable the construction and comprehension of modern software in every genre. These classifications are not meant to be authoritative, merely descriptive. Nor do we intend the boundaries between genres to be rigid and absolute. Many programs overlap multiple genres and can play ambiguous or shifting roles in the resulting ecology. The genres themselves have changed over time and will continue to change in the future. We provide this perspective to capture a snapshot of the present that addresses the variety of modern software.
Within the worldview of Figure 1.1 we consider the programmer whose job it is to straddle these genres. We would argue such a programmer represents the majority of today's software writers. For example, writers of libraries and frameworks must consider not only competing libraries and frameworks, but also the tools, applications and platforms with which their code may interact. Applications written using different tools behave differently, and smart programmers exploit these differences to improve the quality of their software. In every case where existing code is reused, both from within and outside of a development project, there must be an understanding of how the reused code works. Pragmatically, no code can be reused until programmers know how to call, link, compile or execute it. But behavioral descriptions go beyond this level of understanding. They allow programmers to reason about where, how and why the existing code will be reused. If this kind of reasoning is to be assisted by tools, then we need formal specifications to capture the relevant behavior.
As a genre, tools play a privileged role in our software ecology. A virtuous cycle exists in software evolution: improving support for formal specification in our tools increases the quality of reuse in the software created by those tools. Formal verification provides one way to observe this cycle in action. During analysis and design, specifications pose as models of the software to be created. Where tools are aware of them, these models can be checked for consistency with varying degrees of automation. During the development, testing and deployment of a program, specifications can act as pliable oracles for conformance. If programs fall short of the specified ideal, then either the specification or the program may be at fault and need revision. In both cases, specification-aware tools enable programmers to improve their understanding of the software under inspection. Furthermore, after revisions are made, both specification and software have increased in value. Software performs according to the specification, and specifications describe software behavior for programmers seeking to reuse it.
Tools for writing and checking formal specifications have been developed for some time. Many effective specification conventions exist and current techniques to describe program behavior work well in most cases. As this thesis will show, however, some writers of software require more detail than current specification techniques provide. By providing an extension to the vocabulary of formal specification, we aim to bridge this gap. Software engineering advances insofar as the new specifications deliver more useful program properties to programmers at an acceptable cost. We aim to convince the reader that our work meets these criteria.
1.2 The Problem
As a supplement to conventional formal specification, we seek to specify the properties of mandatory calls made by higher-order methods. A higher-order method (or HOM) is any method whose behavior critically depends on one or more mandatory calls. A mandatory call is a method call that must occur within a particular calling context. In order to reason about the behavior of a particular HOM, we need to know both the identity of its mandatory calls as well as a sufficient description of the context in which the mandatory call will be made. Mandatory calls are useful because they enable structural patterns of code reuse and abstraction. However, in order to remain flexible, mandatory calls are often weakly-specified. We consider a method specification to be weak if it only states some limited property that does not completely describe the state transformation of interest to the clients of the HOM.
The calling structure of mandatory calls can be found in the actual implementation code, but current techniques for specifying functional behavior do not capture this structure sufficiently. Examples of such inadequacy can be found when considering the behavior of callbacks, supporting client reasoning for select object-oriented design patterns and also when testing an implementation for API or library conformance. Work on support for some object-oriented design patterns has been done by the author, with Leavens and Naumann in a paper appearing in OOPSLA 2007 [21] from which we adapt an example of client reasoning below.
Szyperski identified some specification problems with callbacks through a simple example using directories [23]. This is a specific design that invokes the Observer design pattern, where the addEntry method allows any number of directory observers to respond to the event after it occurred. Reasoning about calls to addEntry requires knowing both how addEntry notifies those observers and what side effects will occur as the observers respond to notification.
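To make the shape of the problem concrete, the following is a minimal sketch (our own code, not Szyperski's) of such a directory, whose addEntry method ends with a weakly specified mandatory callback:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch (our own, not Szyperski's code) of a directory whose addEntry
// method ends with a weakly specified, mandatory callback to its observers.
interface DirectoryObserver {
    void entryAdded(String name);   // deliberately underspecified: may do almost anything
}

class Directory {
    private final List<String> entries = new ArrayList<>();
    private final List<DirectoryObserver> observers = new ArrayList<>();

    void register(DirectoryObserver o) { observers.add(o); }

    // A higher-order method: its observable effect depends on the mandatory
    // calls to entryAdded made after the new entry has been stored.
    void addEntry(String name) {
        entries.add(name);
        for (DirectoryObserver o : observers) {
            o.entryAdded(name);     // the weakly specified mandatory call
        }
    }
}

class DirectoryDemo {
    public static void main(String[] args) {
        Directory d = new Directory();
        final String[] last = { null };
        d.register(name -> last[0] = name);  // observer records the last added name
        d.addEntry("readme.txt");
        // A client can only prove what the observer saw if the specification
        // of addEntry reveals that the callback is actually made.
        System.out.println("observer saw: " + last[0]);
    }
}
```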
Callbacks with this problematic behavior show up again and again in the context of other common object-oriented design patterns. Specifically, whenever a pattern delegates behavior inside of a method to some other call, that pattern calls for the creation of a higher-order method whose mandatory call will be weakly specified. Three such examples, one of which is introduced in the next section, will be explored in Chapter 3.
1.3 Our Solution
Generalizing from these examples, each involves a weakly-specified call whose occurrence must be verified inside some higher-order method. Current specification practice prefers to describe HOMs in terms of pre-/postcondition pairs, with possibly a frame axiom describing the set of transformed states. Preconditions capture what a method assumes to be true before it executes, and postconditions describe what is true after method execution. Frame axioms simply define what data might be changed in the post-state. These concepts are not sufficient for our purposes, since clients often want to use their knowledge about the mandatory call to reason about the HOM’s behavior. These issues are probably best explained using the following example from our OOPSLA 2007 paper [21].
Start by considering the class Counter, shown in Figure 1.2 whose HOM bump is to be observed, and which holds a single listener to observe it. This class declares two private fields, count and lstnr. The JML annotations declare both fields to be spec_public, meaning that they can be used in public specifications [14]. The field count is the main state in counter objects. The field lstnr holds a possibly null Listener instance. Counter’s register method has a Hoare-style specification. The precondition is omitted, since it is just “true.” Its assignable clause gives a frame axiom, which says that it can only assign to the field lstnr. Its postcondition is given in its ensures clause. The figure does not specify the HOM bump, as a major part of the problem is how to specify such methods.
```java
public class Counter {
  private /*@ spec_public @*/ int count = 0;
  private /*@ spec_public nullable @*/ Listener lstnr = null;

  //@ assignable this.lstnr;
  //@ ensures this.lstnr == lnr;
  public void register(Listener lnr) {
    this.lstnr = lnr;
  }

  public void bump() {
    this.count = this.count+1;
    if (this.lstnr != null) {
      this.lstnr.actionPerformed(this.count);
    }
  }
}
```
Figure 1.2 A Java class with JML specifications. JML specifications are written as annotation comments that start with an at-sign (@), and in which at-signs at the beginnings of lines are ignored. The specification for method register is written before its header.
The Listener interface, specified in Figure 1.3, contains a very weak specification of its callback method, actionPerformed. Counter’s bump method invokes this callback to notify the registered Listener object (if any). Its specification is weak because it has no pre- and postconditions. The only constraint on its actions is given by the specification’s assignable clause. This clause names this.objectState, which is a datagroup defined for class Object. A datagroup is a declared set of fields that can be added to in subtypes [16, 17].
public interface Listener {
  //@ assignable this.objectState;
  void actionPerformed(int x);
}
Figure 1.3 Specification of the Listener interface.
The LastVal class, specified in Figure 1.4, is a subtype of Listener. Objects of this type hold the last value passed to their actionPerformed method in the field val. This field is placed in the objectState datagroup by the in clause following the field’s declaration. Doing so allows the actionPerformed method to update it [16, 17]. Objects of this class also have a method `getVal` to allow other code to access the field’s value.
---
1 In JML fields are automatically specified to be non-null by default [7, 16], so nullable must be used in such cases.
public class LastVal implements Listener {
private /*@ spec_public @*/ int val = 0;
//@ in objectState;
/*@ also
  @ assignable this.objectState;
  @ ensures this.val == x;
  @*/
public void actionPerformed(int x) {
this.val = x;
}
//@ ensures \result == this.val;
public /*@ pure @*/ int getVal() {
return this.val;
}
}
Figure 1.4 Specification of the LastVal class.
LastVal lv = new LastVal();
//@ assert lv != null && lv.val == 0;
Counter c = new Counter();
c.register(lv);
//@ assert c.lstnr == lv && lv != null;
//@ assert c.count == 0;
c.bump();
//@ assert lv.val == 1;
Figure 1.5 Java code that draws a strong conclusion about HOM call `bump`. The conclusion is the assertion in the last line.
With these pieces in place, we turn our attention to a typical example of client reasoning with the observer pattern in Figure 1.5. In the code, we set up a Counter object `c` with a registered observer `lv` and our client wants to be able to reason about the effect of calling the `bump()` method on `c`. The `bump()` method is informally known to invoke a method on `c`’s registered observer, but without formally revealing how that call is made, the strong conclusion of Figure 1.5 can’t be verified. In this thesis, we argue that the best way to capture the missing information is found in the greybox approach.
Büchi and Weck define the greybox approach [3, 4, 5] as a technique for generating verification conditions that captures both the mandatory nature of these calls and the context in which they occur. Their basic contribution is the notion of a model program for revealing this information as a smaller trade-off in the level of abstraction of the specification. Model programs are considered to be greyboxes since they combine the blackbox (or obscured) nature of pre- and postconditions with the whitebox (or revealed) nature of exposing the code directly. The model program itself represents a sequential interleaving of these two paradigms that reads like an abstract description of the algorithm being specified. Where abstraction is preferred, one gives only a blackbox contract on the implementation. Where more detail is required (i.e. at the site of a mandatory call), one reveals the exact implementation as it must appear in the code. Model programs represent a combination of the finest level of detail that also grants some flexibility to implementors of the modeled method. The details of how model programs constrain HOM implementation can be found in Chapter 3.
Several solutions to this problem of how to modularly reason about HOMs have appeared previously in the literature, as well as some work on model programs in different contexts. Chapter 2 compares these attempts to our own.
1.4 Contributions & Outline
This thesis implements model programs for the Java Modeling Language (JML), a formal specification language for Java [13, 16]. To do so, we must provide what Büchi and Weck do not: their technique assumes that the structure of a model program is preserved by an implementation. This work gives a practical, though restrictive, algorithm for discharging that assumption among other claims.
In adapting the greybox approach to JML, this work makes the following contributions:
- a practical “pattern matching” algorithm for discharging the structure-preserving assumption of Büchi and Weck, and
- a design overview of the code that brings model program verification to JML.
This work proceeds as follows. Chapter 2 discusses related contributions, ending with Büchi and Weck’s original formulation of greybox model programs. Chapter 3 goes into detail about our adaptation of the greybox approach with JML’s model programs. Chapter 4 presents design details from the implementation of model programs in the JML Common Tools. Chapters 2 and 3 have been adapted from earlier material in our OOPSLA 2007 paper [21], while the material of Chapter 4 is original to this thesis. Chapter 5 presents paths for future work before drawing summary conclusions.
CHAPTER 2. RELATED WORK
This chapter examines the literature for existing solutions to the problem of higher-order methods as well as some applications for model programs. We wrap up this examination with a definition for greybox reasoning, which serves as a foundation for the solution proposed by this thesis.
2.1 Solutions for Higher-order Methods
Many other researchers have worked on the problem of higher-order methods using a variety of techniques. The first technique we will examine applies higher-order logic to parametrize specifications; the second reasons in terms of permitted traces of method calls.
2.1.1 Higher-order Logic
Ernst, Navlakha and Ogden [9] verify the effect of calling a HOM by allowing its specification to be parametrized. Specifically, the authors support assertions that represent the pre- and postconditions of a mandatory call, parametrized to reflect the context in which the higher-order method invokes it. Superficially, the assertions involving mandatory calls’ pre- and post-states make specification longer and in some cases more obfuscated than the code specified. One such example can be found in Figure 2.1. These specifications are checked using higher-order logic during verification, to quantify over all possible mandatory calls. Automating the verification task is complicated by the interactive nature of most theorem provers for higher-order logic. Furthermore, mandatory calls must occur as part
of the behavior of a higher-order method. This technique only verifies which effects have occurred in the post-state, leaving clients to guess about behavioral dependencies.
2.1.2 Trace-based Semantics
Soundarajan and Fridella [22] use a trace-based semantics to verify the set of the calls made during any execution. The trace set that is produced is checked against the set of traces specified for the higher-order method. Figure 2.2 provides a demonstration of what such a specification might look like for our HOM `bump`.
\[
\begin{align*}
epre.\text{Counter.bump}() \;\equiv\;& [\tau = \epsilon] \\
epost.\text{Counter.bump}() \;\equiv\;& [(\text{this.lstnr} \neq \text{null} \Rightarrow \\
& \quad (|\tau| = 1 \;\land\; \tau[1].hm = \text{this.lstnr.actionPerformed})) \\
& \;\land\; (\text{this.lstnr} = \text{null} \Rightarrow \tau = \epsilon)]
\end{align*}
\]
Figure 2.2 Specification in the style of Soundarajan and Fridella [22] for `bump`, from previous work [21].
This solution requires that the correct calls are made from the desired states, but verification is complicated by the way in which the set of permitted traces is computed. Describing sequences of mandatory calls quickly adds to the complexity of these specifications. Specifiers are required to reason in terms of a higher-order logic that quantifies over all possible implementations. The contribution of this thesis should simplify how higher-order method specifications are written, used and verified.
2.1.3 Contracts in Scheme
Casting further afield, Findler and Felleisen [10] use assertion-style contracts on the function argument of a higher-order procedure in Scheme. Relative to our work, which focuses on client reasoning for the higher-order method, the authors seek to report contract violations where a function argument is misused. Their system allows blame assignment when the contract for a function argument of a higher-order procedure can be checked at runtime. This work generalizes first-order contract systems for those languages supporting first-class procedures. The extended contract system would be able to enforce calling constraints on function arguments passed to higher-order procedures, but does not specify when, where or whether those argument procedures are invoked in the body of the higher-order procedure.
2.2 Applications for Model Programs
We are not the first to attempt to apply model programs to program specification. Other researchers have used model programs to enforce run-time constraints on implementations.
2.2.1 Monitoring Runtime Behavior
Barnett and Schulte [2] use model program specifications to construct execution monitors for reactive systems in the .NET environment. The authors write model programs using AsmL to flexibly express nondeterministic compositions of mandatory calls. An algorithm to translate such expressions into automata for runtime verification is given. These efforts solve a different problem from the work contained in this thesis. Barnett and Schulte provide a solution for checking runtime behavior against a model program whereas we give static structural constraints on the implementation of HOMs. When we discuss future work in Chapter 5, we will consider some novel ideas for manipulating abstract statements inspired by this approach.
2.2.2 Greybox Refinement
Recall Büchi and Weck’s “greybox” approach from the previous chapter. This work forms the primary inspiration for our own. As we mentioned in Chapter 1, the basic intuition here is that of Figure 2.3. Greybox model programs can be viewed as a sequential interleaving of blackbox and whitebox specifications. What is missing from previous work is a specified means to practically express these specifications that is also capable of verifying that implementations share a structure similar to their model programs. This thesis explores the consequences of our choices in bridging that gap.

CHAPTER 3. SOLUTION APPROACH
Our solution for capturing mandatory calls inside of higher-order methods (HOMs) adapts greybox model program specifications [3, 4, 5] and uses a copy rule [18] to reason about calls to HOMs specified with model programs. An example model program specification for Counter’s HOM bump is shown in Figure 3.1. In this figure, the public modifier says that this specification is intended for client use [14]. The keyword model_program introduces the model program. Its body contains a statement sequence consisting of a specification statement followed by an if-statement. The specification statement starts with normal_behavior and includes the assignable and ensures clauses. Specification statements can also have a requires clause, which would give a precondition; in this example the precondition defaults to “true.” A specification statement describes the effect of a piece of code that would be used at that place in an implementation. Such a piece of code can assume the precondition and must establish the postcondition, assigning only to the datagroups permitted by its assignable clause. Thus specification statements can hide implementation details and make the model program less specific. Although the example uses a specification statement in a trivial way, specification statements can be used to abstract arbitrary pieces of code, and have been used to do so in the refinement calculus [1, 19].
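Figure 3.1 itself is not reproduced in this excerpt; judging from the description above and from Figures 3.2 and 3.3, the model program for bump has roughly the following shape (our reconstruction, not the thesis’s exact figure):

```java
/*@ public model_program {
  @   normal_behavior
  @     assignable this.count;
  @     ensures this.count == \old(this.count+1);
  @
  @   if (this.lstnr != null) {
  @     this.lstnr.actionPerformed(this.count);
  @   }
  @ } @*/
public void bump();
```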
Our approach prescribes how to do two verification tasks:
- **Verification of a method implementation against its model program specification.** Our approach imposes verification conditions on the code by “matching” the code against the model program, which yields a set of verification conditions for the code fragments that implement the model program’s specification statements.
- **Verification of calls to HOMs specified with model programs.** Our approach uses a verification rule that copies the model program to the call site, with appropriate substitutions. The caller (or client) can then draw strong conclusions using a combination of the copied specification and the caller’s knowledge of the program’s state at the call site. In particular, at the site of the mandatory calls made by the substituted model program, the client may know more specific types of such calls’ receivers. These more specific receiver types may have stronger specifications, which client reasoning can exploit.
We will look at the details required for each verification, then give a practical way to derive implicit model programs directly from annotated code. Examples that formalize common object-oriented design patterns are then discussed in detail. This chapter closes by identifying some limits to our current technique.
### 3.1 Verifying Implementations
Verifying a method implementation against its model program is itself a two-step procedure. The first step is matching, to check whether the method body has a similar structure to that of the model program. The matching we use to establish this property is simple. We require that implementations must match the model program exactly except where the model program contains a specification statement. Specification statements can only be matched by a `refining` statement in the implementation. To associate `refining` statements with the corresponding point in the model program, each `refining` statement must have a specification identical to the specification statement it implements.
To see an example of this, compare `bump`’s code in Figure 3.2 with the model program in Figure 3.1. This is a correct match, because the `refining` statement in the code matches the specification statement in the model program, and the call to `actionPerformed` in the code matches the same call in the model program. The mandatory call exposed in this example is `actionPerformed`, inside of the HOM `bump`. Each piece of code matches a corresponding piece of the model program, so we are guaranteed that both model program and implementation share a similar structure.
The second stage of this task is proving that every refining statement in the code correctly implements its specification. Let us demonstrate this with a proof using weakest-precondition semantics.
```java
public /*@ extract @*/ void bump() {
  /*@ refining normal_behavior
    @   assignable this.count;
    @   ensures this.count == \old(this.count+1);
    @*/
this.count = this.count+1;
if (this.lstnr != null) {
this.lstnr.actionPerformed(this.count);
}
}
```
Figure 3.2 Code matching the model program specification for Counter’s mandatory call. The `extract` syntax is explained in Section 3.3.
That is, assuming the specification statement’s postcondition, we must show that the end of the body of the refining statement is reachable from the specification’s precondition and that the body only assigns to the fields permitted by its frame. In Figure 3.2, the only value allowed to change in the refining code is an instance’s `count` field, which is incremented by one. The body of the refining statement is the statement
```java
this.count = this.count+1;
```
so we must show
```java
{true} this.count = this.count+1; {this.count == \old(this.count+1)}
```
where `true` is the assumed precondition of our `normal_behavior` specification statement. By the standard proof rules for assignment [25], we can derive
```java
\old(this.count+1) == \old(this.count+1),
```
or `true`, so this code is a permissible refinement of its model program counterpart. Since all other code (the `if`-statement containing a mandatory call) matches exactly, this is sufficient to show that the method implementation refines its model program. It also ensures that mandatory calls occur in the HOM implementation only in the specified states.
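For completeness, the assignment rule used above can be stated as follows (standard notation, not taken verbatim from the thesis, where $Q[e/x]$ denotes $Q$ with $e$ substituted for $x$):
\[
\{\, Q[e/x] \,\}\;\; x := e \;\;\{\, Q \,\}
\]
Taking $x$ to be `this.count`, $e$ to be `this.count+1`, and $Q$ to be the postcondition `this.count == \old(this.count+1)` yields exactly the equality displayed above, which holds trivially in the pre-state.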
Despite its simplicity, our technique is practical. It allows programmers to trade the amount of effort they invest in specification and verification for flexibility in maintenance. Programmers can write abstract specification statements that hide details in order to allow multiple possible implementations to satisfy their intentions. Conversely, programmers may choose to avoid most of the overhead of specification and verification and simply use the code for a HOM as a white-box specification, with the obvious loss of flexibility in maintenance. The only details that our technique forces programmers to reveal are the mandatory calls for which client-side reasoning is to be enabled and the control structures surrounding such calls. For all other details the choice is left to them and is not dictated by this technique.
3.2 Client Reasoning
To verify calls of HOMs with model program specifications, we have developed a technique that supports strong conclusions without requiring the use of higher-order logic or trace semantics in specifications. Instead, we use a copy rule\(^1\), in which the body of the model program specification is substituted for the HOM call at the call site, with appropriate substitutions. For example, to reason about the call to `c.bump()` in Figure 1.5, one copies the body of the model program specification to the call site, substituting the actual receiver `c` for the specification’s receiver, `this`. We show such a substitution in Figure 3.3.
```java
LastVal lv = new LastVal();
//@ assert lv != null && lv.val == 0;
Counter c = new Counter();
c.register(lv);
//@ assert c.lstnr == lv && lv != null;
//@ assert c.count == 0;
/*@ normal_behavior
  @   assignable c.count;
  @   ensures c.count == \old(c.count+1);
  @*/
if (c.lstnr != null) {
  c.lstnr.actionPerformed(c.count);
}
//@ assert lv.val == 1;
```
Figure 3.3 The result of substituting the model program’s body for the call `c.bump()` from Figure 1.5.
This code exposes a call to `actionPerformed` by the `c.lstnr` field, which makes it easy to verify the final assertion. Clients can infer from the assertions before the `normal_behavior` specification statement that just before the mandatory call is made, `c.lstnr` is equal to `lv`. For all matching implementations, any code refining the specification statement preserves this property, because it must satisfy the `assignable` clause of the `normal_behavior`. To prove the final assertion is true, verifiers can apply the specification of `actionPerformed` from the LastVal class.
Our approach works well for clients, because their understanding of the code no longer relies on a less-than-helpful blackbox specification of the HOM or the very weak specification of its mandatory calls. Instead clients reason with the substituted body of a model program and their knowledge of often stronger specifications on the actual mandatory calls made at the call site. Thus clients can apply their specific knowledge about particular HOM calls to draw strong conclusions.
\(^1\) The copy rule can be used repeatedly to verify recursive HOM calls, as long as there is a way to limit the depth of recursive copying for each case. Providing additional information to derive a maximum recursive depth, perhaps by defining a progress metric or declaring an explicit limit, is one way to enable reasoning about recursive specifications. For this presentation, however, we do not assume any such rule.
3.3 Extracting Implicit Model Programs from Code
Due to the simplicity of our matching, model program specifications necessarily contain redundant copies of all implementation code not hidden behind `normal_behavior` specification statements. This duplication introduces the possibility of errors and is a maintenance headache.
When the specification does not have to be kept separate from the code, we can avoid the problems of duplication by writing the code and the specification at the same time. We used this functionality earlier in Figure 3.2. When a method has the `extract` modifier, we extract an implicit specification from the code. This extraction process derives a model program, in this case resembling Figure 3.1, by taking the specification of each `refining` statement as a specification statement in the model program (thus hiding its implementation part), and by taking all other statements as written in the code. The resulting model program automatically matches the code without creating another explicit copy. The specification shown in Figure 3.1 could be what a specification browsing tool would show to readers, even if the specification was written in the code as in Figure 3.2. Offering this shortcut makes model programs more practical for specifiers to adopt in many cases.
The ability to keep model program specifications separate from the code they specify remains useful in the two following cases. The first is when there is no code, i.e., for an abstract method. The second is when the code cannot be changed at all, e.g., when the code is owned by a third party. In both cases, explicit model programs are valuable specification artifacts with no direct copy to maintain.
3.4 Example Verifications
We have already shown how to specify the `bump` method for the Counter class, an example of the Observer design pattern [11]. Here we discuss the verification of other design patterns as well as a more general application for model programs. Specifically, we will show how model programs enhance verification of the Template Method and Chain of Responsibility design patterns [11]. These patterns make good examples because each uses our technique in a different way to improve on verifying object-oriented designs. The last example shows a non-OO application that demonstrates some technical shortfalls to our approach.
3.4.1 Template Methods: Following a Recipe
Template methods are HOMs that are used in frameworks, where they sequence calls to “hook methods” that are overridden to be customized by the framework’s users. Typically hook methods have weak specifications in order to allow a wide variety of possible behavior in subclasses. A template method makes mandatory calls to these hook methods, which works very well with model program specification.
Consider the HOM `prepare()` in Figure 3.4. The model program specification extracted from the method `prepare` is shown in Figure 3.5. This model program has two mandatory calls to the weakly specified hook methods, `mix` and `bake`. Class StringyCake in Figure 3.6 is a specializer supplying code and stronger specifications for overridden methods. A client using StringyCake instances would be able to use the model program specification of `prepare` plus the specifications of the hook methods to prove the assertion in Figure 3.7. This works because the client can substitute the model program specification wherever they call `prepare`, which exposes the strongly specified hook method calls.
import java.util.Stack;
public abstract class CakeFactory {
public /*@ extract @*/ Object prepare() {
Stack pan = null;
/*@ refining normal_behavior
@ assignable pan;
@ ensures pan != null && pan.isEmpty(); @*/
pan = new Stack();
this.mix(pan);
this.bake(pan);
return pan.pop();
}
//@ requires items.size() == 0;
//@ assignable items.theCollection;
//@ ensures items.size() == 1;
public abstract void mix(Stack items);
//@ requires items.size() == 1;
//@ assignable items.theCollection;
//@ ensures items.size() == 1;
public abstract void bake(Stack items);
}
Figure 3.4 The class CakeFactory, with its template method `prepare`, and two hook methods: `mix` and `bake`.
/*@ public model_program {
@ Stack pan = null;
@
@ normal_behavior
@ assignable pan;
@ ensures pan != null && pan.isEmpty();
@
@ this.mix(pan);
@ this.bake(pan);
@ return pan.pop();
@ } @*/
public Object prepare();
Figure 3.5 `prepare`’s extracted specification.
import java.util.Stack;

public class StringyCake extends CakeFactory {
    /*@ also
      @   requires items.size() == 0;
      @   assignable items.theCollection;
      @   ensures items.size() == 1
      @        && items.peek().equals("batter");
      @*/
    public void mix(Stack items) {
        items.push("batter");
    }

    /*@ also
      @   requires items.size() == 1
      @        && items.peek().equals("batter");
      @   assignable items.theCollection;
      @   ensures items.size() == 1
      @        && items.peek().equals("CAKE");
      @*/
    public void bake(Stack items) {
        items.pop();
        items.push("CAKE");
    }
}
Figure 3.6 Class StringyCake, a subclass of CakeFactory. The keyword also indicates that the given specification is joined with the one it overrides [12, 15].
CakeFactory c;
Object r;
c = new StringyCake();
r = c.prepare();
//@ assert r.equals("CAKE");
Figure 3.7 Client code that calls prepare.
Figure 3.8 shows the result of substituting the actuals into the model program from Figure 3.5 for the call to the prepare method. In this substitution, we have changed the return in the code into the assignment to the variable receiving the call’s value, as usual [25]. Since Figure 3.8 exposes hook methods where we can identify the more specialized type of their receiver, we can now prove the final assertion.
At this call site, the critical knowledge clients hold is that c is a StringyCake instance. The definitions of its overridden hook methods have stronger specifications than CakeFactory objects do in general. For this proof, we start by assuming an empty initial state and applying the effects of each line from Figure 3.8. Initially, declare the variables c and r, then bind c to a new instance of type StringyCake. Inside the block representing our substituted model program, declare the variable pan before "executing" an arbitrary statement whose effect is described by the normal_behavior specification.
CakeFactory c;
Object r;
c = new StringyCake();
{
    Stack pan = null;
    normal_behavior
        assignable pan;
        ensures pan != null && pan.isEmpty();
    c.mix(pan);
    c.bake(pan);
    r = pan.pop();
}
//@ assert r.equals("CAKE");
Figure 3.8 Client code that calls prepare, after using the copy rule and substituting the actual receiver c for this.
At this point, before calling either hook method on c, we know that pan is no longer null and its isEmpty method returns true. Since isEmpty is true, the precondition of c’s mix method has been met. The effect of that call is to add the string “batter” to the top of the pan stack. After returning from this call, the precondition of c’s bake method has been satisfied; calling bake then leaves the string “CAKE” on top of the pan stack, by its postcondition. At this point, we know enough to establish that the value given to r by this code (i.e., the value returned by calling pan.pop()) is, in fact, the string “CAKE”. This final state supports the final assertion and concludes our proof.
This proof works because it applies a formal understanding of how the StringyCake class implements the mix and bake hook methods without overriding its template, the prepare method. Client reasoning with model programs exposes this feature of a template method design: the interaction of overridden hook methods with a standard template describing their order of invocation.
3.4.2 Chain of Responsibility: Testing Static Configurations
Chain of Responsibility is another object-oriented pattern whose use can be formalized by calls to the pattern’s characteristic methods [11]. Every receiver along the chain has up to two responsibilities: to implement the shared method and/or to pass unhandled cases farther along the chain. The method that chains receivers together must be a weakly specified mandatory call, because the value of applying this pattern relies on the diversity of classes belonging to the chain.
One implementation of this pattern might be a mail system: a network of relays that are collectively responsible for transmitting a message (in our case, a letter) from one endpoint to another. The chain of responsibility is shared by every member of the network implementing the Mailer interface, shown in Figure 3.9. Suppose further that this network resembles Figure 3.10. For Alice to send a letter to Bob, she hands the letter l to the office nearest to her, Office a. As a member of the chain of responsibility, Office a must either deliver the letter to Bob directly (which it cannot) or pass the letter along the chain. This passing is handled by the `send` method, with Person, Office, and Sorter instances all implementing the Mailer interface. Note that it would not be helpful to write a model program for the Mailer interface itself, because information about the receiver of the mandatory call differs for each implementing class. Instead, model programs should be written for each specific implementation of `send`, preferably with an eye to minimizing the total number of model programs.
```java
public interface Mailer {
    public void send(Letter l);
}
```
Figure 3.9 The Mailer interface identifies a single method `send` for all objects that transmit messages in our mailing network.

Figure 3.10 An example mailing network connecting Alice to Bob.
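For instance, a relay class that always forwards its mail could carry its own model program. The sketch below is illustrative only: the `next` field, the constructor, and the always-forward policy are assumptions rather than part of the thesis’s figures.

```java
// Illustrative sketch of one Mailer implementation; the field `next` and the
// always-forward policy are assumptions made for this example.
public class Office implements Mailer {
    private final Mailer next; // the next relay in the chain of responsibility

    public Office(Mailer next) {
        this.next = next;
    }

    // With extract, the implicit model program is just the mandatory call,
    // so a client applying the copy rule sees next.send(l) exposed.
    public /*@ extract */ void send(Letter l) {
        this.next.send(l);
    }
}
```

A Sorter or Person implementation would be specified in the same way, each with its own model program exposing where (or whether) it forwards the letter.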
One concern for implementors of this network might be guaranteeing the delivery of a given message along a known static configuration. For our mailing network, this problem can be phrased as the question "Does Bob receive the letter Alice sent?" The assertion of Figure 3.11 is a formalization of this question. To reason about that result, we invoke the copy rule on `alice.send(l)`, whose model program exposes a call to `sorter.send(l)`. Invoking the copy rule twice more should reveal that `alice.send(l)` does indeed result in Bob receiving the message, if sufficiently-detailed model programs for those classes are given. In this case, our technique enables strong conclusions for systems with a static configuration of the responsibility chain.
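Such client code, sketched here with hypothetical names (`alice`, `bob`, the Letter constructor argument, and the `received` query are assumptions, not taken from the figure), might read:

```java
// Hypothetical sketch of the kind of client code Figure 3.11 describes.
Letter l = new Letter("Dear Bob, ...");
alice.send(l);
//@ assert bob.received(l);
```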
Figure 3.11 Client code that makes an assertion of guaranteed message delivery.

3.4.3 Technical Limitations

Model programs give specifiers a finer degree of abstraction for HOMs, particularly by allowing structural or behavioral details of object-oriented designs to be formally captured. HOMs do not occur solely inside object-oriented code, though. Functional programming has its share of HOMs to which we can apply our technique.
For example, the common map operator could be implemented in Java with something like Figure 3.12. In this implementation, map is the HOM and the IntFun method f is our mandatory call. Here we use extract to derive an implicit model program directly from the code that implements the map operation over an array of integers. The derived model program hides none of the implementation, however, since the only abstraction we currently provide is the normal_behavior specification statement.
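An implementation along these lines might look like the following sketch; the shape of IntFun.f and of the Scale class is inferred from Figures 3.13 and 3.14, and the remaining details are assumptions rather than the contents of Figure 3.12.

```java
// Sketch of a map HOM over an int array; each public type would live in its
// own .java file. IntFun.f(int[], int) is inferred from by2.f(ai, i) in Figure 3.14.
public interface IntFun {
    void f(int[] a, int i); // transform element i of a, in place
}

public class Map {
    public /*@ extract */ void map(IntFun fn, int[] a) {
        for (int i = 0; i < a.length; i++) {
            fn.f(a, i); // the mandatory call, made once per element
        }
    }
}

public class Scale implements IntFun {
    private final int factor;

    public Scale(int factor) {
        this.factor = factor;
    }

    public void f(int[] a, int i) {
        a[i] = a[i] * factor; // scale element i by the given factor
    }
}
```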
This reveals a pair of related weaknesses for our current technique: the lack of abstract control-flow constructs and the relative strictness in how model programs match against implementations. If an abstract loop statement existed, then the for-loop outside of the mandatory call could remain hidden. Similarly, with a more flexible matching procedure, extract could generate multiple model programs (e.g., one that exposes the call to f on the IntFun argument and another that abstractly iterates over all elements of the array) to allow implementors to reason about the HOM differently depending on the salient features needed at different call sites. Chapter 5 discusses our plan to address these concerns.
We do not mean to imply that our technique cannot benefit such a HOM. Even without hiding any implementation details, our model programs still enable strong conclusions about mandatory calls. To see that this is the case, look at the code of Figure 3.13. After substitution of our whitebox model program, the effect of a call to map is plain to see. If we assume that the Scale class is a subclass of IntFun whose f method scales integer arguments by a factor of two, then Figure 3.14 is sufficient to achieve the strong conclusion that map performs as expected.
```java
int[] ai = new int[] {1,3};
Map m = new Map();
Scale by2 = new Scale(2);
m.map(by2, ai);
//@ assert ai[0] == 2 && ai[1] == 6;
```
Figure 3.13 Client code that calls `map` while asserting its desired effect.
```java
int[] ai = new int[] {1,3};
Map m = new Map();
Scale by2 = new Scale(2);
for (int i = 0; i < ai.length; i++) {
    by2.f(ai, i);
}
//@ assert ai[0] == 2 && ai[1] == 6;
```
Figure 3.14 Code of Figure 3.13 after substituting a model program for `map`.
CHAPTER 4. EXTENDING JML WITH MODEL PROGRAMS
This chapter summarizes the state of the effort to implement model programs as an extension to the JML static checker `jmlc`. As described in Chapter 3, our greybox model programs add three new features to JML: the model program itself; the `refining` statement, which matches specification statements in the model program to the implementation code that refines them; and the syntactic sugar `extract`, for creating implicit model programs directly from an existing implementation. We describe relevant design features of JML, define how model programs extend that design, and then provide an informal analysis of that extension.
4.1 JML Background
To understand how the design of these features integrates with an existing tool for JML, we must first understand the design of the tool being extended. The static checker for JML included in the Common JML tools, named `jmlc`, is built on top of the MultiJava compiler, whose architecture has been documented by Clifton [8]. This tool builds on the MultiJava architecture to support JML’s specification syntax and semantics. For clarity of the present discussion, we will highlight only those portions of the design of `jmlc` that impact our own extension. The three features being implemented for model programs belong to two categories of specification syntax: method annotations and specification statements.
JML adds specification annotations on method declarations in two primary ways: as specification cases that may come either before or after the method signature, and as modifiers on the method or its arguments. Specification cases are the primary kind of specification annotation for Java methods. They describe the behavior of the method in terms of pre-/postcondition pairs, frame axioms, and other blackbox detail. Model programs will become another kind of specification case. Some examples of method modifiers are `pure`, for describing a method without side effects, and `non_null`, which says a method’s argument will never be null. Both of these modifiers act as syntactic sugar for common implicit specification cases. The `extract` modifier is a sugar, signaling for an implicit model program to be extracted from the method body.
JML also provides a number of statements for verifying specifications by annotating the code directly. These include annotated loops as well as statements for the creation and manipulation of ghost
variables. Heavyweight specification cases (i.e., the many shades of behavior cases) can also be used as specification statements, but will only be valid on their own inside of a model program or as part of a refining statement in the implementation. In this early implementation, only normal_behavior statements are explicitly supported. The refining statement is another specification statement, the role of which will be to tie model program statements to the implementation’s code.
4.2 Our Extension
Having introduced where the new features fit into JML syntactically, we now disclose details of each feature’s design. This chapter will conclude with a look at the direct implications of these choices.
4.2.1 The Model Program Specification Case
At the time of implementation, the jmlc codebase already contained nascent support for parsing model programs, the JmlModelProgram class. The responsibilities of this class include containing the AST representing the model program’s body as well as defining the typechecking rules for model programs. In our implementation, model programs consist of a visibility modifier, a block of (possibly abstract) JML-permissible statements, and a flag isExtract, identifying whether the model program was extracted. The visibility modifier has implications for the fields and methods that may be referenced in the model program’s body, while isExtract is helpful when checking an implicitly-generated specification.
4.2.2 Implicit Model Programs via extract
For methods marked extract, instances of the class JmlExtractModelProgramVisitor generate implicit model programs based on the method’s body. Such a visitor transforms the code into a model program as described in Section 3.3. These objects are not called directly by the checker, but instead by JmlModelProgram, with the class method extractInstance. In turn, this method is invoked by the class method makeInstance of the JmlMethodDeclaration class to add the implicit model program to the represented method’s specification set.
4.2.3 refining Specification Statements
Operationally, the refining statement has no effect beyond associating a behavioral contract with the code that refines it. Maintaining this association is key to our technique, as we saw in Section 3.1. Checking that these refining statements occur as expected is the responsibility of the visitor described by the JmlRefineModelProgramVisitor class. For the current technique, this check is straightforward: it verifies equality of AST nodes down to the level of the refining statements.
This has been implemented by providing a unique visit method in the visitor for every leaf of the JML statement grammar. This choice was partly forced by the intricacies of the JML2 AST objects, but also allows modular modifications when considering future work. For example, a new form of specification statement should only require one new method per visitor and each method’s implementation would depend only on the details of the new statement.
4.3 Design Implications
These descriptions provide a snapshot of an early JML2 implementation that supports our described technique. As attention has been given to how and why this works the way it does, so should we consider where and how such an implementation may go from here. The JML Common Tools also provide a runtime assertion checker, jmlrac. Modifying this tool to enforce the contracts associated by refining statements should be trivial. Tool support for the client reasoning prescribed in Chapter 3 follows by simply decoding refining statements as an assume/assert pair. In the course of extending jmlc, it became clear that some re-engineering of how assignability information is gathered will be necessary in the near future. This will be re-examined in Section 5.1. Finally, as the principles governing model program extraction and refinement are themselves adapted in future work, the two-visitor design presented here should prove effective in isolating these adaptations.
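Returning to the assume/assert decoding mentioned above, the refining statement from Figure 3.4 could be decoded along the following lines for client reasoning; this is a sketch of the idea in the style of Figure 3.8, not the checker’s actual output.

```java
// Sketch only: one plausible assert/assume decoding of the refining
// statement from Figure 3.4 (normal_behavior has no requires clause there).
Stack pan = null;
//@ assert true;                           // the specification statement's precondition
pan = new Stack();                         // the code that refines the specification
//@ assume pan != null && pan.isEmpty();   // clients may assume the ensures clause
```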
CHAPTER 5. FUTURE WORK & CONCLUSIONS
In this chapter we look ahead to further development and other applications for greybox reasoning with model programs. After listing some of those possibilities, we revisit the promises of previous chapters to make concluding remarks.
5.1 Future Work
The work described by Chapters 3 and 4 represents a working draft of specification language features that define how JML can support HOM documentation. The tools developed to solve this problem could assist other open research questions. For example, we use refining statements to associate executable Java code with its relevant specification statement in the model program. This functionality supports granular statement-level annotation of code with specification constructs. We particularly want to explore how this construct compares with temporal logic [20, 24]. Model programs themselves can be used for more than just supporting client reasoning as we have demonstrated here. A complementary form of model program has been developed by Veanes, et al. [27, 26] with an early application found in the work of Barnett and Schulte [2]. The Spec# paradigm uses model programs to specify interface automata, complete with its own notion of refinement as well as an exploration of how model programs compose together to derive more complete models of complex program behavior. One promising direction for JML would be to explore the transformation of a model program into an abstract model of program behavior. Such a behavioral model could foreseeably have applications in model checking, unit testing or as a rapid prototype for design feedback.
As we saw near the end of Chapter 3, our solution does not come without limitations. There is a demonstrable need for more, and more varied, abstract constructs for capturing control flow, as well as a more flexible matching procedure. Nondeterministic choice is capable of modeling both a choice among implementations and an abstract, permutable if-then-else specification statement. Also, there may be multiple ways to specify loops or recursions that invoke mandatory calls. Where matching falls short lies primarily in its strictness. If the model program does not contain a specification statement at a particular program point, we say the implementation must match exactly. While this simplifies reasoning about concrete statements in the model code, there should be some room for negotiation, particularly for security purposes [6]. Another concern that emerges from the discussion of Chapter 3 is a clear need for a notion of refinement that allows model programs to refine each other. Solutions to this problem that are modular may well support model program composition for cases where multiple model program definitions are given for a single implementation. Currently, the implementation issues a warning in the presence of multiple model programs and only attempts to match the structure of the closest syntactic definition.
Chapter 4 mentions an intention to modify how jmlc handles its assignable clauses, which we expand upon here. These clauses are traditionally encountered at the method level, where their standard semantics covers the entire method implementation. With the introduction of model programs, however, these clauses are brought down to the statement level, for example as a clause within a normal_behavior specification statement. To mesh these new clauses properly with the established system, assignable clauses need precise analysis. Previous work has explored the kind of delicacy required for the general case [28], but this may need revisiting in a model program context. A trivial implementation could simply union all the assignable clause information inside a given model program, but it remains to be seen if this is the correct intuition. The implementation work done for this thesis does not provide any special handling for assignability information inside of a model program.
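To make the naive union concrete, consider the following hypothetical class; the fields, the method, and its specifications are invented for this sketch and are not part of the thesis.

```java
import java.util.ArrayList;
import java.util.List;

public class AuditedCounter {
    private /*@ spec_public @*/ int count;
    private /*@ spec_public @*/ List log = new ArrayList();

    // Two refining statements with disjoint frames; a naive method-level
    // frame for this method would be their union: assignable count, log.theCollection.
    public /*@ extract */ void bumpAndRecord() {
        /*@ refining normal_behavior
          @   assignable count;
          @   ensures count == \old(count) + 1; @*/
        count = count + 1;

        /*@ refining normal_behavior
          @   assignable log.theCollection;
          @   ensures log.size() == \old(log.size()) + 1; @*/
        log.add("bump");
    }
}
```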
5.2 Conclusions
This thesis aimed to convince the reader of the utility of a novel specification technique, greybox reasoning with model programs. We need such reasoning to enable clients to draw strong conclusions in the presence of higher-order methods that make mandatory calls. Object-oriented design patterns that provide structural and behavioral benefits are one domain where strong conclusions are needed to perform rigorous formal verification, though by no means are they unique. We have added a working implementation of model programs to the jmlc compiler in the JML Common Tools. Where possible, we prefer simple, practical techniques that minimize the cognitive overhead of the new constructs while maximizing the specification benefit of their use. As we saw in Section 5.1, multiple paths of progress stand before us. Model programs have a number of applications, and their potential, both present and future, looks bright.
Polychronous mode automata
Jean-Pierre Talpin
IRISA/INRIA-Rennes
Campus de Beaulieu
F-35042 Rennes, France
Jean-Pierre.Talpin@irisa.fr
Christian Brunette
IRISA/INRIA-Rennes
Campus de Beaulieu
F-35042 Rennes, France
Christian.Brunette@irisa.fr
Thierry Gautier
IRISA/INRIA-Rennes
Campus de Beaulieu
F-35042 Rennes, France
Thierry.Gautier@irisa.fr
Abdoulaye Gamatié
INRIA Futurs
6b Av. Pierre et Marie Curie
59260 Lezennes, France
Abdoulaye.Gamatie@lifl.fr
ABSTRACT
Among related synchronous programming principles, the model of computation of the Polychrony workbench stands out by its capability to give high-level description of systems where each component owns a local activation clock (such as, typically, distributed real-time systems or systems on a chip). In order to bring the modeling capability of Polychrony to the context of a model-driven engineering toolset for embedded system design, we define a diagrammatic notation composed of mode automata and data-flow equations on top of the multi-clocked synchronous model of computation supported by the Polychrony workbench. We demonstrate the agility of this paradigm by considering the example of an integrated modular avionics application. Our presentation features the formalization and use of model transformation techniques of the Gme environment to embed the extension of Polychrony’s meta-model with mode automata.
Categories and Subject Descriptors
D.3 [Programming Languages]: Formal Definition and Theory
General Terms
Design, Languages, Theory
1. INTRODUCTION
Inspired by concepts and practices borrowed from digital circuit design and automatic control, the synchronous hypothesis has been proposed in the late ’80s to facilitate the specification and analysis of control-dominated systems. Nowadays, synchronous languages are commonly used in the European industry, especially in avionics, to rapidly prototype, simulate, and verify embedded software applications.
In this spirit, synchronous data-flow programming languages, such as Lustre [11], Lucid Synchrone [9] and Signal [15], implement a model of computation in which time is abstracted by symbolic synchronization and scheduling relations to facilitate behavioral reasoning and functional correctness verification. While block diagrammatic modeling concepts are best suited for data-flow dominated applications, control-dominated processes may sometimes be preferably modeled using imperative formalisms, such as Esterel [3], Statecharts [12], or SyncCharts [1].
1.1 Design methodology
In the particular case of the Polychrony workbench, on which Signal is based, time is represented by partially ordered synchronization and scheduling relations, to provide an additional capability to model high-level abstractions of systems paced by multiple clocks: globally asynchronous systems. This gives the opportunity to seamlessly model heterogeneous and complex distributed embedded systems at a high level of abstraction, while reasoning within a simple and formally defined mathematical model.
In Polychrony, design proceeds in a compositional and refinement-based manner. It first consists of considering a weakly timed data-flow model of the system under consideration. Then, partial timing relations are provided to gradually refine the synchronization and scheduling structure of the application.
Finally, the correctness of refined specification is checked with respect to initial requirement specifications. That way, Signal favors the progressive design of systems that are correct by construction using well-defined model transformations that preserve the intended semantics of early requirement specifications and that provide a functionally correct deployment on the target architecture.
1.2 Model-driven design framework
Taking advantage of recent works extending Polychrony with a meta-model, Signal-Meta [6], in the model-driven engineering framework of Gme (Generic modeling environment [14]), we position our problem as extending the meta-model on which Signal is based with an inherited meta-
model of multi-clocked mode automata to finally demonstrate how the latter can be translated into the former by operating a model transformation. We put an emphasis on simplicity both for the specification (one third of a page, Fig. 4) and for the formalization (five rules, Section 5.3) of mode automata. The framework of mode automata presented in this article was specified and implemented in the matter of one month, thanks to the facilities offered by the Gme environment. It is currently being ported to Eclipse [5].
1.3 A modeling paradigm
The modeling of integrated modular avionics (IMA) architectures is a typical case in which both the polychronous model of computation and mixed data-flow and control-flow formalisms (as offered by mode automata) are particularly well-suited.
As an example, consider the following diagram, Fig. 1, from the Signal-Meta environment\(^1\). It represents a simple avionic application within Gme. Its main function consists of computing the current position of an airplane and its fuel level and reporting that information. It is decomposed into three processes:
- **Position_indicator** produces information about the current position of the aircraft.
- **Fuel_indicator** produces information about the level of kerosene in the aircraft.
- **Parameter_refresher** refreshes the parameters used by other processes.
To illustrate the use of mode automata at the process-level of this application, we focus on **Position_indicator**, Fig. 2(a)-2(b). It is composed of two main aspects: the ImaAspect includes the computational part and the ImaProcessControl contains the control flow part. The computational part (see Fig. 2(a)) consists of a data-flow graph. It contains Blocks of data-flow equations. The control-flow part is best described in an imperative manner by a mode automaton, shown in Fig. 2(b). Each time the partition is active, the current state of the automaton indicates which of the Blocks in the computational part is executed. From the above descriptions, a corresponding Signal program is automatically generated allowing one to use the functionalities of POLYCHRONY to formally analyze, transform and verify the application model.
1.4 Overview
The scope of this article is to present the definition of polychronous mode automata within the model-driven engineering framework Signal-Meta. It consists of an extension of the synchronous data-flow formalism SIGNAL with multi-clocked mode automata. To this end, the remainder of this paper is organized as follows.
Section 2 first presents related works. Section 3 gives an informal presentation of the SIGNAL formalism and of its extension. Section 4 outlines the meta-model of SIGNAL, defines its extension with mode automata and outlines the
use of GME to define the transformation of mode automata into SIGNAL. Section 5 formalizes the model transformation by considering the intermediate representation of SIGNAL. Section 6 provides operational semantics of mode automata framework. Concluding remarks are given in Section 7.
2. RELATED WORKS
The hierarchical combination of heterogeneous programming models is a notion whose introduction dates back to early models and formalisms for the specification of hybrid discrete/continuous systems.
The most common example is Matlab [18], which supports the Stateflow notation to describe modes in event-driven and continuous systems. Similarly, Ptolemy [7] allows for the hierarchical and modular specification of finite state machines hosting heterogeneous models of computation. Worth noticing is Hyscharts [2], which integrates discrete and continuous modeling capabilities within the same model-driven engineering framework.
In the same vein, mode automata were originally proposed by Maraninchi et al. [16] to gather the advantages of declarative and imperative approaches to synchronous programming and to extend the functionality-oriented data-flow paradigm of Lustre with the capability to model transition systems easily, providing an additional imperative flavor. Similar variants and extensions of the same approach, mixing design paradigms or heterogeneous models of computation [7, 8], have been proposed until recently, the latest advance being the combination of stream functions with automata [10]. Nowadays, commercial toolsets such as Esterel Studio’s Scade or Matlab/Simulink’s Stateflow are largely inspired by similar concepts.
The introduction of a preemption mechanism in the multi-clocked data-flow formalism SIGNAL was previously studied by Rutten et al. [21]. This was done by associating data-flow processes with symbolic activation periods. However, no attempt has been made to extend mode automata with the capability to model multi-clocked systems, which is the aim of this article.
The main advantage of the multi-clocked approach over previous installments of mode automata principles lies in the capabilities gained for rapid prototyping: not only may functionalities and components be abstracted with multi-clocked specifications, but modes describing early control requirements may then allow rapid prototyping of the system, while offering automated program transformation and code generation facilities to synthesize the foreseen implementation in a correct-by-construction manner.
3. POLYCHRONY
We position the problem by considering partially synchronous (or polychronous) specifications using the data-flow formalism SIGNAL [15].
3.1 Polychronous data-flow equations
A SIGNAL process consists of the simultaneous composition of equations on signals. A signal consists of an infinite flow of values that is discretely sampled according to the pace of its clock, noted \( \hat{x} \) for a signal \( x \). An equation partially relates signals with respect to an abstract timing model. SIGNAL defines the following primitive constructs:
- A functional equation \( x = f(y, z) \) defines an arithmetic or boolean relation \( f \) between its operands \( y, z \) and the result \( x \).
- A delay equation \( x = y \ \text{pre}\ v \) initially defines the signal \( x \) by the value \( v \) and then by the value of the signal \( y \) from the previous execution of the equation. In a delay equation, the signals \( x \) and \( y \) are assumed to be synchronous, i.e. either simultaneously present or simultaneously absent at all times.
- A sampling \( x = y \ \text{when}\ z \) defines \( x \) by \( y \) when \( z \) is true and both \( y \) and \( z \) are present. In a sampling equation, the output signal \( x \) is present iff both input signals \( y \) and \( z \) are present and \( z \) holds the value true.
- A merge \( x = y \ \text{default}\ z \) defines \( x \) by \( y \) when \( y \) is present and by \( z \) otherwise. In a merge equation, the output signal is present iff either of the input signals \( y \) or \( z \) is present.
- The synchronous composition \((P \parallel Q)\) of the processes \( P \) and \( Q \) consists of simultaneously considering a solution of the equations in \( P \) and \( Q \) at any time.
- The equation \( P/ x \) restricts the lexical scope of a signal \( x \) to a process \( P \).
3.2 Mode automata
To express mode automata, we consider an extension of SIGNAL which comprises the following base syntactic elements. \( \text{init}\ s \) specifies the initial state (mode) of an automaton \( a \). \( s : p \) associates the behavior \( p \) with the mode \( s \). A weak transition \( e \Rightarrow s \rightarrow t \) gives the clock \( e \) (or guard) upon which the automaton moves from mode \( s \) to mode \( t \) for the next instant, while a strong transition immediately transits from mode \( s \) to mode \( t \) upon the condition \( e \) (most likely a condition on input signals, such as an alarm). The support of both weak and strong preemption greatly enhances modeling capabilities and facilitates design. Synchronous composition of automata is noted \( a \mid b \).
\[
a, b ::= \text{init}\ s \;\mid\; s : p \;\mid\; (e \Rightarrow s \rightarrow t)\ \text{(weak)} \;\mid\; (e \Rightarrow s \rightarrow t)\ \text{(strong)} \;\mid\; a \mid b
\]
3.3 Example of a crossbar switch
To support the presentation of our modeling techniques, we consider the example of a simple crossbar switch. Its interface is composed of two input data signals \( y_1 \) and \( y_2 \) and a reset input signal \( r \).
(Diagram: the switch component, with reset input \( r \), data inputs \( y_1 \) and \( y_2 \), and data outputs \( x_1 \) and \( x_2 \).)
Data signals are routed along the output data signals \( x_1 \) and \( x_2 \) depending upon the internal state \( s \) of the switch. The state is toggled using the reset signal by the functionality \( s = \text{toggle}(r) \). Data is routed along an output signal \( x \) from two possible input sources \( y_1 \) or \( y_2 \) depending on the value of \( s \) by two instances of the functionality \( x = \text{route}(s, y_i, y_j) \) with \( i \neq j \) and \( i, j \in \{1, 2\} \).
\[
(x_1, x_2) = \text{switch}(y_1, y_2, r) \overset{\text{def}}{=} \begin{cases} s = \text{toggle}(r) \\
\quad x_1 = \text{route}(s, y_1, y_2) \\
\quad x_2 = \text{route}(s, y_2, y_1) \end{cases} / s
\]
The subprocess \texttt{toggle} defines the state of the switch by the signal \( s \). If the reset signal \( r \) is present and true, then the next state \( t \) is defined by the negation of current state \( s \) and otherwise by \( s \).
\[
s = \text{toggle}(r) \overset{\text{def}}{=} \big(\, s = t \ \text{pre}\ \text{true} \;\mid\; t = \text{not}\ s \ \text{when}\ r \ \text{default}\ s \,\big) / t
\]
The subprocess \texttt{route} selects which of the values of its input signals \( y_1 \) or \( y_2 \) to send along its output signals \( x_i, i \in \{1, 2\} \) depending on the boolean signal \( s \). If \( s \) is present and true, it chooses \( y_i \) and else, if \( s \) is present and false, it chooses \( y_j \).
Remember that \texttt{Signal} equations partially synchronize input and output signals. In the \texttt{route} process, this implies that none of the signals \( y_1, y_2 \) and \( s \) are synchronized, and that the output signal \( x_i, i \in \{1, 2\} \) is present iff either of \( y_i \) are present and \( s \) true or \( y_j \neq i \) is present and \( s \) false.
\[
x_i = \text{route}(s, y_i, y_j) \overset{\text{def}}{=} \big(\, x_i = (y_i \ \text{when}\ s) \ \text{default}\ (y_j \ \text{when}\ \text{not}\ s) \,\big), \quad \forall\, 0 < i \neq j \leq 2
\]
The switch is a typical example of a specification where superimposing an imperative, automata-like structure on a native data-flow structure gives a shorter and more intuitive description of the system’s behavior.
The mode automaton of the switch consists of two states, flip and flop, in which the outputs \( x_1 \) and \( x_2 \) are defined from \( y_1 \) and \( y_2 \) according to the current mode of the automaton. The mode toggles from flip to flop, or conversely, when the event \( r \) occurs.
\[
(x_1, x_2) = \text{switch}(y_1, y_2, r) \overset{\text{def}}{=}
\left(\begin{array}{l}
\text{init flip} : (x_1 = y_1 \mid x_2 = y_2) \\
\text{flop} : (x_1 = y_2 \mid x_2 = y_1) \\
r \Rightarrow \text{flip} \rightarrow \text{flop} \\
r \Rightarrow \text{flop} \rightarrow \text{flip}
\end{array}\right)
\]
4. A META-MODELING APPROACH
To develop our meta-modeling approach, we have used the GME environment [14], Fig. 3. GME is a configurable UML-based toolkit that supports the creation of domain-specific modeling and program synthesis environments. GME uses meta-models to describe modeling paradigms for specific domains. The modeling paradigm of a given application domain consists of the basic concepts that represent its intended meaning from a syntactic and relational viewpoint.
4.1 The Signal meta-model
The definition of a meta-model in GME is realized using the MetaGME modeling paradigm. First, modeling paradigm concepts are described in an UML class diagram. To achieve it, MetaGME offers some predefined UML stereotypes [13], among which FCO, Atom, Model, and Connection. FCO (First Class Object) constitutes the basic stereotype in the sense that all the other stereotypes inherit from it. It is used for expressing abstract concepts. Atoms are elementary objects that cannot include any sub-part. On the contrary, models may be composed of several FCOs.
Containment and Inheritance relations are represented as in UML. All the other types of relations are specified through Connections. Some of these stereotypes are used in the class diagram represented in Fig. 4. For the Signal meta-model, called Signal-Meta [6], class diagrams describe all syntactic elements defined in SIGNAL v4 [4]. Among these concepts, there is an Atom for each Signal operator (e.g. numeric, clock relations, constraints), a Model for each Signal “container” (e.g. process declaration, module), and a Connection for each relation between Signal operators (e.g. definition, dependence).
With these class diagrams, GME provides a means to express the visibility of FCOs within a model through the notion of Aspect (i.e. one can decide which parts of the description are visible in a given Aspect).
Figure 3: The Signal meta-model in GME.
Figure 4: Extension of the Signal meta-model with mode automata.
4.2 Refinement of the meta-model with modes
To manage mode automata, we extend Signal-Meta with a new class diagram represented in Fig. 4. An Automaton is a Model composed of states, transitions, local signals, and StateObservers. As for classical Statecharts [12] or SyncCharts [1], there are three kinds of states: AndState, Automaton, and State. The first two are Models composed of other states (CompoundState), whereas the last is a terminal state describing Signal equations. An AndState consists of several states composed in parallel. An Automaton can be added to another Automaton as a state (to create hierarchical automata), or to one of the Signal-Meta Models (represented in the class diagram by the ModelsWithDataflow Model). Thus, mode automata can be composed of Signal programs or of sub-mode automata. The ModelsWithDataflow concept is abstract; it represents Models including the two Aspects mentioned in the previous section and all operators described in Signal-Meta. State inherits from this Model to be able to describe Signal equations. Finally, the InitState Atom is intended to be connected to the initial state of the Automaton.
The automata transitions are represented as Connections in the meta-model. Two kinds of transitions are considered: StrongTransition, and WeakTransition. StrongTransitions are used to compute the current state of the Automaton (before entering the state), whereas WeakTransitions are used to compute the state for the next instant.
More precisely, the guards of the WeakTransitions are evaluated to estimate the state for the next instant, and the guards of the StrongTransitions whose source is the state estimated at the previous instant are evaluated to determine the current state. However, note that for each Automaton, at most one StrongTransition can be taken at each instant. To distinguish the two kinds of transitions, a StrongTransition is denoted by a light arrow in the graphical representation (see Fig. 5(a)), whereas a WeakTransition is represented by a bold one.
Contrary to SyncCharts [1], in which as many transitions as possible can be taken, in our model at most two transitions can be taken during one reaction: one StrongTransition and/or one WeakTransition. This is also the case in [10]. This guarantees that there is no infinite loop when determining the current state of an automaton. For example, determining the current state of the Atm Automaton represented in Fig. 5(a) when the event r is emitted would be impossible if we allowed as many transitions as possible to be taken. Note also that the guard of a StrongTransition should not depend on signals defined in the state connected to this transition.
Both kinds of transitions link, inside an Automaton, a state to another one, or to the History Atom of one of the CompoundState sub-state of this Automaton. If the transition taken to arrive at a CompoundState is connected to the state itself, this CompoundState is automatically reinitialized. This reinitialization corresponds, for an Automaton,
to execute it from its initial state, and for an AndState, to reinitialize all its sub-states. On the contrary, the CompoundState retains its previous state if the transition is connected to its History.
Each kind of transition has two attributes: Guard, in which the guard of the transition is expressed, and TransitionPriority, in which an integer expresses the priority of this transition among all transitions of the same kind (WeakTransition or StrongTransition) with the same source state. The smaller the value associated with the transition is, the higher the priority of the transition is. Thus, we can guarantee the determinism of the automaton. An OCL constraint checks that for each state, all outgoing WeakTransitions (resp. StrongTransitions) have different priorities. A third kind of Connection (InitialTransition) has been added to link the InitState of an Automaton to any state that corresponds to the initial one. There can be only one such Connection in an Automaton.
To observe the state of an automaton, we add a StateObserver Atom, which allows a process to be called with the current state of the automaton as an input signal. The name of this process is provided through the attribute ProcessName. If this attribute is not defined, the current state is written on the standard output. Basically, the clock of an automaton depends on the clocks of the signals used in all its transitions and states. Alternatively, the clock of an automaton can be explicitly specified. In the meta-model, this is expressed by the inheritance of Automaton from ConstraintInput.
4.3 Modeling of the crossbar switch
We illustrate the use of the mode automata extension in the example of the switch. Fig. 5(a) represents the modeling of the mode automaton of the switch in GME. Atm contains two terminal states (flip and flop). StrongTransitions are guarded by the value of the event r, as labeled on the middle of transitions. The 0 indicates the transition priority (it can be omitted here). The content of flip (resp. flop) state is represented in Fig. 5(b) (resp. 5(c)). In these figures, dotted arrows correspond to partial definitions in SIGNAL. x1, x2, y1, y2 are references to signals from an upper Model. The upper Model is that of the switch, and Atm and all the signals it uses are declared there. In this Model, y1, y2, and r are input signals, and x1 and x2 are output signals. In Fig. 5(d), the clock of Atm is fixed to the union of the clocks of y1, y2, and r. The clocks of x1 and x2 have to be specified explicitly because they are defined using partial definitions: a MinClock operator is used to define the clock of x1 and x2 as the union of clocks of their partial definitions. The DATA_TYPE parameter is used to associate a generic type with input and output signals.
4.4 Implementation in GME
GME offers different means to extend its environment with tools, such as the MetaGME Interpreter, which, like a compiler, checks the correctness of the meta-model, generates the paradigm file, and registers it into GME. This file is then used by GME to configure its environment for the newly defined paradigm.
In a similar way to the MetaGME Interpreter, we have developed a GME Interpreter to analyze Signal-Meta Models and produce the corresponding SIGNAL programs. We extend this Interpreter to produce the SIGNAL equations corresponding to mode automata descriptions. The code in Fig. 6 is that generated by the Interpreter for the switch example specified in Fig. 5 (note the concrete SIGNAL syntax: y$ init v is the concrete notation for y pre v; x ^= y synchronizes the clocks of x and y; ^+ represents the union of clocks; x ::= e and x := e represent respectively a partial definition and a complete definition of x). The transformation works as follows. For each automaton:
- One enumeration type is built (line 21). Each value of the enumeration is the name of a state (the uniqueness of names is checked).
- Four signals of this type are created. They correspond to the current state (currentState), the previous state (previousState), the next state (nextState) of the Automaton (lines 22-23) and its previous value (zNextState).
- An event is created for each transition of the Automaton (line 20). For a WeakTransition (resp. StrongTransition), this event is present when its guard is true and when the currentState (resp. zNextState) is equal to the source state of the transition. In this example, we have only StrongTransitions (lines 5-6).
- If the Automaton contains CompoundStates (it is not the case in our example), then two boolean signals are added: history, and nextHistory. They are true if the StrongTransition (resp. WeakTransition) taken to determine the currentState (resp. nextState) is connected to the History Atom of the destination CompoundState.
- The previousState and zNextState are defined respectively by the last value of currentState (line 12) and nextState (line 13).
- To define the nextState (line 8) (resp. currentState (lines 9-11)), the destinations of all WeakTransitions (resp. StrongTransitions) are conditioned by the event of the corresponding transition. The default values of the nextState and the currentState are respectively the currentState and the zNextState. If the Automaton is a sub-state of another one, the currentState is defined by the initial state of this Automaton if the history signal of the upper level Automaton is false. In our example, there is no WeakTransition, thus nextState is always defined by the currentState. Note that the order of the transitions is not important, except for states with several outgoing transitions. In this case, transitions are ordered according to their priority.
- Mode changes are expressed according to the value of currentState (lines 14-17).
In a given Automaton, the clock of currentState is synchronized to that of nextState. Nonetheless, it may be defined by that of another Automaton. At the top level, the clock of currentState is synchronized (line 7) only if there is some explicit synchronization in the Model, such as the Connection to Atm on the right of Fig. 5(d). For AndStates, the Interpreter has just to compose the equations of all sub-states. Finally, for States, equations are produced as for any Signal-Meta Model [6].
5. \textbf{FORMALIZATION}
We use the Polychrony workbench to perform formal verification (model checking and controller synthesis are provided with the Sigali tool [17]) and sequential and distributed code generation (in C, C++ or Java) starting from models with mode automata. Taking advantage of the metamodeling framework provided by Gme, we define the necessary generation of Signal code from the meta-model for mode automata.
5.1 An intermediate representation
The data-flow synchronous formalism SIGNAL supports an intermediate representation of multi-clocked specifications that exposes control and data-flow properties for the purpose of analysis and transformation. In this structure, noted \( G \), a node \( g \) is a data-flow relation that partially defines a clock or a signal. A signal node \( c \Rightarrow x = f(y,z) \) partially defines \( x \) by \( f(y,z) \) at the clock \( c \). A clock node \( \hat{x} = c \) defines a relation between two particular signals or events called clocks.
$$G, H ::= g \mid (G \parallel H) \mid G/x \quad \text{(graph)}$$
$$g, h ::= \hat{x} = c \mid c \Rightarrow x = f(y,z) \quad \text{(nodes)}$$
A clock $c$ expresses a discrete sample of time by a set of instants. It defines the condition upon which (or the time at which) a data-flow relation is executed. The clock $\hat{x}$ means that the signal $x$ is present (its value is available). The clocks $[x]$ and $[\neg x]$ mean that $x$ is present and is true (resp. false). A clock expression $e$ is a boolean expression and 0 is the clock that means never (or the empty set of instants).
$$c ::= \hat{x} \mid [x] \mid [\neg x] \quad \text{(clock)}$$
$$e ::= 0 \mid c \mid e_1 \lor e_2 \mid e_1 \land e_2 \mid e_1 \setminus e_2 \quad \text{(expression)}$$
The decomposition of a process $p$ into the synchronous composition of clock and signal nodes is defined by induction on the structure of $p$. Each equation is decomposed into a data-flow function guarded by a condition, usually the clock $\hat{x}$ of the output signal.
$$G[x = y \,\mathsf{pre}\, v] \overset{\text{def}}{=} (\hat{x} \Rightarrow x = y \,\mathsf{pre}\, v) \parallel (\hat{x} = \hat{y})$$
$$G[x = y \,\mathsf{when}\, z] \overset{\text{def}}{=} (\hat{x} \Rightarrow x = y) \parallel (\hat{x} = \hat{y} \land [z])$$
$$G[x = y \,\mathsf{default}\, z] \overset{\text{def}}{=} (\hat{y} \Rightarrow x = y) \parallel ((\hat{z} \setminus \hat{y}) \Rightarrow x = z) \parallel (\hat{x} = \hat{y} \lor \hat{z})$$
$$G[p \mid q] \overset{\text{def}}{=} G[p] \parallel G[q]$$
$$G[p / x] \overset{\text{def}}{=} G[p] / x$$
5.2 Application to the crossbar switch
Let us construct the graph of the crossbar switch. It can modularly be defined by one instance of the toggle function and two instances of the route function; the Signal code generated for the switch is listed below.
```
1. process Switch =
2. { type DATA_TYPE; }
3. ( ? DATA_TYPE y1, y2; event r; ! DATA_TYPE x1, x2; )
4. (| min_clock(x2) | min_clock(x1)
5. | %Atm% __ST_0_flop_To_flip := when (r) when (_Atm_0_zNextState = #flop)
6. | __ST_1_flip_To_flop := when (r) when (_Atm_0_zNextState = #flip)
7. | _Atm_0_currentState ^= y1 ^+ y2 ^+ r
8. | _Atm_0_nextState := _Atm_0_currentState
9. | _Atm_0_currentState := #flip when __ST_0_flop_To_flip
10. |    default #flop when __ST_1_flip_To_flop
11. |    default _Atm_0_zNextState
12. | _Atm_0_previousState := _Atm_0_currentState$ init #flop
13. | _Atm_0_zNextState := _Atm_0_nextState$ init #flop
14. | case _Atm_0_currentState in
15. |   (#flop): (| x2 ::= y1 | x1 ::= y2 |)
16. |   (#flip): (| x2 ::= y2 | x1 ::= y1 |)
17. | end
18. |)
19. where
20. event __ST_0_flop_To_flip, __ST_1_flip_To_flop;
21. type _Atm_0_type = enum (flop, flip);
22. _Atm_0_type _Atm_0_currentState, _Atm_0_nextState;
23. _Atm_0_type _Atm_0_previousState, _Atm_0_zNextState;
24. end
25. ; % process Switch %
```
Each function is decomposed into a set of guarded data-flow relations: its signal nodes, and its specific timing model, expressed by clock nodes.
\[ G[\text{switch}] \overset{\text{def}}{=} (G[\text{toggle}] \parallel G[\text{route}_1] \parallel G[\text{route}_2]) / st \]
The compilation of a mode automaton into multi-clocked data-flow equations consists of its structural translation into partial equations modeling guarded commands and of the addition of the necessary synchronization relations described by clock equations. The top-level rule \( C[a] \) defines the current state of \( a \), represented by a signal \( x \) (its next value being synchronously carried by the signal \( x' \)).
The clock of the mode automaton is hence \( \hat{x} \). It is synchronized to the clock expression \( e_x \), the activity clock of the automaton: if at least one signal \( y \) defined by the automaton has an active clock \( \hat{y} \), the automaton is activated to compute it and possibly to perform some transition.
The rule \( C^x[\mathsf{init}\ s] \) defines \( x \) initially by the initial state \( s \) and then by the previous value of the next state \( x' \), unless the condition of some strongly preemptive transition prevails. The rule \( C^x[c \Rightarrow s \to t] \), for a weak transition, defines the next state \( x' \) by \( t \) if the current state is \( s \) and the condition \( c \) holds.
The rule \( C^x[c \Rightarrow s \Rightarrow t] \), for a strong transition, defines the current state \( x \) by \( t \) when the condition \( c \) holds upon entering state \( s \) (i.e., when the previous value of the next state \( x' \) is \( s \)). The rule \( C^x[s : p] \) defines a mode \( s \) by guarding the process \( p \) with the condition \( [x = s] \). The condition \( [x = s] \) can equally be regarded as the clock \( [y] \), where the signal \( y \) is defined by the equation \( y = \mathrm{eq}(x, s) \).
\[ C[a] \overset{\text{def}}{=} (C^x[a] \parallel (\hat{x} = \hat{x}')) / x\, x' \]
\[ C^x[\mathsf{init}\ s] \overset{\text{def}}{=} \hat{x} \Rightarrow x = x' \,\mathsf{pre}\, s \]
\[ C^x[c \Rightarrow s \to t] \overset{\text{def}}{=} ([x = s] \land c) \Rightarrow x' = t \]
\[ C^x[s : p] \overset{\text{def}}{=} [x = s] \Rightarrow G[p] \]
The notation \( [x = s] \Rightarrow G \) conditions \( G \), the behavior of an automaton in the mode \( s \), by the condition \( x = s \). It can be decomposed into a set of core Signal equations by application of the following translation rules:
\[ c \Rightarrow (G \parallel H) \overset{\text{def}}{=} (c \Rightarrow G) \parallel (c \Rightarrow H) \]
\[ c \Rightarrow (\hat{x} = e) \overset{\text{def}}{=} (c \land \hat{x}) = (c \land e) \]
\[ c \Rightarrow (G/x) \overset{\text{def}}{=} (c \Rightarrow G)/x \]
\[ c \Rightarrow (d \Rightarrow x = f(y_{1..n})) \overset{\text{def}}{=} (c \land d) \Rightarrow x = f(y_{1..n}) \]
6. SEMANTICS OF MODE AUTOMATA
We complete the formalization of our extension to the Signal meta-model by the definition of the operational semantics of polychronous automata. It starts with the exposition of a micro-step automata theory and continues with the specification of the micro-step automata admitted by polychronous modes.
6.1 Micro-step automata
We first consider the theory of synchronous micro-step automata proposed by Potop et al. [19]. As already demonstrated for Signal in [22], this framework accurately renders concurrency and causality for synchronous (multi-clocked) specifications.
**Micro-step automata** communicate through signals \( x \in X \). A label \( l \in L_X \) is a partial map from a set of signals \( X \), whose domain is noted \( \mathrm{vars}(l) \), to a set of values \( V_\bot = V \cup \{\bot\} \). The value \( \bot \) denotes the absence of communication during a transition of the automaton. We write \( \mathrm{supp}(l) = \{x \in X \mid l(x) \neq \bot\} \) for the support of a label \( l \) and \( \bot_X \) for the label of empty support. We write \( l' < l \) if there exists \( l'' \) of support disjoint from that of \( l' \) such that \( l = l' \cup l'' \).
An **automaton** \( A = (s^0, S, X, \rightarrow) \) is defined by an initial state \( s^0 \), a finite set of states \( S \), labels \( L_X \) over the signals \( X \), and a transition relation \( \rightarrow \subseteq S \times L_X \times S \). The **product** \( A_1 \otimes A_2 \) of \( A_1 = (s^0_1, S_1, X_1, \rightarrow_1) \) and \( A_2 = (s^0_2, S_2, X_2, \rightarrow_2) \) has initial state \( (s^0_1, s^0_2) \), states \( S_1 \times S_2 \), signals \( X_1 \cup X_2 \), and transitions \( (s_1, s_2) \xrightarrow{l_1 \cup l_2} (t_1, t_2) \) if and only if \( s_i \xrightarrow{l_i} t_i \) for \( i \in \{1, 2\} \).
A **synchronous automaton** \( A = (s^0, S, X, \rightarrow) \), of clock \( c \), consists of a concurrent automaton \( (s^0, S, X, \rightarrow) \) such that:
1. if \( s \xrightarrow{l} s' \) and \( c \in \mathrm{supp}(l) \), then \( l = l_c \): a clock transition always happens alone;
2. if \( s \xrightarrow{l_c} s' \), then \( s' \xrightarrow{l_c} s' \): a clock transition can stutter;
3. if \( s_{i-1} \xrightarrow{l_i} s_i \) for \( 0 < i \leq n \) with \( l_i \neq l_c \) for \( i < n \) and \( l_n = l_c \), then \( \mathrm{vars}(l_i) \cap \mathrm{vars}(l_j) = \emptyset \) for all \( 0 < i \neq j < n \): a reaction is composed of transitions on disjoint supports.
The composition of automata is defined by synchronized product and synchronous communication using 1-place synchronous FIFO buffers. The synchronous FIFO of clock \( c \) and channel \( x \) is noted \( \mathit{sfifo}(x, c) \). It serializes the emission event \( !x = v \) followed by the receipt event \( ?x = v \) within the same transition (the clock tick \( c \) occurs afterwards).
\[ \mathit{sfifo}(x, c) \overset{\text{def}}{=} \Big( s_0,\ \{s_0, s_1\},\ \{?x, !x, c\},\ \{\, s_0 \xrightarrow{!x = v} s_1 \xrightarrow{?x = v} s_0 \mid v \in V \,\} \cup \{\, s_0 \xrightarrow{l_c} s_0 \,\} \Big) \]
Let \( A_1 = (s^0_1, S_1, X_1, \rightarrow_1) \) and \( A_2 = (s^0_2, S_2, X_2, \rightarrow_2) \) be two synchronous automata of respective clocks \( c_1 \) and \( c_2 \), let \( c \) be a clock, and write \( A[c/c_i] \) for the substitution of \( c_i \) by \( c \). The synchronous composition \( A_1 \circ A_2 \) is defined by the product of \( A_1 \), \( A_2 \) and a series of synchronous FIFO buffers \( \mathit{sfifo}(x, c) \) that are all synchronized on the same clock \( c \).
\[ A_1 \circ A_2 \overset{\text{def}}{=} \Big( A_1[c/c_1] \otimes A_2[c/c_2] \otimes \bigotimes_{x \in X_1 \cap X_2} \mathit{sfifo}(x, c) \Big) \]
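As a rough illustration of the synchronized product only (leaving out the FIFO buffers, the clock substitution and the clock-alone condition), one might sketch it as follows in Python; the data encoding is invented for this example.

```
def compatible(l1: dict, l2: dict) -> bool:
    """Two labels (partial maps from signals to values) are compatible
    when they agree on the signals they both define."""
    return all(l1[x] == l2[x] for x in l1.keys() & l2.keys())

def sync_product(a1, a2):
    """Product of two automata a = (s0, states, signals, transitions), where a
    transition is (source, label, target) and a label is a dict.  A joint
    transition pairs one transition of each automaton whose labels are
    compatible; FIFO buffers and clocks are not modelled in this sketch."""
    s01, S1, X1, T1 = a1
    s02, S2, X2, T2 = a2
    T = [((s1, s2), {**l1, **l2}, (t1, t2))
         for (s1, l1, t1) in T1
         for (s2, l2, t2) in T2
         if compatible(l1, l2)]
    return ((s01, s02), [(p, q) for p in S1 for q in S2], X1 | X2, T)

# Tiny example: two single-state automata sharing the clock signal c.
a1 = ("p0", ["p0"], {"x", "c"}, [("p0", {"x": 1}, "p0"), ("p0", {"c": "tick"}, "p0")])
a2 = ("q0", ["q0"], {"y", "c"}, [("q0", {"y": 0}, "q0"), ("q0", {"c": "tick"}, "q0")])
print(len(sync_product(a1, a2)[3]))  # number of joint transitions
```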
6.2 Micro-step semantics of Signal
Micro-step automata provide a simple and expressive operational framework to formalize the semantics of multi-clocked specifications.
Clocks
A clock expression \( c \) corresponds to a transition system \( T_{c}^{s,t} \) from \( s \) to \( t \) which evaluates the presence of signals in accordance with \( c \).
\[ T_{c}^{s,t} \overset{\text{def}}{=} \{\, s \xrightarrow{l_c} t \,\} \]
We write \( l_c \) for the label \( l \) that corresponds to the clock \( c \) and canonically denote \( v_x \) the generic value of the signal \( x \): \( l_{\hat{x}} \overset{\text{def}}{=} (x = v_x) \), \( l_{[x]} \overset{\text{def}}{=} (x = 1) \) and \( l_{[\neg x]} \overset{\text{def}}{=} (x = 0) \).
Relations
A synchronization relation \( \hat{x} = e \) accepts the events \( \hat{x} \) and \( e \) in any order, or none of them, and then performs a clock transition \( c \). Hence, the conditions expressed by \( \hat{x} \) and \( e \) need to occur at the same time.
\[ \mathcal{A}[\hat{x} = e] \overset{\text{def}}{=} \Big( s,\ \{s, t\},\ \{c, x\} \cup \mathrm{vars}(e),\ c,\ \{\, t \xrightarrow{l_c} s \,\} \cup \bigcup_{v_x \in V} T^{s,t}_{\hat{x} \wedge e} \Big) \]
Equations
A partial equation \( c \Rightarrow x = f(y) \) synchronizes \( x \) with the value of \( f(y) \) at the clock \( c \). But \( x \) may also be present when either \( c \) or \( y \) is absent. Therefore, the automaton requires \( x \) to be emitted with the value \( f(v_y) \) only after the events \( y \) and \( c \) have occurred. If at least one of \( c \) and \( y \) is absent, then \( x \) may or may not be present with some value \( u \) computed by another partial equation. The semantics (combinatorially) generalizes to the case of \( c \Rightarrow x = f(y_{1..n}) \) with \( n \geq 0 \).
\[ \mathcal{A}[c \Rightarrow x = f(y)] \overset{\text{def}}{=} \Big( s^0,\ \{s^0\} \cup \{\, s_{v_y} \mid v_y \in V \,\},\ \{x, y\} \cup \mathrm{vars}(c),\ c,\ \bigcup_{v_y \in V} \big( T^{s^0,\, s_{v_y}}_{c \wedge \hat{y}} \cup \{\, s_{v_y} \xrightarrow{!x = f(v_y)} s^0 \,\} \big) \Big) \]
Structuring constructs
Composition \( p \mid q \) and restriction \( p/x \) are defined by structural induction starting from the previous axioms with
\[ \mathcal{A}[p \mid q] \overset{\text{def}}{=} \mathcal{A}[p] \parallel \mathcal{A}[q] \qquad \mathcal{A}[p/x] \overset{\text{def}}{=} \mathcal{A}[p] / x \]
Example 1. Consider the transition system for the switch process. The switch automaton consists of two mirrored structures that allow for concurrently receiving \( y_1 \) and \( y_2 \) and transmitting them along \( x_1 \) or \( x_2 \) according to the mode \( s_1 \) or \( s_2 \), toggled using the signal \( r \).
6.3 Operational semantics of mode automata
The operational semantics of a mode automaton is described using one equation (Fig. 7), which defines the micro-step automaton \( \mathcal{A}[\alpha] \) corresponding to the mode declaration \( \alpha \).
To this end, a mode automaton \( \alpha \) is considered as a set of synchronously composed modes and transitions. Hence, we write \( (s : p) \in \alpha \) and \( (c) \Rightarrow s \rightarrow t \in \alpha \) for the modes and transitions it contains.
The semantics of a mode automaton \( \alpha \) consists of a transition system that is the union of the transition systems of all modes \( (s : p) \in \alpha \). The transition system of a mode \( (s : p) \) consists of \( T_{p} \) (that of the process \( p \)) where \( s_{p} \) (the initial state) is substituted by \( s \) (the mode state). For all mode transitions \( c \Rightarrow s \rightarrow t \in \alpha \), the transition system is completed with the transitions from the final states \( u \) of \( T_{p} \) to the mode state \( t \).
We write \( \mathrm{init}(T) \) and \( \mathrm{final}(T) \) for the initial and final states of \( T \) (the sources and sinks of the clock transitions in \( T \)), and \( S_\alpha = \{ s \mid (s : p) \in \alpha \} \) for the states of \( \alpha \). As usual, \( s_\alpha \) denotes the initial state of \( \alpha \) and, referring to the automaton \( \mathcal{A}[p] \) of a process \( p \), \( s_p \) its initial state, \( S_p \) its states, \( X_p \) its variables and \( T_p \) its transition system.
Example 2. In the case of the switch, this amounts to superimposing two transitions of condition \( r \) to the transition systems of the flip and flop modes.
7. CONCLUSIONS
We have presented a model of multi-clocked mode automata defined by extending the meta-model of the synchronous data-flow specification formalism Signal in the tool GME. A salient feature of our presentation is the simplicity incurred by the separation of concerns between data-flow (that expresses structure) and control-flow (that expresses a timing model) that is characteristic to the design methodology of Signal.
From a user point of view, this simplicity translates into the ease of hierarchically and modularly combining data-flow blocks and imperative modes, and significantly accelerates specification by making its structure closer to design intuitions. An example is the 28-line encoding of state transitions in the crossbar switch, Fig. 6, as opposed to its 4-line specification at the end of Section 3. The same remark applies and scales to the more realistic on-flight example, Fig. 2(b), by simplifying the specification of the mode transitions using implicit states.
While the specification of mode automata in related works requires primarily addressing the semantics and the compilation of control, the use of SIGNAL as a foundation allows us to transfer this specific issue to its analysis and code generation engine POLYCHRONY. Furthermore, it exposes the semantics and transformation of mode automata in a much simpler way by making use of clearly separated concerns expressed by guarded commands (data-flow relations) and by clock equations (control-flow relations).
8. REFERENCES
Abstract
An emerging technology called Near Field Communication (NFC) is used to enable touch between mobile devices. This paper describes how a mobile social media system called ‘Hot in the City’ (HIC) enables people to make friend connections on the spot when they meet each other. We first describe the HIC system, and then explain how the visibility of friends is arranged in the system. The research focuses on the context during the action of friend connection and how context data should be taken into account in design. A use pilot was organized to study the use of HIC. Observations from the pilot lead us to reconsider the HIC mobile application logic, study location and status information, and plan to add event and time as useful contextual data to organize collected and generated mobile information. Finally, design issues for the next steps are delineated.
1. Introduction
Internet-based applications that enable friends to network and share content – such as Facebook, MySpace and Flickr – have introduced us to social media and the social networks that emerge through them. The idea is that friends, past or present, connect and link with each other by sharing images and videos, exchanging messages, or playing games. The user sends a friend invitation to someone he or she knows and the recipient then accepts or rejects the invitation.
These social networking media have been catering to mobile users. It is possible to use Facebook with a mobile device, for example. However, social media systems have not focused on establishing instant friend connections when people meet each other. Friend connections are still made with computers.
In this paper, we present an approach where friend connections can be created on the spot when people meet each other. We present a pilot study where mobile users create friend connections with their mobile phones by simply touching one mobile device to another. This pilot was two weeks long, including a training period, and culminated in a Christmas party. We are interested in the context in which the friend connection is made and how the context data should be taken into account in the design of such a system.
The technology enabler is Near Field Communication – NFC (www.nfc-forum.org), an emerging technology related to RFID, Radio Frequency Identification. NFC allows peer-to-peer connections (P2P) that can be used for various applications. One example is the exchange of business cards between mobile devices. In general, mainstream efforts to harness NFC have targeted payment and ticketing applications. Moreover, it is possible to write information to electronic tags, which can then be attached on a wall or poster to be read with an NFC capable device. Hence, tags in the context of NFC are essentially different from keyword tags describing an online or offline artifact [e.g. 7, 11].
In order to study the creation of mobile friendships, we developed an application called ‘Hot in the City’ (HIC). This system lets users exchange data and connect to a back-end system linking users as friends. The system, its architecture and features are explained in section 2. Tagging experiments were carried out during the pilot. The tagged spaces were local venues selected by mobile users and the restaurant rooms where an experimental evening was held.
The HIC pilot is presented in section 3. We introduce the pilot setting, training and a social party evening organized for HIC experimentation. We sought to study use situations, learn more about P2P friend connections and then consider how to improve the system design. Observations are presented in section 4. They were collected from the experiments in a qualitative manner by making notes and video clips and then using them in reflective design sessions. Finally, a set of concrete design issues was assembled to feed iterative work on the HIC system. These issues are presented in section 5.
Creating an on-the-spot friend connection is a delicate event with social pressure to become friends. The technology used – both the hardware and software – should allow fast connection so as not to disrupt friendship creation. It would be possible to allow categorization of friendships, e.g. colleague, if the fast connection requirement is fulfilled. In addition, event and time information can be used to create context data for viewing and structuring friendship connection data.
Furthermore, we noticed that the users created locations and made individual spaces with the tags given to them. Consequently, several tags were scattered in some public spaces. It is not always possible, however, to trust the location information a tag provides. The tag may indicate status rather than location: in a meeting, on holiday, etc.
1.1 Method
The development of a social media system is a design effort where both technological advances and social phenomena must be understood and brought together in a way that enables innovative design solutions. How can friendship creation be studied systematically? Such an event is local and individual, and it cannot be predicted when it will happen.
We started out by making scenarios and selecting key technologies. HIC was then developed iteratively with feedback from the target domain and by working on the scenarios in development sessions of the design team, while taking technical advances into consideration. While this work was in progress, specifications for NFC technology have been under development by the NFC Forum. The first technology prototype of the HIC service was available in autumn 2008. Would HIC be a useful approach for social networking and creating mobile friendships? We realized that this was the first time such friendship creation was possible. How can we study friendship creation with a useful amount of data in a way that would be beneficial for further system development?
Piloting a prototype service is a widely used method in testing technology and involving potential users. However, it is challenging to pilot social media because social networking is not easy to grasp in laboratory conditions or living labs. With this in mind, in September 2008 we decided to link the development effort with one of our company's social events, i.e., the Christmas party. This event provided elements for the first HIC pilot: technologically savvy colleagues and an event where they all gathered. Our research approach has been design-led with an expert mindset rather than participatory design, although users do participate by giving feedback on the current designs [15].
Using an online questionnaire, informal discussions and observations, our plan was to focus on collecting data on how ten users experienced the use of the application. We would analyze the data afterwards.
Earlier research has shown that some social networking sites are used primarily for reinforcing and maintaining offline relationships instead of forming new online relationships [6]. We kept that in mind when selecting pilot users. Colleagues at a company party are not necessarily close friends. However, this event gives colleagues an opportunity to bond. Systems such as HIC may open up a new social dimension in helping colleagues connect with each other. The idea was to gain a better understanding of the context of friend connection in order to delineate design requirements for a social media system. To that end, we planned and implemented a pilot experiment in November 2008.
1.2 Related Work
Over the past years, there has been growing interest in providing location-based information and supporting collaboration between people. There are several studies, some dating to the 1990s, that have dealt with tags and collaboration awareness [e.g. 16, 2, 9].
Previous research has also focused on location-aware computing environments. Although social network sites have been studied from a rather wide range of viewpoints, not that much is known about the characteristics of mobile social networks. That said, interest in research on social networks in mobile environments has been growing recently [e.g. 4, 13, 10, 17].
In location-aware computing environments, privacy preferences vary with place and social context [1]. Location privacy also plays a significant role in the use of the HIC application. With tags, users can inherently control their location visibility within and from the HIC system. This can be seen as a basic difference to
some more traditional location-aware systems. Privacy issues are out of the scope of this paper.
The mobile social network service presented in this paper utilizes NFC technology and thus differs technologically from earlier social network sites and services. On one hand, the P2P feature in the NFC-enabled mobile phone together with the HIC mobile application allows friends to connect with each other when meeting face to face. On the other hand, the use of an NFC device changes the way social interaction occurs at the moment when people become friends. In comparison, friends are not co-located in social network sites. Furthermore, we assume that users’ contextual information can be used for managing the co-located friendship data.
2. ‘Hot in the City’ Application for Mobile Social Networks
‘Hot in the City’ is a system that in the first design iteration had basic features to enable users to create friend connections using a mobile phone. Additionally, information can be written into NFC tags. Touching the tag reveals the location of the user. This HIC version also included the HIC Facebook application. HIC is not dependent on the Facebook platform in any way, however. Similarly, HIC could be extended to use any other social media service that provides an interface for external applications. Section 2.1 delineates the HIC software architecture, and section 2.2 one of the crucial design issues for system success: how users see their friends through HIC applications, whether in the mobile or on Facebook.
2.1 Software Architecture
The current HIC software architecture consists of three parts. The first is an NFC-enabled mobile phone. Currently, a mobile Java application resides inside a mobile phone and interprets any HIC data, connecting to a back-end master server as necessary. The HIC mobile application can be used to write NFC tags. The information is text stating e.g. the location of the tag. The mobile HIC application is delivered over the air.
The second part is the HIC Facebook application. Facebook offers interfaces for third parties to create applications. The HIC Facebook application is a sub-website inside Facebook. Files are hosted by the web server and Facebook links to the external website.
The third part is the HIC back-end system. This is where the business logic and the data are located. The back-end system hosts automatic update files for the mobile applications that check the latest version every time the application is launched. The actual update time is controlled by the user, however.
Users have two HIC user interfaces: one in a mobile phone application and another in the Internet browser. These interfaces are independent and one can be used without the other. HIC registration is required from the mobile when starting the service. A user who has a Facebook account can add the HIC Facebook application into the use environment. In the HIC Facebook application, the user can see a list of friends and their location as well.
In practice, Facebook contacts the HIC web server by reporting that a user with a Facebook ID has entered the HIC Facebook application. The HIC web server constructs the page in Facebook format and displays it to the user. To construct the page, the web server needs to fetch user data from the HIC master server.
To log in to locations, the user needs to use the HIC mobile application. When touching an NFC tag, the phone reads data from the tag and sends it to the back-end system, which in turn interprets it as a login. Login information answers the questions who, where and when. As feedback, the HIC mobile application receives a list of friends and their login information. That information is parsed and shown to the user. Currently, when the HIC mobile application is running, the information is updated every minute.
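As a rough sketch of this login flow, with an invented, in-memory data model rather than the actual HIC back-end API:

```
import time

# In-memory stand-ins for the back-end data; purely illustrative.
logins = {}      # user_id -> {"location": ..., "since": ...}
friends_of = {}  # user_id -> set of friend user_ids

def handle_tag_touch(user_id: str, tag_text: str) -> list:
    """Interpret a tag touch as a login (who, where, when) and return the
    user's friend list together with each friend's latest login."""
    logins[user_id] = {"location": tag_text, "since": time.time()}
    return [{"friend": f, "login": logins.get(f)}
            for f in sorted(friends_of.get(user_id, set()))]

# Example: Vili logs in at 'Beerhouse Leskinen'; Anna then sees it in her list.
friends_of["anna"] = {"vili"}
handle_tag_touch("vili", "Beerhouse Leskinen")
print(handle_tag_touch("anna", "Dining room"))
```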
A user can create a friend connection by touching another user’s NFC-enabled mobile phone with his or her own phone. Due to the technical limitations of NFC technology, one of the users must act as the inviter and the other as the invitee. The latter accepts the invitation. The roles must be chosen before a connection is made. In this peer-to-peer connection, the HIC mobile applications exchange data and the inviter informs the back-end system that the users have created a friend connection.
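The inviter/invitee handshake could then be reported to the back-end along these lines; again the names are hypothetical, and the real exchange rests on the phone's NFC P2P stack.

```
friends_of = {}  # user_id -> set of friends, as in the previous sketch

def connect_friends(inviter_id: str, invitee_id: str, invitee_accepts: bool) -> bool:
    """After the P2P data exchange, the inviter reports the new connection to
    the back-end; it is stored symmetrically only if the invitee accepted."""
    if not invitee_accepts or inviter_id == invitee_id:
        return False
    friends_of.setdefault(inviter_id, set()).add(invitee_id)
    friends_of.setdefault(invitee_id, set()).add(inviter_id)
    return True

# Example: two colleagues touch phones at the party.
connect_friends("anna", "vili", invitee_accepts=True)
```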
Several design meetings were conducted in order to collect requirements for the HIC pilot application. In these design meetings, the participants freely presented their ideas, which could be included in the HIC service concept. Also, a domain expert was in the loop. To ensure that we could create a running pilot application with feasible features, many of the ideas were not implemented. In the future, a scalable and extendable mass-volume service would require reconsideration of the software architecture.
2.2 Visibility of Friends
An essential concept of the HIC software is the concept of friend. Our motivation for studying the visibility of friends is that we wanted to try HIC as a part of a commonly known social media platform. This takes us closer to solving the ‘yet another social media
system’ problem. The use terms of Facebook require the HIC software to have two separate friend groups. One is the friend group that the user has in Facebook and the other is the one that the user has collected with the HIC mobile application and mobile device. Facebook friend data is owned, controlled and used exclusively by Facebook, and consequently may not be displayed in the HIC mobile application. The requirement of having two friend groups meant we had to consider how visible friends are to each other.
User visibility covers three possible use cases. There are three different types of users: 1) users who only use the HIC Facebook application, 2) users who use the HIC mobile application on a mobile device and 3) users who have access to both of these applications.
A user who only uses the HIC Facebook application is able to see only those Facebook friends who have the HIC application installed on Facebook. From the point of view of the HIC system, the user is only a viewer. A user who only uses the HIC mobile application will see his or her HIC mobile friends, as they comprise the only group of friends available through the HIC system on a mobile device.
Finally, someone who uses both the HIC mobile application and the HIC Facebook application has two perspectives on the HIC system. In the phone, the user sees only the mobile friends and their activity. In the HIC Facebook application, the situation is slightly different, since both Facebook friends and HIC mobile application friends are shown in the list of friends.
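The three cases can be summarized in a small, purely illustrative function; the parameters and data model are invented for this sketch and do not reflect the actual HIC implementation.

```
def visible_friends(uses_mobile: bool, uses_facebook_app: bool,
                    hic_mobile_friends: set, facebook_friends_with_hic: set,
                    viewing_from: str) -> set:
    """Return the friends a user can see, depending on which HIC
    applications the user has and which interface is being viewed."""
    if viewing_from == "mobile":
        # The mobile application only ever shows HIC mobile friends.
        return set(hic_mobile_friends) if uses_mobile else set()
    if viewing_from == "facebook":
        if not uses_facebook_app:
            return set()
        friends = set(facebook_friends_with_hic)   # Facebook friends who installed HIC
        if uses_mobile:
            friends |= set(hic_mobile_friends)     # plus friends collected by touch
        return friends
    raise ValueError("viewing_from must be 'mobile' or 'facebook'")

# A user with both applications sees the union on Facebook,
# but only the mobile friends on the phone.
print(visible_friends(True, True, {"bob"}, {"carol"}, "facebook"))
print(visible_friends(True, True, {"bob"}, {"carol"}, "mobile"))
```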
All in all, while this implementation seems complicated, it was the only way to proceed since software interfaces set restrictions on implementation. In general, social network communities have been independent, isolated and incompatible, because no standards have been established for sharing information between them [14]. Efforts have been made to create open interfaces – such as Google’s OpenSocial – to ease the management of personal information distributed across many sites. In the future, more social networks may be able to share information with HIC, or HIC could be designed and implemented differently.
3. Pilot Settings
The HIC system pilot, including all parts and features of the implemented system, consisted of three phases: 1) user training, 2) introduction period and 3) a pilot evening event. These phases are presented next.
3.1 User Training
Before the introduction period and pilot evening event, a training and demonstration session was held for all potential pilot users. All the persons who came to and participated in the user training session were technologically savvy mobile phone users. Consequently, the user group was tolerant of possible errors or anomalies in the HIC system. Another positive aspect of this selection of users was that they were immediately able to formulate what is wrong with the design and identify the actual reason for a specific problem. They even suggested improvements. Although they are real users, they cannot be counted as representatives of consumers. Due to the stage of system development, average consumer or end-user groups would be brought into the pilot later.
Eventually, 10 users were selected for the introduction period and pilot. They were employees of a research-oriented organization who were experienced with mobile applications and services through their research work. Additionally, four pilot users were closely involved in the design and development of the pilot application. They supported the introduction of the system to the pilot users and used the HIC system actively during the work. From the demographic point of view, four out of fourteen users were women and ten were men. Six users were in the 25-34 year age group, four in the 35-44 year age group, two in the 18-24 year age group, and two in the 45-55 year age group.
In the actual training session, the HIC application was briefly introduced to the users and the exact use of the application was demonstrated. After this, the users received NFC-enabled mobile phones for use during the pilot period. First, the users had to download the HIC mobile application to their phones by touching an NFC tag that provided a download address. The HIC Facebook application was not demonstrated in the session. Users were encouraged to examine the HIC Facebook application at their own pace. After the training session, they received an email inviting them to use Facebook application. In fact, this occasion was the first time many of the participants used Facebook. The training and demonstration session took approximately one hour.
Observers collected notes on the pilot users’ comments about the HIC application. These notes were later used for analysis and design.
3.2 Introduction Period
The user training and demonstration session was followed by a one-week self-trial and introduction. The aim was to make sure that the pilot users would be
familiar with the HIC mobile and HIC Facebook applications by the pilot evening. The introduction period gave them the time to create friend connections and have real ‘user experience’ of a mobile social media system. Furthermore, the aim of this period was to test that the HIC system works correctly and would be robust during the pilot evening. From the research point of view, the introduction period provided an opportunity to collect information on how people experience the HIC system in a familiar environment.
Friend connection was not the only target of study since the idea was to study the context, starting with the use of location information. Every pilot user was given two to three NFC tags. Furthermore, they were allowed to freely place their own tags in any location. NFC tags have a permanent adhesive on their back, making it easy to attach them on different surfaces. The only requirement is that the surface may not be metallic, as this would disturb the reading of the tag. Users were able to write location information as a string of data into a tag by using their NFC phone. As we wanted to observe user behavior, we did not specifically instruct them where or how to place tags.
Help and support were available in problem situations during the whole week. Pilot users asked questions by email and face to face. During the week, we made observations on how the users decided to use the NFC phone and tags. Some of the use situations, such as the creation of friend connections or writing tags, were recorded on video for later analysis.
3.3 Pilot Evening Event
The pilot environment and the venue for the Christmas party was a two-floor restaurant in the city center. It had several rooms of different sizes. Tagged spaces on the first floor were 1) a cloakroom, 2) music room, 3) small bar and 4) large bar. The second floor of the restaurant was rather small and the following spaces were tagged there: 1) one big room with two tags, 2) one small room (smoking room), 3) dance floor and 4) dining room. In addition, one tag was located in the pub on the next block. A total of ten such tags were put in place for the pilot evening.
The tags were arranged in this way because location and tag placement is one of the main concepts in the HIC application. In the future, we imagined, not only individual user tags will be available, but service providers such as bars and restaurants will have an incentive to place tags of their own on their premises for marketing purposes. People would use them because this would provide extra benefits for them. In this case, we made the first guesses about how people would behave and use such tag infrastructure.
People heading to a certain location or participating in an elaborate event may want to know how others are finding their way to the venue. Maybe they are in the nearby bar, and want to let others know where they are quickly and on their own initiative.
If the restaurant is large, friends may want to tell each other where they are. Therefore, many tags were distributed in different rooms. A tag in the cloakroom could be used to register to an event. This information would then be available for restaurant owners or event organizers. At the pilot venue, the visual design of the tags featured the text HIC in white and a red circle.
Also during the pilot evening, users could collect friends with other pilot users and keep track of their location in the pilot environment.
All tags were available in the restaurant when the users arrived. This time, the use of own tags was not allowed in order to prevent tag littering. Figure 1 depicts a situation where two users are becoming HIC application friends by touching each other’s mobile phone. After this action, they are able to see each other’s location from the HIC application.
Figure 1. Connecting to a friend by touch.
It was challenging to observe users and collect data on their experiences at the Christmas party. Non-formal discussions were held with users in order to get their feedback. Furthermore, observations were made and collected when users tried out the application. There were four observers. They could provide assistance if any trouble with the technology was encountered. This data was entered in a wiki page for further study.
After the pilot evening, an online questionnaire was sent to ten users. The questionnaire was not sent to those four users who were closely involved in the design and development of the HIC system. Everyone responded to the questionnaire. In addition, it was possible to analyze the use of the application from the data created by the back-end system.
4. Findings
In this section, we outline the findings based on the observations during the pilot, including all the phases, online questionnaire, and system monitoring. These findings were collected during analysis sessions of the gathered material.
4.1 Feedback on Application Logic
The HIC mobile application logic did not immediately become clear to all users. Many users faced problems when trying to create friend connections by touching each other’s mobile phones. The main reason for this was that, in order to make a friend connection by touching phones, users had to choose who would send the invitation and who would accept it from the HIC mobile application menu. This was found to be a cumbersome feature. The promise of an immediate friend connection collapsed into figuring out who actually is inviting and who is accepting. At that time, the HIC mobile application did not give any feedback on friend connection errors. Inadequate feedback led to uncertainty among users.
The NFC reader in the mobile phone did not always react when another phone was brought into its reading range. Some users said that they had to be very attentive and careful when trying to create a friend connection by bringing the two mobile phones close to each other. We imagined that by default this specific action would be a simple touch, a benefit of the system. However, in fact the users had to try to figure out the position of the NFC antenna inside the mobile phone and be very patient when the phones were physically touching. Sometimes becoming friends required several touches, but most of the time, when the antenna location was found, the connection succeeded on the first try. Obviously, ensuring fast and easy connections is the most important requirement for the success of HIC. The phone antenna design should consider how to support social media through the P2P feature.
One of the early design decisions made on the HIC mobile application was that the user would use the same tag for logging in and out from a location. It appeared that this logic was not very clear. In the beginning, many users did not realize that, when leaving a tagged location they had already logged in to, they should touch the tag again to log out. Since users did not remember to log out, the information in the system did not always correspond with the real-life situation. Furthermore, we later noticed that it was far easier to log in than to log out. This observation made us consider removing the out tab from the user interface as redundant (cf. Figure 2 menu). On the other hand, time would be useful as context information in tags to define when location information should expire. For instance, a tag could be set for office hours from 8:00 AM to 6:00 PM. Location status would then expire no later than at 6:00 PM. Friends could then see that the status has simply expired.
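The expiry idea could work roughly as follows; the field names and the validity window are assumptions made for this sketch, not implemented behaviour.

```
from datetime import datetime, time

def location_status(login_time: datetime, location: str,
                    valid_from: time = time(8, 0), valid_until: time = time(18, 0),
                    now: datetime = None) -> str:
    """Return the location written on an office-hours tag, or 'expired' once
    the tag's validity window has passed for the day of the login."""
    now = now or datetime.now()
    same_day = now.date() == login_time.date()
    within_hours = valid_from <= now.time() <= valid_until
    return location if same_day and within_hours else "expired"

# A login at 9:15 is still visible at 17:30 but has expired by 19:00.
login = datetime(2008, 11, 24, 9, 15)
print(location_status(login, "Meeting room K220", now=datetime(2008, 11, 24, 17, 30)))
print(location_status(login, "Meeting room K220", now=datetime(2008, 11, 24, 19, 0)))
```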

The HIC mobile application friend listing (cf. Figure 2 friend list) did not seem to follow a clear logic. Some users stated that it should be easy to figure out how friends were listed on it. However, many factors affected how the friend list was composed. The order of friends was determined by when they had become friends and been registered in the back-end system. Fortunately, the number of friends was low and this ‘feature’ did not cause many problems in the pilot. In future designs, friend list composition and list organization options will be crucial. When a large number of users have many friends, they will have to be able to organize their information-intensive lists. Design possibilities can already be identified: friends could be ordered, e.g. based on their location.
All in all, it was clear that the application logic suffered from bad design decisions due to technology limitations (touch, invitation) or inadequate reasoning through scenarios (location in/out, friend listing).
On-the-spot mobile friend connections are challenging from a design perspective. The technology and application should be designed to be almost as fast as the event of connecting friends.
4.2 Location Under Scrutiny
A starting point for the HIC service was that it can be used to share information about the location, which would then add value to friend list viewing. Pilot users started eagerly testing the new technology with their two to three tags during the introduction period. When these tags were positioned, we found out that the data written into the tags was not exactly what we had expected. Instead of just information on the location – such as room number or company restaurant name – other kind of information appeared. It was popular to write, for example, “on the bus” and “off duty” texts in the tags. These texts in fact express user status at a certain moment.
After writing the tag information, users also had to select a suitable location for it. Positioning a tag, including information about the exact location, was rather consistent. Almost all such tags were attached right next to a door or below the room sign.
One possible source of confusion is that enabling individual users to place their tags into public spaces means that the spaces will be littered with tags indicating the same location but with a different description. This small pilot made us realize that many users, each with a set of tags, will introduce multiple tags in public locations. For example, two tags were placed on both sides of the door to the company restaurant dining room. Both tags did in fact include the very same information: “Dining room”. When someone logged into the dining room by touching a tag on the left side, the user should have touched the same tag when exiting (in/out logic). If the user touched the tag on the right side instead, the user was logged in to the dining room again. Since the tags were visually very similar (white with penciled text), it was very easy to forget during lunch which tag should be touched. As we discussed in the earlier section, the in/out logic was a design flaw. Even if it is fixed, we still have the double tag problem, where the tags do not provide exactly the same information for the location.
In the preparatory phase, users were not given any guidelines for either the visualization or placement of tags. In all the cases, they wrote exactly the same information into the tag with the mobile phone and on the tag with a pen. The tag is 3 cm in diameter. Since the tags are rather small and their color is subdued, they were sometimes difficult to locate in an office environment. It is apparent that issuing tag placement guidelines would make positioning of tags more consistent and ensure that they are easier to notice.
As pointed out earlier, the tags “off duty” or “on the bus” were created to express individual activity. In contrast, tags describing a location, such as a working room or a dining room, were meant for broader use. A clear difference between those tags is that the latter tag type is linked to a static location. On one hand, tags with no location information can be attached to a movable object, such as a wallet, suitcase, or laptop. On the other hand, tags with static location information cannot be relocated without changing the information embedded into the tag.
Text information on a tag is essential for other users. If a tag does not provide any clues about its content, it is impossible for a user to know what kind of information it has without reading it with a phone.
4.3 Friends by Touching
In total, 48 HIC friendships were created between the pilot users. 29 of them were established during the pilot evening event; the remaining 19 were made during the week-long introduction period. Thus, most of the HIC friend connections were established in the restaurant. The number of friends per user averaged 7 and ranged from 2 to 11.
These user numbers are enough for studying single use situations, but do not suffice for studying the dynamics of friendship creation. We also wanted to focus on use situations to feed our efforts to include users’ context in future designs. Piloting is a technological intervention to try out software in a substantially controllable user group.
Although making friends by touching mobile phones often took many tries and required attention, many of the users considered it fun. One user comment was that the HIC mobile application made it easier to get to know colleagues better. The HIC mobile application gave users a good reason to approach colleagues who they did not work closely with at the office. Importantly, one user reckoned that touching is a rather intimate way to interact with other mobile phone users and thought that this ritual requires open-mindedness from users.
Interaction between users is radically different when using web-based social networking sites such as Facebook and when making friend connections in a mobile friend network. A Facebook friend is invited by sending a friend request to another Facebook user. The recipient can either accept or ignore the friend request. Touching a friend’s mobile with one’s mobile phone requires face-to-face interaction. This difference affects how easy or difficult it is to ask someone to be your friend or ignore a friend request. For example, it can be very difficult to refuse to be a friend with someone who is standing in front of you. When location and time provide distance between people, as is the case with Facebook, it may be much easier to refuse or just ignore the request. Furthermore, ways of communication highlight the differences between these
approaches. Web-based social networks rely on written messages. Touching a mobile phone means that different cues, such as body language and tone of voice, might be part of the ritual of becoming friends. Maybe this is also contextual information that should be analyzed more in order to inform design.
Intimacy of interaction can also have a negative impact on willingness to create a friend connection. In the pilot, many users knew each other and there was a reason to try out making HIC friend connections. They may even have felt that it was their duty to use the pilot application. Real-life situations would probably be different, because becoming friends requires relatively close interaction with other HIC mobile application users and entering others’ personal space. In social interactions, personal space makes one feel comfortable; if someone trespasses into our personal space, we can feel stressed.
In proxemics, the study of interpersonal distance [8], four important interpersonal distances have been identified: intimate distance, personal distance, social distance and public distance. Based on Hall’s observations, intimate distance ranges from 0 to 0.5 meters. We have a tendency to avoid getting this close to people we are not intimate with, and we usually try to escape if we do. Thus, in some cases the threshold for approaching other users and becoming friends with them can be higher when touching than when using more indirect traditional web-based social media.
4.4 Visibility and Communication Possibilities
The pilot period showed that user visibility must be considered very carefully in location-based mobile social networks. All of the users were able to see their HIC friends’ location or status without limitations during the pilot. Several pilot users stated, however, that they would have wanted to limit their visibility to other users in some way. It is difficult to use the application when there in fact are different kinds of friends, and certain information may be private in nature. Consequently, it is not desirable to show all the information to all other users. The closer a friend is, the more willing one usually is to give that friend access to one’s personal life.
One possibility to limit visibility is that a user does not use tags at all, or only for a specific purpose, e.g. status information. In this way, the user maintains control over location privacy. However, that essentially limits the use of the application, because then no one gets real information about the user’s location. Users wanted a more versatile way to adjust their HIC visibility, so that information embedded into tags could be shared with specific users or a user group. When taking into consideration the need to limit visibility, it would also be possible to improve the privacy of users and at the same time provide a basis for more active use of the application.
Finding a friend was not a problem in the pilot, as the user group consisted of 14 people working for the same organization. The HIC concept probably works best when users already know each other or have a motivation to be connected, e.g. as colleagues. If the HIC system user group were larger and geographically widely dispersed, the HIC application could also be used to find potential new friends. For example, friends of a friend could be visible to HIC users. More study is needed to understand how the dynamics of the friends of friends concept would work in a mobile context.
Some users also wanted to have more diverse ways to communicate with friends through the HIC application. The HIC application now provided the possibility to monitor friend location or status with a mobile device. Although this was considered to be an interesting feature, the users soon identified missing features. Users were not able to directly communicate with a friend. Inherently, a mobile phone provides a possibility to call or send a text message. It would be easier to the user, for example, to add a message to be shown along with the location field. Furthermore, communication could be enhanced with a direct chat feature integrated in the application.
All in all, when friends were connected, some information was considered private. This brought new visibility considerations. The pilot application brought up new ideas on how to support friend communication with extra features.
4.5 Connecting Friends
It was evident from the pilot that creating friend connections with NFC mobile phones should be made as easy and fast as possible. When a user decides to create a connection with a friend or a colleague, the connection should occur quickly when phone-to-phone touch is made. The use of menus should be minimized. When users are required to know whether they are the inviter or invitee, the whole process is complicated. The next design could simplify the process to a single menu item. The reason why invitation and acceptance menu structures were used in the first place was that the P2P protocol of the mobile phone required one of the two devices to master the session.
After a successful friend connection, there is a need to assess and categorize the friendship. A successful event means that there will be a crowd of friends in the HIC list. Who is your close friend? How many friends and whom do you want to follow with the mobile application or the Facebook application? The mobile application should provide the functionality to make
friend categories. Even though the classification would not occur immediately during connection creation, it should be possible later.
There are also several smaller usability ideas for improving friend connection in a mobile environment. When the mobile application lists friends, it could provide functionalities to contact them directly. Browsing through the list and selecting a friend could open a menu with chat, SMS, or phone call commands.
4.6 Location and Status
In the introduction period, HIC users enthusiastically started to make their own virtual spaces using NFC tags. Originally, we thought that the rooms and spaces would be named along the lines of ‘Meeting room K220’ or ‘Restaurant’. Such labels describe the exact location of the space, so it would be easy to map the tag location exactly to the person’s location: the mobile application would simply list that your friend ‘Vili’ (see Figure 2) is in ‘Beerhouse Leskinen’. In practice, however, the user-created tags raised several questions. How exact is ‘Beerhouse Leskinen’ or ‘Cloakroom’? Where is the tag ‘Cloakroom’ or ‘Dance floor’? Who made the tag, and can it be trusted?
In addition, some meeting room tags carried information that was more a mental statement than a location. For example, one tag was placed in ‘the land of uhuru’. ‘Uhuru’ is not a real place but the mental state of a person, his imaginary holiday destination.
As a design consequence, we need to ask whether there should be an official set of tags, placed by an organization, that provides meaningful information for HIC users, or whether certified and digitally signed tags are needed to counter vandalism. Perhaps the tag could also carry information on the user’s context (in a meeting, in a teleconference, on holiday, etc.) in addition to the room name or other description of the person’s location. In comparison, Facebook provides a field called ‘status’ where users can insert practically any information.
A consequence of individual tagging was that our company restaurant suddenly had two different tags with similar but slightly different descriptions of that space. From a casual user’s point of view, the question is then which of the restaurant tags is the most suitable; there is little sense in everyone using only their own tags.
4.7 Events
There is a need for a higher-level abstraction of events in order to organize where friends were met. Such an abstraction could also organize other data related to a specific event, such as where friendship creation occurred or any multimedia material produced at the event.
The HIC Facebook application could provide information browsing capabilities using different views of friend connections and materials, e.g. an event view showing Christmas parties, conferences, etc., a date view, an event/friend view listing the events where you met these friends, and so on. An abstraction of an event would enable the user to virtually manage friend connections created at that event. A friend could then be a person met for example at HICSS, who we may want to consider as a colleague.
This brings us to a design scenario where attending the HICSS event would start by touching a tag. Friends and colleagues would then see that you are at this event. HICSS colleagues you have previously met would be able to see that you are in and meanwhile you would see the same view of the event. In fact, if the rooms were tagged, you would be able to see exactly where they are. The power of the structuring is in the possibility to manage any event data. When the event organizer has a toolset for structuring all the material, the event inside HIC in Facebook or other such application would provide the material. Thus, materials and friends could be structured according to the event.
5. Further Work
A three-step pilot exercise was carried out to evaluate the approach and generate a clear picture of what features had potential and what should be redesigned. Other research approaches are still needed for the further evaluation of HIC design. In particular,
making friends is a deeply social agreement between two persons. People behave differently when making a friend connection in the same space and time. We can identify several design items for the next iteration. The first concerns the creation of friendships, the second the context of the user, starting from his or her location and status, and then proceeding to the event and time as contextual parameters.
One intriguing question that emerges from this research is: what exactly is the concept of a friend? For instance, is a colleague a friend? Contemporary social media systems have tackled this same question in various ways. Yet, it seems that the way in which friend connections are handled and the visibility of these connections are crucial for the success of such systems [3].
We will take a new perspective on the friend concept into account while reconsidering what exactly constitutes an event. For instance, do we count a location, such as a restaurant, together with some specific time as an event, or should the event be more structured and organized by someone? The event is a dimension for further design, and more work on defining what an event is will also greatly affect how we think about the HIC system design.
During this effort, we had our first indication of how location can be seen from the tagging point of view. The difference from GPS is that users can leave tags in a location for everyone to use; otherwise, location can be provided to the system in a similar manner. The users, however, can choose whether they want to inform others where they are by the act of touching a tag.
We consider that future research can have both a wider generic focus and a domain-specific focus. On one hand, it is possible that an NFC-based mobile social network can provide an open and generic service similar to Facebook or that this open system can be embedded into an existing generic social media system. On the other hand, specific domains (e.g. work domains, shops or restaurants) have different social dynamics. The HIC system can be designed to support selected social dynamics. Finally, we intend to do more research on user experience with the HIC system. In autumn 2009 we will hold a HIC field trial that aims to examine the use of HIC at a conference with several hundred participants.
References
Verified Validation of Lazy Code Motion
Jean-Baptiste Tristan
INRIA Paris-Rocquencourt
jean-baptiste.tristan@inria.fr
Xavier Leroy
INRIA Paris-Rocquencourt
xavier.leroy@inria.fr
Abstract
Translation validation establishes a posteriori the correctness of a run of a compilation pass or other program transformation. In this paper, we develop an efficient translation validation algorithm for the Lazy Code Motion (LCM) optimization. LCM is an interesting challenge for validation because it is a global optimization that moves code across loops. Consequently, care must be taken not to move computations that may fail before loops that may not terminate. Our validator includes a specific check to rule out such incorrect moves. We present a mechanically-checked proof of correctness of the validation algorithm, using the Coq proof assistant. Combining our validator with an unverified implementation of LCM, we obtain an LCM pass that is provably semantics-preserving and was integrated into the CompCert formally verified compiler.
Categories and Subject Descriptors D.2.4 [Software Engineering]: Software/Program Verification - Correctness proofs; D.3.4 [Programming Languages]: Processors - Optimization; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs - Mechanical verification; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages - Operational semantics
General Terms Languages, Verification, Algorithms
Keywords Translation validation, lazy code motion, redundancy elimination, verified compilers, the Coq proof assistant
1. Introduction
Advanced compiler optimizations perform subtle transformations over the programs being compiled, exploiting the results of delicate static analyses. Consequently, optimizations are sometimes incorrect, causing the compiler either to crash at compile-time, or to silently generate bad code from a correct source program. The latter case is especially troublesome since such compiler-introduced bugs are very difficult to track down. Incorrect optimizations often stem from bugs in the implementation of a correct optimization algorithm, but sometimes the algorithm itself is faulty, or the conditions under which it can be applied are not well understood.
The standard approach to weeding out incorrect optimizations is heavy testing of the compiler. Translation validation, as introduced by Pnueli et al. (1998b), provides a more systematic way to detect (at compile-time) semantic discrepancies between the input and the output of an optimization. At every compilation run, the input code and the generated code are fed to a validator (a piece of software distinct from the compiler itself), which tries to establish a posteriori that the generated code behaves as prescribed by the input code. If, however, the validator detects a discrepancy, or is unable to establish the desired semantic equivalence, compilation is aborted; some validators also produce an explanation of the error.
Algorithms for translation validation roughly fall in two classes. (See section 9 for more discussion.) General-purpose validators such as those of Pnueli et al. (1998b), Necula (2000), Barret et al. (2005), Rinard and Marinov (1999) and Rival (2004) rely on generic techniques such as symbolic execution, model-checking and theorem proving, and can therefore be applied to a wide range of program transformations. Since checking semantic equivalence between two code fragments is undecidable in general, these validators can generate false alarms and have high complexity. If we are interested only in a particular optimization or family of related optimizations, special-purpose validators can be developed, taking advantage of our knowledge of the limited range of code transformations that these optimizations can perform. Examples of special-purpose validators include that of Huang et al. (2006) for register allocation and that of Tristan and Leroy (2008) for list and trace instruction scheduling. These validators are based on efficient static analyses and are believed to be correct and complete.
This paper presents a translation validator specialized to the Lazy Code Motion (LCM) optimization of Knoop et al. (1992, 1994). LCM is an advanced optimization that removes redundant computations; it includes common subexpression elimination and loop-invariant code motion as special cases, and can also eliminate partially redundant computations (i.e. computations that are redundant on some but not all program paths). Since LCM can move computations across basic blocks and even across loops, its validation appears more challenging than that of register allocation or trace scheduling, which preserve the structure of basic blocks and extended basic blocks, respectively. As we show in this work, the validation of LCM turns out to be relatively simple (at any rate, much simpler than the LCM algorithm itself); it exploits the results of a standard available expression analysis. A delicate issue with LCM is that it can anticipate (insert earlier computations of) instructions that can fail at run-time, such as memory loads from a potentially invalid pointer; if done carelessly, this transformation can turn code that diverges into code that crashes. To address this issue, we complement the available expression analysis with a so-called “anticipability checker”, which ensures that the transformed code is at least as defined as the original code.
Translation validation provides much additional confidence in the correctness of a program transformation, but does not completely rule out the possibility of compiler-introduced bugs: what if the validator itself is buggy? This is a concern for the development of critical software, where systematic testing does not suffice
to reach the desired level of assurance and must be complemented by formal verification of the source. Any bug in the compiler can potentially invalidate the guarantees obtained by this use of formal methods. One way to address this issue is to formally verify the compiler itself, proving that every pass preserves the semantics of the program being compiled. Several ambitious compiler verification efforts are currently under way, such as the Jinja project of Klein and Nipkow (2006), the Verisoft project of Leinenbach et al. (2005), and the CompCert project of Leroy et al. (2004–2009).
Translation validation can provide semantic preservation guarantees as strong as those obtained by formal verification of a compiler pass: it suffices to prove that the validator is correct, i.e. returns true only when the two programs it receives as inputs are semantically equivalent. The compiler pass itself does not need to be proved correct. As illustrated by Tristan and Leroy (2008), the proof of a validator can be significantly simpler and more reusable than that of the corresponding optimizations. The translation validator for LCM presented in this paper was mechanically verified using the Coq proof assistant (Coq development team 1989–2009; Bertot and Castéran 2004). We give a detailed overview of this proof in sections 5 to 7. Combining the verified validator with an unverified implementation of LCM written in Caml, we obtain a provably correct LCM optimization that integrates smoothly within the CompCert verified compiler (Leroy et al. 2004–2009).
The remainder of this paper is organized as follows. After a short presentation of Lazy Code Motion (section 3) and of the RTL intermediate language over which it is performed (section 2), section 4 develops our validation algorithm for LCM. The next three sections outline the correctness proof of this algorithm: section 5 gives the dynamic semantics of RTL, section 6 presents the general shape of the proof of semantic preservation using a simulation argument, and section 7 details the LCM-specific aspects of the proof. Section 8 discusses other aspects of the validator and its proof, including completeness, complexity, performance and reusability. Related work is discussed in section 9, followed by conclusions in section 10.
2. The RTL intermediate language
The LCM optimization and its validation are performed on the RTL intermediate language of the CompCert compiler. This is a standard Register Transfer Language where control is represented by a control flow graph (CFG). Nodes of the CFG carry abstract instructions, corresponding roughly to machine instructions but operating over pseudo-registers (also called temporaries). Every function has an unlimited supply of pseudo-registers, and their values are preserved across function calls.
An RTL program is a collection of functions plus some global variables. As shown in figure 1, functions come in two flavors: external functions ef are merely declared and model input-output operations and similar system calls; internal functions f are defined within the language and consist of a type signature sig, a parameter list r, the size n of their activation record, an entry point l, and a CFG g representing the code of the function. The CFG is implemented as a finite map from node labels l (positive integers) to instructions. The set of instructions includes arithmetic operations, memory load and stores, conditional branches, and function calls, tail calls and returns. Each instruction carries the list of its successors in the CFG. When the successor i is irrelevant or clear from the context, we use the following more readable notations for register-to-register moves, arithmetic operations, and memory loads:
\[
\begin{align*}
r &:= r' & \text{for } & \text{op}(\textit{move}, r', r, l) \\
r &:= \text{op}(op, \vec{r}) & \text{for } & \text{op}(op, \vec{r}, r, l) \\
r &:= \text{load}(\textit{chunk}, \textit{mode}, \vec{r}) & \text{for } & \text{load}(\textit{chunk}, \textit{mode}, \vec{r}, r, l)
\end{align*}
\]
A more detailed description of RTL can be found in (Leroy 2008).
RTL instructions:
\[
\begin{align*}
i ::= & \ \text{nop}(l) & & \text{no operation} \\
\mid & \ \text{op}(op, \vec{r}, r, l) & & \text{arithmetic operation} \\
\mid & \ \text{load}(\textit{chunk}, \textit{mode}, \vec{r}, r, l) & & \text{memory load} \\
\mid & \ \text{store}(\textit{chunk}, \textit{mode}, \vec{r}, r, l) & & \text{memory store} \\
\mid & \ \text{call}(\textit{sig}, (r \mid id), \vec{r}, r, l) & & \text{function call} \\
\mid & \ \text{tailcall}(\textit{sig}, (r \mid id), \vec{r}) & & \text{function tail call} \\
\mid & \ \text{cond}(\textit{cond}, \vec{r}, l_{\textit{true}}, l_{\textit{false}}) & & \text{conditional branch} \\
\mid & \ \text{return} \mid \text{return}(r) & & \text{function return}
\end{align*}
\]
Control-flow graphs:
\[
g ::= l \mapsto i \quad \text{finite map}
\]
RTL functions:
\[
\begin{align*}
\textit{fd} ::= & \ f \mid \textit{ef} & & \text{function definition} \\
f ::= & \ \{ \textit{sig};\ \textit{params} = \vec{r};\ \textit{stacksize} = n;\ \textit{entrypoint} = l;\ \textit{code} = g \} & & \text{internal function} \\
\textit{ef} ::= & \ \{ \textit{name} = id;\ \textit{sig} \} & & \text{external function}
\end{align*}
\]
Figure 1. RTL syntax
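To make the structure of RTL concrete, the grammar of figure 1 can be transcribed into an OCaml-style datatype along the following lines. This is an illustrative sketch only: the actual CompCert definitions are written in Coq, and operators, memory chunks, addressing modes and signatures are simplified to strings here.

```ocaml
(* Illustrative OCaml transcription of figure 1 (not the CompCert sources). *)
type node = int                (* CFG node labels: positive integers *)
type reg = int                 (* pseudo-registers *)
type ident = string            (* names of functions and global variables *)

type instruction =
  | Inop of node                                       (* no operation *)
  | Iop of string * reg list * reg * node              (* arithmetic operation *)
  | Iload of string * string * reg list * reg * node   (* memory load *)
  | Istore of string * string * reg list * reg * node  (* memory store *)
  | Icall of string * callee * reg list * reg * node   (* function call *)
  | Itailcall of string * callee * reg list            (* function tail call *)
  | Icond of string * reg list * node * node           (* conditional branch *)
  | Ireturn of reg option                              (* function return *)
and callee = Creg of reg | Cid of ident

module NodeMap = Map.Make (Int)
type cfg = instruction NodeMap.t     (* finite map from nodes to instructions *)

type func = {
  sig_ : string;           (* type signature *)
  params : reg list;       (* parameter list *)
  stacksize : int;         (* size of the activation record *)
  entrypoint : node;       (* entry point of the CFG *)
  code : cfg;              (* the control-flow graph *)
}
```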
3. Lazy Code Motion
Lazy code motion (LCM) (Knoop et al. 1992, 1994) is a dataflow-based algorithm for the placement of computations within control flow graphs. It suppresses unnecessary recomputations of values by moving their first computations earlier in the execution flow (if necessary), and later reusing the results of these first computations. Thus, LCM performs elimination of common subexpressions (both within and across basic blocks), as well as loop invariant code motion. In addition, it can also factor out partially redundant computations: computations that occur multiple times on some execution paths, but once or not at all on other paths. LCM is used in production compilers, for example in GCC version 4.
Figure 2 presents an example of lazy code motion. The original program in part (a) presents several interesting cases of redundancies for the computation of \( t_1 + t_2 \): loop invariant (node 4), simple straight-line redundancy (nodes 6 and 5), and partial redundancy (node 5). In the transformed program (part (b)), these redundant computations of \( t_1 + t_2 \) have all been eliminated: the expression is computed at most once on every possible execution path. Two instructions (nodes \( n_1 \) and \( n_2 \)) have been added to the graph, both of which compute \( t_1 + t_2 \) and save its result into a fresh temporary \( h_0 \). The three occurrences of \( t_1 + t_2 \) in the original code have been rewritten into move instructions (nodes 4′, 5′ and 6′), copying the fresh \( h_0 \) register to the original destinations of the instructions.
The reader might wonder why two instructions \( h_0 := t_1 + t_2 \) were added in the two branches of the conditional, instead of a single instruction before node 1. The latter is what the partial redundancy elimination optimization of Morel and Renvoise (1979) would do. However, this would create a long-lived temporary \( h_0 \), therefore increasing register pressure in the transformed code. The “lazy” aspect of LCM is that computations are placed as late as possible while avoiding repeated computations.
The LCM algorithm exploits the results of four dataflow analyses: up-safety (also called availability), down-safety (also called anticipability), delayability and isolation. These analyses can be implemented efficiently using bit vectors. Their results are then cleverly combined to determine an optimal placement for each computation performed by the initial program.
Knoop et al. (1994) presents a correctness proof for LCM. However, mechanizing this proof appears difficult. Unlike the program transformations that have already been mechanically verified in the CompCert project, LCM is a highly non-local transformation: instructions can be moved across basic blocks and even across loops.
The transformation is accompanied by a mapping \( \varphi \) that connects each node of the source code to its (possibly rewritten) counterpart in the transformed code. In the example of figure 2, \( \varphi \) maps nodes 1 \( \ldots \) 6 to their primed versions 1' \( \ldots \) 6'. We assume the unverified implementation of LCM is instrumented to produce this function. (In our implementation, we arrange that \( \varphi \) is always the identity function.) Nodes that are not in the image of \( \varphi \) are the fresh nodes introduced by LCM.
4. A translation validator for Lazy Code Motion
In this section, we detail a translation validator for LCM.
4.1 General structure
Since LCM is an intraprocedural optimization, the validator proceeds function per function: each internal function \( f \) of the original program is matched against the identically-named function \( f' \) of the transformed program. Moreover, LCM does not change the type signature, parameter list and stack size of functions, and can be assumed not to change the entry point (by inserting \texttt{nops} at the graph entrance if needed). Checking these invariants is easy, so we can focus on the validation of function graphs. The validation algorithm is therefore of the following shape:
\[
\begin{array}{l}
\text{validate}(f, f', \varphi) = \\
\quad \text{let } AE = \text{analyze}(f') \text{ in} \\
\quad f'.\text{sig} = f.\text{sig} \text{ and } f'.\text{params} = f.\text{params} \text{ and} \\
\quad f'.\text{stack} = f.\text{stack} \text{ and } f'.\text{start} = f.\text{start} \text{ and} \\
\quad \text{for each node } n \text{ of } f,\ V(f, f', n, \varphi, AE) = \text{true}
\end{array}
\]
As discussed in section 3, the \( \phi \) parameter is the mapping from nodes of the input graph to nodes of the transformed graph provided by the implementation of LCM. The analyze function is a static analysis computing available expressions, described below in section 4.2.1. The \( V \) function validates pairs of matching nodes and is composed of two checks: unify, described in section 4.2.2 and path, described in section 4.3.2.
\[
\begin{array}{l}
V(f, f', n, \varphi, AE) = \\
\quad \text{unify}(AE(\varphi(n)),\ f.\text{graph}(n),\ f'.\text{graph}(\varphi(n))) \\
\quad \text{and for each successor } s \text{ of } n \text{ and matching successor } s' \text{ of } \varphi(n), \\
\quad\quad \text{path}(f.\text{graph},\ f'.\text{graph},\ s',\ \varphi(s))
\end{array}
\]
As outlined above, our implementation of a validator for LCM is carefully structured in two parts: a generic, rather bureaucratic framework parameterized over the \text{analyze} and \( V \) functions; and the LCM-specific, more subtle functions \text{analyze} and \( V \). As we will see in section 7, this structure facilitates the correctness proof of the validator. It also makes it possible to reuse the generic framework and its proof in other contexts, as illustrated in section 8.
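To illustrate this two-level organization, here is how the generic driver could look as a higher-order OCaml function, reusing the types from the sketch after figure 1. The names (`validate_gen`, `analyze`, `v`) are ours, and the code is a simplified illustration, not the Coq implementation.

```ocaml
(* Generic validation driver, independent of LCM: it checks the function
   headers and then runs the node-level validator [v] on every node of the
   original function.  [analyze] and [v] are parameters of the driver. *)
let validate_gen
    ~(analyze : func -> 'info)
    ~(v : func -> func -> node -> (node -> node) -> 'info -> bool)
    (f : func) (f' : func) (phi : node -> node) : bool =
  let info = analyze f' in
  f'.sig_ = f.sig_
  && f'.params = f.params
  && f'.stacksize = f.stacksize
  && f'.entrypoint = f.entrypoint
  && NodeMap.for_all (fun n _i -> v f f' n phi info) f.code
```

Instantiating `analyze` with the available-expression analysis of section 4.2.1 and `v` with the LCM-specific node check yields the LCM validator; the same driver (and, in the Coq development, its proof) can be reused with other node-level checks, as section 8 discusses.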
We now focus on the construction of \( V \), the node-level validator, and the static analysis it exploits.
4.2 Verification of the equivalence of single instructions
Consider an instruction \( i \) at node \( n \) in the original code and the corresponding instruction \( i' \) at node \( \phi(n) \) in the code after LCM (for example, nodes 4 and 4' in figure 2). We wish to check that these two instructions are semantically equivalent. If the transformation was a correct LCM, two cases are possible:
- \( i' = i \): both instructions will obviously lead to equivalent run-time states, if executed in equivalent initial states.
- \( i' \) is of the form \( r := h \) for some register \( r \) and fresh register \( h \), and \( i \) is of the form \( r := \mathit{rhs} \) for some right-hand side \( \mathit{rhs} \), which can be either an arithmetic operation \( \text{op}(\ldots) \) or a memory read \( \text{load}(\ldots) \).
In the latter case, we need to verify that \( rhs \) and \( h \) produce the same value. More precisely, we need to verify that the value contained...
in $h$ in the transformed code is equal to the value produced by evaluating $\mathit{rhs}$ in the original code. LCM being a purely syntactical redundancy elimination transformation, it must be the case that the instruction $h := \mathit{rhs}$ exists on every path leading to $\varphi(n)$ in the transformed code; moreover, the values of $h$ and $\mathit{rhs}$ are preserved along these paths. This property can be checked by performing an available expression analysis on the transformed code.
4.2.1 Available expressions
The available expression analysis produces, for each program point of the transformed code, a set of equations $r = \mathit{rhs}$ between registers and right-hand sides. (For efficiency, we encode these sets as finite maps from registers to right-hand sides, represented as Patricia trees.) Available expressions is a standard forward dataflow analysis: \[ AE(s) = \bigcap \{\, T(f'.\text{graph}(l),\ AE(l)) \mid s \text{ is a successor of } l \,\} \]
The join operation is set intersection; the top element of the lattice is the empty set, and the bottom element is a symbolic constant $\mathcal{U}$ denoting the universe of all equations. The transfer function $T$ is standard; full details can be found in the Coq development. For instance, if the instruction $i$ is the operation $r := t_1 + t_2$, and $R$ is the set of equations "before" $i$, the set $T(i, R)$ of equations "after" $i$ is obtained by adding the equality $r = t_1 + t_2$ to $R$, then removing every equality in this set that uses register $r$ (including the one just added if $t_1$ or $t_2$ equals $r$). We also track equalities between registers and load instructions. Those equalities are erased whenever a store instruction is encountered because we do not maintain aliasing information.
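As a concrete, simplified illustration of the transfer function, the sketch below covers only the cases discussed above (operations, loads and stores), using the RTL types introduced earlier. The types `rhs` and `eqs` are our own illustrative definitions, not the Coq ones; the real CompCert code handles more cases and uses Patricia trees rather than the standard-library maps.

```ocaml
(* Right-hand sides tracked by the analysis: operations and loads. *)
type rhs =
  | Rop of string * reg list             (* op(op, args) *)
  | Rload of string * string * reg list  (* load(chunk, mode, args) *)

module RegMap = Map.Make (Int)
type eqs = rhs RegMap.t   (* the binding r |-> rhs encodes the equation r = rhs *)

let args_of = function Rop (_, args) | Rload (_, _, args) -> args

(* Remove every equation that mentions register r, as left-hand side or operand. *)
let kill (r : reg) (d : eqs) : eqs =
  RegMap.filter (fun r' e -> r' <> r && not (List.mem r (args_of e))) d

let transfer (i : instruction) (d : eqs) : eqs =
  match i with
  | Iop (op, args, r, _) ->
      (* r is redefined: drop equations using r, then record r = op(args)
         unless the operation reads r itself. *)
      let d = kill r d in
      if List.mem r args then d else RegMap.add r (Rop (op, args)) d
  | Iload (chunk, mode, args, r, _) ->
      let d = kill r d in
      if List.mem r args then d else RegMap.add r (Rload (chunk, mode, args)) d
  | Istore _ ->
      (* no aliasing information: forget every equation involving a load *)
      RegMap.filter (fun _ e -> match e with Rload _ -> false | Rop _ -> true) d
  | _ -> d   (* remaining cases omitted in this sketch *)
```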
To solve the dataflow equations, we reuse the generic implementation of Kildall’s algorithm provided by the CompCert compiler. Leveraging the correctness proof of this solver and the definition of the transfer function, we obtain that the equations inferred by the analysis hold in any concrete execution of the transformed code. For example, if the set of equations at point $l$ include the equality $r = t_1 + t_2$, it must be the case that $R(r) = R(t_1) + R(t_2)$ for every possible execution of the program that reaches point $l$ with a register state $R$.
4.2.2 Instruction unification
Armed with the results of the available expression analysis, the unify check between pairs of matching instructions can be easily expressed:
\[
\begin{array}{l}
\text{unify}(D, i, i') = \\
\quad \text{if } i' = i \text{ then true else} \\
\quad \text{case } (i, i') \text{ of} \\
\quad\quad (r := \text{op}(op, \vec{r}),\ r := h) \rightarrow (h = \text{op}(op, \vec{r})) \in D \\
\quad\quad (r := \text{load}(\textit{chunk}, \textit{mode}, \vec{r}),\ r := h) \rightarrow (h = \text{load}(\textit{chunk}, \textit{mode}, \vec{r})) \in D \\
\quad\quad \text{otherwise} \rightarrow \text{false}
\end{array}
\]
Here, $D = AE(\varphi(n))$ is the set of available expressions at the point $\varphi(n)$ where the transformed instruction $i'$ occurs. Either the original instruction $i$ and the transformed instruction $i'$ are equal, or the former is $r := \mathit{rhs}$ and the latter is $r := h$, in which case instruction unification succeeds if and only if the equation $h = \mathit{rhs}$ is known to hold according to the results of the available expression analysis.
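With the finite-map encoding of equation sets, the membership test \( (h = \mathit{rhs}) \in D \) is a single lookup, and unify fits in a few lines of OCaml. As before, this is only a sketch on top of the previous type definitions; in particular, we assume that the move \( r := h \) produced by LCM is represented as an `Iop` whose operator is a hypothetical `"move"` operator.

```ocaml
(* i: instruction of the original code; i': its counterpart after LCM;
   d: equations available at the transformed program point. *)
let unify (d : eqs) (i : instruction) (i' : instruction) : bool =
  i = i' ||
  (match i, i' with
   | Iop (op, args, r, _), Iop ("move", [h], r', _) when r = r' ->
       RegMap.find_opt h d = Some (Rop (op, args))
   | Iload (chunk, mode, args, r, _), Iop ("move", [h], r', _) when r = r' ->
       RegMap.find_opt h d = Some (Rload (chunk, mode, args))
   | _, _ -> false)
```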
4.3 Verifying the flow of control
Unifying pairs of instructions is not enough to guarantee semantic preservation: we also need to check that the control flow is preserved. For example, in the code shown in figure 2, after checking that the conditional tests at nodes 1 and 1' are identical, we must make sure that whenever the original code transitions from node 1 to node 6, the transformed code can transition from node 1' to 6', executing the anticipated computation at $n_2$ on its way.
More generally, if the $k$-th successor of $n$ in the original CFG is $m$, there must exist a path in the transformed CFG from $\varphi(n)$ to $\varphi(m)$ that goes through the $k$-th successor of $\varphi(n)$. (See figure 3.) Since instructions can be added to the transformed graph during lazy code motion, $\varphi(m)$ is not necessarily the $k$-th successor of $\varphi(n)$; one or several anticipated computations of the shape $h := r_{hs}$ may need to be executed. Here comes a delicate aspect of our validator: not only must there exist a path from $\varphi(n)$ to $\varphi(m)$, but moreover the anticipated computations $h := r_{hs}$ found on this path must be semantically well-defined: they should not go wrong at runtime. This is required to ensure that whenever an execution of the original code transitions in one step from $n$ to $m$, the transformed code can transition (possibly in several steps) from $\varphi(n)$ to $\varphi(m)$ without going wrong.
Figure 4 shows three examples of code motion where this property may not hold. In all three cases, we consider anticipating the computation $a/b$ (an integer division that can go wrong if $b = 0$) at the program points marked by a double arrow. In the leftmost example, it is obviously unsafe to compute $a/b$ before the conditional test: quite possibly, the test in the original code checks that $b \neq 0$ before computing $a/b$. The middle example is more subtle: it could be the case that the loop preceding the computation of $a/b$ does not terminate whenever $b = 0$. In this case, the original code never crashes on a division by zero, but anticipating the division before the loop could cause the transformed program to do so. The rightmost example is similar to the middle one, with the loop being replaced by a function call. The situation is similar because the function call may not terminate when $b = 0$.
How, then, can we check that the instructions that have been added to the graph are semantically well-defined? Because we distinguish erroneous executions and diverging executions, we cannot rely on a standard anticipability analysis. Our approach is the following: whenever we encounter an instruction $h := r_{hs}$ that was inserted by the LCM transformation on the path from $\varphi(n)$
Figure 4. Three examples of incorrect code motion: a conditional test, a loop, and a function call \( f(y) \) preceding a computation \( x := a/b \). Placing a computation of \( a/b \) at the program points marked by a double arrow can potentially transform a well-defined execution into an erroneous one.
function ant_checker_rec(g, rhs, pc, S) =
  case S(pc) of
  | Found → (S, true)
  | NotFound → (S, false)
  | Visited → (S, false)
  | Dunno →
    case g(pc) of
    | return _ → (S{pc ← NotFound}, false)
    | tailcall _ → (S{pc ← NotFound}, false)
    | cond(_, _, l_true, l_false) →
        let (S1, b1) = ant_checker_rec(g, rhs, l_true, S{pc ← Visited}) in
        let (S2, b2) = ant_checker_rec(g, rhs, l_false, S1) in
        if b1 && b2 then (S2{pc ← Found}, true) else (S2{pc ← NotFound}, false)
    | nop(l) →
        let (S', b) = ant_checker_rec(g, rhs, l, S{pc ← Visited}) in
        if b then (S'{pc ← Found}, true) else (S'{pc ← NotFound}, false)
    | call _ → (S{pc ← NotFound}, false)
    | store(_, _, _, _, l) →
        if rhs reads memory then (S{pc ← NotFound}, false) else
        let (S', b) = ant_checker_rec(g, rhs, l, S{pc ← Visited}) in
        if b then (S'{pc ← Found}, true) else (S'{pc ← NotFound}, false)
    | op(op, args, r, l) →
        if r is an operand of rhs then (S{pc ← NotFound}, false) else
        if rhs = op(op, args) then (S{pc ← Found}, true) else
        let (S', b) = ant_checker_rec(g, rhs, l, S{pc ← Visited}) in
        if b then (S'{pc ← Found}, true) else (S'{pc ← NotFound}, false)
    | load(chunk, mode, args, r, l) →
        if r is an operand of rhs then (S{pc ← NotFound}, false) else
        if rhs = load(chunk, mode, args) then (S{pc ← Found}, true) else
        let (S', b) = ant_checker_rec(g, rhs, l, S{pc ← Visited}) in
        if b then (S'{pc ← Found}, true) else (S'{pc ← NotFound}, false)

function ant_checker(g, rhs, pc) =
  let (S, b) = ant_checker_rec(g, rhs, pc, (λl → Dunno)) in b

Figure 5. Anticipability checker
to $\varphi(m)$, we check that the computation of \textit{rhs} is inevitable in the original code starting at node $m$. In other words, all execution paths starting from $m$ in the original code must, in a finite number of steps, compute \textit{rhs}. Since the semantic preservation result that we wish to establish takes as an assumption that the execution of the original code does not go wrong, we know that the computation of \textit{rhs} cannot go wrong, and therefore it is legal to anticipate it in the transformed code. We now define precisely an algorithm, called the \textit{anticipability checker}, that performs this check.
4.3.1 Anticipability checking
Our algorithm is described in figure 5. It takes four arguments: a graph \textit{g}, an instruction right-hand side \textit{rhs} to search for, a program point \textit{l} where the search begins and a map \textit{S} that associates to every node a marker. Its goal is to verify that on every path starting at \textit{l} in the graph \textit{g}, execution reaches an instruction with right-hand side \textit{rhs} such that none of the operands of \textit{rhs} have been redefined on the path. Basically, it is a depth-first search that covers all the paths starting at \textit{l}. Note that if there is a path starting at \textit{l} that contains a loop so that \textit{rhs} is neither between \textit{l} and the loop nor in the loop itself, then there exists a path on which \textit{rhs} is not reachable and that corresponds to an infinite execution. To obtain an efficient algorithm, we need to ensure that we do not go through loops several times. To this end, if the search reaches a join point not for the first time and where \textit{rhs} was not found before, we must stop searching immediately. This is achieved through the use of four different markers over nodes (a small datatype sketch is given after the list):
- \textit{Found} means that \textit{rhs} is computed on every path from the current node.
- \textit{NotFound} means that there exists a path from the current node in which \textit{rhs} is not computed.
- \textit{Dunno} is the initial state of every node before it has been visited.
- \textit{Visited} marks a node that has been visited but for which we do not yet know whether \textit{rhs} is computed on all paths. It is used to detect loops.
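In OCaml, the markers and the map \textit{S} threaded through the search could be declared as follows (an illustrative sketch reusing `NodeMap` from the earlier type definitions):

```ocaml
(* Verdict recorded for each node during the depth-first search. *)
type marker =
  | Found      (* rhs is computed on every path starting at this node *)
  | NotFound   (* some path starting at this node avoids rhs *)
  | Dunno      (* node not examined yet *)
  | Visited    (* node currently on the search path; used to detect loops *)

type markers = marker NodeMap.t   (* the map S of figure 5 *)
```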
Let us detail a few cases. When the search reaches a node that is marked \textit{Visited}, it means that the search went through a loop without finding \textit{rhs}. This could lead to a semantic discrepancy (recall the middle example in figure 4), so the search fails. For similar reasons, it also fails when a call is reached. When the search reaches an operation, we first verify that its destination register \textit{r} is not an operand of \textit{rhs} (otherwise the operation would modify the value of \textit{rhs}). Then, if the right-hand side of the instruction we reached corresponds to \textit{rhs}, we have found \textit{rhs} and we mark the node accordingly. Otherwise, the search continues with the successor, and we mark the node based on whether the recursive search found \textit{rhs} or not.
When the \texttt{ant_checker} function returns \texttt{true}, it should follow that the evaluation of the right-hand-side expression is well defined. We prove that this is the case in section 7.3 below.
4.3.2 Verifying the existence of semantic paths
Once we can decide the well-definedness of instructions, checking for the existence of a path between two nodes of the transformed graph is simple. The function $\text{path}(g, g', n, m)$ checks that there exists a path in CFG $g'$ from node $n$ to node $m$, composed of zero, one or several single-successor instructions of the form $h := \mathit{rhs}$. The destination register $h$ must be fresh (unused in $g$) so as to preserve the semantic equivalence invariant. Moreover, the right-hand side $\mathit{rhs}$ must be safely anticipatable: it must be the case that $\text{ant\_checker}(g, \mathit{rhs}, \varphi^{-1}(m))$ returns $\text{true}$, so that $\mathit{rhs}$ can be computed before reaching $m$ without getting stuck.
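The following OCaml sketch shows one possible shape for this check, built on the earlier type definitions. The anticipability checker of figure 5 and a register-freshness test are taken as parameters (they are assumptions of the sketch, not library functions), and the `fuel` bound is our own simplification to guarantee termination.

```ocaml
(* path g g' s_orig n m: is there a path n .. m in g' made only of fresh
   computations h := rhs, each safely anticipatable in g at node s_orig
   (the original-code node matching m)? *)
let rec path ~(ant_checker : cfg -> rhs -> node -> bool)   (* figure 5 *)
             ~(reg_used_in : cfg -> reg -> bool)           (* hypothetical freshness test *)
             ?(fuel = 1000)
             (g : cfg) (g' : cfg) (s_orig : node) (n : node) (m : node) : bool =
  if n = m then true
  else if fuel = 0 then false
  else
    let step rhs h succ =
      not (reg_used_in g h)                 (* h must be unused in the original code *)
      && ant_checker g rhs s_orig           (* rhs cannot go wrong before s_orig *)
      && path ~ant_checker ~reg_used_in ~fuel:(fuel - 1) g g' s_orig succ m
    in
    match NodeMap.find_opt n g' with
    | Some (Iop (op, args, h, succ)) -> step (Rop (op, args)) h succ
    | Some (Iload (chunk, mode, args, h, succ)) -> step (Rload (chunk, mode, args)) h succ
    | _ -> false
```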
### 5. Dynamic semantics of RTL
In preparation for a proof of correctness of the validator, we now outline the dynamic semantics of the RTL language. More details can be found in (Leroy 2008). The semantics manipulates values, written $v$, comprising 32-bit integers, 64-bit floats, and pointers. Several environments are involved in the semantics. Memory states $M$ map pointers and memory chunks to values, in a way that accounts for byte addressing and possible overlap between chunks (Leroy and Blazy 2008). Register files $R$ map registers to values. Global environments $G$ associate pointers to names of global variables and functions, and function definitions to function pointers. The semantics of RTL programs is given in small-step style, as a transition relation between execution states. Three kinds of states are used:
- Regular states: $S(\Sigma, f, \sigma, l, R, M)$: This state corresponds to an execution point within the internal function $f$, at node $l$ in the CFG of $f$. $R$ and $M$ are the current register file and memory state. $\Sigma$ represents the call stack, and $\sigma$ points to the activation record for the current invocation of $f$.
- Call states: $C(\Sigma, fd, \bar{v}, M)$. This is an intermediate state representing an invocation of function $fd$ with parameters $\bar{v}$.
- Return states: $R(\Sigma, v, M)$. Symmetrically, this intermediate state represents a function return, with return value $v$ being passed back to the caller.
Call stacks $\Sigma$ are lists of frames $F(r, f, \sigma, l, R)$, where $r$ is the destination register where the value computed by the callee is to be stored on return, $f$ is the caller function, and $\sigma$, $l$ and $R$ its local state at the time of the function call.
The semantics is defined by the one-step transition relation $G \vdash S \xrightarrow{t} S'$, where $G$ is the global environment (invariant during execution), $S$ and $S'$ the states before and after the transition, and $t$ a trace of the external function call possibly performed during the transition. Traces record the names of external functions invoked, along with the argument values provided by the program and the return value provided by the external world.
To give a flavor of the semantics and show the level of detail of the formalization, figure 6 shows a subset of the rules defining the one-step transition relation. For example, the first rule states that if the program counter $l$ points to an instruction that is an operation of the form $\text{op}(op, \vec{r}, r_d, l')$, and if evaluating the operator $op$ on the values contained in the registers $\vec{r}$ of the register file $R$ returns the value $v$, then we transition to a new regular state where the register $r_d$ of $R$ is updated to hold the value $v$, and the program counter moves to the successor $l'$ of the operation. The only rule that produces a non-empty trace is the one for external function invocations (last rule in figure 6); all other rules produce the empty trace $\varepsilon$.
\[
\frac{f.\text{graph}(l) = \text{op}(op, \vec{r}, r_d, l') \qquad v = \text{eval\_op}(G, op, R(\vec{r}))}
     {G \vdash S(\Sigma, f, \sigma, l, R, M) \xrightarrow{\varepsilon} S(\Sigma, f, \sigma, l', R(r_d \leftarrow v), M)}
\]
\[
\frac{f.\text{graph}(l) = \text{call}(\mathit{sig}, r_f, \vec{r}, r_d, l') \qquad G(R(r_f)) = fd \qquad fd.\text{sig} = \mathit{sig} \qquad \Sigma' = F(r_d, f, \sigma, l', R) . \Sigma}
     {G \vdash S(\Sigma, f, \sigma, l, R, M) \xrightarrow{\varepsilon} C(\Sigma', fd, R(\vec{r}), M)}
\]
\[
\frac{f.\text{graph}(l) = \text{return}(r) \qquad v = R(r)}
     {G \vdash S(\Sigma, f, \sigma, l, R, M) \xrightarrow{\varepsilon} R(\Sigma, v, M)}
\]
\[
\frac{\Sigma = F(r_d, f, \sigma, l, R) . \Sigma'}
     {G \vdash R(\Sigma, v, M) \xrightarrow{\varepsilon} S(\Sigma', f, \sigma, l, R(r_d \leftarrow v), M)}
\]
\[
\frac{\text{alloc}(M, 0, f.\text{stacksize}) = (\sigma, M') \qquad l = f.\text{start} \qquad R = [f.\text{params} \leftarrow \bar{v}]}
     {G \vdash C(\Sigma, f, \bar{v}, M) \xrightarrow{\varepsilon} S(\Sigma, f, \sigma, l, R, M')}
\]
\[
\frac{t = (ef.\text{name}, \bar{v}, v)}
     {G \vdash C(\Sigma, ef, \bar{v}, M) \xrightarrow{t} R(\Sigma, v, M)}
\]
Figure 6. Selected rules from the dynamic semantics of RTL
Sequences of transitions are captured by the following closures of the one-step transition relation:
\[ G \vdash S \xrightarrow{t}{}^{*} S' \quad \text{zero, one or several transitions} \]
\[ G \vdash S \xrightarrow{t}{}^{+} S' \quad \text{one or several transitions} \]
\[ G \vdash S \xrightarrow{T} \infty \quad \text{infinitely many transitions} \]
The finite trace $t$ and the finite or infinite trace $T$ record the external function invocations performed during these sequences of transitions. The observable behavior of a program $P$, then, is defined in terms of the traces corresponding to transition sequences from an initial state to a final state. We write $P \Downarrow B$ to say that program $P$ has behavior $B$, where $B$ is either termination with a finite trace $t$, or divergence with a possibly infinite trace $T$. Note that computations that go wrong, such as an integer division by zero, are modeled by the absence of a transition. Therefore, if $P$ goes wrong, then $P \Downarrow B$ does not hold for any $B$.
### 6. Semantics preservation for LCM
Let $P_1$ be an input program and $P_2$ be the output program produced by the untrusted implementation of LCM. We wish to prove that if the validator succeeds on all pairs of matching functions from $P_1$ and $P_2$, then $P_1 \Downarrow B \Rightarrow P_2 \Downarrow B$. In other words, if $P_1$ does not go wrong and executes with observable behavior $B$, then so does $P_2$.
#### 6.1 Simulating executions
The way we build a semantics preservation proof is to construct a relation between execution states of the input and output programs, written $S_1 \sim S_2$, and show that it is a simulation:
- Initial states: if $S_1$ and $S_2$ are two initial states, then $S_1 \sim S_2$.
- Final states: if $S_1 \sim S_2$ and $S_1$ is a final state, then $S_2$ must be a final state.
- Simulation property: if $S_1 \sim S_2$, any transition from state $S_1$ with trace $t$ is simulated by one or several transitions starting in state $S_2$, producing the same trace $t$, and preserving the simulation relation $\sim$.
The hypothesis that the input program $P_1$ does not go wrong plays a crucial role in our semantic preservation proof, in particular to show the correctness of the anticipability criterion. Therefore,
we reflect this hypothesis in the precise statement of the simulation property above, as follows. \((G_i, G_o)\) are the global environments corresponding to programs \(P_i\) and \(P_o\), respectively.
**Definition 1 (Simulation property).**
Let \(I_i\) be the initial state of program \(P_i\) and \(I_o\) that of program \(P_o\). Assume that
- \(S_i \sim S_o\) (current states are related)
- \(G_i \vdash S_i \xrightarrow{t} S'_i\) (the input program makes a transition)
- \(G_i \vdash I_i \xrightarrow{t_i}{}^{*} S_i\) and \(G_o \vdash I_o \xrightarrow{t_o}{}^{*} S_o\) (current states are reachable from initial states)
- \(S'_i \Downarrow B\) for some behavior \(B\) (the input program does not go wrong after the transition).
Then, there exists \(S'_o\) such that \(G_o \vdash S_o \xrightarrow{t}{}^{+} S'_o\) and \(S'_i \sim S'_o\).
The commuting diagram corresponding to this definition is depicted below. Solid lines represent hypotheses; dashed lines represent conclusions.
**Input program:** \(I_i \xrightarrow{t_i}{}^{*} S_i \xrightarrow{t} S'_i\), which does not go wrong
**Output program:** \(I_o \xrightarrow{t_o}{}^{*} S_o \xrightarrow{t}{}^{+} S'_o\)
It is easy to show that the simulation property implies semantic preservation:
**Theorem 1.** Under the hypotheses on initial states and final states and the simulation property, \(P_i \Downarrow B\) implies \(P_o \Downarrow B\).
### 6.2 The invariant of semantic preservation
We now construct the relation \(\sim\) between execution states before and after LCM that acts as the invariant in our proof of semantic preservation. We first define a relation between register files.
**Definition 2 (Equivalence of register files).**
\(f \vdash R \sim R'\) if and only if \(R(v) = R'(v)\) for every register \(v\) that appears in an instruction of \(f\)'s code.
This definition allows the register file \(R'\) of the transformed function to bind additional registers not present in the original function, especially the temporary registers introduced during LCM optimization. Equivalence between execution states is then defined by the three rules below.
**Definition 3 (Equivalence of execution states).**
\[
\frac{\text{validate}(f, f', \varphi) = \text{true} \qquad f \vdash R \sim R' \qquad G, G' \vdash \Sigma \sim \Sigma'}
     {G, G' \vdash S(\Sigma, f, \sigma, l, R, M) \sim S(\Sigma', f', \sigma, \varphi(l), R', M)}
\]
\[
\frac{T_v(fd) = fd' \qquad G, G' \vdash \Sigma \sim \Sigma'}
     {G, G' \vdash C(\Sigma, fd, \bar{v}, M) \sim C(\Sigma', fd', \bar{v}, M)}
\]
\[
\frac{G, G' \vdash \Sigma \sim \Sigma'}
     {G, G' \vdash R(\Sigma, v, M) \sim R(\Sigma', v, M)}
\]
**Definition 4 (Equivalence of stack frames).**
\[
\frac{\begin{array}{c}
\text{validate}(f, f', \varphi) = \text{true} \qquad f \vdash R \sim R' \qquad G, G' \vdash \Sigma \sim \Sigma' \\
\forall v, M, B,\ \ G \vdash S(\Sigma, f, \sigma, l, R(r \leftarrow v), M) \Downarrow B \\
\quad \Rightarrow \exists R'',\ \ G' \vdash S(\Sigma', f', \sigma, l', R'(r \leftarrow v), M) \xrightarrow{\varepsilon}{}^{*} S(\Sigma', f', \sigma, \varphi(l), R'', M)
\end{array}}
{G, G' \vdash F(r, f, \sigma, l, R) . \Sigma \sim F(r, f', \sigma, l', R') . \Sigma'}
\]
The scary-looking third premise of the definition above captures the following condition: if we suppose that the execution of the initial program is well-defined once control returns to node \(l\) of the caller, then it should be possible to perform an execution in the transformed graph from \(l'\) down to \(\varphi(l)\). This requirement is a consequence of the anticipability problem. As explained earlier, we need to make sure that execution is well defined from \(l'\) to \(\varphi(l)\). But when the instruction is a function call, we have to store this information in the equivalence of frames, universally quantified on the not-yet-known return value \(v\) and memory state \(M\) at return time. At the time we store the property we do not know yet if the execution will be semantically correct from \(l\), so we suppose it until we get the information (that is, when execution reaches \(l\)).
Having stated semantics preservation as a simulation diagram and defined the invariant of the simulation, we now turn to the proof itself.
### 7. Sketch of the formal proof
This section gives a high-level overview of the correctness proof for our validator. It can be used as an introduction to the Coq development, which gives full details. Besides giving an idea of how we prove the validation kernel (this proof differs from earlier papers mainly on the handling of semantic well-definedness), we try to show that the burden of the proof can be reduced by adequate design.
#### 7.1 Design: getting rid of bureaucracy
Recall that the validator is composed of two parts: first, a generic validator that requires an implementation of \(V\) and of \(\text{analyze}\); second, an implementation of \(V\) and \(\text{analyze}\) specialized for LCM. The proof follows this structure: on one hand, we prove that if \(V\) satisfies the simulation property, then the generic validator implies semantics preservation; on the other hand, we prove that the node-level validation specialized for LCM satisfies the simulation property.
This decomposition of the proof improves re-usability and, above all, greatly improves abstraction for the proof that \(V\) satisfies the simulation property (which is the kernel of the proof on which we want to focus) and hence reduces the proof burden of the formalization. Indeed, many details of the formalization can be hidden in the proof of the framework. This includes, among other things, function invocation, function return, global variables, and stack management.
Besides, this allows us to prove only that \(V\) satisfies a weaker version of the simulation property, which we call the validation property, and whose equivalence predicate is a simplification of the equivalence presented in section 6.2. In the simplified equivalence predicate, there is no mention of stack equivalence, function transformation, stack pointers or results of the validation.
**Definition 5 (Abstract equivalence of states).**
\[
\frac{f \vdash R \sim R' \qquad l' = \varphi(l)}
     {G, G' \vdash \mathcal{S}(\Sigma, f, \sigma, l, R, M) \approx_S \mathcal{S}(\Sigma', f', \sigma, l', R', M)}
\]
The validation property is stated in three versions, one for regular states, one for call states, and one for return states. We present only the property for regular states. If \( S = \mathcal{S}(\Sigma, f, \sigma, l, R, M) \) is a regular state, we write \( S.f \) for the \( f \) component of the state and \( S.l \) for the \( l \) component.
**Definition 6 (Validation property).**
Let \( I_i \) be the initial state of program \( P_i \) and \( I_o \) that of program \( P_o \).
Assume that
- \( S_i \approx_S S_o \)
- \( G_i \vdash S_i \xrightarrow{t} S'_i \)
- \( G_i \vdash I_i \rightarrow^{*} S_i \) and \( G_o \vdash I_o \rightarrow^{*} S_o \)
- \( S'_i \Downarrow B \) for some behavior \( B \)
- \( V(S_i.f, S_o.f, S_i.l, \varphi, \text{analyze}(S_o.f)) = \text{true} \)

Then, there exists \( S'_o \) such that \( G_o \vdash S_o \xrightarrow{t}{}^{*} S'_o \) and \( S'_i \approx_S S'_o \).
We then prove that if \( V \) satisfies the validation property, and if the two programs \( P_i, P_o \) successfully pass validation, then the simulation property (Definition 1) is satisfied, and therefore (Theorem 1) semantic preservation holds. This proof is not particularly interesting but represents a large part of the Coq development and requires a fair knowledge of CompCert internals.
We now outline the formal proof of the fact that \( V \) satisfies the validation property, which is the most interesting part of the proof.
### 7.2 Verification of the equivalence of single instructions
We first need to prove the correctness of the available expression analysis. The predicate \( S \models E \) states that the set of equalities \( E \) inferred by the analysis is satisfied in execution state \( S \). The predicate is always true on call states and on return states.
**Definition 7 (Correctness of a set of equalities).**
\( \mathcal{S}(\Sigma, f, \sigma, l, R, M) \models A \) if and only if
- \( (r = \text{op}(op, \vec{r})) \in A(l) \) implies \( R(r) = \text{eval}(op, R(\vec{r})) \)
- \( (r = \text{load}(\mathit{chunk}, \mathit{mode}, \vec{r})) \in A(l) \) implies \( \text{eval}(\mathit{mode}, R(\vec{r})) = v \) and \( R(r) = \text{load}(\mathit{chunk}, v) \) for some pointer value \( v \).
The correctness of the analysis can now be stated:
**Lemma 2 (Correctness of available expression analysis).** Let \( S^0 \) be the initial state of the program. For all regular states \( S \) such that \( S^0 \rightarrow^{*} S \), we have \( S \models \text{analyze}(S.f) \).
Then, it is easy to prove the correctness of the unification check. The predicate \( \approx_W \) is a weaker version of \( \approx_S \), where we remove the requirement that \( l' = \varphi(l) \), therefore allowing the program counter of the transformed code to get temporarily out of synchronization with that of the original code.
**Lemma 3.** Assume
- \( S_i \approx_S S_o \)
- \( G_i \vdash S_i \xrightarrow{t} S'_i \)
- \( \text{unify}(\text{analyze}(S_o.f), S_i.f.\text{graph}, S_o.f.\text{graph}, S_i.l, S_o.l) = \text{true} \)
- \( G_o \vdash I_o \rightarrow^{*} S_o \)

Then, there exists a state \( S''_o \) such that \( G_o \vdash S_o \xrightarrow{t} S''_o \) and \( S'_i \approx_W S''_o \).
Indeed, from the hypothesis \( G_o \vdash I_o \rightarrow^{*} S_o \) and the correctness of the analysis, we deduce that \( S_o \models \text{analyze}(S_o.f) \), which implies that the equality used during the unification, if any, holds at run time. This illustrates the use of the hypothesis on the past of the execution of the transformed program. By doing so, we avoid having to maintain the correctness of the analysis in the equivalence predicate. It remains to step through the transformed CFG, as performed by path checking, in order to go from the weak abstract equivalence \( \approx_W \) to the full abstract equivalence \( \approx_S \).
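To give a concrete feel for the unification step, here is a rough OCaml sketch (not CompCert's actual implementation; the instruction and equality types are deliberately simplified): either both nodes carry the same instruction, or the original computation has been replaced by a move from a temporary that the available-expression analysis proves to hold the same value.

```ocaml
(* Rough sketch of instruction unification.  Names and types are
   illustrative simplifications, not the Coq development's definitions. *)
type reg = int
type operation = string                     (* e.g. "add", "mul" *)
type instr =
  | Iop of operation * reg list * reg       (* Iop (op, args, dst): dst := op(args) *)
  | Imove of reg * reg                      (* Imove (dst, src):    dst := src      *)

(* An available equality "temp = op(args)" known before the transformed node. *)
type equality = { temp : reg; op : operation; args : reg list }

let unify (avail : equality list) (i_orig : instr) (i_transf : instr) : bool =
  match i_orig, i_transf with
  (* Same instruction on both sides: nothing to justify. *)
  | i, i' when i = i' -> true
  (* LCM replaced the computation by a move from temporary [src]; accept it
     only if the analysis proves [src] already holds op(args) at this point. *)
  | Iop (op, args, dst), Imove (dst', src) when dst = dst' ->
      List.exists (fun e -> e.temp = src && e.op = op && e.args = args) avail
  | _ -> false
```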
### 7.3 Anticipability checking
Before proving the properties of path checking, we need to prove the correctness of the anticipability check: if the check succeeds and the semantics of the input program is well defined, then the right-hand side expression given to the anticipability check is well defined.
**Lemma 4.** Assume \( \text{ant\_checker}(f.\text{graph}, \mathit{rhs}, l) = \text{true} \) and \( \mathcal{S}(\Sigma, f, \sigma, l, R, M) \Downarrow B \) for some \( B \). Then, there exists a value \( v \) such that \( \mathit{rhs} \) evaluates to \( v \) (without run-time errors) in the state \( R, M \).
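The following OCaml sketch conveys the shape of such a check under strong simplifying assumptions (a toy CFG where each node carries a single instruction, and an ad-hoc classification of blocking instructions); the real checker must additionally reject paths that redefine the operands of \( \mathit{rhs} \) before evaluating it.

```ocaml
(* Simplified sketch of an anticipability check: every path from [l] must
   evaluate the candidate right-hand side before reaching a call, a return,
   or looping back on itself.  Types are hypothetical simplifications. *)
type node = int
type rhs = string
type instr =
  | Compute of rhs * node      (* evaluates a right-hand side, then continues *)
  | Branch of node list        (* pure control flow *)
  | Call of node               (* may diverge or emit observable events *)
  | Return

module NodeSet = Set.Make (Int)

let ant_check (code : node -> instr) (target : rhs) (l : node) : bool =
  let rec ok path n =
    if NodeSet.mem n path then false          (* cycle met before finding [target] *)
    else
      let path = NodeSet.add n path in
      match code n with
      | Compute (r, succ) -> r = target || ok path succ
      | Branch succs -> List.for_all (ok path) succs
      | Call _ | Return -> false              (* blocking instruction reached first *)
  in
  ok NodeSet.empty l
```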
Then, the semantic property guaranteed by path checking is that there exists a sequence of reductions in the transformed graph from \( \text{successor}(\varphi(n)) \) to \( \varphi(\text{successor}(n)) \) such that the abstract invariant of semantic equivalence is re-established at the end of the sequence.
**Lemma 5.** Assume
- \( S'_i \approx_W S''_o \)
- \( \text{path}(S'_i.f.\text{graph}, S''_o.f.\text{graph}, S''_o.l, \varphi(S'_i.l)) = \text{true} \)
- \( S'_i \Downarrow B \) for some \( B \)

Then, there exists a state \( S'_o \) such that \( G_o \vdash S''_o \xrightarrow{\epsilon}{}^{*} S'_o \) and \( S'_i \approx_S S'_o \).
This illustrates the use of the hypothesis on the future of the execution of the initial program. All the proofs are rather straightforward once we know that we need to reason on the future of the execution of the initial program.
By combining lemmas 3 and 5 we prove the validation property for regular states, according to the following diagram.
\[
\begin{array}{ccc}
S_i & \approx_S & S_o \\
\big\downarrow\, t & & \big\downarrow\, t \;\;\text{(Lemma 3)} \\
S'_i & \approx_W & S''_o \\
\big\| & & \big\downarrow\, \epsilon^{*} \;\;\text{(Lemma 5)} \\
S'_i & \approx_S & S'_o
\end{array}
\]
The proofs of the validation property for call and return states are similar.
### 8. Discussion
**Implementation** The LCM validator and its proof of correctness were implemented in the Coq proof assistant. The Coq development is approximately 5000 lines long. 800 lines correspond to the specification of the LCM validator, in pure functional style, from which executable Caml code is automatically generated by Coq’s extraction facility. The remaining 4200 lines correspond to the correctness proof. In addition, a lazy code motion optimization was implemented in OCaml, in roughly 800 lines of code.
The following table shows the relative sizes of the various parts of the Coq development.
<table>
<thead>
<tr>
<th>Part</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>General framework</td>
<td>37%</td>
</tr>
<tr>
<td>Anticipability check</td>
<td>16%</td>
</tr>
<tr>
<td>Path verification</td>
<td>7%</td>
</tr>
<tr>
<td>Reaching definition analysis</td>
<td>18%</td>
</tr>
<tr>
<td>Instruction unification</td>
<td>6%</td>
</tr>
<tr>
<td>Validation function</td>
<td>16%</td>
</tr>
</tbody>
</table>
As discussed below, large parts of this development are not specific to LCM and can be reused: the general framework of section 7.1,
anticipability checking, available expressions, etc. Assuming these parts are available as part of a toolkit, building and proving correct the LCM validator would require only 1100 lines of code and proofs.
**Completeness** We proved the correctness of the validator. This is an important property, but not sufficient in practice: a validator that rejects every possible transformation is definitely correct but also quite useless. We need evidence that the validator is relatively complete with respect to “reasonable” implementations of LCM. Formally specifying and proving such a relative completeness result is difficult, so we resorted to experimentation. We ran LCM and its validator on the CompCert benchmark suite (17 small to medium-size C programs) and on a number of examples handcrafted to exercise the LCM optimization. No false alarms were reported by the validator.
More generally, there are two main sources of possible incompleteness in our validator. First, the external implementation of LCM could take advantage of equalities between right-hand sides of computations that our available expression analysis is unable to capture, causing instruction unification to fail. We believe this never happens as long as the available expression analysis used by the validator is identical to (or at least no coarser than) the up-safety analysis used in the implementation of LCM, which is the case in our implementation.
The second potential source of false alarms is the anticipability check. Recall that the validator prohibits anticipating a computation that can fail at run-time before a loop or function call. The CompCert semantics for the RTL language errs on the side of caution and treats all undefined behaviors as run-time failures: not just behaviors such as integer division by zero or memory loads from incorrect pointers, which can actually cause the program to crash when run on a real processor, but also behaviors such as adding two pointers or shifting an integer by more than 32 bits, which are not specified in RTL, but would not crash the program during actual execution. (However, arithmetic overflows and underflows are correctly modeled as not causing run-time errors, because the RTL language uses modulo integer arithmetic and IEEE float arithmetic.) Because the RTL semantics treats all undefined behaviors as potential run-time errors, our validator restricts the points where e.g. an addition or a shift can be anticipated, while the external implementation of LCM could (rightly) consider that such a computation is safe and can be placed anywhere. This situation happened once in our tests.
One way to address this issue is to increase the number of operations that cannot fail in the RTL semantics. We could exploit the results of a simple static analysis that keeps track of the shape of values (integers, pointers or floats), such as the trivial “int or float” type system for RTL used in Leroy (2008). Additionally, we could refine the semantics of RTL to distinguish between undefined operations that can crash the program (such as loads from invalid addresses) and undefined operations that cannot (such as adding two pointers); the latter would be modeled as succeeding, but returning an unspecified result. In both approaches, we increase the number of arithmetic instructions that can be anticipated freely.
**Complexity and performance** Let \( N \) be the number of nodes in the initial CFG \( g \). The number of nodes in the transformed graph \( g' \) is in \( O(N) \). We first perform an available expression analysis on the transformed graph, which takes time \( O(N^3) \). Then, for each node of the initial graph we perform a unification and a path check. Unification is done in constant time, and path checking tries to find a non-cyclic path in the transformed graph, performing an anticipability check in time \( O(N) \) for instructions that may be ill-defined. Hence path checking is in \( O(N^2) \), although this is a rough, pessimistic bound.
In conclusion, our validator runs in time \( O(N^3) \). Since lazy code motion itself performs four data-flow analyses that run in time \( O(N^3) \), running the validator does not change the complexity of the lazy code motion compiler pass.
In practice, on our benchmark suite, the time needed to validate a function is on average 22.5% of the time it takes to perform LCM.
**Reusing the development** One advantage of translation validation is the re-usability of the approach. It makes it easy to experiment with variants of a transformation, for example by using a different set of data-flow analyses in lazy code motion. It also happens that, in one compiler, two different versions of a transformation co-exist. This is the case with GCC: depending on whether one optimizes for space or for time, the compiler performs partial redundancy elimination (Morel and Renvoise 1979) or lazy code motion. We believe, without any formal proof, that the validator presented here works equally well for partial redundancy elimination. In such a configuration, the formalization burden is greatly reduced by using translation validation instead of compiler proof.
Classical redundancy elimination algorithms make the safe restriction that a computation \( e \) cannot be placed on some control flow path that does not compute \( e \) in the original program. As a consequence, code motion can be blocked by preventing regions (Bodik et al. 1998), resulting in less redundancy elimination than expected, especially in loops. A solution to this problem is safe speculative code motion (Bodik et al. 1998) where we lift the restriction for some computation \( e \) as long as \( e \) cannot cause run-time errors. Our validator can easily handle this case: the anticipability check is not needed if the new instruction is safe, as can easily be checked by examination of this instruction. Another solution is to perform control flow restructuring (Steffen 1996; Bodik et al. 1998) to separate paths depending on whether they contain the computation \( e \) or not. This control flow transformation is not allowed by our validator and constitutes an interesting direction for future work.
To show that re-usability can go one step further, we have modified the unification rules of our lazy code motion validator to build a certified compiler pass of constant propagation with strength reduction. For this transformation, the available expression analysis needs to be performed not on the transformed code but on the initial one. Thankfully, the framework is designed to allow analyses on both programs. The modification mainly consists of replacing the unification rules for operations and loads, which represent about 3% of the complete development of LCM. (Note however that the unification rules for constant propagation are much larger because of the multiple possible strength reductions.) It took two weeks to complete this experiment. The proof of semantic preservation uses the same invariant as for lazy code motion, and the proof remains unchanged apart from the unification of operations and loads. Using the same invariant, although effective, is questionable: it is also possible to use a simpler invariant crafted especially for constant propagation with strength reduction.
One interesting possibility is to abstract over the invariant in the development. Instead of fixing a particular invariant and then developing the framework upon it, hoping that other transformations will happen to fit this invariant, the framework is developed over an unknown invariant about which we only assume some properties. (See Zuck et al. (2001) for more explanations.) We may hope that the resulting tool and theory are general enough for a wider class of transformations, with the possibility that the analyses have to be adapted. For example, by replacing the available expression analysis with the global value numbering of Gulwani and Necula (2004), it is possible that the resulting validator would apply to a large class of redundancy elimination transformations.
### 9. Related Work
Since its introduction by Pnueli et al. (1998a,b), translation validation has been actively researched in several directions. One direction is the construction of general frameworks for validation (Zuck et al. 2001, 2003; Barrett et al. 2005; Zaks and Pnueli 2008). Another direction is the development of generic validation algorithms that can be applied to production compilers (Rinard and Marinov 1999; Necula 2000; Zuck et al. 2001, 2003; Barrett et al. 2005; Rival 2004; Kanade et al. 2006). Finally, validation algorithms specialized to particular classes of transformations have also been developed, such as (Huang et al. 2006) for register allocation or (Tristan and Leroy 2008) for instruction scheduling. Our work falls in the latter approach, emphasizing algorithmic efficiency and relative completeness over generality.
A novelty of our work is its emphasis on fully mechanized proofs of correctness. While unverified validators are already very useful to increase confidence in the compilation process, a formally verified validator provides an alternative to the formal verification of the corresponding compiler pass (Leinenbach et al. 2005; Klein and Nipkow 2006; Leroy 2006; Lerner et al. 2003; Blech et al. 2005). Several validation algorithms or frameworks use model checking or automatic theorem proving to check verification conditions produced by a run of validation (Zuck et al. 2001, 2003; Barrett et al. 2005; Kanade et al. 2006), but the verification condition generator itself is, generally, not formally proved correct.
Many validation algorithms restrict the amount of code motion that the transformation can perform. For example, validators based on symbolic evaluation such as (Necula 2000; Tristan and Leroy 2008) easily support code motion within basic blocks or extended basic blocks, but have a hard time with global transformations that move instructions across loops, such as LCM. We are aware of only one other validator that handles LCM: that of Kanade et al. (2006). In their approach, LCM is instrumented to produce a detailed trace of the code transformations performed, each of these transformations being validated by reduction to a model-checking problem. Our approach requires less instrumentation (only the code mapping needs to be provided) and seems algorithmically more efficient.
As mentioned earlier, global code motion requires much care to avoid transforming nonterminating executions into executions that go wrong. This issue is not addressed in the work of Kanade et al. (2006), nor in the original proof of correctness of LCM by Knoop et al. (1994): both consider only terminating executions.
### 10. Conclusion
We presented a validation algorithm for Lazy Code Motion and its mechanized proof of correctness. The validation algorithm is significantly simpler than LCM itself: the latter uses four dataflow analyses, while our validator uses only one (a standard available expression analysis) complemented with an anticipability check (a simple traversal of the CFG). This relative simplicity of the algorithm, in turn, results in a mechanized proof of correctness that remains manageable after careful proof engineering. Therefore, this work gives a good example of the benefits of the verified validator approach compared with compiler verification.
We have also shown preliminary evidence that the verified validator can be re-used for other optimizations: not only other forms of redundancy elimination, but also unrelated optimizations such as constant propagation and instruction strength reduction. More work is needed to address the validation of advanced global optimizations such as global value numbering, but the decomposition of our validator and its proof into a generic framework and an LCM-specific part looks like a first step in this direction.
Even though lazy code motion moves instructions across loops, it is still a structure-preserving transformation. Future work includes extending the verified validation approach to optimizations that modify the structure of loops, such as software pipelining, loop jamming, or loop interchange.
### Acknowledgments
We would like to thank Benoît Razet, Damien Doligez, and the anonymous reviewers for their helpful comments and suggestions for improvements.
This work was supported by Agence Nationale de la Recherche, grant number ANR-05-SSIA-0019.
### References
Fever: Extracting feature-oriented changes from commits
Dintzner, Nicolas; van Deursen, Arie; Pinzger, Martin
DOI
10.1145/2901739.2901755
Publication date
2016
Document Version
Accepted author manuscript
FEVER: Extracting Feature-oriented Changes from Commits
Nicolas Dintzner, Arie van Deursen, Martin Pinzger
Report TUD-SERG-2016-005
Nicolas Dintzner
Software Engineering
Research Group
Delft University of Technology
Delft, Netherlands
N.J.R.Dintzner@tudelft.nl
Arie van Deursen
Software Engineering
Research Group
Delft University of Technology
Delft, Netherlands
Arie.vanDeursen@tudelft.nl
Martin Pinzger
Software Engineering
Research Group
University of Klagenfurt
Klagenfurt, Austria
martin.pinzger@aau.at
ABSTRACT
The study of the evolution of highly configurable systems requires a thorough understanding of the core ingredients of such systems: (1) the underlying variability model; (2) the assets that together implement the configurable features; and (3) the mapping from variable features to actual assets. Unfortunately, to date no systematic way to obtain such information at a sufficiently fine grained level exists.
To remedy this problem we propose FEVER and its instantiation for the Linux kernel. FEVER extracts detailed information on changes in variability models (KConfig files), assets (preprocessor based C code), and mappings (Makefiles). We describe how FEVER works, and apply it to several releases of the Linux kernel. Our evaluation on 300 randomly selected commits, from two different releases, shows our results are accurate in 82.6% of the commits. Furthermore, we illustrate how the populated FEVER graph database thus obtained can be used in typical Linux engineering tasks.
CCS Concepts
• Software and its engineering → Model-driven software engineering; Feature interaction; Software design engineering;
Keywords
highly variable systems, co-evolution, feature, variability
1. INTRODUCTION
Highly configurable software systems allow end-users to tailor a system to suit their needs and expected operational context. This is achieved through the development of configurable components, allowing systematic reuse and mass-customization [1]. Examples of such systems can be found in various domains such as database management [2,3], SOA based systems [4], operating systems [5], and a number of industrial¹ and open source software projects [6], among which the Linux kernel may be the most well-known.
In the implementation of such systems, configuration options, or features, play a significant role in a number of inter-related artefacts of different nature. For systems where variability is mostly resolved at build-time, features will play a role in, at least, the following three spaces [7,8]:
1. the variability space - describing available features and their allowed combinations;
2. the implementation space, comprised of re-usable assets, among which configurable implementation artefacts; and finally
3. the mapping space - relating features to assets and often supported by a build system like Makefiles;
When such systems evolve, information about feature implementation across those three spaces is actively sought by engineers [9]. Inconsistent modifications across the three spaces (variability, mapping, and implementation) may lead to the incapacity to derive products, code compilation errors, or dead code [10–12]. Consistent co-evolution of artefacts is a necessity adding complexity to an already non-trivial evolutionary process [13], occurring in both industrial [14] and open-source contexts [15,16].
Recent studies [7,15] described common changes occurring in such systems, giving insight on how each space could evolve, and revealing the relationship between the various artefacts. More recently, Passos et al. proposed a dataset capturing the addition and removal of features [17].
Such feature-related change information is important in various practical scenarios.
• A release manager is interested in finding out which commits participated in the creation of a feature, to build the release notes for instance. In such cases, he would be interested in commits introducing the feature, and the following ones, adjusting the behaviour or declaration of the feature.
• A developer introducing a new feature to a subsystem will be interested in finding how such feature was supported by similar subsystems in the past. Then, (s)he needs to look for changes in those subsystems, involving that feature.
¹http://splc.net/fame.html
• Researchers focusing on feature-oriented evolution of systems are interested in automatically identifying instances of co-evolution patterns or templates, or extending the existing pattern catalogue.
Unfortunately, the most detailed change descriptions currently available [7,15] were obtained using extensive manual analysis of commits, and the existing datasets do not provide the necessary links between features and associated assets to enable such queries.
To remedy this problem, we present FEVER (Feature EVolution Extractor), a tool-supported approach designed to automatically extract changes in commits affecting artefacts in all three spaces. FEVER retrieves the commits from a versioning system and rebuilds a model of each artefact before and after their modification. Then it extracts detailed information on the changes using graph differencing techniques. Finally, relying on naming conventions and heuristics the changes are aggregated based on the affected feature(s) across all commits in a release. The resulting data is then stored in a database relating the features and their evolution in each commit.
While the tool we built to extract changes is centred on the Linux kernel, the approach itself is applicable to a wide set of systems [16,18] with an explicit variability model, where the implementation of variability is performed using annotative methods (pre-processor statements in our case), and where the mapping between features and implementation assets can be recovered from the build system.
With this study, we make the following key contributions: (1) a model of feature-oriented co-evolving artefacts, (2) an approach to automatically extract instances of the model from commits, (3) a dataset of such change descriptions covering 5 releases of the Linux kernel history (3.11 to 3.15 in separate databases), (4) an evaluation of the accuracy of our heuristics showing that we can extract the information accurately for 82.6% of the commits, and (5) an illustration of how the FEVER dataset can be used to assist developers and researchers in performing the aforementioned tasks. Finally, the tool and datasets used for this study are available on our website².
We first provide information on previous work on the evolution of highly variable systems in Section 2. We then give additional information on how variability can be implemented using the Linux kernel as an example in Section 3. Then, we present the feature-oriented change model we use to describe the evolution of such systems in Section 4. We explain the main steps of the model-based change extraction process in Section 5. We evaluate our prototype implementation of FEVER by manually validating a subset of 300 randomly selected commits we extracted from release v3.11 and v3.12 of the Linux kernel and present the results in Section 6. Finally in Section 6.3, we discuss the possibilities and limitations of our approach, and elaborate on its usage in the context of complex change description and configurable software maintenance operations in Section 7.
2. RELATED WORK
Variability implementation in highly-configurable systems has been extensively studied in the past [19]. While many approaches can be found to analyze features in each individual space, few focus on their detailed evolution or the consolidation of such changes.
In [20], we introduced FMDiff, an approach to extract feature model changes, which we reuse in the approach presented in this paper. In this work, we extend FMDiff concepts to cover all types of artefacts and relate those changes on a feature basis.
Several studies present methods to extract variability information from build systems [21–23]. Such approaches are designed to study the current state of the system, and require all files to be present. In our case, we are interested by the changes as performed by developers, focusing on commits which avoid the need for a costly (and often impossible) analysis of the entire build system. We built a custom Makefile parser allowing us to extract information relying on modified artefacts only.
Variability implementation using annotative methods in source files was also studied in the past [24], often for error detection [10, 25, 26]. In this study, we use the approach presented in [6] to identify code blocks and their conditions, and we then rely on this representation to build a model of implementation assets.
Only few studies focused on the co-evolution of artefacts in all three variability spaces: variability model (VM), mapping, and implementation. In [7], Neves et al. describe the core elements involved in feature changes (VM, mapping, and assets). A collection of 23 co-evolution patterns is presented by Passos et al. in [15]. Each pattern describes a combination of changes that occur in the three variability spaces. These papers aimed at identifying common change operations, and relied on manual analysis of commits. In this work, we relied on such change descriptions to design the FEVER change meta-model, and we focused on how to extract automatically such changes.
Change consolidation across heterogeneous artefacts has been a long standing challenge. For instance, Begel et al. proposed a large database aggregating code level information, people, and work items [27]. We take a different approach, and propose to extract more detailed information focusing on implementation artefacts only. Recently, Passos et al. created a database of feature addition and removal [17] in the Linux kernel. We extend this work by extracting detailed changes on all commits, and provide such descriptions on all types of artefacts. The FEVER dataset is, to the best of our knowledge, the first dataset providing a consolidated view of complex feature changes.
3. BACKGROUND
In this section, we present how the variability is supported in the Linux kernel, the different artefacts involved in its realization and their relationships.
3.1 Variability Model
A variability model (VM) formalizes the available configuration options (which we assimilate to “features” in this work) of a system as well as their allowed configurations [28]. In the context of the Linux kernel, the VM is expressed in the Kconfig language. An example of a feature described in the Kconfig language is shown in Listing 1. Features have at least a name (following the “config” keyword on line 3) and a type. The “type” attribute specifies what kind of values can be associated with a feature, which may be “boolean” (selected or not), “tristate” (selected, selected but compiled as a module,
or not selected), or a value (when the type is “int”, “hex”, or “string”). In our example the SQUASHFS_FILE_DIRECT feature is of type boolean (line 2). In the remainder of this work, we will refer to boolean and tristate features simply as “boolean features”, while features with type “int”, “hex”, or “string”, will be referred to as “value-based features”. The text following the type on line 3 is the “prompt” attribute. Its presence indicates that the feature is visible to the end user during the configuration process. Features can also have default values. In our example the feature is selected by default (y on line 5). The default value might be conditioned by an “if” statement.
Kconfig expresses feature dependencies using the “depends” statements (see line 5). If the expression is satisfied, the feature becomes selectable during the configuration process. In this example, the feature SQUASHFS must be selected. Reverse dependencies are declared using the “select” statement. If the feature is selected then the target of the “select” will be selected automatically as well (ZLIBInflater is the target of the “select” statement on line 6). The selection occurs if the expression in the following “if” statement is satisfied by the current feature selection (e.g., if SQUASHFS_ZLIB is already selected).
In the context of this study, we consider additions and removals of features as well as modifications of existing ones i.e., modifications of any attributes of a feature.
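To make the structure of such declarations concrete, the following OCaml record is one possible, hypothetical representation of a Kconfig feature and the attributes discussed above; FEVER's actual variability model is an EMF model, not this type. The example value follows the prose description of Listing 1, with an illustrative prompt text and the usual kernel spelling of the selected feature.

```ocaml
(* Hypothetical representation of a Kconfig feature declaration; field
   names are illustrative, not FEVER's internal model. *)
type feature_type = Bool | Tristate | Int | Hex | String

type default = { value : string; condition : string option }   (* "default y if ..." *)
type select  = { target : string; condition : string option }  (* "select FOO if ..." *)

type feature = {
  name       : string;
  ftype      : feature_type;
  prompt     : string option;        (* present iff the feature is user-visible *)
  defaults   : default list;
  depends_on : string option;        (* boolean expression over features *)
  selects    : select list;          (* reverse dependencies *)
}

(* The feature of Listing 1, encoded from the prose description above
   (prompt text and select target are illustrative assumptions). *)
let squashfs_file_direct : feature = {
  name = "SQUASHFS_FILE_DIRECT";
  ftype = Bool;
  prompt = Some "Decompress files directly into the page cache";
  defaults = [ { value = "y"; condition = None } ];
  depends_on = Some "SQUASHFS";
  selects = [ { target = "ZLIB_INFLATE"; condition = Some "SQUASHFS_ZLIB" } ];
}
```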
3.2 Feature-asset Mapping
The mapping between features and assets determines which assets should be included in a product upon the selection of specific features. In highly-configurable systems, the assets could be source code, documentation, or any other type of resources (e.g., images). In the context of this study, we focus on implementation artefacts. The addition of the mapping between a feature and code in a Makefile, as performed in the Linux kernel, is presented in Listing 2.
Upon feature selection, the name of the feature is passed on to the build system which uses it to select artefacts and artefact fragments to include in the image before compiling them.
3.3 Assets
Many types of assets exist, such as images, code, or documentation. We consider only configurable implementation assets (source files). We focus specifically on pre-processor based variability implementation (using #ifdef statements), which, despite known limitations [29], is still widely used today [6]. An example of an addition of a pre-processor statement is presented in Listing 3, where feature SQUASHFS_FILE_DIRECT is used to condition the compilation of two code blocks, one pre-existing (lines 2 to 7) and a new one (lines 9 to 13). As a result, based on the selection of the feature SQUASHFS_FILE_DIRECT during the configuration phase, only one of the two code blocks will be included in the final product.
4. DESCRIBING CO-EVOLUTION: THE FEVER CHANGE META-MODEL
The objective of this work is to obtain a consolidated view of changes occurring to features and their implementation. We present in this section the meta-model we use to describe feature-related changes to individual artefacts, and how we relate those changes to one another. We illustrate the usage of the model with an example of actual feature changes, affecting all spaces, extracted from release v3.11. In this scenario, a developer commits a new driver for an ambient light sensor, “APDS9300”.
4.1 FEVER co-evolution change meta-model
An overview of the FEVER change meta-model is shown in Figure 1. This overview highlights the different entities we use to describe what occurs in a commit, from a feature perspective.
The commit represents a commit in a version control system. Commit entities are related to one another through the “next” relationship, capturing the sequence of changes over time. Each commit “touches” a number of artefacts, and those changes are captured in ArtefactEdit entities. The commit may affect any of the three spaces, leading to SourceEdit entities when features are modified at the source level, MappingEdit entities when the mapping between features and assets is affected, or finally FeatureEdit entities when the variability model changes. While an ArtefactEdit indicates a change to a file, Source-, Mapping-, and Feature-Edit entities all represent changes related to individual features within those files. We omitted the following relationships in the model for readability purposes: FeatureEdit, MappingEdit, and SourceEdit entities are linked to ArtefactEdit entities with an “in” relationship, pointing to the artefact in which the change took place.
Figure 1: FEVER Feature-oriented change model
For a commit in the repository we record the commit id (sha1) to relate our data with the original repository. We save the commit message which may contain information about the rationale of a change. Finally, to keep track of who touches which feature, we record people-related information such as committer and author of each commit.
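The following OCaml types are a condensed, hypothetical rendering of these entities and their relationships; FEVER itself stores them as nodes and edges in a graph database, and the field names below are simplifications of Figure 1.

```ocaml
(* Condensed sketch of the FEVER change entities of Figure 1.  Field names
   are illustrative; the actual tool stores these as graph nodes and edges. *)
type change = Added | Removed | Modified
type code_change = CodeAdded | CodeRemoved | CodeModified | CodePreserved

type feature_edit  = { feature : string; vm_change : change }
type mapping_edit  = { mapped_feature : string; target : string;
                       mapping_change : change; target_change : change }
type source_edit   = { block_condition : string; block_change : change;
                       code_edit : code_change }
type artefact_edit = { path : string }

type commit = {
  sha1           : string;
  message        : string;
  author         : string;
  committer      : string;
  touches        : artefact_edit list;   (* "touches" relationship *)
  changes_vm     : feature_edit list;    (* "changes_vm" *)
  changes_build  : mapping_edit list;    (* "changes_build" *)
  changes_source : source_edit list;     (* source-level edits *)
}
```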
4.2 Variability model changes
A FeatureEdit entity represents the change of one feature within the variability model performed in the context of a commit. We are interested in the affected feature, as well as the change operation that took place (addition, removal or modification of an existing feature). The FeatureEdit entity also points to a more complete description of the feature, FeatureDesc entities. FeatureDesc presents the feature as it “was” before the change (if existing) and how it “is” after the edit operation (if existing). Those entities contain the details of the feature before and after the change. From an evolution perspective however, we are more interested in the change affecting the feature, as this may be linked to changes in other spaces.
In our example presented in Figure 2 we can see on the left hand side the commit sequence, where commit “03eff” touches four ArtefactEdits (in gray), and “changes the vm” by adding a feature (in light pink). The FeatureEdit entity points, via the “in” relationship, to the Kconfig file in which the feature was touched. We can also see how the FeatureEdit entity is connected to a FeatureDesc (in purple) using the “is” relationship. The feature is added, as noted on the FeatureEdit entity.
Figure 2: Change model instance for the introduction of a new driver in the Linux kernel
4.3 Mapping changes
Regarding the evolution of the mapping, we are mainly interested in the evolution of the mapping between features and assets, in order to assign code changes, occurring within files, to features. The evolution of the mapping space is represented by MappingEdit entities characterized by the feature involved and the type of artefacts it is mapped to. We describe the feature-mapping change operation (added, removed, or modified), referring to the association of a feature with any assets, and the change affecting the target within that mapping (added or removed). We can thus distinguish between a situation where a new mapping is introduced (addition of a mapping with an added target) and an existing mapping being extended (modification of a mapping with an added target). In the example, the MappingEdit entity is highlighted in blue. It is connected to the commit with a “changes_build” relationship.
4.4 Source changes
Feature-related changes within source code, such as modifications to conditionally compiled blocks and feature references, are captured as SourceEdit entities. Features occurring in #ifdef code block conditions and feature references within a given file indicate that the behaviour of the feature mapped to that file is configurable, and that its exact behaviour is determined by other features.
Feature references are references to feature names within the code, meant to be replaced by the feature’s value at compile-time. Such references may only be added or removed. In such cases, the SourceEdit entity contains the name of the affected feature and the change in question.
Conditionally compiled blocks are identified by the conditions under which they will be included in the final product. A change to such block is represented by a SourceEdit containing the exact condition of the block, the change to the block itself (added, removed, modified), and the change of the implementation within that block: added if the code is entirely new, removed if the whole block was removed, modified when the changed block contains arbitrary edits, or finally preserved if the code itself has not been touched.
In our example, two SourceEdit entities, in yellow in Figure 2, are connected to the commit indicating that the commit affected conditionally compiled blocks, and to the file “in” which those changes occurred.
4.5 TimeLines: Aggregating feature changes
Changes pertaining to the same features are then aggregated into TimeLine entities. For this study, we created TimeLine entities for entire releases.
We divide the types of changes that may affect a feature into two broad categories: core changes and influence changes. A feature core change indicates that the behaviour of the feature itself or its definition is being adjusted. This comprises changes to the feature definition in the VM, changes to the mapping between the feature and assets, and changes affecting assets mapped to that feature. A feature influence change indicates that the feature is playing a role in the behaviour of another feature. This is visible in a SourceEdit, through reference of that feature in conditionally compiled code blocks, as part of a condition, or referenced for its value.
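As a minimal sketch (with illustrative names, not FEVER's internal API), the classification can be phrased as two functions mapping an edit to the features for which it is a core change, respectively an influence change.

```ocaml
(* Minimal sketch of the core/influence classification; names are illustrative. *)
type edit =
  | FeatureEdit of string          (* feature declared or changed in the VM *)
  | MappingEdit of string * string (* (feature, asset) pair touched in a Makefile *)
  | AssetEdit of string list       (* changed file, with the features it is mapped to *)
  | SourceRef of string list       (* features referenced in an #ifdef condition or by value *)

(* Features for which the edit is a "core" change. *)
let core_features = function
  | FeatureEdit f | MappingEdit (f, _) -> [ f ]
  | AssetEdit features -> features
  | SourceRef _ -> []

(* Features for which the edit is an "influence" change. *)
let influence_features = function
  | SourceRef features -> features
  | _ -> []
```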
In Figure 2, two TimeLine entities are depicted in red. The first one relates to the feature that was introduced. We can see that the “APDS9300” node is connected to the FeatureEdit, the MappingEdit and an ArtefactEdit with a “feature core update” relationship. The connection between the TimeLine for this feature and the ArtefactEdit is deduced from the MappingEdit: because the new mapping assigns this artefact to feature APDS9300, then the introduction of this artefact is a “core” update of this feature. The APDS9300 TimeLine connects the different changes occurring in 3 different types of artefacts, all related to the same operation: the addition of a feature.
Moreover, we can see that a TimeLine for feature PM.SLEEP is present and connected to two SourceEdit entities. This indicates that, at the creation time, the driver APDS9300 interacts with the power management “sleep” feature, and this interaction occurs in two different code blocks.
It is important to note that changes are extracted on a per-artefact basis. This means that entities being moved within the same artefact (a feature in a Kconfig file, or a mapping in a Makefile) will be seen as modified. However, if an entity is moved from one artefact to another, this is captured as two separate operations, a removal and an addition, and as such, two Edit entities. Those two Edit entities are linked together by a TimeLine entity, referring to the modified feature.
5. POPULATING FEVER
5.1 Overview
The FEVER approach starts from a set of commits and outputs an instance of the FEVER change model covering the given commit range. Figure 3 presents an overview of the change extraction process.
We provide an estimation of the accuracy of those heuristics in Section 6.
Step 3 is the extraction of changes in artefacts for which we do not extract detailed changes. This includes only commit-related information from which we create a commit entity, and “untyped” artefacts (documentation, scripts…), represented by ArtefactEdit entities.
In Step 4, we create the relationships between Edit entities, the Commit, and ArtefactEdit.
Step 5 of our approach consists in creating entities and relationships spreading beyond single commits: “next” relationships among commits, and feature Timeline entities with their respective relationships to edit entities. This is done by running through every commit, and identifying touched feature(s), creating if necessary a new Timeline entity and the appropriate relationships between the Timeline and relevant edits.
5.2 Extracting Variability Model Changes
The characteristics of the changed features that we focus on are their type (boolean or value-based), their visibility, and their optionality as described in Section 3.
We first reconstruct two instances of the VM depicted in Figure 4-A per VM file touched, one representing the VM before the change, the other after the change. If, like in the case of the Linux kernel, the VM is described in multiple files, we reconstruct the parts of the model described in the touched files, i.e., the model we rebuild is always partial. The extraction process follows the FMDiff approach [20], including the usage of “dumpconf”. This tool takes as an input a Kconfig file and translates it into XML. “dumpconf” is designed to work on the complete Kconfig model, where the different files are linked together with a “source” statement, similar to #include in C. To invoke “dumpconf” successfully on isolated files, we remove the “source” statements as a pre-processing step. “dumpconf” also affects the attributes of features, and the details of the change operation are described in [30]. We use this XML representation of the Linux VM to build the model shown in Figure 4-A.
We then use EMF Compare to extract the differences and compile the information in a FeatureEdit entity. We attach to this entity the snapshot of the feature as it was before and after the change in FeatureDesc entities. If the feature is new, respectively deleted, we do not create a “before”, respectively “after”, FeatureDesc entity.
5.3 Extracting Build Changes
Similarly to the extraction of VM changes, MappingEdit entities are created based on the differences of reverse engineered models of a Makefile, before and after the change. We use the model shown in Figure 4-B.
The model contains a set of features and symbols mapped to targets. “Symbol” refers to any variable mapped to any assets which is not a feature. We identify feature names in Makefiles by their prefix “CONFIG_”. We scan the Makefiles and extract pairs of symbols by searching for assignment operators (“=” and “:=”). We consider that the symbol on the left-hand side is mapped to the symbol on the right-hand side (target).
To determine the type of a targeted asset, we use the following rules: Compilation unit names finish with either “.o”, “.dts”, “.dtb”; compilation flags contain specific strings (“cc-flags”, “-D”, “-I”, “-m”, or “-W”). We identify folder names by “/”, or single words, not containing any special characters nor spaces. When features are found as part of “ifeq” or “ifneq” statements, we consider that they are mapped to any targets contained within their scope. In Listing 5, both CONFIG_OF and CONFIG_SHDMA will be mapped to the compilation unit “shdma.o”.
We also resolve aliases within Makefiles. An example of an alias is presented in Listing 5, where feature TREE_TEST is mapped to the alias “tree_test.o” referring to two compilation units “tree_main.o” and “tree.o”. This step is performed as a post-processing step for each build model instance, and is based on heuristics, also evaluated in Section 6.
ifeq ($(CONFIG_OF),y)
shdma-$(CONFIG_SHDMA) += shdma.o
endif
obj-$(CONFIG_TREE_TEST) += tree_test.o
tree_test-objs := tree_main.o tree.o
Listing 5: Example of an “ifeq” statement and aliases used in Makefiles
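A possible rendering of these target-classification and alias-resolution heuristics, sketched in OCaml rather than the tool's actual implementation, is shown below; the suffix and flag lists mirror the rules stated above.

```ocaml
(* Sketch of the target-classification and alias-resolution heuristics
   described above; a simplification of FEVER's Makefile parser. *)
type target_kind = CompilationUnit | CompilationFlag | Folder | Other

let ends_with suffix s =
  let ls = String.length s and lu = String.length suffix in
  ls >= lu && String.sub s (ls - lu) lu = suffix

let contains sub s =
  let ls = String.length s and lb = String.length sub in
  let rec loop i = i + lb <= ls && (String.sub s i lb = sub || loop (i + 1)) in
  lb = 0 || loop 0

let classify_target (t : string) : target_kind =
  if List.exists (fun suf -> ends_with suf t) [ ".o"; ".dts"; ".dtb" ] then CompilationUnit
  else if List.exists (fun f -> contains f t) [ "cc-flags"; "-D"; "-I"; "-m"; "-W" ] then CompilationFlag
  else if contains "/" t then Folder
  else Other

(* Alias resolution: a feature mapped to "tree_test.o" is re-mapped to the
   compilation units listed in "tree_test-objs := tree_main.o tree.o". *)
let resolve_aliases (aliases : (string * string list) list) (targets : string list) =
  List.concat_map
    (fun t -> match List.assoc_opt t aliases with Some units -> units | None -> [ t ])
    targets
```

For the targets of Listing 5, `classify_target "shdma.o"` returns `CompilationUnit`, and resolving the alias list `[("tree_test.o", ["tree_main.o"; "tree.o"])]` re-maps TREE_TEST to the two underlying compilation units.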
We then use EMF Compare to extract the differences between the two model instances, giving us the list of feature mappings that were added or removed in that commit.
As mentioned in Section 3, the exact mapping between features and files is the result of a complex Makefile hierarchy. By focusing on the mapping as described in a single Makefile, FEVER only captures a part of the presence condition of each file.
5.4 Extracting Implementation Changes
At the implementation level, we consider changes to #ifdef blocks and changes to feature references in the code, as presented in Section 3. To extract those changes, we rebuild a model of each implementation file in its before and after state following the model presented in Figure 4-C.
To rebuild the models, we rely on CPPSTATS [6] to obtain the starting and ending lines of each #ifdef block as well as their guarding condition. It should be noted that CPPSTATS expands the conditions of nested blocks within a file, facilitating the identification of block conditions. In the model, code blocks and their #else counterparts are captured as two distinct entities. “Referenced value features” are obtained by scanning each modified source file for usages of the “CONFIG_” string outside of comments and #ifdef statements.
We then use EMF Compare to compare the two models and build the SourceEdit entities. We determine the code changes occurring inside #ifdef blocks to compute the value of the “code edit” attribute of SourceEdit entities. We extract from the commit the diff of the file in the “unified diff” format, and identify which lines of code were modified. We compare this information with the first and last lines of each modified code block to determine which code blocks are affected by the code changes.
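The block-matching step reduces to an interval check, sketched here with simplified stand-in types for the source model.

```ocaml
(* Sketch of how modified line numbers from a unified diff are matched
   against the #ifdef blocks recovered by cppstats: a block is considered
   touched as soon as one modified line falls within its range.  Types are
   simplified stand-ins for FEVER's source model. *)
type block = { condition : string; first_line : int; last_line : int }

let block_touched (modified_lines : int list) (b : block) : bool =
  List.exists (fun l -> b.first_line <= l && l <= b.last_line) modified_lines

let touched_blocks (modified_lines : int list) (blocks : block list) : block list =
  List.filter (block_touched modified_lines) blocks
```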
5.5 Change consolidation and TimeLines
The final step consists of the creation of feature TimeLine entities and relating them to the appropriate entities. We create such entities for every feature affected by any change in any Edit entity. We apply the following rules:
- if a feature is touched in the VM, mapping or source file, the corresponding Edit entity is associated with a TimeLine;
- if a SourceEdit changes a block condition, the source edit is connected to one TimeLine entity per feature present in the condition;
- if an artefact is touched, it is linked to the TimeLine entity of the feature(s) to which it is mapped;
In order to map file changes to features, we need to know the mapping between features and files. Note that FEVER only focuses on mapping changes, leaving us with a gap with respect to mappings that are not touched. As a result, many files whose mapping has not evolved would wrongly not be mapped to any feature. To compensate for this, we create a snapshot of the complete mapping based on the state of the artefacts at the first commit of the commit set. This is the only operation we perform that requires the entire code base. We then run through all commits, starting from the leaves in a breadth-first manner, creating or updating TimeLine entities as necessary, and updating the known mapping between files and features as we encounter MappingEdit entities.
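The consolidation pass can be pictured as a fold over the commit sequence that threads the currently known file-to-feature mapping; the OCaml sketch below is a deliberate simplification (one feature per file, added mappings only) with illustrative names.

```ocaml
(* Rough sketch of the consolidation pass: seed a file-to-feature mapping
   from the initial snapshot, then walk the commits in order, attaching each
   touched file to the TimeLine of the feature it is mapped to and refreshing
   the mapping whenever a MappingEdit is encountered. *)
module StrMap = Map.Make (String)

type commit = {
  sha1 : string;
  mapping_edits : (string * string) list;  (* (feature, file) pairs added in this commit *)
  touched_files : string list;
}

type timeline = { feature : string; commits : string list }

let consolidate (initial_mapping : string StrMap.t) (commits : commit list) =
  let record feature sha timelines =
    StrMap.update feature
      (function
        | None -> Some { feature; commits = [ sha ] }
        | Some t -> Some { t with commits = sha :: t.commits })
      timelines
  in
  let step (mapping, timelines) c =
    (* core update: a touched file contributes to the TimeLine of its feature *)
    let timelines =
      List.fold_left
        (fun acc file ->
           match StrMap.find_opt file mapping with
           | Some feature -> record feature c.sha1 acc
           | None -> acc)
        timelines c.touched_files
    in
    (* a MappingEdit is itself a core change and refreshes the known mapping *)
    List.fold_left
      (fun (m, tl) (feature, file) -> (StrMap.add file feature m, record feature c.sha1 tl))
      (mapping, timelines) c.mapping_edits
  in
  snd (List.fold_left step (initial_mapping, StrMap.empty) commits)
```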
Some files in the Linux kernel cannot be mapped directly to features. This concerns mostly header files, contained in “include” folders. “Include” folders do not contain Makefiles, which prevents a direct mapping between features and such artefacts. Moreover, such files are included in the compilation process on the basis that they are referenced by implementation files (#include statements), which by definition bypasses any possible feature-related condition. For those reasons, we do not attempt to map such files to features. They are, however, highly conditional, and often contain many #ifdef statements, which we track.
6. EVALUATING FEVER WITH LINUX
The FEVER change extraction process is based on heuristics, and assumptions about the structure of the artefacts. Those heuristics affect the model build phase, and the comparison process - the mapping between EMF model changes and higher-level feature oriented changes. It is then important to evaluate whether the data captured by FEVER reflects the changes that are performed by developers in the source control system, leading us to formulate the research question driving this evaluation:
RQ: To what extent does FEVER data match changes performed by developers?
To answer this question, we apply FEVER to two releases of the Linux kernel, and compare the changes captured by FEVER and the commits obtained from the Linux SCM (Git).
6.1 Evaluation Method
The objective is to evaluate the accuracy of the heuristics and the model comparison process used for artefact change extraction and the change consolidation process. To do so, we manually compare the content of the FEVER dataset with the information that can be obtained from Git, using the GitK user interface. The evaluation was performed by the main author of this paper.
For a set of commits, we check that the different Edit entities and their attributes can be explained by the changes observed in Git. Conversely, we ensure that feature-related changes seen in Git have a FEVER representation. At variability model level, we check whether the features captured by FEVER as added, removed or modified are indeed changed in a similar fashion in the Linux Kconfig files.
Regarding mapping changes, we check that the pairing of features and files is accurate and that the type of targeted artefact is also correct. Special consideration is given to the validation of the mapping between features and assets. The mapping between features and files may be the result of complex Makefile constructs and may be distributed over several files through inclusion mechanisms. FEVER only considers changes at the file level, and so is unlikely to resolve such complex constructs. Whenever we are able to manually assign a file to a feature by looking only at the content of Makefiles - including the Makefile hierarchy - we assume that FEVER should have that information as well. This includes cases where files are assigned to “obj-y” lists and the mapping is done in a parent Makefile. FEVER does not capture those structures, but the mapping exists.
At the code level, we check that the blocks seen as touched are indeed touched, and we compare the condition of each block. Then, by inspecting the patch, we can see if the code changes within the blocks are correct.
Regarding TimeLine entities, we do not check whether all relevant changes in all commits are indeed gathered into TimeLine objects. We make the assumption that if TimeLine entities are properly linked in the commits we check, then the algorithm is correct, and a check on the complete release is unnecessary. We also keep track of the commits for which all extracted information is accurate, giving us an overview of the accuracy on a per-commit basis.
Using FEVER, we extracted feature changes from releases 3.12 and 3.13 of the Linux kernel, and randomly extracted 150 commits from each release. The selection of commits in each release was performed as follows: we randomly selected 50 commits touching at least the variability model, 50 among the commits touching at least the mapping, and 50 touching at least source files. Those three sets are non-overlapping, so the creation of three different sets ensures that our random sample covers all three spaces. During the evaluation, we ignored commits associated with merges and tagged releases.

Table 1: FEVER change extraction accuracy
6.2 Results
The results are compiled in Table 1. The table is divided into three sections, each presenting the precision and recall of FEVER when capturing detailed changes in each of the three spaces. We then present in the last section of the table the accuracy of the Timeline aggregation process.
In addition to the information contained in the table, we kept track of the commits in which changes were accurately described by the FEVER change model. Among the 300 commits studied for this evaluation, we found that FEVER extracted all change attributes accurately in 82.6% (248 out of 300) of the cases.
As shown by the numbers, our implementation of FEVER extracted the changes occurring in the variability model space with a precision and recall of at least 80%. In some cases, features are defined multiple times within the same file, and those will be seen as modified even if they are not - hence the precision of only 80% for feature modification. This is a side effect of using model comparison, where each entity of the compared models must be uniquely identified.
Regarding the mapping space, the approach is quite successful in identifying changes to features mapped to files and folders, determining whether the mapping is new for that feature, and whether the target is added or removed. However, we note that the detection of features linked to compilation flags is harder. Such situations are less frequent than mappings to other types of assets, so small errors have a large impact on the statistical results. The parsing of complex Makefiles tends to lead to misinterpretation of variables, wrongly considering them as compilation flags.
Regarding implementation changes, our heuristic is good at determining whether conditionally compiled code blocks are added or removed, with a precision of 80% or more and a recall of at least 97%. The combination of CPPSTATS and model differencing proved efficient in identifying changes to conditionally compiled code blocks. Certain types of code changes within the blocks are well identified: blocks with fully added, removed or modified code are captured with an accuracy of 90% or more. Similarly to what occurs at the VM level, FEVER returned a number of false positive changes with “preserved code”. This occurs when a file contains multiple code blocks with the exact same condition and the exact same code. In our random sample, multiple commits edited files containing such structures. Because changes with that characteristic are infrequent, those false positives drastically reduced the measured precision, but we still obtain a high recall.
The results showed that the data collected by FEVER matches the changes performed by developers in 82.6% or more of the commits.
6.3 Threats to Validity
Internal validity. To extract and analyze feature-related changes, FEVER uses model-based differencing techniques. We first rebuild a model of each artefact, and then perform a comparison. The construction of the model relies on heuristics, which themselves work based on assumptions on the structure of the touched artefacts - whether they be code, models, or mapping. For this reason, information might be lost in the process. To guarantee that the data extracted by FEVER do match what can be observed in commits, we performed a manual evaluation, covering every change attribute we consider. The evaluation showed that a large majority of the changes are captured accurately, with a precision and recall of at least 80%. This gives us confidence in the reliability of the data.
The identification of compilation flags mapped to features, and of changes to conditional blocks preserving the code, is not captured as accurately as the other attributes. These are the result of false positives occurring when the compared models contain duplicated entities (two code blocks with the same condition and same code, for instance). Those situations are not frequent, but because actual changes to compilation flags and changes to blocks preserving the implementation are rare in our random sample, such false positives skew the statistical results. Given the high precision and recall we obtain on all other attributes, we believe this does not affect the validity of the data.
Mappings between features and files established through Makefile variables such as “obj-y”, which FEVER does not extract, had little influence on the accuracy of the mapping change extraction (at least 98% accuracy for mapping changes). Such mappings appear to be more stable and are thus less present in our data. Nonetheless, from an evolution point of view, FEVER performed as expected.
External validity. We devised our prototype to extract changes from a single large scale highly variable system, namely the Linux kernel. In that sense, our study is tied to the technologies that are used to implement this system:
the Kconfig language, the Makefile system and the usage of code macros to support fine-grained variability. However, there are several other systems using those very same technologies, such as uXTLs and uClibc, on which our prototype - and thus our approach - would be directly usable.
For other types of systems, one would have to adapt the model reconstruction phase to the system under study. If we consider another operating system such as eCos, one would have to rebuild the same change model from features described in the CDL language instead of Kconfig. Similar work would be necessary for systems using the Gradle build system rather than Makefiles. However, the change model, based on an abstract representation of feature changes, should be sufficient to describe the evolution of highly variable systems, regardless of the implementation technology.
This work focuses on build-time variability, constructed around the build system and an annotative approach to fine-grained variability implementation (#ifdef statements). While we believe that the change model may be useful to describe runtime variability, the extraction process is not suitable to extract feature mapping from the implementation itself at this time. We cannot extend this work to runtime variability analysis without further study.
7. THE FEVER DATASET
In this section, we provide an overview of the feature evolution in the Linux kernel during release v3.13 captured by FEVER. Then, we present three practical scenarios where FEVER can be of use. Finally, we elaborate on further potential usage of the FEVER dataset.
7.1 Co-evolution in Linux
The feature-oriented co-evolution of artefacts has been studied in the past, as mentioned in Section 3. Previous studies describing complex changes relied on manual analysis of commits and did not provide a quantitative overview of how frequent co-evolution of artefacts is during feature evolution. With FEVER, this is possible. In this section, we rely on the FEVER data extracted from release v3.13 of the Linux kernel.
Let us first consider the coverage of TimeLine entities in terms of commits. In release v3.13, we captured 13,288 commits. Among those, 11,859 (89.2%) are related to at least one TimeLine entity. Among the 1,429 commits that are not connected to a TimeLine, 1,209 relate to merge operations, tagged releases or other maintenance operations. The remaining commits affect files which are not source, build nor variability model related.
We focus on how features evolved in this release, and the spaces affected by their evolution. The number of TimeLine entities is the number of features that have seen their core behaviour or influence modified in the course of the release. We can then, for each of them, determine in which spaces this evolution took place.
The dataset contains 4,480 TimeLine entities. Among those, 3,437 are connected to commit entities only through feature “core update” relationships. The majority (75.6%) of the features evolved due to changes to their declaration, mapping, or modifications to the files they are mapped to. Only 587 (13%) evolved only through “influence updates”: their implementation did not change, but they played a role in the evolution of the implementation of other features.

Figure 5: Spaces affected by feature evolution

A Linux release lasts six weeks, four of which are dedicated to bug fixes [31]. Most of the development is thus focused on fine-tuning the implementation of features. Moreover, new capabilities may also be supported by modifications of existing features. This would explain why most of the feature changes we observe are in the implementation. Nonetheless, for 19% of the features, modifications to heterogeneous artefacts took place.

7.2 FEVER in Practice
The FEVER data is stored in a Neo4j graph database. Every entity of the FEVER change meta-model is a node of the graph, and the relationships are edges. Data types are represented using node labels, and attributes are stored as node properties. The queries presented in this subsection are written in the Cypher query language.

To illustrate the use of the FEVER dataset, let us consider the situation of a release manager building the release notes. He is interested in highlighting important features, and in matching those to the commits that participated in their implementation. The release notes of Linux v3.13 mention the following change: “add[s] option to disable kernel compression”, with a single commit. Looking at the commit, we know that a new configuration option named “KERNEL_UNCOMPRESSED” is introduced. We can check this with FEVER by querying the commits associated with the TimeLine of “KERNEL_UNCOMPRESSED” as follows:
```
match (t:TimeLine)-[:FEATURE_CORE_UPDATE]->(c:commit)
where t.name = "KERNEL_UNCOMPRESSED"
return distinct c;
```
5 http://kernelnewbies.org/Linux_3.13
This query returns two commits. The first, commit 69f055, mentioned in the release notes, is associated with a FeatureEdit entity denoting the addition of a feature. The second, commit 2d3c62, occurring a few days later, is also associated with a FeatureEdit entity, but, surprisingly, removes the feature. A check in release v3.14 showed that the feature was never re-introduced. This means that the release notes written by the 3.14 release engineer were, in fact, incorrect. We argue that a dataset such as FEVER would have prevented this false entry in the release notes.
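As an illustration, such a query could be issued from a small script using the official Neo4j Python driver; the connection URI and credentials below are placeholders, and the label and relationship names simply follow the queries shown in this section.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local FEVER Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
match (t:TimeLine)-[:FEATURE_CORE_UPDATE]->(c:commit)
where t.name = $feature
return distinct c
"""

with driver.session() as session:
    for record in session.run(query, feature="KERNEL_UNCOMPRESSED"):
        print(record["c"])  # each record holds one commit node

driver.close()
```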
In another scenario, a developer is about to introduce a new driver for a touch-screen which should support the power management “SLEEP” feature. The developer might want to know how such support was done in other drivers. He queries the FEVER database for commits where a new feature (f1) is added (fe.change = “add”), and which interacts with a second feature (f2) whose name is ‘PM_SLEEP’ as follows:
```
match (f1:TimeLine)-[:FEATURE_CORE_UPDATE]->(fe:FeatureEdit),
      (f1)-[:FEATURE_CORE_UPDATE]->(c:commit),
      (f2:TimeLine)-[:FEATURE_INFLUENCE_UPDATE]->(c)
where f2.name = "PM_SLEEP" and fe.change = "Add"
return distinct f1, f2, c;
```
In release v3.13 of the Linux kernel, this query returns ten results, giving the names of the newly introduced features and the commits in which those changes occurred. The developer might notice that feature “TOUCHSCREEN_ZFORCE” is among the results and might consider using it as an example to drive his own development.
A researcher in the domain of evolution of highly variable software systems might be interested in the typical structure of feature-related changes. For instance, one might be interested in the introduction of abstract features, in the sense of Thuem et al. [32]: features that exist only in the VM. We can identify the introduction of such features with the following query:
```
match (t:TimeLine)-[:FEATURE_CORE_UPDATE]->(fe:FeatureEdit)
where not (t)-[:FEATURE_CORE_UPDATE]->(:MappingEdit)
  and not (t)-[:FEATURE_CORE_UPDATE]->(:ArtefactEdit)
  and not (t)-[:FEATURE_INFLUENCE_UPDATE]->(:SourceEdit)
  and fe.change = "Add"
return t;
```
In release v3.13, this query returns 42 features. Because TimeLine entities group changes across spaces and commits, we know that those 42 features are indeed abstract, and not the result of a developer who first modified the variability model and only adjusted the implementation in a later commit.
One may be able to retrieve similar information using a combination of Git and “grep” commands. We argue that obtaining the same information would require expert knowledge of features and their mapped artefacts, as well as a good knowledge of Git. With FEVER, a single query on the database is sufficient.
7.3 Further Applications
In a system such as Linux, with over 13,000 features, it might be difficult to pinpoint which configurations should be used to test a new release, as testing all possible configurations is not feasible. The view of feature changes provided by FEVER adds information about commits, namely in terms of touched features. This information can be of use when deciding which configurations should be tested for defects following a code delivery.
The FEVER database could be combined with other existing data sources. Tian et al. devised a methodology to identify bug fixing commits in the Linux kernel [33]. Combined with the FEVER data, it is possible to identify the characteristics of changes leading to bug fixes, or to study how features evolve during bug fixing operations. This would in turn facilitate the work of Abal et al. on the nature, introduction, and fixing of variability-related bugs [12].
The data provided by German et al. [31] can be used to track commits over time and across repositories. Combining this information with the FEVER database would allow us to track feature development across Git repositories, and observe how the Linux community collaboratively handles the development of inter-related features.
8. CONCLUSIONS
In this paper, we presented FEVER, an approach to automatically extract changes in commits affecting the implementation of features in highly variable systems. FEVER retrieves commits from versioning systems, and using model-based differencing, extracts detailed information on the changes, to finally combine them into feature-oriented changes. We applied this approach to the Linux kernel, and used the constructed dataset to evaluate its accuracy in terms of complex change representation. We showed that we were able to accurately extract and integrate changes from various artefacts in 82.6% of the studied commits.
Through this work, we make the following contributions. We first presented a model of feature-oriented changes, focusing on the co-evolution of feature representation in heterogeneous artefacts. We showed how we used model based differencing techniques to recover instances of the model from a SCM system in an automated fashion. We showed that the heuristics we used to obtain the change information yielded accurate results by applying the approach to the Linux kernel and manually validating the collected data. The collected data allowed us to show that co-evolution of artefacts during feature evolution does occur, but, over a single release, most features only evolve through their implementation. We presented practical scenarios in which FEVER can be useful for both developers and researchers. Finally, our prototype implementation and collected datasets are available for download.
The next step of our research is to establish a mapping between our change model and the co-evolution patterns defined by Passos et al. [15] and the safe evolution templates proposed by Neves et al. [7]. We believe this might lead us to an automated identification of instances of known types of changes, and further identification of frequent complex changes in large-scale systems. Furthermore, we will extend FEVER to more types of artefacts in order to apply this approach to a larger set of systems.
Acknowledgements
The authors thank Sven Apel for his feedback on the early versions of this work. This publication was supported by the Dutch national program COMMIT and carried out as part of the Allegio project under the responsibility of the Embedded Systems Innovation group of TNO.
9. REFERENCES
A debugging approach for live Big Data applications
Matteo Marra a,∗, Guillermo Polito b, Elisa Gonzalez Boix a
a Software Languages Lab, Vrije Universiteit Brussel, Brussels, Belgium
b Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189, CRISTAL, Lille, France
Abstract
Many frameworks exist for programmers to develop and deploy Big Data applications such as Hadoop Map/Reduce and Apache Spark. However, very little debugging support is currently provided in those frameworks. When an error occurs, developers are lost in trying to understand what has happened from the information provided in log files. Recently, new solutions allow developers to record & replay the application execution, but replaying is not always affordable when hours of computation need to be re-executed. In this paper, we present an online approach that allows developers to debug Big Data applications in isolation by moving the debugging session to an external process when a halting point is reached. We introduce IDRA MR, our prototype implementation in Pharo. IDRA MR centralizes the debugging of parallel applications by introducing novel debugging concepts, such as composite debugging events, and the ability to dynamically update both the code of the debugged application and the same configuration of the running framework. We validate our approach by debugging both application and configuration failures for two driving scenarios. The scenarios are implemented and executed using Port, our Map/Reduce framework for Pharo, also introduced in this paper.
Keywords: Online Debugging, Big Data, Map/Reduce, Live Programming
1. Introduction
Hardware advances in storage capacity and CPU processing have given rise to the field of Big Data. This field is characterized by the so-called 3 Vs: Volume, Velocity, and Variety [28]. Big Data applications analyze a constantly increasing Volume of data, coming at an increasing Velocity and from a day by day bigger Variety of sources. As a result, novel software platforms have emerged to analyze and store such large data sets in a scalable way. The two most prominent programming models for Big Data are Hadoop Map/Reduce [6] and Apache Spark [2], which typically embrace batch-oriented data processing to achieve a high parallelization of data analysis. Current trends indicate that the volume, velocity and variety of data are increasing quickly due to an explosion in the diversity and number of sources of information (as a result of the digitalization of data, e.g. smart objects and sensors, the interconnectivity of data and the popularity of social media data [19]). This poses challenges for Big Data frameworks to meet the requirements of emerging real-time streaming data processing applications. For example, the 2017 Hadoop perspective annual report by Syncsort [27], a leading company in (Big) data integration, estimates the need for new tools to simplify the interaction of programmers with different evolving frameworks and datasets.

∗ This work is an extension of Marra et al. [17]
∗ Corresponding author
Email addresses: marra@vub.be (Matteo Marra), guillermo.polito@univ-lille.fr (Guillermo Polito), egonzale@vub.be (Elisa Gonzalez Boix)

Preprint submitted to Science of Computer Programming, October 19, 2021
Recent work has shown that Big Data platforms provide little or no support for debugging software failures [12]. Developers mostly rely on log files, which can easily grow to the order of terabytes of data. Even though specialized tools to visualize and analyze logs for Big Data platforms exist [22], it is often extremely difficult to understand production failures from such log files [24]. As a result, such post-mortem debugging techniques may require many hours of analysis just to spot a simple problem, such as a minor bug in the application or a configuration error [9].
To overcome the use of log files, recent work has proposed replay debuggers ([5, 26, 21]), which allow developers to repeat a recorded execution. Once a program execution failed, if it was recorded, it can be replayed. Replay times can, however, increase exponentially in such systems, and it might take hours and multiple replays to spot a particular bug [12]. Online debuggers, on the other hand, can potentially shorten the time to find a bug by avoiding replay steps. They control a program’s execution by placing breakpoints in specific points of interest during the execution and stepping until the bug is hit. However, typically traditional debuggers (like GDB [10] and the Java debugger [23]) pause the entire execution while debugging. This solution is not always feasible for long-running applications such as Big Data applications.
In this paper, we propose a novel online debugging solution targeted to Big Data applications. In prior work, we proposed out-of-place debugging [18], an online debugging architecture which transfers the debugging session to an external process, in which the developer can debug in an isolated way. Based on this, we augment out-of-place debugging with dedicated features to allow debugging of Map/Reduce applications. With our solution, developers can debug within the same integrated development environment (IDE) both application failures and the so-called configuration failures present in Map/Reduce applications. We also present novel debugging features to combine and relate errors that happen across the parallel execution of Map/Reduce applications. In particular, using our solution developers are able to debug only the failed parts of the computation, with a clear knowledge of which data caused a certain exception. Furthermore, they are also able to propagate code changes to the execution environment without needing to re-deploy or restart.
We prototype our solution in Pharo. For the development of applications, we rely on Port, a Map/Reduce programming model in Pharo introduced in
prior work [17], which we augmented with a framework for dynamically deploying Pharo on state-of-the-art Hadoop clusters. We prototype our debugging solution for Map/Reduce applications in IDRA MR, an out-of-place debugger for Pharo applications. We validate our solution by describing three different debugging experiments through two inspiring scenarios, showing how IDRA MR can help to debug both application-level and configuration-level bugs in prototypical Big Data applications.
This paper complements our previous paper [17] by deploying out-of-place debugging on a Map/Reduce distributed architecture, and by defining new debugging abstractions, such as composite exceptions and debugging of virtual partitions. More concretely, the main contributions of this paper are:
1. We introduce the concept of composite debugging events, which aggregate occurrences of a single exception or breakpoint in different workers on a same parallel execution.
2. We introduce debugging operations on virtual partitions, to debug locally a failed parallel execution.
3. We validate our approach by applying it to two real-world analyses: a polls analysis application and a blockchain analysis application.
As a technical contribution, we provide Port (soon available at https://github.com/Marmat21/Port), a Smalltalk implementation of the Map/Reduce programming and execution model, and IDRA MR, a debugger for Port applications based on the concept of out-of-place debugging [18]. Furthermore, we introduce Pharo on Yarn, a library to dynamically deploy Pharo images on different nodes of a cluster using Hadoop Yarn [3].
2. Motivation
To show the different problems that can arise when debugging Big Data applications, we present here two concrete scenarios of applications featuring a failure. In particular, we illustrate the debugging of the two most representative types of failures in Map/Reduce applications: application-level failures [15], also known as application bugs, and configuration and installation bugs, which are reported to cause more than half of the bugs in Hadoop clusters [25].
Note that code samples in this paper use Smalltalk. We will explain the necessary features of the language to understand the contributions of this work along with the explanation of code samples using footnotes.
2.1. Application Bugs by Example: Poll Analysis Application
A classic example of a Big Data application is an election polls analyzer, akin to the one presented by Gulzar et al. [12]. This application analyzes a dataset containing the results of the election polls and computes, for one region, the number of votes received by each of the candidates. This application actually boils down to a word-count computation, which lies at the core of many other applications in Map/Reduce [6]. We implemented the application again as described by Gulzar et al. [12], in which the data is formatted as file entries with the following fields:
{Region Name Timestamp}
We also introduced in our application the same bug as in BigDebug [12]: when the application is executed on a Hadoop cluster, a single worker fails. Since the application is deployed remotely without a user interface, this failure produces a log. The crash report shows that there was a parsing error (a number between 0 and 9 was expected), with the stack trace shown in Listing 1, but provides no information on the data causing the exception.
```
2019-04-16T16:51:56.532637+02:00
NumberParser>>error:
NumberParser>>expected:
NumberParser>>nextUnsignedIntegerBase:
NumberParser>>nextIntegerBase:
Integer class>>readFrom:base:
Integer class>>readFrom:
VoteCountingMRApplication>>map:
[ :el | self map: el ] in VoteCountingMRApplication(MapReduceApplication)
>>applyMapTo: in Block: [ :el | self map: el ]
Array(SequenceableCollection)>>collect:
VoteCountingMRApplication(MapReduceApplication)>>applyMapTo:
```
Listing 1: Stacktrace in the log file of the failing polls analysis application.
This particular bug is caused by the record {Bianchi Toscana 02-03-2018}. Indeed, the program was expecting a numeric UNIX timestamp while the record presented a String-based timestamp. Our poll analyzer code was parsing the timestamp using asInteger, which returns nil with the non-numeric record, causing an unexpected exception.
In this concrete example, finding the error would be trivial if the developer was provided with the contextual information about the state of the application, i.e., which record caused the exception.
Since the bug did not manifest in the test set of the developer, she could try to add insightful log statements to, for instance, print the runtime values of the arguments and detect which one causes the error. This, however, would require re-deploying and restarting the execution several times to find the bug. Moreover, printing runtime values of arguments may fill the log with extra information that will not help the developer find the bug (e.g., information about non-faulty executions). Alternatively, the developer could use more advanced techniques, such as data provenance [11], to detect which of the records caused the error. However, such a technique also requires various replays of the execution before debugging can happen. In short, using post-mortem debugging to reproduce the bug requires, after an initial analysis of a log file, different re-executions, losing hours of processing as the analyzed data set grows.
2.2. Configuration Bugs by Example: Blockchain Analysis Application
A second representative example of a Big Data application is a Map/Reduce application that analyzes an existing Blockchain platform (i.e., Ethereum) and indexes each of its blocks, storing an association index => hash of the block in a relational database. When done sequentially, such an analysis takes days of computation. As a result, Blockchain analysis techniques are often limited. For instance, BlockSci [14] scoped their bitcoin analysis to only 22GB of transactions. We implemented this analysis as a Map/Reduce application in a Hadoop cluster, taking 7 hours to process 266GB of transactions [4].
Listing 2 illustrates the pseudocode of such an application. The `map` function queries the blockchain to obtain the data related to a block index. The `reduce` function takes the result of the map on several indexes (i.e., a partition of indexes) and stores them all in a centralized database with a bulk insert. Both the blockchain and the database are accessible at known network addresses through drivers loaded in the runtime environment.
```java
map(blockIndex) {
    return blockIndex -> hash(blockchain.at(blockIndex))
}

reduce(pair) {
    storeInDatabase(pair)
}
```
Listing 2: Pseudocode of the blockchain indexer.
While developing and executing this application, we faced different configuration bugs that invalidated the results generated by minutes (or hours) of computation. For example, one bug made the application fail when attempting to store the associations in the relational database. After analyzing the logs of the failed application (included in Appendix C), we realized that the application developer forgot to drop the existing tables in the database, making all the stores fail because data with the same primary key was already present from previous executions. Such a bug is representative of the case of a production environment that is not fully ready to execute the application. Similar bugs can also happen when a library is missing or mis-deployed in the execution environment.
Fixing this kind of configuration bug is relatively easy, as it does not require extra coding but only the re-deployment of the right configuration files or libraries, the restart of a service such as a database, or, in this specific case, executing a script to drop the existing tables in the database. Identifying that we are facing a configuration bug is, however, much more difficult, because the root cause of the failure is not in the application. This means developers need to analyze logs that contain information about other components of the framework they are using to implement their applications, requiring them to understand implementation details of the framework to figure out what is being reported as a failure.
2.3. Problem Statement
Debugging Big Data applications is difficult because of different factors. On the one hand, their distributed nature and the size of the systems and data that they analyze complicate the process of identifying the root cause of a failure. On the other hand, not only do programs fail because of application-level errors accidentally introduced by developers, but they also fail because of mis-configuration (of both the application and the execution environment) and initialization errors [25]. These problems, qualified by Zeller [29] as minor and trivial, can be easily solved in local applications using interactive debugging tools. However, when present in Big Data programs, solving them with the current state-of-the-art debugging tools becomes a time-consuming task even though the fix may be trivial. As Fischer et al. state in their 2012 article [9]:
*It is frustrating to wait for hours only to realize you need a slight tweak to your feature set.*
In particular, replay debuggers for Big Data applications would restart the full execution even when the fix only affects a part of the failed execution, possibly replaying the execution for hours. A checkpoint-based debugger like BigDebug [12] can alleviate that issue, since it allows one to replay only a part of the application process from the latest checkpoint [12]. However, what both of the presented debugging scenarios have in common is that the bug becomes apparent when the developer can control the execution of the application and has access to its state when it fails. Moreover, none of the current debugging approaches for Big Data applications feature support to expose both types of failures and deploy fixes for them without restarting the system.
We believe that these shortcomings of the state of the art motivate the need for a novel debugging approach that allows developers to (1) expose both application-level and configuration failures in their Map/Reduce programs executing remotely in one environment (to avoid searching for the root cause in logs of the different software technologies involved), and (2) provide primitives to deploy code fixes without restarting the whole system, including deploying library code and changing the configuration of the framework itself.
2.4. Online Debugging of Big Data applications
In this work, we propose an online debugging technique for Big Data applications. In particular, we apply previous research on *out-of-place debugging* [18] and augment it with novel features to tackle the aforementioned shortcomings for Map/Reduce applications.
Out-of-place debugging is an online debugging technique in which debugging happens by transferring the execution state of the remotely debugged application to another machine. The developer proceeds then to debug as if the application was originally a local application. In previous work, we successfully applied this approach to debugging long-running applications and cyber-physical systems [18, 16]. Such a debugging technique suits Big Data applications since it allows production code running on a cluster to continue processing tasks while the failing tasks can be debugged in an external machine. Once the failing tasks are fixed, developers can commit the code changes and restart the now fixed tasks in the production environment.

In this work, we extend out-of-place debugging by customizing its deployment to a Master/Worker architecture which supports a Map/Reduce programming environment, and introduce different abstractions to compose and debug exceptions happening across the parallel execution. Figure 1 provides an overview of the debugging architecture for Map/Reduce applications.
Figure 1: Overview of an out-of-place debugging architecture for Map/Reduce applications
The architecture includes, on the left side, the developer's machine, i.e. the machine used by the developer for remote monitoring and debugging of the program execution. The developer's machine thus has an IDE with the MapReduce UI, to monitor the state of the workers, and the Debugger UI. The developer's machine is connected over the network to a cluster running the application. In particular, the cluster runs different processes containing the Map/Reduce Master and different Map/Reduce Workers, which manage the application execution as explained in the following section. We assume that the different nodes (i.e. master and workers) do not share memory, but that all of the nodes have access to a shared distributed file system (e.g., HDFS in Hadoop clusters). Finally, all of the nodes have a debugger API, detailed later in Section 4, which is used to control and steer execution during debugging. Section 4 will detail this architecture in the context of our prototype implementation in Pharo.
3. Port: A Big Data Framework for Pharo
Before delving into our online debugging approach for Big Data applications, we introduce Port: the programming environment which we employ in this work to write Big Data applications using the Map/Reduce computational model. We also provide the necessary background information on both Master/Worker and Map/Reduce models.
3.1. The Master/Worker Model and Map/Reduce
Port models Big Data applications using a Master/Worker model, akin to the one used in Apache Spark [2]. The Master/Worker model consists of one master process which acts as coordinator, and many worker processes performing tasks. The master is responsible for assigning work to the workers and coordinating results. The workers execute tasks instructed by the master and return the results of the computation to it. The Master/Worker framework is suitable for modeling the execution but does not provide high-level abstractions to actually program applications. Hence, we introduced a Map/Reduce programming model [6] on top of it.
A Map/Reduce application is mainly composed of two functions: a map function, that is mapped to all the elements of the input collection, and a reduce function, executed after the map, that can reduce all the intermediate results to a final one. Our Master/Worker framework creates a Map/Reduce Master process which is responsible for scheduling map or reduce tasks on different Map/Reduce Worker processes and handling their results.
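As a rough illustration of this execution model, the following Python sketch runs a map and a reduce function over partitioned input sequentially; it deliberately ignores distribution, scheduling and fault tolerance, and is not Port code.

```python
def run_map_reduce(app, records, n_workers=4):
    # The master splits the input into one partition per (simulated) worker.
    partitions = [records[i::n_workers] for i in range(n_workers)]
    # Each worker applies the map function to every element of its partition.
    mapped = [[app.map(record) for record in partition] for partition in partitions]
    # Each partition of intermediate results is then reduced.
    return [app.reduce(partition) for partition in mapped]
```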
3.2. Map/Reduce by example
A Map/Reduce application in Port is defined as a Pharo class implementing the methods `map:` and `reduce:`. Listing 3 shows the core code of our election polls analyzing application[2]. The `map:` method first filters the interviews for a region (in this case, Abruzzo). It then checks if the timestamp of the interview is valid, reading it from the string as a UNIX timestamp and checks if the date is greater than yesterday.
The `reduce:` method reduces all the valid entries into an unique dictionary, which will include the information on the preference for each candidate.
```smalltalk
PollsAnalyzer >> map: aLine
    | split |
    split := aLine substrings: ' '.
    (split at: 1 includesSubstring: 'Abruzzo') ifTrue: [
        ((DateAndTime fromUnixTime: (Integer readFrom: (split at: 3))) >
            DateAndTime yesterday) ifTrue: [
                ^ (split at: 2) -> 1 ] ]

PollsAnalyzer >> reduce: aSetOfVotes
    | dict |
    dict := Dictionary new.
    aSetOfVotes
        do: [ :vote |
            vote key
                ifNotNil: [ dict
                    at: vote key
                    ifPresent: [ :val | dict at: vote key put: val + 1 ]
                    ifAbsentPut: 1 ] ].
    ^ dict.
```

Listing 3: The core code of the election poll analysis application.

---

2 In this code example, you can find syntax that is specific to Smalltalk. `PollsAnalyzer >> map:` indicates that this is the implementation of the method `map:` in the class `PollsAnalyzer`. Please note that Smalltalk methods make use of keywords: `aLine substrings: ' '` (cfr. line 3) is equivalent to calling `aLine.substrings(' ')` in canonical syntax. Methods with multiple parameters (cfr. line 4) are called using a composition of keywords: `split at: 1 includesSubstring: '...'` is equivalent to `split.includesSubstringAt(1, '...')` in canonical syntax.
When the application is run, each entry in the input log files is first mapped by the `map:` method, and the results of the `map:` invocations are passed as an argument to `reduce:`. Eventually, the poll application returns a set of dictionaries with the number of votes for each candidate.
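For readers unfamiliar with Smalltalk, the same analysis can be sketched in Python against the toy driver shown in Section 3.1; the class below mirrors Listing 3 and the {Region Name Timestamp} record layout, but it is an illustration rather than Port code.

```python
from datetime import datetime, timedelta

class PollsAnalyzer:
    def map(self, line):
        # Records have the form "Region Name Timestamp" (UNIX timestamp expected).
        region, name, timestamp = line.split()
        if region == "Abruzzo" and \
           datetime.fromtimestamp(int(timestamp)) > datetime.now() - timedelta(days=1):
            return (name, 1)
        return None

    def reduce(self, votes):
        counts = {}
        for vote in votes:
            if vote is not None:
                counts[vote[0]] = counts.get(vote[0], 0) + 1
        return counts
```

As in the Smalltalk version, a record with a non-numeric timestamp (such as the one discussed in Section 2.1) makes the parsing step fail, here with a ValueError.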
3.3. Handling input data
A Map/Reduce application accepts different data sources: i.e., (i) an arbitrary collection in memory, (ii) a file on the local file system, and (iii) a file on the distributed file system (e.g., HDFS).
To provide parallel execution of the map and reduce methods, Port splits the original data into different partitions, which it then assigns to the different workers. In the case of an arbitrary collection, the collection is split equally between the workers and serialized over the network. When executing on a file (either in the local or distributed file system), the master instead instructs each worker to read a part of the file and then to execute the analysis.
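The sketch below illustrates the file case: the master computes per-worker read instructions rather than shipping the data itself. The offsets and the dictionary layout are illustrative assumptions, not Port's actual protocol.

```python
import os

def file_partitions(path, n_workers):
    """Divide a file into byte ranges, one per worker; the last worker takes the remainder."""
    size = os.path.getsize(path)
    chunk = size // n_workers
    return [
        {"path": path,
         "offset": i * chunk,
         "length": chunk if i < n_workers - 1 else size - i * chunk}
        for i in range(n_workers)
    ]
```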
Note that in classic Map/Reduce frameworks, the result of the map should always return data in the form of key/value pairs. In Port, map methods are not constrained to return key-value pairs. However, returning key-value pairs becomes mandatory when using reduce by key instead of reduce.
3.4. Handling intermediate results
Once the map is finished, the partial results of the maps executed in different workers need to be reduced. The developer configures the application to either send the partial results back to the master, store them on an intermediate file on the distributed file system (approach akin to classical Map/Reduce), or keep them in the memory of the workers (approach akin to Apache Spark’s workflow).
Before scheduling reduce tasks, a shuffling step might be needed to correctly reduce by key. As other Map/Reduce frameworks, Port handles the eventual shuffling of the data in a transparent way for the developers.
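A minimal sketch of such a shuffling step is shown below: key/value pairs produced by different map tasks are regrouped so that all pairs sharing a key end up in the same reduce partition. It illustrates the idea only and is not Port's implementation.

```python
from collections import defaultdict

def shuffle(mapped_partitions, n_reducers):
    # One bucket per reducer; pairs with the same key always hash to the same bucket.
    buckets = [defaultdict(list) for _ in range(n_reducers)]
    for partition in mapped_partitions:
        for key, value in partition:
            buckets[hash(key) % n_reducers][key].append(value)
    return buckets
```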
4. Debugging Port Applications with Out-of-Place Debugging
The Port framework described in Section 3 deploys Map/Reduce Pharo applications such as the election polls analyzing application. To debug such applications, in this paper we propose an online debugging technique based on out-of-place debugging [18]. In this section, we first introduce the necessary concepts of out-of-place debugging, then we explain how we applied it to a Big Data context, and finally, we describe the new kind of debugging events devised to debug Map/Reduce applications.
4.1. Out-of-place debugging in a nutshell
Figure 2 depicts the out-of-place debugging architecture. An application runs on a process monitored by the debugger, and an external debugger process hosted in the developer’s machine presents the front-end of the debugger. When the application monitored by a debugger monitor stops due to a breakpoint or an exception (step 1), the debugger monitor serializes the program execution state (step 2) and transfers it to the developer’s machine (step 3), where the debugger manager reconstructs the debugging session (step 4). The developer then proceeds to debug locally an exact copy of the original program at the moment of the exception (step 5). When the developer discovers the cause of the bug, she modifies the application’s code locally to create a bugfix (step 6). Finally, the developer sends all the changes of a bugfix in a single commit step to the debugged application (step 7). The explicit commit operation gives the developer control to deploy only code that she is confident about. These changes are deployed in the remote application (step 8) and it is finally possible to resume the execution of the suspended point of the application (step 9).
The out-of-place debugging architecture is naturally distributed: a single debugger manager can connect to multiple debugger monitors at the same time, making it possible to debug different connected applications from a single point. When the debugger manager receives a halted execution from one of the connected debugger monitors, it queues a new debugging session instead of blocking the debugger process by opening multiple sessions. The developers then choose which debug session to open (if more than one is available). Eventually, the developer resumes the execution or cancels the original application process, with the possibility of applying the same operation to all similar debug sessions. This interactive workflow allows developers to inspect and modify a debug session to find and correct bugs in a live way. Code changes produced in the debugger process can be propagated to the application nodes when the developer does a commit operation. Such code changes include adding/modifying/removing both methods and classes. Similar to the debugger manager, the changes handler supports connections to multiple updaters at the same time.

---

3 After a map computation, the resulting key/value pairs are physically at the worker that performed the map. In order to easily reduce by key, key/value pairs that have the same key should be moved to the same worker.

4 By debugging session we mean a practical Pharo debugging session, with a copy of the call stack and variables as in the original debugging session created by the normal execution.
4.2. Out-of-place debugging on Port
In this section, we describe how we adapt the debugging infrastructure (shown in Figure 2) to be deployed on our Map/Reduce runtime (cfr. Section 3). To this end, we build our Big Data debugger, called IDRA MR, by extending the existing implementation of an out-of-place debugger for Pharo Smalltalk applications.

Figure 3 shows the overall architecture of Port when deployed with IDRA MR. Concretely, the different Debugger API instances shown in Figure 1 represent the different instances of IDRA Manager, IDRA Monitor, and Updaters.
The node running the Map/Reduce Master also runs an instance of the IDRA Monitor. Moreover, the Map/Reduce Master node and all the Map/Reduce Worker nodes also run an updater, enabling code updates from an out-of-place debugging session.
External to the cluster, the developer’s machine runs an IDRA Manager instance and a Pharo IDE. The IDRA Monitor propagates the exceptions occurring in the cluster to the IDRA Manager instance. The IDRA Manager UI governs the IDRA Manager instance and uses the Port API to communicate with the instances running on the cluster. It presents dedicated UIs to display new debugging features for Big Data applications, which we detail in the remainder of this section.
4.3. Debugging Events and Halting Points
During the execution, the application may reach different halting points. A halting point is a point of the execution in which the execution stops either because of a breakpoint inserted by the developer, or because of an unhandled exception. When a halting point is reached in a Map/Reduce worker, the worker notifies the master. The Map/Reduce master then extracts all of the information needed for debugging from the worker and notifies the IDRA Monitor of a new debugging event. A debugging event contains all the contextual information about the halting point. More precisely, it holds an identifier, a copy of the call stack at the halting point, the configuration of the application (e.g., partitioning information), and the data partition that was being analyzed when the event happened. Since the Map/Reduce Master has complete knowledge of the distributed program execution and state, not only of the failed worker(s) but also of the rest of the running tasks of the application, it has access to all the information necessary to construct such debugging event.
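To fix ideas, a debugging event as described above could be represented roughly as follows; the field names are hypothetical and do not correspond to the actual IDRA MR classes.

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class DebuggingEvent:
    event_id: str                       # identifier of this halting-point occurrence
    call_stack: List[Any]               # copy of the call stack at the halting point
    configuration: Dict[str, Any]       # application configuration, e.g. partitioning
    data_partition: List[Any]           # data being analysed when the event happened
    composite_id: Optional[str] = None  # set once aggregated into a composite event
```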
As different Map/Reduce Workers are performing parallel map and reduce tasks, the same bug may raise multiple exceptions while analyzing different portions of data in different workers. For example, in the case of the polls analyzer application, if more than one record has the wrong format in the dataset, then the same failure will occur many times during the parallel execution. This will generate many individual debugging events sent to the IDRA Monitor at the Map/Reduce Master process. All these events, however, conceptually belong to a single failure which manifested in different portions of data.
To ease the debugging of such failures, the IDRA Monitor aggregates all concurrently raised debugging events related to the same failure into a unique composite debugging event. This composite event is then sent to the IDRA Manager at the developer's machine for debugging as if it were one single debugging event. Two or more individual debugging events are aggregated if their call stacks are structurally the same, i.e., the halting point is the same and the call-stack frames preceding the halting point are called in the same sequence. The composite debugging event will then construct a single call stack which can be further debugged as if it were one failure.
4.4. Composing Events by Example
We now detail how composite events work in the context of debugging the poll analysis application described in Section 2.1. Recall that a composite event is generated when the master receives from the worker(s) the same exception more than once. Figure 4 shows a simplified version of the stack associated with the error in the poll analysis application.

When the worker handles the `NumberParser` exception, it first removes the stack frames related to the framework methods, to avoid noise and concentrate on the application-specific debugging information (i.e., all frames from `map:` and up). The removed frames are depicted in red in Figure 4. The worker then extracts the meta-data that identifies both the faulty record and its partition and sends it, together with the stack, in a single debugging event to the master. The master, in turn, forwards it to the IDRA Monitor.
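The trimming step can be pictured as walking the context chain from the exception point down to the application-level `map:` frame. The sketch below uses a hypothetical class and selector; only the `Context` API (`selector`, `sender`) is Pharo's actual API.

```smalltalk
"Keep only the frames from the exception point down to and including map:."
IdraWorker >> applicationFramesFrom: topContext
	| frames context |
	frames := OrderedCollection new.
	context := topContext.
	[ context isNil ] whileFalse: [
		frames add: context.
		context selector = #map: ifTrue: [ ^ frames ].
		context := context sender ].
	^ frames
```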
When a debugging event arrives at the IDRA Monitor, the monitor checks whether there are other events related to the same execution. The first time the IDRA Monitor finds two events to be structurally equivalent, it generates a composite debugging event for them, with a unique id and a single shared call stack.

Figure 5 shows a simplified representation of the stacks of two structurally equivalent events. We consider two events to be structurally equivalent when (i) they are generated by the same operation (e.g. `map:` in this case) and (ii) each of the stack frames, in order top to bottom, has the same method selector and points to the same program counter (PC). At this point, only one copy of the first stack is stored in the composite event, together with the meta-data of
each of the events. Using such meta-data, IDRA_MR is then able to reconstruct the exception for each record that caused it.
When successive structurally equivalent debugging events arrive at the IDRA Monitor, only the first one contains stack information. All the rest contain their unique meta-data and share the identifier of the composite event they belong to. The IDRA Manager will then identify them as parts of a single composite event.
Composite events not only provide developers with a higher debugging abstraction tailored to the parallelism exhibited by Map/Reduce applications, but they also reduce the amount of memory and network bandwidth used by IDRA_MR. More specifically, the IDRA Monitor hooks into the exception handling of Port and extracts the necessary data from the stack associated with each individual debugging event, to then verify whether the stacks are structurally equivalent.
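The equivalence check itself can be expressed compactly. The sketch below assumes hypothetical class and accessor names (`callStack`, `selector`, `pc`) and follows the criterion stated above: same halting operation and, frame by frame from top to bottom, the same method selector and program counter.

```smalltalk
"Two events are structurally equivalent when their stacks match frame by frame."
IdraDebuggingEvent >> isStructurallyEquivalentTo: anEvent
	self callStack size = anEvent callStack size ifFalse: [ ^ false ].
	self callStack with: anEvent callStack do: [ :mine :other |
		(mine selector = other selector and: [ mine pc = other pc ])
			ifFalse: [ ^ false ] ].
	^ true
```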
4.5. The Debugging Cycle
A debugging cycle in out-of-place debugging denotes the stages from the point at which the developer's machine is notified of a halting point (due to an exception or breakpoint) in a map or reduce task of an application until the execution of the halted task is resumed. In this section, we describe the debugging cycle of an application error that manifested in an exception during the execution of the poll analyzer application described in Section 2.1. For a screencast of such a debugging cycle, we refer the reader to https://tinyurl.com/SCPDemo2019.
Figure 6 shows a screenshot of the IDRA Manager UI at the point the developer is notified of an exception occurring in a *map*. On the left side, we see the list of distinct exceptions that happened: only one in this case. The number 3 between square brackets denotes that the exception was actually raised three times, meaning this is a composite debugging event for the three exceptions. On the right side, the developers see the stack and the three different records that caused the exception.
Recall from Section 3.3 that the data is split into different partitions, hence when an error happens because of a specific record, such record is part of an associated partition. Through the buttons at the bottom right of the window, developers can then perform three different debugging operations:
**Debug a single halted record.** Developers start debugging the map on one of the records for which the execution was halted (we denote those as halted records). Once the map on the associated record returns, the developers continue debugging on the rest of the records in the same partition.
**Debug a virtual partition with all halted records.** The developers debug the map on a virtual partition containing only the halting records, regardless of their original partition. For instance, in our example this operation will construct a virtual partition containing all records visible in the bottom part of Figure 9 and let the developer debug the map on such a virtual partition (a sketch of this construction follows the list).
**Debug a virtual partition with all halting partitions.** The developers debug the map on a virtual partition which is the union of all of the partitions that contain at least one halted record. This virtual partition will contain all records in those partitions, including those that do not halt.
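The two virtual-partition strategies can be sketched as follows; the class and accessor names are hypothetical and only illustrate the construction described above.

```smalltalk
IdraCompositeEvent >> virtualPartitionOfHaltedRecords
	"Only the records that halted, regardless of their original partition."
	^ self events collect: [ :each | each haltedRecord ]

IdraCompositeEvent >> virtualPartitionOfHaltingPartitions
	"The union of all partitions that contain at least one halted record."
	^ (self events collect: [ :each | each partition ]) asSet
		inject: OrderedCollection new
		into: [ :all :partition | all addAll: partition; yourself ]
```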
IDRA_MR creates a debugging session by transferring the data required for debugging the requested partition, including the data originally referenced by the stack, the analyzed partition, and the current index. Once created, developers use the IDRA_MR UI on the reconstructed debugging session. The IDRA_MR UI extends Pharo's default online debugger with dedicated debugging operations for Map/Reduce tasks. More concretely, it provides a new stepping operation that jumps to the map of the next element of the partition, a new operation to resume the execution of all of the remaining elements, and a new operation to halt and inspect the intermediate state.
A debugger UI is created when the application receives a debugging event. Once the debugger UI appears, a developer uses classical debugging operations (step into, step over, resume execution) of the Pharo debugger to debug the reconstructed failed execution on the local machine.
Let us consider that during the debugging session the developer found the bug and applied a fix. In our concrete scenario, this means modifying the code to also handle string-based timestamps. Those changes are tracked in the Code Manager tab of IDRA, displayed in Figure 7. The right side of the code manager shows all of the changes made by the developer while debugging, and the diff of such code changes against the original versions. By clicking on the commit changes button, the developer sends the bugfix to the local changes handler.
The bugfix is then immediately propagated by the Changes Handler to the updater instance running alongside the Map/Reduce Master. The Master schedules a task in itself to apply the updates and sends the code fix to the updater instances running alongside each of the Map/Reduce Workers. Note that the update propagation does not happen atomically in all workers at once, since each worker applies the updates only when it has finished executing its current task.
Once the code changes are deployed, the debug session is finished with one of the following operations:
1. Re-schedule all partitions that halted. This avoids the re-execution of tasks that finished successfully.
2. Re-schedule the application from the start on all the partitions, in case the modified code requires an entire re-execution.
The first option is particularly useful when only a small part of the computation failed due to a few failure-inducing records. In this case, the developer preserves most of the execution, avoiding tedious replay times, and restarts only the failed part of the computation.
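For illustration, the two resume operations could look as follows at the level of a master facade; the facade and its selectors are assumptions, not Port's documented API.

```smalltalk
"(1) Re-schedule only the partitions that halted (hypothetical selectors)."
PortMaster default rescheduleTasksFor: compositeEvent haltingPartitions.
"(2) Re-schedule the whole application from the start."
PortMaster default rescheduleApplicationFromStart
```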
4.6. Debugging a configuration error
While explaining the debugging cycle, we focused on application-level failures that are fixed in the code of the application. However, to debug configuration errors, developers need different debugging operations. IDRA_MR allows developers to (i) load and change code of libraries locally and propagate the changes as a normal code update, and (ii) execute arbitrary code directly on the runtime environment of the master and workers. The former leverages the previously explained code-update capabilities of IDRA_MR; the latter makes use of debugging hooks provided by the Port framework itself.
Port allows developers to execute arbitrary expressions in the context of the master, a single worker, or all workers. Configuration errors are fixed by executing expressions that modify the internal configuration of the nodes or affect the global running environment. For example, this can be used to programmatically re-start a database in the cluster. We present more details on how this functionality can be used in Section 5.4.
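As an illustration, such a remote expression could be submitted through a cluster facade like the one sketched below; the `PortCluster` facade, its selectors, and the `restart` message are assumptions made for the example, not Port's documented API.

```smalltalk
"Run a configuration fix on the master and a maintenance action on every worker."
PortCluster default
	onMasterDo: [ PostgresDatabase restart ];
	onAllWorkersDo: [ Smalltalk garbageCollect ]
```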
5. Validation
We validate our approach through three different experiments that highlight the utility of the different debugging features presented in this paper. Throughout the different experiments, we use the two scenarios described in Section 2: the polls analyzer and the blockchain indexing application.
5.1. Experimental Setup
We execute the experiments with Port deployed using Yarn (cfr. Section 6.1) on a 10-node cluster. The cluster is composed of one root node and ten identical slave nodes. Each slave node has the following specifications:
- Processor: Intel Xeon CPU E3-1240 @ 3.50GHz (4 cores, 8 threads)
- RAM: 32 GB
- Storage: 200 GB SSD
The root node has the same specification as the slave nodes, but with enhanced storage. All the nodes are connected through a 1 Gigabit local network.
HDFS is running as namenode on the root node and as datanode on the ten slave nodes. For running the blockchain application, one of the ten slave nodes exclusively runs Geth, a blockchain data node. In addition, the root node runs an instance of the Postgres database.
5.2. Experiment 1: Debugging a Composite Exception
In the first experiment, we compare the debugging cycle for an exception that happened multiple times, first using log files and then using our approach, which features composite debugging events. Consider the application-level failure of the polls analyzer application described in Section 2.1. Analyzing the log file does not give developers enough information about the execution to know which record or partition caused the failure. Furthermore, if the same exception happened in parallel in more than one map task, the log file gets much more complex, and partially replicated, and it still does not give developers enough information. The reader can find such a complete log file in Appendix B. Even if the developer added explicit log statements to log the intermediate state of a variable, retrieving such a log would require multiple executions, and, in order to spot the right record that caused the exception, the developer would need to read the log thoroughly to find the right statement.
When debugging using our approach, the IDRA Monitor is immediately notified of the exception(s) happening in the different map tasks and transfers the debugging session, in the form of a composite debugging event, to the IDRA Manager running on a different external machine, providing centralized debugging and the different debugging operations described in Section 4.5.
Figures 8 and 9 show in detail the IDRA Manager handling the exception (cfr. Figure 6). Figure 8 shows the name of the exception, how many times it happened (the number 3 between square brackets), and where it happened (i.e., the map). Figure 9 shows the shared stack and the different data samples that caused the exception, three in this concrete case. With these visualizations, we can already see that all the failing records have the timestamp in a string format, and not in the numeric UNIX timestamp format. The developer can choose one and debug it through the Debug Selected button (cfr. Figure 6), or debug a custom set of the data as described in Section 4.5.
In this case, online debugging features such as accessing the state of the application help to immediately identify the problem. Moreover, the enhanced online debugging features for Big Data applications of IDRA_MR, like fixing and resuming only the failed tasks, avoid re-executing the application to reproduce and fix the bug.
5.3. Experiment 2: Debugging a Configuration Error
In this second experiment, we describe how IDRA_MR's debugging cycle avoids tedious re-deployment in case of configuration bugs. Configuration bugs are a classic kind of bug when using Big Data frameworks and are reported to cause many failures in Map/Reduce [25]. In this experiment we consider again the polls analyzer application. Consider that, after debugging the application with IDRA_MR and fixing the application-level failure as described in Experiment 1, the framework correctly finishes executing the program. The code for the application including the fix is shown in Appendix A. When the application finishes, Port will store the final results to HDFS, as indicated in the code of the application (cfr. lines 10-16 in Appendix A).
Consider a configuration and installation error in our poll analysis in which we forget to deploy our HDFS library on the worker nodes of the cluster. This is akin to not correctly packaging a library jar in Hadoop Map/Reduce or Apache Spark. Our correct poll analyzer application will now fail after the reduce task is completed, when the master is handling (and storing) the final result. The program will fail because the class representing the HDFS file-system access is not loaded in the worker's execution environment (i.e., the package is not loaded in the image that is running the Map/Reduce Worker). While classic approaches crash the application and require log analyses to find the problem, Port reports an exception to IDRA_MR, in the same way as a classic application exception. The developer then proceeds to load the HDFS FileSystem library locally, and IDRA_MR will capture the associated code changes. Such code changes can be committed in order to update the codebase of Port, and the execution of the result store can be restarted. In this particular case, the developer will need to restart the reduce phase to trigger the result store again. This is because, otherwise, the master would need to keep in memory the result that triggered the failure.
Debugging such configuration errors with IDRA_MR and its code-updating capabilities avoids restarting the whole system. The support for library code updates avoids the hassle of packaging errors and the related re-compiling and re-deployment steps. This is particularly useful when configuration bugs appear only in a late stage of the computation, as in this example.
5.4. Experiment 3: Debugging cycles
Consider now the blockchain analysis application failing in the reduce phase because of a misconfiguration error of the database as described in Section 2.2. Listing 4 shows the code of the map: and reduce: methods of the application.
    MRIndexingApp >> map: blockIndex
        | ethereumBlock mappedProperty |
        ethereumBlock := FogBlockchain at: blockIndex.
        mappedProperty := ethereumBlock get: #hash.
        ^ blockIndex -> mappedProperty

    MRIndexingApp >> reduce: pairs
        PostgresDatabase storeIndexedValues: pairs.

Listing 4: Map and Reduce methods of the Blockchain indexing application.
The Map/Reduce is initially called on a collection of indexes, i.e., numbers from 1 to the maximum number of blocks in the blockchain. The map method executes on a single index and uses FogBlockchain, a global reference to the driver for the blockchain data node, to retrieve a particular block from the blockchain. It then extracts the hash of the block and returns a key/value pair with the index and the associated mapped property (i.e., the hash of the block). In the reduce method, the application takes a set of key/value pairs, returned by applying the map to a partition of the collection of indexes, and calls the globally accessible database to do a bulk insert of the data.
Note that the reduce method assumes that the database is empty, otherwise a primary-key clash error occurs. If the database is not correctly reset, all the reduces fail after the execution of the map has completed without errors. IDRA_MR will receive this as a composite event, akin to Experiment 1. However, the error will be raised in the call to PostgresDatabase, making it clear to the developer that such an error is not directly related to the application code, since the code of the reduce is correct, but is due to a wrong initialization that in turn caused a configuration error.
To solve this configuration error, developers can use Port’s remote code execution infrastructure (cfr. Section 4.6) to submit a script performing the database initialization to the node containing the database. They then test the database to check its correct initialization and finally resume the execution.
Since all of the reduces failed because of this error, all reduces need to be resumed. However, by using IDRA_MR we avoid re-executing all of the maps. This is crucial in the case of the blockchain indexing application, where the entire map execution time for analyzing 266 GB of transactions takes more than 6 hours, accounting for 90% of the total execution time of the application [4].
Since out-of-place debugging does not block the whole application but allows debugging and fixing individual tasks in an isolated environment, developers can correctly re-initialize the database with the procedure described above and reschedule the execution of only the reduce operation, saving 6 hours of replaying the computation.
Also in this case, similar to Experiment 2, debugging with the remote code execution capabilities of Port and IDRA_MR avoids a complete restart of the system, saving precious computational time. The remote code execution also allows developers to inspect the intermediate state of the master and of the workers, or to change the configuration of Port while it is deployed.
5.5. Discussion
While applying out-of-place debugging on a Map/Reduce architecture provides advantages when debugging parallel applications, it also presents several challenges that we discuss in what follows.
First, debugging a configuration error in a different environment can be tricky, even using our approach. Consider the example of the database configuration error of Section 2.2. While the debugger would show an error produced by the database driver, if the developer restarts the execution locally on her machine, a different error will appear: the database is not available at the developer's machine. This shows a more general problem of code mobility. In the original paper on out-of-place debugging for long-running applications [18], we solved these particular situations by employing proxies. In particular, a proxy can be added during serialization to objects that are known not to be movable (e.g., a connection to the database, a file, a socket, ...). In this paper, due to the nature of current Big Data applications, we believe that this is a minor limitation to debugging such use cases, as shown in Section 5.4. In fact, such applications often interact (in a stateless way) just with the source of data, having a limited interaction, normally known to the developer, with other external sources.
Second, the presented debugging approach has been devised to work with Master/Worker and Map/Reduce models. While the Master/Worker model is widely used in frameworks that provide parallel execution, relying on a Map/Reduce programming model may limit the applicability of our approach to different Big Data frameworks (e.g., Spark). For instance, the generation of composite events and the handling of their meta-data are, in their implementation, tightly coupled to debugging the map and reduce methods. Applying them to a more extended programming model such as that of Spark may require different kinds of abstractions. Extending composite events to a Spark-like programming model is ongoing work.
We discuss technical limitations related to our current prototype implementation in Section 6.4.
6. Implementation
In this section, we describe technical details of our approach, including the deployment of Port and the libraries we rely on, as well as a complete architecture when deploying Port on Hadoop Yarn with IDRA_MR.
6.1. Deploying Port on Clusters
Port can be deployed in three ways:
1. **Locally**: with different processes (including master and workers) running on the same machine.
2. **Standalone**: deploying the master and the different workers manually across a distributed system, and then providing the master with a specification of where the workers are.
3. **On Yarn**: using Hadoop Yarn [3] and our library Pharo On Yarn to deploy the different master and workers on a cluster.
While both local and standalone modes are good for small testing environments, deploying Port on a cluster brings different challenges, including how to handle resources, monitor nodes, share data between the different nodes, etc. To this end, we decided to rely on the popular resource manager Hadoop Yarn [3]. Yarn handles the configuration and the deployment of the system. It is commonly used to deploy frameworks such as Map/Reduce and Spark, especially when the size of the system increases, since it can scale to thousands of nodes. Yarn allows us to abstract over the properties of the underlying hardware, such as available memory and CPU, the availability of a node, etc.
Figure 10 shows an overview of Port deployed on a cluster using Yarn, including the IDRA_MR debugging infrastructure. As mentioned, resource management is delegated to Hadoop technology (i.e., Yarn and HDFS), while at the execution environment layer we use our Port framework, including the Map/Reduce Master and the different Map/Reduce Workers, to execute a Map/Reduce application.

**Pharo on Yarn (PHOY).** PHOY is a Yarn application used to dynamically spawn different isolated execution environments, so-called containers. In our case, each container runs either a Port Map/Reduce Master or a Port Map/Reduce Worker. While Yarn takes care of where and when to allocate a new container, PHOY introduces an API to instruct Yarn to deploy new containers and to query information about existing containers. For instance, the Map/Reduce Master can use PHOY to know whether a particular Map/Reduce Worker is still running.
**Supporting a Distributed File System.** Typically, on a cluster, a distributed file system allows easy sharing of data between nodes. Such a distributed file system is used both by the developers to store data and results and by the Big Data frameworks to store intermediate results or share data between the different running components. To this end, we provide the Pharo-HDFS library, which enables Pharo developers and the Port framework itself to access HDFS, the popular distributed file system of Hadoop. In particular, Port uses Pharo-HDFS to start the execution on a file stored on HDFS and to store intermediate information when needed.
6.2. Handling Composite Events
Composite events (described in Section 4.3) are generated in the IDRA Monitor and sent to the IDRA Manager. Such events contain one copy of the generated stack, as in classic out-of-place debugging, and the meta-data of the individual exceptions. In the current implementation, such meta-data includes the full partition that had the record causing the exception and the index of the faulty record in the partition. This information is enough to reconstruct the exception in the IDRA Manager every time the developer selects a particular virtual partition to debug.
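In other words, the partition plus the record index is all that is needed to reproduce a halted execution on demand; a hedged sketch of this, with hypothetical class and accessor names, is shown below.

```smalltalk
"Re-run the map on the faulty record identified by the stored partition and index."
IdraEventMetadata >> reproduceWith: aMapReduceApplication
	^ aMapReduceApplication map: (self partition at: self recordIndex)
```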
6.3. Communication and libraries
Both Port and IDRA_MR leverage Fuel [7], a common Pharo library for the serialization of object graphs, to serialize debugging sessions, data, and operations.
Epicea [8] is used in IDRA_MR for the detection and application of code changes. Epicea is a Pharo library for logging, undoing, and replaying code changes made at runtime. It stores the individual code changes in an external file.
All communications happen through HTTP requests with Zinc [30], an HTTP framework for Pharo that can, among other things, both run an HTTP server and act as an HTTP client.
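For readers unfamiliar with these libraries, the snippet below illustrates the building blocks; FLSerializer, FLMaterializer, and ZnClient are the actual Fuel and Zinc entry points, while the file name and URL are assumptions made for the example.

```smalltalk
"Serialize an object graph to disk with Fuel and read it back."
FLSerializer serialize: #(1 2 3) toFileNamed: 'demo.fuel'.
(FLMaterializer materializeFromFileNamed: 'demo.fuel') inspect.
"Issue an HTTP request with Zinc."
ZnClient new
	url: 'http://idra-manager.example:8080/status';
	get
```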
Thanks to the code-update capabilities of Smalltalk, the code base of different connected nodes can be updated without stopping the application, reducing debugging and deployment time.
6.4. Limitations of the prototype
Both the IDRA_MR prototype and the Port prototype present different limitations due to implementation choices. We now discuss the most important technical limitations in what follows.
When there is an exception, the IDRA Monitor sends the exception and some meta-data to the IDRA Manager. As explained in Section 6.2, such meta-data includes the full partition that the worker was analyzing when the exception happened. In case this partition is particularly big, it may cause (i) delays and (ii) the IDRA Monitor or the external IDRA Manager to run out of memory. This could be solved by sending data on demand.
For serialization, we heavily rely on the Fuel [7] library. While using Fuel spared us much work (e.g., to define a serialization protocol), Fuel is sometimes slow to serialize, and, in some corner cases, it will try to serialize some objects that are not really needed in the debugging session. This introduces delays during serialization and network communication, which could be avoided by optimizing the serialization engine.
7. Related Work
In the literature, we can find two well-known families of debuggers: online and offline debuggers [20, 24]. Online debuggers manage the execution of an application at the moment of failure. They allow developers to interact smoothly with a running application, offering breakpoints, watchpoints, and stepping operations that give immediate feedback to the developer. Offline debuggers (or post-mortem debuggers), on the other hand, try to help the developers understand, or sometimes reconstruct, the context of a bug from a failed execution. Such solutions analyze or replay log files, code dumps, and/or execution traces to help the developer discover the source of the problem. Reproducing a bug with these techniques can be tedious and time-consuming, especially because many debugging cycles are required before the error happens again, as argued in Section 2.
While this paper focuses on devising an online debugging solution, in what follows we compare our approach to the closest related work in debugging approaches for Big Data applications, both for offline and online techniques.
Most of the debugging solutions for Big Data are so-called event-based debuggers [20] that record and store events of one execution for later inspection and/or replay. Among these debuggers, we can find Arthur [5], a debugger for Apache Spark, where multiple replays are necessary to find the point of failure. Another solution is Graft [26], a debugger for Apache Giraph [1]. When using Graft, the developer needs to indicate beforehand which particular points of the execution to record, to then be able to replay them afterward. More recently, Daphne [13] and BigDebug [12] combine replay debugging with some interesting online debugging capabilities. We detail below both approaches and how they compare to our solution.
Daphne is a debugger for DryadLINQ [21] which provides a runtime view of the running system and of the query nodes generated by a LINQ query. It allows developers to add breakpoints to inspect the state, and to issue start and stop commands through the Visual Studio remote debugger. Contrary to IDRA_MR, debugging is done remotely, directly where the breakpoints are executing, while IDRA_MR moves the debugging session to an external debugger process. Interestingly, Daphne also allows debugging locally, but it still requires a replaying step, which IDRA_MR avoids by moving the debugging session as soon as a halting point is reached. Furthermore, IDRA_MR can handle both breakpoints and exceptions in an online way, while Daphne requires a replaying step in case of an exception.
BigDebug is a checkpoint-based debugger for Apache Spark [2] which introduces the concept of a simulated breakpoint that neither stops the execution nor freezes the system waiting for the resolution of the breakpoint. Instead, it stores the information necessary to replay the environment in a snapshot (i.e., a checkpoint) and then continues the execution. After the simulated breakpoint, the developer can proceed to debug in a sort of step-by-step execution on the remote node.
When an exception is raised in the application, the execution stops and the BigDebug debugger does not immediately capture the context of the bug, letting the application crash. Crash analysis is then used to detect which part of the execution failed, and then a replay step is required (in the best case from a stored checkpoint). IDRA_MR avoids this by offering the developer an online debugging session that reconstructs the application context at the moment the failure occurred. Although BigDebug provides some support for hot-fixing the code, it is limited to one particular execution (the replayed one), and such a code fix can only change a particular function (e.g., the lambda that is mapped). For instance, the developers cannot change the type returned by the mapped lambda. This functionality aims to fix a particular crash-inducing record, instead of fixing the application. Major code changes that modify the behaviour of the application need to be done offline and re-deployed on the system. In comparison, IDRA_MR can propagate both minor and major code updates in a live and transparent way.
8. Conclusion
In this paper, we presented an online debugging approach for Map/Reduce applications, based on out-of-place debugging and on debugging abstractions such as composite debugging events. Since our prototype is based on our own Map/Reduce implementation, we first described Port, a distributed framework for Pharo. Port models the execution of parallel applications with a master/worker model, on top of which we build a Map/Reduce model. We then presented an online debugger for Map/Reduce applications in Port, based on the ideas of out-of-place debugging, called IDRA_MR. The main characteristics of IDRA_MR are:
1. It completely moves the debugging session from the worker nodes at the cluster to an external process, allowing developers to debug map or reduce tasks in an isolated environment.
2. It provides dynamic code update facilities to propagate code changes back to the workers, without requiring the whole distributed system to be stopped.
3. It centralizes the debugging session, allowing developers to debug a distributed parallel application from a single debugger manager.
IDRA_{MR} introduces also different dedicated online debugging features targeted at Map/Reduce applications. First, IDRA_{MR} provides composite debugging events, as an abstraction of the same event (e.g., an exception or breakpoint) that happened multiple times during the parallel execution of a task. Second, IDRA_{MR} allows developers to choose three different strategies to determine which kind of data a debugging session operates on (e.g., a virtual partition with all the failing records).
We validate our approach by debugging two concrete cases, an election polls analyzer, originally described in the work of Gulzar et al. [12], and a blockchain analysis application. Through three different experiments, we show how our approach can help developers to (i) detect and react to bugs happening in parallel during the execution, (ii) discover and fix configuration bugs through remote code execution, and (iii) correctly resume the execution of the application with updated code for both application and library code.
As future work, we are planning to generalize our debugging support and operations to other Big Data execution models, such as Spark. We are also planning to consider debugging the dependencies between different operations and data, to improve the debugging experience.
Acknowledgements
We would like to thank Clément Béra for his help in the early stages of this work ([17]). We would also like to thank the anonymous reviewers for their useful feedback.
Matteo Marra is a SB PhD Fellow at the FWO - Research Foundation - Flanders. Project: IS63418N.
References
Appendix A. Code of the polls analyzer application
This appendix provides the code of the polls analyzer application described in Section 2, which is used as a running example throughout the paper. As a convention in this appendix, we use the notation NameClass >> nameMethod to denote a method called nameMethod defined in a class NameClass.
```smalltalk
MapReduceApplication subclass: #VoteCountingMRApplication
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Port-Examples'

VoteCountingMRApplication >> parallelReduce
	^ true

VoteCountingMRApplication >> handleResult: aResult
	| fs fileName |
	fs := FileSystem hdfsAtHost: hdfsConfiguration hdfsHost user: hdfsConfiguration hdfsUser.
	fileName := fs workingDirectory
		/ ('results/result-', DateAndTime now asUnixTime asString, '-',
			aResult dataId asString, ' ', aResult partition asString).
	fs store createFile: fileName.
	aResult data do: [ :data |
		fileName writeStream appendAll: data asString, String cr ]

VoteCountingMRApplication >> remotePartitions
	^ PersistedRemotePartitions

VoteCountingMRApplication >> map: line
	| splitted |
	splitted := line substrings: ' '.
	((splitted at: 1) includesSubstring: 'Abruzzo') ifTrue: [
		((DateAndTime fromUnixTime: (Integer readFrom: (splitted at: 3)))
			> DateAndTime yesterday) ifTrue: [
				^ (splitted at: 2) -> 1 ] ].
	^ nil -> nil

VoteCountingMRApplication >> isResultKeyable: aCommand
	^ aCommand beginsWith: 'applyMap'

VoteCountingMRApplication >> repartitionBeforeReduce
	^ true

VoteCountingMRApplication >> reduce: aSetOfVotes
	| dict |
	dict := Dictionary new.
	aSetOfVotes do: [ :vote |
		vote key ifNotNil: [
			dict
				at: vote key
				ifPresent: [ :val | dict at: vote key put: val + 1 ]
				ifAbsentPut: 1 ] ].
	^ dict
```
Appendix B. Log of the failing polls analyzer
This appendix provides the full log file used in the running example of both the motivation (Section 2) and Experiment 1 of the validation (Section 5).
2019-04-30T13:43:27.442041+02:00 FINISH SCHEDULING OF applyMapTo:
HandleResult of applyMapTo:2019-04-30T13:43:27.442168+02:00 MAP FINISHED
HandleResult of applyMapTo:2019-04-30T13:43:27.442445+02:00 MAP FINISHED
HandleResult of applyMapTo:2019-04-30T13:43:27.442552+02:00 MAP FINISHED
2019-04-30T13:43:27.442612+02:00 HANDLING ERROR
2019-04-30T13:43:27.445653+02:00
NumberParser(Object)>>error:
NumberParser>>expected:
NumberParser>>nextUnsignedIntegerBase:
NumberParser>>nextIntegerBase:
Integer class>>readFrom:base:
Integer class>>readFrom:
VoteCountingMRApplication>>map:
[ :el | self map: el ] in VoteCountingMRApplication(MapReduceApplication)>>applyMapTo: in Block: [ :el | self map: el ]
Array(SequenceableCollection)>>collect:
VoteCountingMRApplication>>applyMapTo:
2019-04-30T13:43:27.445728+02:00 CRITICAL FAILURE
HandleResult of applyMapTo:2019-04-30T13:43:27.445787+02:00 MAP FINISHED
HandleResult of applyMapTo:2019-04-30T13:43:27.445858+02:00 MAP FINISHED
HandleResult of applyMapTo:2019-04-30T13:43:27.445882+02:00 MAP FINISHED
2019-04-30T13:43:27.445909+02:00 HANDLING ERROR
2019-04-30T13:43:27.446234+02:00
NumberParser(Object)>>error:
NumberParser>>expected:
NumberParser>>nextUnsignedIntegerBase:
NumberParser>>nextIntegerBase:
Integer class>>readFrom:base:
Integer class>>readFrom:
VoteCountingMRApplication>>map:
[ :el | self map: el ] in VoteCountingMRApplication(MapReduceApplication)>>applyMapTo: in Block: [ :el | self map: el ]
Array(SequenceableCollection)>>collect:
VoteCountingMRApplication(MapReduceApplication)>>applyMapTo:
2019-04-30T13:43:27.446262+02:00 CRITICAL FAILURE
HandleResult of applyMapTo:2019-04-30T13:43:27.460139+02:00 MAP FINISHED
2019-04-30T13:43:27.493406+02:00 HANDLING ERROR
2019-04-30T13:43:27.493644+02:00
NumberParser(Object)>>error:
NumberParser>>expected:
NumberParser>>nextUnsignedIntegerBase:
NumberParser>>nextIntegerBase:
Integer class>>readFrom:base:
Integer class>>readFrom:
VoteCountingMRApplication>>map:
[ :el | self map: el ] in VoteCountingMRApplication(MapReduceApplication)>>applyMapTo: in Block: [ :el | self map: el ]
Array(SequenceableCollection)>>collect:
VoteCountingMRApplication(MapReduceApplication)>>applyMapTo:
2019-04-30T13:43:27.493661+02:00 CRITICAL FAILURE
Appendix C. Log of the failing blockchain analysis
In the following we present the log of the blockchain analysis failing during the reduce because of the database initialization problem.
The presented log shows only the printed stack; the rest of the log was omitted. Please refer to Appendix B for an example of a complete log.
ERROR: duplicate key value violates unique constraint "blocks_hash_pkey"
[ self executeTask ] in TKTTaskExecution>>value in Block: [ self executeTask ]
[ activeProcess psValueAt: index put: anObject. aBlock value ] in TKTConfiguration(DynamicVariable)>>value:during: in Block: [ activeProcess psValueAt: index put: anObject....
BlockClosure>>ensure:
TKTConfiguration(DynamicVariable)>>value:during:
TKTConfiguration class(DynamicVariable class)>>value:during:
TKTConfiguration class>>optionAt:value:during:
TKTConfiguration class>>runner:during:
TKTTaskExecution>>value
[ self noteBusy. aTaskExecution value. self noteFree ] in TKTWorkerProcess(TKTAbstractExecutor)>>executeTask: in Block: [ self noteBusy....
BlockClosure>>on:do:
TKTWorkerProcess(TKTAbstractExecutor)>>executeTask:
TKTWorkerProcess>>executeTask:
[ self executeTask: taskQueue next ] in TKTWorkerProcess>>workerLoop in Block: [ self executeTask: taskQueue next ]
BlockClosure>>repeat
TKTWorkerProcess>>workerLoop
MessageSend>>value
MessageSend>>value
TKTProcess>>privateExecution
TKTProcess>>privateExecuteAndFinalizeProcess
|
{"Source-Url": "https://biblio.vub.ac.be/vubirfiles/75859669/paper.pdf", "len_cl100k_base": 14716, "olmocr-version": "0.1.50", "pdf-total-pages": 33, "total-fallback-pages": 0, "total-input-tokens": 220968, "total-output-tokens": 18687, "length": "2e13", "weborganizer": {"__label__adult": 0.0002696514129638672, "__label__art_design": 0.000244140625, "__label__crime_law": 0.00020623207092285156, "__label__education_jobs": 0.0004878044128417969, "__label__entertainment": 5.245208740234375e-05, "__label__fashion_beauty": 0.00011724233627319336, "__label__finance_business": 0.00017642974853515625, "__label__food_dining": 0.0002160072326660156, "__label__games": 0.0004422664642333984, "__label__hardware": 0.0007371902465820312, "__label__health": 0.0002944469451904297, "__label__history": 0.00019037723541259768, "__label__home_hobbies": 7.110834121704102e-05, "__label__industrial": 0.0002727508544921875, "__label__literature": 0.00017976760864257812, "__label__politics": 0.00019371509552001953, "__label__religion": 0.00033402442932128906, "__label__science_tech": 0.011505126953125, "__label__social_life": 6.937980651855469e-05, "__label__software": 0.006694793701171875, "__label__software_dev": 0.9765625, "__label__sports_fitness": 0.00019240379333496096, "__label__transportation": 0.00037026405334472656, "__label__travel": 0.00016045570373535156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 75564, 0.03599]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 75564, 0.24457]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 75564, 0.87552]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 2438, false], [2438, 5900, null], [5900, 8721, null], [8721, 11435, null], [11435, 14302, null], [14302, 17377, null], [17377, 19455, null], [19455, 22149, null], [22149, 24173, null], [24173, 27261, null], [27261, 28278, null], [28278, 31367, null], [31367, 33285, null], [33285, 35296, null], [35296, 38400, null], [38400, 40085, null], [40085, 42685, null], [42685, 44025, null], [44025, 46989, null], [46989, 50220, null], [50220, 52931, null], [52931, 55052, null], [55052, 57811, null], [57811, 61213, null], [61213, 63996, null], [63996, 65995, null], [65995, 68548, null], [68548, 70467, null], [70467, 71894, null], [71894, 73351, null], [73351, 74544, null], [74544, 75564, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 2438, true], [2438, 5900, null], [5900, 8721, null], [8721, 11435, null], [11435, 14302, null], [14302, 17377, null], [17377, 19455, null], [19455, 22149, null], [22149, 24173, null], [24173, 27261, null], [27261, 28278, null], [28278, 31367, null], [31367, 33285, null], [33285, 35296, null], [35296, 38400, null], [38400, 40085, null], [40085, 42685, null], [42685, 44025, null], [44025, 46989, null], [46989, 50220, null], [50220, 52931, null], [52931, 55052, null], [55052, 57811, null], [57811, 61213, null], [61213, 63996, null], [63996, 65995, null], [65995, 68548, null], [68548, 70467, null], [70467, 71894, null], [71894, 73351, null], [73351, 74544, null], [74544, 75564, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 75564, 
null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 75564, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 75564, null]], "pdf_page_numbers": [[0, 0, 1], [0, 2438, 2], [2438, 5900, 3], [5900, 8721, 4], [8721, 11435, 5], [11435, 14302, 6], [14302, 17377, 7], [17377, 19455, 8], [19455, 22149, 9], [22149, 24173, 10], [24173, 27261, 11], [27261, 28278, 12], [28278, 31367, 13], [31367, 33285, 14], [33285, 35296, 15], [35296, 38400, 16], [38400, 40085, 17], [40085, 42685, 18], [42685, 44025, 19], [44025, 46989, 20], [46989, 50220, 21], [50220, 52931, 22], [52931, 55052, 23], [55052, 57811, 24], [57811, 61213, 25], [61213, 63996, 26], [63996, 65995, 27], [65995, 68548, 28], [68548, 70467, 29], [70467, 71894, 30], [71894, 73351, 31], [73351, 74544, 32], [74544, 75564, 33]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 75564, 0.03185]]}
|
olmocr_science_pdfs
|
2024-11-27
|
2024-11-27
|
c1443dad03758163f4204830650a4aec202a0cef
|
Publishable Summary
1. The Challenge
Over the last years, business compliance, i.e., the conformance of business procedures with laws, regulations, standards, best practices, or similar requirements, has evolved from a prerogative of lawyers and consulting companies into a major concern in IT research and software development as well. Given the increasing IT support in everyday business as well as the repetitive and work-intensive nature of compliance controls and audits, this evolution can be seen as a natural extension of current enterprise software, especially in light of the novel technical opportunities offered by the Service-Oriented Architecture (SOA). Yet, until only a few years ago, compliance management was not perceived as a major concern in IT research.
2. Addressing the Challenge: The Project's Proposition
In this context, COMPAS was surely one of the forerunners and first international research efforts recognising both the need for IT support in compliance management and the spreading of the SOA in today’s business realities. COMPAS is a Specific Targeted Research Project (STREP) funded by the European Commission under the 7th Framework Programme. The project had a budget of 3.920.000 € and started in February 2008 with a duration of 36 months. COMPAS is a NESSI Project and targets standardization of some parts of its contributions.
Pragmatically, COMPAS did not aim at over-engineering the compliance problem, e.g., by allowing compliance experts to enforce compliance of individual messages flowing through a company's IT infrastructure, and instead focused on compliance awareness, that is, on the design for, monitoring of, and reporting on compliance. As such, it particularly follows the pace of business, not that of IT systems, a feature that turns it into a valuable instrument in the hands of those who have to deal with compliance on an everyday basis.
The COMPAS approach should not be seen as an ultimate solution to compliance management, in the sense that, like other similar research projects, it does not cover all compliance requirements imaginable. Yet, this is mostly due to the very nature of compliance, which is a multifaceted and interdisciplinary problem that cannot be approached via IT only and instead highly depends on the correct identification and interpretation of the laws and regulations that apply to a given business sector, as well as on the attitude a company has (or not) toward compliance. Nevertheless, COMPAS significantly advanced the state of the art in IT compliance management, identifying both which contributions IT can bring to compliance management and which capabilities, instead, are outside its reach.
The COMPAS project realized a practical, yet general enough, modelling approach for specifying service-oriented architectures with compliance concerns. In particular, business processes can be designed and compliance controls can be associated with processes and process elements. For this, we apply a model-driven engineering approach and use annotation techniques for relating system and requirement models at design time. To the best of our knowledge, the COMPAS approach is the first approach that makes this link at design time and, supported through a model-aware service environment (MORSE), utilizes such relations at runtime for compliance monitoring.
3. Who Can Benefit from COMPAS
The results from the COMPAS project are relevant to a broad audience, such as large and medium-sized companies, who want
- to specify and document compliance requirements originating from laws, regulations, or policies;
- to link IT – in particular business processes and services – to compliance requirements originating from laws, regulations, or policies;
- to establish and realize compliance management for their IT-based business solutions and services.
4. Highlights of Achievements
This results in the following advantages compared to other modelling and/or monitoring approaches, while combining their strengths: First, the various stakeholders (e.g., compliance expert, business administrator, IT expert, etc.) can participate in the development process: they are supported through (1) the adoption of the separation-of-concerns principle and (2) suitable domain-specific languages with defined levels of abstraction. Second, the information in these models can be used for an automated generation of compliance documentation, and a generator can consider the information for building compliant systems or detecting static compliance violations. Third, the monitoring can take place at a higher level of abstraction (i.e., the level of the process and compliance models). This eases the (root cause) analysis and reporting of dynamic compliance violations (i.e., compliance violations that occur during execution) and helps stakeholders to easily relate them to the respective modelling artefacts.
All of the components of the COMPAS architecture have been implemented and tested, and we have conducted extensive use-case evaluations to demonstrate the suitability and feasibility of the COMPAS approach. We see the potential of our contributions to impact the field of monitoring in general as well as the field of adaptation. Finally, we expect more work to be conducted by expanding model-driven engineering across the boundaries of generation, i.e., by embedding and working with traceability information, dynamic model look-up, and model-based reflection and monitoring.
This document summarizes the achievements of the individual project partners throughout the course of the project.
5. The Results
The COMPAS project designed and implemented novel models, languages, and an architectural framework to ensure the compliance of services with design rules and regulations. In the COMPAS approach, model-driven techniques, domain-specific languages, and a service-oriented infrastructure were applied to enable organizations to develop business compliance solutions easier and faster. Compliance refers to the entirety of all measures that need to be taken in order to adhere to laws, regulations, guidelines, and internal policies.
The resulting “design-for-compliance” architecture framework ensures compliant composition of business processes and services, and allows the specification, validation, and enforcement of comprehensive compliance policies related to these processes and services. The framework provides the possibility to enhance business process languages, such as (but not limited to) the Business Process Execution Language (BPEL), with enforceable compliance concepts and policies. Additionally, the necessary specification languages and models for expressing typical compliance concerns were developed.
A formally grounded and implemented behaviour model for services and service composition was provided, enabling the formal validation of the compliance of composed services with the behaviour and process constraint specifications. Consequently, compliance concerns can be checked statically as well as dynamically. Finally, monitoring and management tools were developed for tracking and validating those compliance concerns that can only be verified at runtime. These tools were complemented with reasoning and mining tooling that helps to discover compliant instances of services and processes.
The COMPAS project was scheduled into five milestones:
Milestone 1: Definition of Case Studies for Business Compliance [M1-6]
Milestone 2: Initial Meta-models and Languages for Business Compliance [M7-11]
Milestone 3: Initial MDSD Software framework for Business Compliance [M12-23]
Milestone 4: Initial Compliance Governance Concepts and Software framework [M12-23]
Milestone 5: Integrated Compliance Software framework and Runtime Infrastructure [M24-35]
The following table presents a summary of the progress of the project with regard to the milestones. Specifically, it lists the results achieved for the different milestones and any relevant prototypes delivered. Detailed progress results are provided in later sections.
| Milestone | Results Achieved | Prototypes Delivered |
|---|---|---|
| Milestone 1: Definition of Case Studies for Business Compliance [M1-6] | Report on industry experience, state-of-the-art reports; case studies for research evaluation | |
| Milestone 2: Initial Meta-models and Languages for Business Compliance [M7-11] | Overall conceptual and concrete architecture perspectives for the COMPAS project; initial specification of compliance language constructs and operators (see [D2.2]); a formal model and validation framework for describing, reasoning on, and automated analysis of business processes (see [D3.1]); a goal-oriented data model for warehousing process execution and compliance data (see [D5.2]) | |
| Milestone 3: Initial MDSD Software framework for Business Compliance [M12-23] | Initial standalone MDSD prototypes and documentation; a video demonstrating the process steps in the ICT security case study from THALES | The MDSD software framework (View-based Modeling Framework, see [D1.3]); the Compliance Request Language Tool (CRLT) (see [D2.6]); a library and user interface (Eclipse plugin) for verification of service descriptions (see [D3.3]); a collection of extensions to the Business Process Execution Language (BPEL) (see [D4.2]) |
| Milestone 4: Initial Compliance Governance Concepts and Software framework [M12-23] | Initial standalone compliance governance prototypes; a video demonstrating the process steps in the ICT security case study from THALES | The MDSD software framework (View-based Modeling Framework, see [D1.3]); infrastructure for supporting reusable SOA units (e.g. process fragments) and for generation and execution of compliant processes (see [D4.4]); a compliance governance dashboard for visualization of compliance state (see [D5.5]) |
| Milestone 5: Integrated Compliance Software framework and Runtime Infrastructure [M24-35] | Developers' Integration Meetings; initial demonstration of standalone prototypes (components) for the software framework and runtime infrastructure | |
The tasks from these milestones are subdivided into a number of work packages that comprise areas of expertise from the nine (9) partners that make up the project consortium. The consortium includes academic and industry partners who complement each other’s skills through carrying out academic research and providing industry experience in the form of case studies. The industry partners provided the experience that guided the exploitation of the research products.
The COMPAS project has a significant positive impact on different areas in service-oriented computing, from industry solutions to addressing new open research issues on how services are developed, composed, and maintained. One of the main achievements is the development of a comprehensive SOA business compliance software framework that enables a business to express various compliance concerns using one and the same software framework. The major impact of the COMPAS project spans the following areas:
- End-to-end business compliance software framework
- Reducing the development complexity
- Business process specification and better reuse of existing services
- Verification and validation of services
- Contributions to open standards
The project also has an impact in terms of inputs to standards and reference architectures, and to open source platforms and frameworks. Scientific collaborations inspired by the project have produced numerous scientific publications in various international conferences and journals, whilst industry involvement led to sharing valuable experience and knowledge from the industry partners.
Ultimately, the project also yields innovations in the important area of business compliance, which have an impact on everyday life: consider the banking crisis that started in 2007 as an example. A lot of regulatory compliance issues have come into question and been revised as a result of this crisis. Organizations need a systematic way to implement and adapt their systems in such a dynamic environment. While the impact on the availability to and use by citizens of new products and services is currently rated as low, it is expected to increase once mature products have been built based on COMPAS technology and concepts.
6. The Pilots
6.1. Overall COMPAS Architecture
The figure below shows a high-level view of the architecture that has been implemented in the course of the COMPAS project. Each component and its integration with other components has been described in project deliverables. A short description of the components is available online at the public COMPAS Web site at http://www.compas-ict.eu/components.
6.2. COMPAS Conceptual Model
The figure below shows the conceptual model of the concepts which have been developed in the course of the project. The definition of the terms used in this conceptual model has been made available online at the public COMPAS Web site at http://www.compas-ict.eu/terminology in order to make the project and its results more easily accessible for the public.
6.3. Objectives and Achievements
6.3.1. Modelling of compliance concepts (meta-models and languages) at design time.
Due to the model-driven approach, an accurate modelling of compliance concepts has been essential for the project and for the work of all partners from the beginning. After iterative design steps, the consortium partners agreed on the COMPAS conceptual model in a dedicated meeting. At this meeting, the model was designed in cooperation with PWC as experts in the field of compliance.
An important aspect when designing the COMPAS conceptual model was compliance traceability, asking the question: how do compliance requirements relate to compliance sources such as laws or regulations? The model therefore supports such pre-requirements specification traceability. Another equally important aspect was generality: as a consequence, the compliance annotation of service-oriented architecture (SOA) elements was kept generic.
6.3.2. Using model-driven domain-specific languages to support the stakeholders.
Various stakeholders are involved in the design process of compliant business processes, ranging from technical to business experts. We support these stakeholders with tailored domain-specific languages (DSLs) following a model-driven approach. As a consequence, business experts do not have to specify any technological artefacts in order to comply with the relevant compliance concerns. Based on the DSL specifications, executable code can be generated automatically. Technical and business experts can thus collaborate better in securing the compliance concerns at design time and runtime. In COMPAS, we have developed the Quality of Service Language (QuaLa) for specifying the services’ QoS compliance concerns.
6.3.3. Model-driven approach for the generation of business processes with compliance concerns at generation time.
The automatic, model-driven generation of business processes with compliance concerns from conceptual models has been achieved through the View-based Modeling Framework (VbMF). For this, BPEL and WSDL code is generated and model-traceability information is supplied in the form of a traceability matrix.
6.3.4. Supporting model-based reflection for the compliance monitoring at runtime.
Business administrators, compliance experts and other stakeholders specify various concerns (e.g., the control flow of a business process, compliance sources for a compliance requirement, etc.) at design time. Yet, during execution the runtime needs to relate to these concepts. In a distributed and evolving environment we addressed this issue by making models and model elements uniquely identifiable and retrievable. For this we made use of Universal Unique Identifiers (UUIDs) as described by the International Telecommunication Union (ISO/IEC 9834-8, 2004), and provided a Model-Aware Service Environment (MORSE) that realizes transparent UUID-based model-versioning.
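As a minimal sketch of this UUID-based identification idea, the following Java fragment shows how a generated artefact could carry a model-element UUID and how a runtime component could resolve it against a repository. The `ModelRepository` interface and its methods are hypothetical stand-ins for illustration only and do not reflect the actual MORSE API.

```java
import java.util.UUID;

// Hypothetical stand-in for the model repository interface; the real
// MORSE API may differ. It resolves a UUID to a (versioned) model element.
interface ModelRepository {
    String fetchModelElement(UUID id);          // latest revision of the element
    String fetchModelElement(UUID id, int rev); // a specific revision of the element
}

public class TraceableActivity {
    // Each model element is assigned a UUID at generation time, so that
    // runtime events can refer back to the design-time model.
    private final UUID modelElementId = UUID.randomUUID();

    public UUID getModelElementId() {
        return modelElementId;
    }

    // At runtime, a monitoring component can use the UUID carried in an
    // execution event to retrieve the model element it originated from.
    public static String resolve(ModelRepository repo, UUID idFromEvent) {
        return repo.fetchModelElement(idFromEvent);
    }
}
```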
6.3.5. Establish monitoring and management of business events in order to proactively identify problems and/or opportunities associated with a given request.
Our aim in COMPAS and specifically in WP5 was to extend current data warehousing and reporting technology toward event-based business process warehousing and analysis, which supports the offline monitoring and management of the performance and compliance of executed business process instances. This goal has been achieved by means of the following ingredients that have been conceived and developed throughout the project: an event log for runtime business events, a business event-centric data warehouse, a set of ETL (Extract-Transform-Load) procedures that are able to feed the data warehouse, a graphical reporting dashboard for inspection of the compliance state, and a root cause analysis tool. We call these components collectively compliance governance infrastructure.
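To make the role of the ETL procedures more concrete, the following simplified Java sketch groups raw runtime events by process instance and derives one warehouse "fact" per instance. The event fields, the toy compliance rule, and the in-memory processing are illustrative assumptions, not the actual COMPAS ETL implementation or schema.

```java
import java.util.*;

// Deliberately simplified stand-in for an ETL step feeding the compliance
// data warehouse: group raw runtime events by process instance and derive
// one fact row per instance.
public class EventEtlSketch {

    record RuntimeEvent(String processInstanceId, String type, long timestamp) {}
    record ProcessInstanceFact(String processInstanceId, long startTime, long endTime, boolean compliant) {}

    static List<ProcessInstanceFact> transform(List<RuntimeEvent> eventLog) {
        Map<String, List<RuntimeEvent>> byInstance = new HashMap<>();
        for (RuntimeEvent e : eventLog) {
            byInstance.computeIfAbsent(e.processInstanceId(), k -> new ArrayList<>()).add(e);
        }
        List<ProcessInstanceFact> facts = new ArrayList<>();
        for (var entry : byInstance.entrySet()) {
            List<RuntimeEvent> events = entry.getValue();
            long start = events.stream().mapToLong(RuntimeEvent::timestamp).min().orElse(0);
            long end = events.stream().mapToLong(RuntimeEvent::timestamp).max().orElse(0);
            // Toy compliance rule: an instance counts as compliant if no
            // "COMPLIANCE_VIOLATION" event was recorded for it.
            boolean compliant = events.stream().noneMatch(e -> e.type().equals("COMPLIANCE_VIOLATION"));
            facts.add(new ProcessInstanceFact(entry.getKey(), start, end, compliant));
        }
        return facts;
    }
}
```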
6.3.6. Provide specific support for monitoring and management of compliance.
Compliance has been taken into account in four different ways: by jointly agreeing on a set of events from which it is possible to assess whether a process instance has been executed compliantly or not, by storing the respective compliance requirements in the data warehouse, by equipping the dashboard with compliance-specific navigation paths (from coarse requirements to low-level events), and by implementing a decision tree mining algorithm that identifies correlations between business data produced during process execution and compliance evaluations.
6.3.7. Provide offline governance of compliance through mining and analysing logs.
The analysis of non-compliant situations has been addressed by two root cause analysis techniques: decision tree mining and business protocol mining. The decision tree approach is able to identify whether there are dependencies between the data exchanged during the execution of a process instance and the final compliance assessment of the process. Once a dependency is identified, it can be used for two purposes. First, the dependency may allow the process analyst to trace non-compliant situations back to their root cause. Second, the decision tree can be used to predict likely compliance assessments already during the on-going execution of a process. The protocol mining approach complements this technique. It aims at reconstructing a so-called protocol (the logic of the exchanged messages in a process) from the event log. As such, it allows the process analyst to check whether a deployed process is really being executed as expected by its design.
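To convey the intuition behind the decision-tree technique, here is a deliberately minimal Java sketch that finds the single boolean business attribute whose value best separates compliant from non-compliant instances. The real approach builds full decision trees; the data model and scoring below are purely illustrative assumptions.

```java
import java.util.*;

// One-level decision "stump": find the boolean business attribute whose
// split best separates compliant from non-compliant process instances.
public class ComplianceStump {

    record Instance(Map<String, Boolean> attributes, boolean compliant) {}

    static String bestSplittingAttribute(List<Instance> instances, Set<String> attributeNames) {
        String best = null;
        double bestScore = -1.0;
        for (String attr : attributeNames) {
            long trueTotal = instances.stream()
                    .filter(i -> i.attributes().getOrDefault(attr, false)).count();
            long trueCompliant = instances.stream()
                    .filter(i -> i.attributes().getOrDefault(attr, false) && i.compliant()).count();
            long falseTotal = instances.size() - trueTotal;
            long falseCompliant = instances.stream()
                    .filter(i -> !i.attributes().getOrDefault(attr, false) && i.compliant()).count();
            if (trueTotal == 0 || falseTotal == 0) continue;
            // Score: difference in compliance rate between the two branches.
            double score = Math.abs((double) trueCompliant / trueTotal
                                  - (double) falseCompliant / falseTotal);
            if (score > bestScore) {
                bestScore = score;
                best = attr;
            }
        }
        return best; // the attribute most correlated with (non-)compliance
    }
}
```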
6.3.8. Provide tool support and integration in the COMPAS architecture.
The integration of the compliance governance infrastructure with the overall COMPAS architecture occurs via two main channels: via the event log and via the MORSE repository. The event log collects all runtime data that is necessary to assess the compliance of executed process instances. The MORSE repository contains all the process models and compliance requirements that are necessary to interpret collected events and to support the compliance-centric navigation through the data in the data warehouse. The ETL procedures perform this final interpretation and preparation of the data.
6.3.9. **Concept of reusable process artefacts to assure compliance of business processes and service compositions.**
The concept of process fragments to ease the task of compliant process design by reusable building blocks has been developed in the COMPAS project. We applied the concept of process fragments to implement the compliance requirements related to process activities and control flow. To ensure a correct integration in the process, we included concepts on compliance rule formalization developed by University of Tilburg and the approach on compliance verification by CWI Amsterdam. Using this combination of techniques, compliant business process design is achieved: Compliance rules can be captured using fragments and these fragments can be included in a process without breaking the compliance modelled by the fragments.
6.3.10. **Language and runtime support for reusable process artefacts.**
We developed language extensions to BPEL in order to support the specification of process fragments for compliance. Process fragments have been implemented using these extensions and have been successfully integrated into the process models of the use cases. To achieve the greatest possible impact, we prepared a standardization proposal for these extensions. Process fragments for compliance are integrated into the process model during design time, i.e. before its execution. After integration, a standard process model without extensions for compliance fragments can be generated. Therefore, this concept supports reusable process artefacts, but it does not require an extension or modification of the process engine, as standard process models are executed.
6.3.11. **Tool support and integration in the COMPAS architecture.**
Besides tools that provide the “glue” for integration in the COMPAS architecture, we mainly developed three components to support the concepts of process fragments and compliant business process execution. (i) For design-time support of process fragments we developed the fragment-oriented repository *Fragmento*. This repository provides advanced functions for the management of process fragments for compliance, e.g. a process stored in Fragmento can be annotated with security policies or with process fragments that constrain its behaviour. (ii) To support traceability during execution, we developed an extension of the eventing functionality of the open-source process engine Apache ODE. The need to address traceability has been identified as crucial in an early stage of COMPAS. Traceability denotes the property of being able to trace a requirement throughout the process lifecycle. It forms the bridge between compliance requirements from design time and compliance violations from runtime and thus enables drill-down of violations to their origin. At runtime, this traceability information is emitted in execution events by the extended process engine. These events contain Universally Unique Identifiers (UUIDs) of the process model, process instance, process activities, an event type, and optional further properties. (iii) To support monitoring the execution of a process instance based on a process graph we developed the Web-based monitoring tool *Business Process Illustrator (BPI)*. This tool allows for following the execution of a process, while abstracting from details of little importance for understanding, e.g. hiding of fault handlers or assign activities. Furthermore, we can use this monitor to highlight those fragments in a process that are related to compliance.
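The structure of such execution events can be illustrated with a small Java sketch. The class, event types, and field names below are assumptions derived from the description above (UUIDs of process model, instance and activity, an event type, and optional properties), not the actual event format emitted by the extended Apache ODE engine.

```java
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of an execution event carrying model UUIDs so that a
// runtime violation can be traced back to its design-time model elements.
public class ExecutionEvent {

    public enum Type { ACTIVITY_READY, ACTIVITY_EXECUTED, ACTIVITY_FAULTED, PROCESS_COMPLETED }

    private final UUID processModelId;
    private final UUID processInstanceId;
    private final UUID activityId;
    private final Type type;
    private final Map<String, String> properties; // optional further properties

    public ExecutionEvent(UUID processModelId, UUID processInstanceId, UUID activityId,
                          Type type, Map<String, String> properties) {
        this.processModelId = processModelId;
        this.processInstanceId = processInstanceId;
        this.activityId = activityId;
        this.type = type;
        this.properties = properties;
    }

    public UUID getProcessModelId() { return processModelId; }
    public UUID getProcessInstanceId() { return processInstanceId; }
    public UUID getActivityId() { return activityId; }
    public Type getType() { return type; }
    public Map<String, String> getProperties() { return properties; }
}
```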
In summary, the concepts and tools we have developed meet our expectations. We have implemented our approach in COMPAS infrastructure components. The presented concepts cover the compliance requirements related to the internals of a process. Our work on case studies revealed that this is a subset of the compliance requirements. Thus, other actions have to be taken in addition to provide a holistic compliance management.
6.3.12. Designing concepts for expressive languages based on the DSL and specification language concepts.
A major issue to enable the effective management and enforcement of compliance requirements is to decouple compliance specification from business process specification. Compliance requirements should be organized and represented at various levels of abstraction to accommodate different stakeholders’ needs. Decoupling involves the specification and management of compliance requirements and all relevant concepts (e.g. risks, controls, compliance regulations and directives, etc.) as a separate entity – starting from abstract requirements to concrete and organization-specific rules – and requires them to be linked to the relevant business processes/fragments to enable their traceability. For this purpose, a conceptual model has been developed, where compliance requirements and all related concepts can be organized, stored and maintained, enabling their usability and traceability.
6.3.13. Developing an expressive language for compliance concerns.
The main objective of WP2 is to provide an expressive language for compliance requirements; we name it the “Compliance Request Language (CRL)”. Compliance requirements should be based on the formal foundation of a logical language to pave the way for automatic reasoning and analysis techniques that assist in verifying and ensuring design-time business process compliance. In this respect, we make use of process verification tools against formal compliance rules. We have analysed a wide range of compliance legislations and frameworks including Basel II, Sarbanes-Oxley, IFRS, FINRA, COSO, and COBIT, and examined a variety of relevant works on the specification of compliance requirements. Our analysis identified a set of features that CRL should possess, such as expressiveness, usability, non-monotonicity, intelligible feedback, etc. Based on these findings, we conducted a comparative analysis of a set of formal languages that are candidates to serve as the formal foundation of CRL. The comparative analysis favoured temporal logic, mainly because of its maturity and the availability of sophisticated verification tools that have proven successful in the verification of various large-scale systems. In particular, we have adapted Linear Temporal Logic (LTL).
We have introduced the meta-model and the grammar of the Compliance Request Language (CRL), which is grounded on LTL and property specification patterns (cf. Dwyer et al. 1998, Property Specification Patterns for Finite-State Verification), which are high-level abstractions of frequently used temporal logic formulas. Patterns are intended to address one of the major problems in the use of formal languages: their usability. In addition to the original patterns, we have also identified and introduced a set of compliance patterns to capture recurring requirements in the compliance context. CRL enables the user to build pattern-based representations of compliance requirements and, based on the mapping rules from patterns to LTL, formal compliance rules are automatically generated using the tools we developed for this purpose.
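As an illustration of this pattern-to-LTL mapping, the well-known Response pattern (with global scope) from Dwyer et al. maps to LTL as follows; the business reading of the atomic propositions is our own illustrative example, not taken from the CRL documentation:

\[
\mathit{Response}(p, q) \;\equiv\; \mathbf{G}\,\big(p \rightarrow \mathbf{F}\, q\big)
\]

For instance, with \(p\) = “a loan request is approved” and \(q\) = “an audit record is created”, the formula requires that every approval is eventually followed by an audit record.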
Based on these foundations, we also proposed an approach to identify root causes of compliance violations during design time, to provide remedies as guidelines/suggestions that can help the business and/or compliance experts to resolve compliance deviations. Identifying the root causes of violations and providing the experts with appropriate guidelines to resolve non-compliance is an important issue that should be considered and integrated in a comprehensive compliance management solution.
6.3.14. Design and implement tools and a supporting infrastructure that helps users to use the expressive languages for compliance concerns.
We developed the integrated environment – Compliance Request Language Tools (CRLT) – together with the Compliance Requirements Repository (CRR), where the data maintained by the CRLT resides. We also described how the CRLT is integrated with the COMPAS architecture. The CRLT (http://criss.uvt.nl/compas) integrates with the Model Repository and the Process Verification Tools and comprises components that allow the definition and management of compliance requirements; the handling of interactive, user-specified compliance requests in a compliance language (design-time verification of the compliance targets); and the design of visual representations of compliance requirements using patterns for the automated generation of formal compliance rules.
6.3.15. A monitoring framework based on message abstraction.
This abstraction is called a business protocol. We provide an extension of XPath to accommodate verification issues. The resulting language (called BPath) is also a query language that can be used to track and provide visibility into business process execution. First, a BPEL business process specification is transformed into a business protocol. Then, monitoring properties and queries are formulated in the BPath monitoring language over the business protocol. At runtime, all incoming or outgoing messages are captured by the business protocol monitor component before reaching their original destination. The process engine and the monitoring framework publish the execution and monitoring events, respectively, which are stored in the execution log. The execution log is of two types: a states log, generated by the business protocol monitor, and an events log, generated by the process engine. We have implemented this approach and shown its relevance on several scenarios.
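BPath's own syntax is not reproduced here; as a hedged illustration of the kind of query it extends, the following Java fragment evaluates a plain XPath expression over a hypothetical XML events log. The log structure and element names are assumptions for illustration only.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class ExecutionLogQuery {

    public static void main(String[] args) throws XPathExpressionException {
        // Hypothetical excerpt of an events log; element and attribute names
        // are illustrative, not the actual COMPAS log schema.
        String eventsLog =
                "<log>" +
                "  <event instance='i-1' type='ACTIVITY_EXECUTED'/>" +
                "  <event instance='i-1' type='COMPLIANCE_VIOLATION'/>" +
                "  <event instance='i-2' type='PROCESS_COMPLETED'/>" +
                "</log>";

        XPath xpath = XPathFactory.newInstance().newXPath();

        // Count how many compliance violations were recorded in the log.
        String violations = xpath.evaluate(
                "count(//event[@type='COMPLIANCE_VIOLATION'])",
                new InputSource(new StringReader(eventsLog)));

        System.out.println("Violations recorded: " + violations);
    }
}
```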
6.3.16. Automatic extraction of communication protocols.
Model extraction and mining could also help to discover the behaviour of a running model implementation using its interaction and activity traces. Process monitoring handles the tracking of individual processes in order to extract activity and execution information. Process mining, sometimes named offline or post-mortem monitoring, is used to analyse the event logs instead of the runtime process instances. The result of the analysis is then compared to the existing system models, and can either result in model updates or – where suggested by assessment – result in some corrective actions to overcome such discrepancies. We investigated extraction approaches by resorting to linear algebra. The proposed methodology allows us to extract the business protocol while merging the classic process mining stages. On the other hand, our protocol representation based on time series of flow density variations makes it possible to recover the temporal order of execution of events and messages in the process. In addition, we proposed the concept of proper timeouts to refer to timed transitions, and provide a method for extracting them despite their property of being invisible in logs. The approaches have been implemented in the form of prototype tools, and experimentally validated on scalable datasets.
Modelling Web services is a major step towards their automated analysis. One of the important parameters in this modelling, for the majority of Web services, is time. A Web service can be represented by its behaviour, which can be described by a business protocol representing the possible sequences of message exchanges. Automated analysis of timed Web services, such as compatibility and replaceability checking, is very difficult and in some cases not possible in the presence of implicit transitions (internal transitions) based on time constraints. The semantics of the implicit transitions is the source of this difficulty, because most well-known modelling tools do not express this semantics (e.g., the epsilon transition of timed automata has a different semantics). We investigated an approach for converting any protocol containing implicit transitions into an equivalent one without implicit transitions before performing analysis. The proposed approach was implemented.
6.3.18. Graphical environment for service description.
One of the COMPAS challenges concerned models and tools for specifying and verifying compliance requirements. In our approach, we employed the Reo coordination language to graphically specify service composition glue code and coordinate message exchanges among the individual services involved in a process. As our work on compliance source analysis showed, compliance requirements influence various classes of system properties, i.e., control flow temporal constraints, data-centric requirements, time-related properties, probabilistic properties, etc.
Therefore, we extended our graphical environment with annotation tools that allow designers to enrich a process specification with the necessary information, e.g., define service input/output messages, specify data-dependent branching conditions and functional transformations on the process dataflow, and indicate channel delays and task timeouts. We extended the initial set of Reo channels supported by our tools with primitives necessary for data manipulation, the possibility to create user-defined channels, and the ability to build hierarchical workflow models. Depending on which set of channels is used for process modelling, its semantics in the form of extended constraint automata is obtained automatically, while the modelling environment remains the same regardless of what kind of property we target at verification time.
6.3.19. Formalisation of business process models.
Since service developers may use various notations for process specification, we developed tools for automated conversion of several workflow modelling languages to their formal representation in Reo, which precisely describes control and dataflow in a business process and, thus, disambiguates the initially informal workflow specifications. Given this translation, multiple verification tools, both developed within the scope of the COMPAS project and external tools, can be used for automatic analysis of various classes of formalised compliance requirements.
6.3.20. Formal specification and automated verification of compliance requirements.
At the very low level of abstraction, compliance requirements are represented by logic properties that should hold for a certain system specification. Theoretical computer science offers many well-established formalisms for specifying system properties. We chose the mu-calculus as the most expressive logic formalism, subsuming many of the logics used in system verification, including LTL and CTL. To enable model checking of formalised dataflow specifications, we implemented a tool for generating mCRL2 code for a given graphical process model. This allows us to apply the whole range of available state-of-the-art model checking, simulation, visualization and optimization tools and verify the validity of compliance requirements expressed in the form of mu-calculus formulae.
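As an example of the expressiveness relation mentioned above, the LTL safety property "globally p" corresponds to a greatest fixed point in the modal mu-calculus (as used over labelled transition systems); the proposition \(p\) is an illustrative placeholder:

\[
\mathbf{G}\, p \;\;\text{corresponds to}\;\; \nu X.\, \big(p \wedge [\mathit{true}]\, X\big)
\]

that is, \(p\) holds in the current state and, after every action, the property holds again.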
6.3.21. Integration of process verification tools with COMPAS architecture.
For the integration with the overall COMPAS architecture, we developed a set of services for exchanging information about system properties and the results of process verification with template-based property specification tools and service repositories where the process models are
stored for reuse, adaptation and code generation. These tools help developers to connect compliance source documents with actual requirements represented in a form of logic formulas to be verified using simulation and model checking techniques.
6.3.22. Develop thought leadership on compliance issues around SOA.
As mentioned by Stuttgart University, it was essential to define an adequate conceptual model of compliance in a service-oriented architecture (SOA) because of the model-driven approach. We helped our consortium partners to define this model by providing input from the compliance perspective, whereas they provided input from the technical (SOA) point of view. By combining these two aspects we gained new insights on how to address compliance issues in a SOA. We used our new insights from the COMPAS project to organize round table sessions with some of our PwC relations, and we are incorporating the knowledge gained into articles.
6.3.23. Provide the industrial partners with practical information that can be used to create, perform and evaluate the case studies.
As compliance experts we were involved as an industrial partner. By performing extensive iterative reviews on the case studies from a compliance point of view we were able, together with Telcordia and Thales, to come up with two realistic use cases with sufficient compliance challenges that would be addressed by the COMPAS prototype tooling.
In addition, we also provided (review) input on the COMPAS prototype tooling where interaction with a business user (e.g. a Compliance Officer or Business Process Manager) was needed. We did this mainly for the tooling developed by Tilburg University (Compliance Request Language Tool) and Trento University (Compliance Governance Dashboard).
6.3.24. Use the COMPAS results to help SOA enabled organizations with compliance.
When the COMPAS project is finalised, we are interested in how the results of the project (e.g. tooling) can be used for commercial exploitation and what services can be offered to clients in this area. For example, how can COMPAS tooling help an auditor perform his work at an organisation with a SOA environment, and what services can we offer to our clients that have a SOA and want to address various compliance requirements and issues?
7. Availability of Results
Project results are available from the website http://compas-ict.eu. The following prototypes have been developed or used by the project:
Business Process Illustrator (BPI) is a Web-based tool for monitoring the execution of business processes. It allows the user to view a graph of a process model enriched with status information of a process instance. The process graph is refreshed regularly. Additionally, the user can adapt the graph by highlighting or omitting activities. The source code, binaries, and installation manual are available for download: http://sourceforge.net/projects/bpi/.
Compliance Governance Dashboards (CGD) aims at reporting on compliance, creating an awareness of possible problems or violations, and facilitating the identification of root-causes for noncompliant situations. For that, CGD concentrates on the most important information at a glance, condensed into just one page. For more information on CGD please visit the CGD Web site: http://compas.disi.unitn.it/CGD/home.html.
Compliance Request Language Tools (CRLT) serves two main purposes. First, it offers the interface for the Compliance Requirements Repository to define, store and maintain compliance requirements at various abstraction levels together with related aspects such as compliance risks, sources, controls and rules. Second, it enables compliance and business experts to formulate compliance requests at design time for checking end-to-end business processes and process fragments against formalized regulatory compliance requirements. For more information on CRLT please visit the CRLT Web site: http://eriss.uvt.nl/compas/.
Eclipse Coordination Tools
Eclipse Coordination Tools (ECT) is a framework for verifiable design of component- and service-based software using the coordination language Reo. Reo presents a paradigm for the composition of distributed software components and services based on the notion of mobile channels. Software application designers can use Reo as a “glue code” language for the compositional construction of connectors that orchestrate the cooperative behaviour of components or services. The ECT framework consists of a set of integrated tools that are implemented as plug-ins for the Eclipse platform. ECT provides functionality for converting high-level modelling languages such as UML, BPMN and BPEL to Reo, for editing and animation of Reo models, synthesis of automata-based semantic models from Reo, annotation of Reo and automata with QoS constraints, and verification of these models using dedicated model checking tools. ECT is an open source project. For more details and information on how to participate in the development, please refer to the Reo Web site: http://reo.project.cwi.nl/.
Fragmento
Fragmento is a Fragment-oriented Repository that is dedicated to the management of process-related artefacts, such as BPEL processes, WSDL documents, deployment descriptors, and especially process fragments. Fragmento provides particular functionality in addition to the basic repository functionalities for handling process artefacts (persistence, storage, search, retrieval, version management). Fragmento provides XML schema validation and an extensibility mechanism for the integration of additional validation functions. Furthermore, Fragmento provides an extensibility mechanism for custom query functions. This allows the implementation of search functions beyond the metadata of a process artefact (e.g., concerning the structure of a process fragment). Fragmento also provides mechanisms for the definition of bundles, which allows packaging all artefacts related to a process (or fragment) into one package. Fragmento was released as open source in 2010. For more information on Fragmento please visit the project Web site: http://www.iaas.uni-stuttgart.de/forschung/projects/fragmento/start.htm.
The Model-Aware Repository and Service Environment (MORSE) is a service-based environment for the storage and retrieval of models and model-instances at both design- and runtime. Models and model-elements are identified by Universally Unique Identifiers (UUID) and stored and managed in the MORSE repository. The MORSE repository provides versioning capabilities so that models can be manipulated at runtime and new and old versions of the models can be maintained in parallel. For more information on MORSE please visit the MORSE Web site: http://www.infosys.tuwien.ac.at/prototype/morse.
The Pluggable Framework for Apache ODE extends the Apache ODE BPEL engine to support a generic eventing framework. The eventing framework consists of generic events and an architecture for handling the events. The events are tailored towards BPEL, but independent of the concrete engine used. That means the BPEL engine can be exchanged for another BPEL engine while the events remain the same; the code dealing with the events does not need to be changed. This is the basis for an engine-independent BPEL monitoring infrastructure. For more information on ODE-PGF please visit the project Web site at: http://www.iaas.uni-stuttgart.de/forschung/projects/ODE-PGF/.
The View-based Modeling Framework (VbMF) provides a flexible, extensible methodology and tooling for modelling, developing, and maintaining business processes based on the notion of view models – a realization of the separation of concerns principle – and the model-driven development paradigm – a realization of the separation of abstraction levels. The core concepts of the framework are extended or refined to represent and integrate business compliance concerns. Finally, process implementation, deployment configurations, runtime monitoring directives, and so on can be automatically generated from the view models. For more information on the View-based Modeling Framework please visit the VbMF Web site: http://www.infosys.tuwien.ac.at/staff/htran/#software.
8. Potential Impact of the Results
The impacts of the project can be categorized as follows: scientific impact in respective research communities and industrial impact.
8.1. Scientific Impact
The COMPAS project was one of the first international research efforts to focus on compliance management in service-oriented architectures. Thus, the scientific publications produced within the scope of the project are very likely to have a high impact on the community and on future research regarding a holistic approach to compliance management in an IT context.
The COMPAS project affects more than the topic of compliance management: in order to accomplish the major goal of a compliance IT framework, various fundamental research topics such as model and DSL engineering, business process management, runtime monitoring, model checking and verification, and data mining had to be studied. Numerous results within these fields have already been published in scientific articles and papers.
8.2. Industrial Impact
The COMPAS project was a research project that conducted fundamental research on compliance management in service-oriented architectures and as such presented a first endeavour to study and address this problem. With this in mind, an immediate industrial application would not be expected from such a project. Yet, the results and prototypes developed within the scope of the project promise early application in an industrial context: the modelling approach, which takes place at various levels of abstraction, contributes a conceptual solution for specifying and documenting compliance and relating these concerns to IT. Similarly, the monitoring infrastructure provides a sufficiently general IT architecture for the runtime system. In short, the results of the COMPAS project clearly describe approaches for realizing compliance management – particularly in an industrial context.
As compliance management increasingly pervades business processes – not only at large but also at small and medium-sized companies – it is expected that solutions will have to be realized and applied for compliance management in an industrial context. The results from the COMPAS project worked directly towards addressing this need and are thus expected to gain momentum in impact.
9. Lessons Learned During the Project
One of the lessons learned throughout the project is that reporting on compliance is not as easy as it might seem at first. The process-centric approach of COMPAS requires combining the concerns of two different stakeholders, i.e., process analysts/owners and compliance experts, in the same graphical user interface (GUI). We could also have simply opted for two different, independent views for the two roles, but the discussions throughout the whole project have shown that compliance is a crosscutting concern that most of the time requires strong cooperation between process analysts and compliance experts. Therefore, we decided to merge both views into one GUI.
The process-centric approach of COMPAS is, furthermore, very strong in managing process-related compliance requirements, that is, compliance requirements that are related to the structure and timing of individual tasks/service invocations inside a process, while it is less strong in the identification of data-related compliance requirements (e.g., checking conformance with a given data format). As a consequence, if there are no explicit data checking tasks in the process, the decision tree algorithm is only seldom able to identify relationships between compliance outcomes and business data. Yet, if there are such activities in the process, the algorithm performs very well. As an extension of the COMPAS approach, it could therefore be a good idea to extend its process-centric approach toward data-centric compliance concerns as well.
We soon realized that the evolution of the compliance conceptual model needs to be supported throughout the project. To ease the co-evolution of dependent systems after a model change, we
fully automated the generation of storage and information retrieval service provider and requester agents in MORSE.
The concept of process fragments for compliance was intended to address compliance requirements that are related to control flow and activities within a process. Process fragments are the right choice for implementing compliance requirements that prescribe what should be executed. Integration mechanisms for process fragments allow integrating such compliance functionality into a process. When looking beyond the internals of a process, however, compliance management comprises more aspects. A process may orchestrate services and may involve people, but it cannot control them. For instance, requirements related to a data storage used by a service cannot be captured with the aid of a process fragment. To give an example: a requirement demanding encrypted storage of loan request data for at least ten years relates to a database and is outside the scope of control of the process. Although these limitations exist, the approach is well suited to formulating and integrating process structures that help achieve compliance.
Compliance entails different aspects of business processes and requires knowledge of various domains. Our analysis of various compliance sources and frameworks identified several compliance concerns where automated verification of the relevant formal requirements can only ensure partial compliance, as these constraints require human intervention in the form of manual checks, reviews, assessments, etc. for guaranteed assurance. We have also identified several concerns that involve requirements relevant to the retention of records, data encryption, etc., which typically crosscut business processes. In general, these requirements are handled through the use of dedicated IT solutions and are not encoded in business process specifications. Hence, the assurance of such requirements is hardly possible with our solutions, which focus on requirements that are represented within business process specifications and applicable to the design-time phase of business process compliance.
Design time is the first step for ensuring compliance during the entire business process lifecycle. Dealing with compliance starting from the business process analysis and design phase is critical, as identifying and solving compliance problems in the early phases is less costly than corresponding checks at later phases. However, it is not always feasible to enforce compliance with all constraints imposed on process models at design time. There are limitations on the aspects that can be fully ensured during design time. For example, during design time typical segregation-of-duties requirements can only be partially addressed, as such requirements typically demand runtime information for guaranteed compliance.
The type and coverage of the formal compliance rules that can be used for automated compliance verification and monitoring depend not only on the expressive power of the language used for their specification but also on the extent of the information encoded within business process specifications. For example, a compliance rule implementing a control that involves roles or other organizational units cannot be verified if the process specification under consideration does not incorporate process elements that capture these aspects. Thus, the granularity and formality level of the specifications in different phases of the lifecycle and the languages used for their specifications pose limits on the rules that can be used for their verification and monitoring.
From our perspective, one of the successes of the COMPAS project is providing a workable context within which the partners were able to apply theoretical results and formal methods on real, industrial problems. It is a fact that the gap between formal tools and techniques and real-life industrial problems is vast. Generally, complaints from both camps abound: practitioners are often disappointed by the limitations of theoretical results, the shortcomings of formal tools, and the seeming ignorance or indifference of theoreticians and the developers of formal methods and tools about their real world applications; the theoreticians and the developers of formal methods and tools, in turn, are often discouraged by the seeming inability or unwillingness of practitioners to adopt their arcana to express themselves.
Still, the fact remains that there is no escape from this dilemma: formal tools and techniques are relevant only because they (aspire to) tackle real problems; and the scale and complexity of real problems are well beyond the realm of human intuition or informal techniques. Through our work in COMPAS, we learned that the so-called gap between formal methods and industrial applications is often not a void that one may hope to eventually bridge by either forcing the practitioners to express themselves in the arcana of formal methods, or encouraging the formal people to become application domain experts. We learned that this so-called gap is in fact a vast terrain full of unexpected, non-trivial problems that need to be discovered first, before they can be solved. Both discovering and solving these problems require a collaborative exploration of this vast terrain by domain experts alongside their more formal colleagues.
The predefined structure of work in the project makes it hard to focus on fundamental research issues that emerge beyond the scope of the initially planned tasks. Due to the fact that all components of the architecture had to be provided on time, we had to forego producing some pieces of functionality that would be useful for the static analysis of COMPAS case studies. Specifically, we believe performance evaluation and verification of probabilistic and stochastic properties of business processes are useful. With such functionality, for example, we can tackle specification and verification of non-functional properties of workflow models, such as checking whether a process meets the performance requirements specified in a service-level agreement. To this end, we developed a compositional automata-based model for behaviour specification that enables automated reasoning about provisioned QoS. However, we had to postpone the development of an actual tool based on this model, due to the need to deliver service simulator engines.
Although COMPAS has been a successful project and delivered all the necessary results, much effort is still required to produce a complete set of tools that could be applied universally to any domain. Complete automation (without much human involvement), more mature and universal language models, and enforcement capabilities still need to be worked on. Therefore, the major lesson learned is that COMPAS is just a first step towards providing a universal solution for compliance in the SOA/business process world. New projects have to be developed to continue on the solid foundation built by COMPAS. Areas like domain-specific language design or compliance monitoring are so broad that they would need to be continued in new, separate projects. It was impossible to prepare a perfect set of languages for any domain that could be universally integrated.
10. Partners
Technische Universität Wien, Coordinator
Schahram Dustdar, Ta’id Holmes, Emmanuel Mulo, Ernst Oberortner, Huy Tran, Uwe Zdun
http://www.infosys.tuwien.ac.at/
Università degli Studi di Trento
Aliaksandr Birukou, Fabio Casati, Vincenzo d’Andrea, Florian Daniel, Patricia Silveira, Soudeep Roy Chowdhury
http://disi.unitn.it/
Members of the consortium who presented the COMPAS results during the second review meeting in Brussels:
11. Contact
Prof. Schahram Dustdar
Technische Universität Wien
Distributed Systems Group
Vienna, Austria
Phone +43-1-58801-18414
Fax +43-1-58801-18491
E-mail dustdar@infosys.tuwien.ac.at
CHAPTER 2
What are requirements?
The simple question "what are requirements?" turns out not to have a simple answer. In this chapter we will explore many of the key ideas that underlie requirements engineering. We will spend some time looking at two fundamental principles in requirements engineering: (1) that if we plan to build a new system, it is a good idea to describe the problem to be solved separately from particular solutions to the problem, and (2) that for most systems, this separation is impossible to achieve in practice. The tension between these two principles explains some wildly different perceptions of RE. We will break down the idea of a problem description into three components: the requirements (which are things in the world we would like to achieve), the domain properties (which are things that are true of the world anyway), and specifications (which are descriptions of what the system we are designing should do if it is to meet the requirements), and show how these are inter-related. Throughout the chapter we will emphasize that we are primarily concerned with systems of human activities, rather than 'software' or 'computers'.
Our aim in this chapter is to introduce a number of key ideas and distinctions that will help you understand what requirements engineering is all about. By the end of the chapter you should be able to:
- Distinguish between requirements and specifications, and describe the relationship between them.
- Explain how a system can meet its specification but still fail.
- Give examples of how misunderstanding of the domain properties can cause a system not to meet its requirements.
- Explain why any specification is likely to be imperfect.
- Distinguish between verification and validation, and give criteria for performing each.
- Explain the different perspectives of the systems engineer and the software engineer.
- Give examples of how the systems engineer can move the boundary between the application domain and the machine, to change the design problem.
- List the principal parts of a design pattern, and explain why a clear understanding of the requirements is needed to select appropriate design patterns.
- Use problem frames to describe the differences between major problem types such as control systems, information systems, and desktop application software.
2.1. Requirements Describe Problems
In chapter 1 we introduced the idea of capturing the purpose of a software-intensive system. To identify the purpose, we need to study the human activities that the system supports because it is these activities that give a system its purpose. Suppose we are setting out to design a new system, or perhaps to modify an existing system. Presumably, we have perceived an opportunity to use software technology to make some activities more efficient, or more effective, or perhaps to enable some new activities that are not currently feasible. But what, precisely, is the problem we are trying to solve? If we want to understand the purpose of the new system, then we need to be clear about what problem it is intended to solve.
2.1.1. Separating the Problem from the Solution
The first key insight of requirements engineering is that it is worthwhile to separate the description of a problem from the description of a solution to that problem. For a software-intensive system, the solution description includes anything that expresses the design: the program code, design drawings, the system architecture, user manuals, etc. The problem description is usually less well-documented – for some projects it may be captured in a ‘concept of operations’ document, or a ‘requirements specification’. For other projects, it may exist only in notes taken from discussions with customers or users, or in a collection of ‘user stories’ or ‘scenarios’. And for some projects there is no explicit statement of what the problem is, just a vague understanding of the problem in the minds of the developers. A basic principle of Requirements Engineering is that problem statements should be made explicit.
Separating the problem from potential solutions, and writing an explicit problem statement is useful for a number of reasons. To create a problem statement, we need to study the messy “real world”, ask questions about the activities that the new system should support, decide a suitable scope for the new system, and then write a precise description of the problem. This allows a designer to properly understand the nature of the problem, before considering how to solve it. The exercise of analyzing the real world problem situation will reveal many subtleties that might be missed if the developer launches straight into designing a solution:
- It might reveal that the most obvious problem is not really the right one to solve.
- The problem statement can be shared with various stakeholders, to initiate a discussion about whether their needs have been adequately captured.
- The problem statement can be used when comparing different potential designs, and when comparing design trade-offs.
Making the problem statement explicit makes it much easier to test the system – a candidate solution is only correct if it solves the problem as stated. Of course, the solution might still be unsatisfactory, because we might have made a poor job of writing down the problem statement, or we might have focused on the wrong problem. In other words we need to check both that the solution is correct according to the problem statement, and that the solution statement corresponds to the real-world needs of the stakeholders (see figure 1). But we have gained something by breaking this down into two separate steps – we can ask the question of whether the problem statement is adequate, independently from the question of testing whether a proposed design solves the problem as stated.
2.1.2. Intertwining of Problems and Solutions
The second key insight of requirements engineering is that this separation of problem statement from solution statement cannot really be done in practice. The real world in which human activities take place is complex, and any attempt to model and understand some piece of it will inevitably be imperfect. The world (and the people in it) change continually, so that the problem statement we write down at the beginning of a project may be wrong by the end of it. And because software technology opens up all kinds of new possibilities for how work is organized, the process of design itself will change the nature of the problem: show a user an early prototype of the system, and she will usually think of all sorts of new things she would like it to do. In some cases, it is only by attempting to design a new system that we start to understand whether there really is a problem that needs solving. For example, nearly all of the most practical uses of the world wide web weren’t discovered until after the web was created, and most people never realized they needed cellphones with built-in cameras until they saw all the cool things you can do with them.
These observations lead to a fundamental tension at the heart of requirements engineering. While it is worthwhile separating the problem statement from the solution statement, in practice, this separation can rarely be fully achieved. People react to this tension in various ways. At one extreme, some developers attempt to ‘freeze’ the requirements early in a project, allowing
development to proceed in an isolated bubble, without having to worry about what is happening in the real world. This allows for a great degree of control of the engineering process, but runs the risk of developing a product that will be useless by the time it is delivered. At the other extreme, some developers discard all attempts to document the requirements explicitly, arguing that such effort will be wasted anyway. This allows for a high degree of ‘agility’ in responding to new ideas, but runs the risk of a chaotic development process that never converges on a useful product, or in which developers fail to understand what the system is really for.
Rather than adopting either of these extreme positions, we can use an understanding of this tension to explore some of the basic principles of RE, and to predict which techniques are likely to be useful:
- Firstly, the idea of separating the problem statement from the solution statement does not imply that these steps should be done in a particular order. Writing a problem statement is a way of capturing the current understanding of the purpose of a system, and can be useful at any stage of development. Rather than being the first phase of a project, requirements engineering is a set of activities that continues throughout the development process.
- Any version of the problem statement will be imperfect. The models produced as part of requirements engineering are only ever approximations of the world that the requirements analyst is trying to understand, and so will contain inaccuracies and inconsistencies, and will omit some information. Specifications are always imperfect, and there will usually be missed requirements. The requirements analyst needs to perform enough analysis to reduce the risk that such imperfections and missed requirements will cause serious problems, but that risk can never be reduced to zero. Requirements Engineering is therefore crucial for risk management.
- Perfecting a specification may not be cost-effective. Although we have characterized a number of benefits of writing an explicit problem statement, those benefits must be weighed against the cost of performing a requirements analysis. For different projects, the cost-benefit balance will be different. In large safety critical systems, the cost of producing and validating a detailed, formal requirements specification may be easy to justify, but for many other types of system, this cost may outweigh the benefits, and a much less rigorous problem description may be more appropriate.
- The problem statement should never be treated as fixed. Change is inevitable, and therefore must be planned for. Whatever process is used for producing a problem statement, there should be a way of incorporating changes periodically, or updating the problem statement as more is learned about the requirements. Even the process of building a new system changes the problem, so perhaps we should redraw figure 1 more like figure 2.
We characterized requirements engineering as being concerned with explicit statements of the problem to be solved. We could equally characterize it as a process of improving the developers’ understanding of the problem to be solved. Sometimes that understanding is improved by trying to write a problem statement, and other times it is improved by attempting to design a solution to what you think the problem is. The diagram in figure 3 expresses this: an iteration of requirements and design activities, in which the understanding of both is increased at each iteration. It is instructive to compare figures 1 and 3: figure 1 represents an ideal, while figure 3 represents what often happens in reality.
### 2.2. Distinguishing the Problem
So requirements engineering is about describing problems separately from describing solutions to those problems, even though maintaining this separation is hard in practice. To make things harder, most stakeholders make no such distinction themselves. People are natural problem solvers – it is very hard to resist the temptation to solve a problem rather than merely describe it. Stakeholders often answer questions about what they need by describing their ideas about how they think the new system should work. To help keep the distinction clear, we need a way to decide which kinds of statements refer to problems, and which refer to solutions.
#### 2.2.1. ‘What’ vs. ‘How’
Early papers and textbooks on requirements engineering used to distinguish between requirements and designs by talking about ‘what’ versus ‘how’. The distinction was introduced to illustrate the idea of implementation bias. A problem statement has implementation bias if it unnecessarily suggests or precludes particular design solutions. So, ideally, a requirements specification should describe what the problem is, without describing how it should be solved.
Unfortunately, this distinction is confusing, because the ‘what’ and ‘how’ will vary depending on the level of analysis. For example, the requirements for an overall system capture what that system is required to do, and the architectural design gives an indication of how the system will do it, in terms of a set of interconnected components. But then we still have to specify what each component should do (and we can only do so with reference to the overall design, or the ‘how’). So thinking in terms of ‘what’ versus ‘how’ does not help when trying to decide if a particular statement is a requirement or part of a design.
Another criticism of the ‘what’ versus ‘how’ distinction is that it leaves out other equally important questions, such as ‘why’ (why is this system needed? why should it behave like that?) and ‘who’ (whose problem is it anyway? who will use it?), and so on. Also, the problem of implementation bias is not fully addressed by separating the ‘what’ from the ‘how’, because there may well be good reasons why a customer requires certain design choices to be made. A classic example is the choice of programming language. This is clearly a ‘how’ issue, and for most projects should be a free design choice. But if a customer needs to maintain the software after delivery, and only has Java programmers available to do this (and expects this not to change for the life of the software), then insisting the software be written in Java is a perfectly valid requirement, even though it is a ‘how’ statement.
Clearly, the distinction between ‘what’ and ‘how’ does not get us very far. The simplistic distinction we made between a ‘problem statement’ and ‘solution statement’ in the previous section also suffers from some of these criticisms. We clearly need a better way of understanding the distinction.
### 2.2.2. Application Domains vs. Machine Domains
A more appealing distinction is introduced by Michael Jackson, and concerns the difference between two different worlds that we might wish to describe – the *machine domain* and the *application domain*. Jackson uses the term 'machine' to describe the thing that is to be built. The term captures the notion of writing some software to turn a general-purpose hardware platform into a useful machine for a particular purpose\(^1\). The machine domain is the set of phenomena that the machine has access to: data structures it can manipulate, algorithms it can run, devices it can control, inputs it can get from the world, and so on. In contrast, the application domain is the world into which the machine will be introduced, and in particular, is that part of the world in which the machine’s actions will be observed and evaluated. Given our characterization of requirements engineering as concerned with the purpose of a system, it should be clear by now that requirements are part of the application domain, rather than the machine domain. It is the application domain that provides a purpose for the machine, and so it is the application domain that determines the requirements.
The application domain and the machine domain must be connected somehow, because the machine must interact with the world in order to be useful. The connection is via shared phenomena – things that are observable both to the machine and to the application domain. Shared phenomena include events in the real world that the machine can directly sense (e.g. buttons being pushed, movements that sensors can detect) and actions in the real world that the machine can directly cause (e.g. images appearing on a screen, devices being turned on or off).
There are, of course, many things in the world that the machine cannot directly sense, which we can think of as ‘private phenomena’ of the application domain. For example, the machine cannot know whether a person typing a password really is the person authorized to use that password. Requirements are, in general, about private phenomena, because a machine does not usually have access to the phenomena that define its purpose. For example, the requirement to allow only authorized personnel access to a building involves events and states that are private phenomena of the application domain, such as the identity of people, possession of authority, and people entering buildings. The machine senses most of these things indirectly, through devices that ask for passwords, mechanisms that lock or unlock doors, and so on.
---
\(^1\) We should note that designing ‘a machine’ (in Jackson’s terms) is very different from designing a ‘software-intensive system’ as we described in chapter 1. We will explain this difference shortly.
Finally, a *specification* for the machine can only be written in terms of the shared phenomena between the machine domain and the application domain. We cannot refer to private phenomena of the application domain in a specification, because we cannot (reasonably) specify what the machine should do in response to phenomena to which it has no access. We should not refer to private machine domain phenomena in the specification, because these have no role in giving the machine its purpose, and should be left to the designer to decide. The specification really only refers to things that cross the boundary between the application domain and the machine domain: primarily the inputs and outputs of the software, but also any ways in which the application domain constrains the design or operation of the machine.
Using these terms, we can recast the distinction between a problem statement and solution statement in the following way (see figure 4). The requirements analyst must be familiar with two aspects of the application domain: the *requirements*, which are things that the machine is required to make true (e.g. “prevent access to unauthorized personnel”) and the *domain properties*, which are things that are true about the application domain irrespective of whether we build the machine or not (e.g. “only a manager can assign access authority”). Using these, the requirements analyst writes a specification for the machine, in terms of phenomena that are observable at its interface with the world (e.g. “when the user enters a valid password, the computer will unlock the door”). The programmer is then responsible for designing a program to run on a particular computer to meet this specification. Hence, a full description of the problem statement now has three components: the requirements, the domain properties and the specification.
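One way to summarize this three-part structure – a gloss we add here, following the formulation associated with Zave and Jackson (see the further reading at the end of this chapter) – is as an adequacy condition: the specification, taken together with the domain properties, should be sufficient to guarantee the requirements. Informally:

\[ S, D \vdash R \]

That is, if the machine behaves as described by the specification \(S\), and the application domain behaves as described by the domain properties \(D\), then the requirements \(R\) will be satisfied.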
A description of the problem statement can be said to suffer from implementation bias if it contains things that have no justification in the application domain. This gives us a much better way of determining whether something should be a requirement than the ‘what’ vs. ‘how’ distinction. For example, if in the application domain, there is an army of Java programmers available to maintain the software, then specifying that the software be written in Java is a valid requirement, even though it constrains how the software should be developed.
The role of the domain properties is crucial in this process. Domain properties help to link the specification and the requirements. Recall that the machine will probably not have direct access to the phenomena described in the requirements. For example, if a requirement is “to only allow access to authorized personnel”, it is unlikely that the machine can directly sense who is authorized and who is not. However, we can make use of domain properties such as the fact that authorized personnel can be issued with passwords, that authorized personnel can be trusted to keep these passwords secure, and will be able to remember them when needed. This allows us to write a specification that refers to “entering a password”, which is an input that the machine can sense. Of course, if we have misunderstood the domain properties, we may end up with a program that satisfies its specification, but does not meet the original requirements – in this example, the password software may work correctly according to its specification, but the overall system may not meet its purpose because security may be breached when passwords are shared with unauthorized personnel.
Whether certain domain properties will hold or not depends on the context in which we use the system once it is developed. A system that meets its requirements when used in one context might not do so when used in a different context. For example, our security system might work fine in an office environment where people are familiar with the need to remember their passwords and keep them confidential, but fail entirely in a care home for the elderly, where the residents share their passwords with each other because they can never remember them themselves. Sometimes we can make the design simpler by assuming that certain domain properties will hold, even if we know they can be violated. In this case we are deliberately restricting the kinds of context in which a system should be used. One of the reasons for explicitly capturing the domain properties is so that we have a record of such assumptions.
#### 2.2.3. Verification and Validation
Using these distinctions, we can now return to the question of quality – does the system meet its intended purpose? Figure 1 showed two separate aspects of this question:
- Verifying correctness (or just verification), by which we mean checking that a design solution correctly solves the stated problem. Using Jackson’s terms, we can break this into two separate criteria:
1. The program, running on a particular computer, satisfies its specification.
2. The specification satisfies the stated application domain requirements, assuming the stated domain properties hold.
- Validating the problem statement (or just validation), by which we mean checking the correspondence between the problem we have stated and the demands of the real world. Again we can break this into two parts:
1. Did we discover and understand all the relevant requirements?
2. Did we discover and understand all the relevant domain properties?
If we know all the properties of the program and the computer on which it is run, and we express the specification sufficiently precisely, the first verification criterion is entirely objective, and could conceivably be automated. By ‘objective’, we mean the outcome should not depend on the opinions of the person performing the test, nor on how she interprets the specification. Increasingly, this verification criterion is checked automatically, through automated software testing, and/or formal proof techniques. Similarly, if we have a precise specification, and write down the requirements and domain properties sufficiently precisely, the second verification criterion should also be entirely objective, and could conceivably be automated. However, it is hard (and perhaps expensive) to write down the requirements and domain properties so precisely, and therefore this step is usually performed manually, if it is checked at all.
In contrast, validation steps are always subjective by their very nature. Our understanding of both the domain properties and the requirements involve an assessment of what is true of the real world. Two different people may disagree on whether the problem statement is valid, because they may disagree on whether certain domain properties really are true, or they may have different understandings of the real requirements.
We can illustrate the difference between verification and validation with another example, also due to Jackson. For an aircraft, it is important to prevent accidental engagement of reverse thrust while the aircraft is flying. This is an important safety requirement – it is especially dangerous to engage reverse thrust while in the air. We can express this as the following requirement:
R1: “Reverse thrust should be enabled only when the aircraft is moving on the runway, and disabled at all other times”.
However, because the control software cannot directly sense the state “moving on the runway”, we need to find a way of connecting this to phenomena that the machine can detect. A standard solution is to use the sensors on the wheels that pulse when the wheels are turning. We can then address the requirement using this specification:
S1: “reverse thrust should be enabled if and only if wheel pulses are on”.
Note that we are using some assumptions about the domain properties to connect the specification to the requirement:
D1: “wheel pulses are on if and only if wheels are turning”.
D2: “wheels are turning if and only if aircraft is moving on the runway”.
To verify the software, we could write test cases that check whether it meets its specification, i.e. that reverse thrust is enabled when wheel pulses are on, and disabled if they are not. For complex software, this may require a very large set of test cases, because we may need to test whether these properties hold in every possible mode. In some cases, there will be so many combinations that we cannot possibly test them all. Even if we manage to complete this testing, we
have still only covered the first verification criterion. We could attempt to check the second verification criterion during system testing, by building the aircraft, and checking that reverse thrust is indeed enabled only when the aircraft is moving on the runway, perhaps with the help of a wind tunnel. However, such testing is very crude – it is impossible to perform this test under all possible flight conditions. More likely we will have to rely on careful reasoning, using a model of the domain properties, and a model of the requirements.
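To make the two verification criteria concrete, here is a minimal sketch in Python, invented purely for illustration and not part of the original example; the function names and the exhaustive enumeration are our own assumptions, and real avionics verification is of course far more involved.

```python
# A toy model of the control software's decision, written to satisfy S1:
# reverse thrust is enabled if and only if wheel pulses are on.
def reverse_thrust_enabled(wheel_pulses_on: bool) -> bool:
    return wheel_pulses_on


# First verification criterion: the program satisfies its specification S1.
# The input space here is tiny, so we can enumerate it exhaustively; for a
# real controller the number of mode combinations quickly becomes unmanageable.
def verify_s1() -> None:
    for wheel_pulses_on in (True, False):
        assert reverse_thrust_enabled(wheel_pulses_on) == wheel_pulses_on


# Second verification criterion: assuming domain properties D1 and D2 hold,
# specification S1 guarantees requirement R1.  We therefore enumerate only
# world states that are consistent with D1 and D2.
def verify_r1_given_d1_d2() -> None:
    for moving_on_runway in (True, False):
        wheels_turning = moving_on_runway       # D2
        wheel_pulses_on = wheels_turning        # D1
        enabled = reverse_thrust_enabled(wheel_pulses_on)
        assert enabled == moving_on_runway      # R1


if __name__ == "__main__":
    verify_s1()
    verify_r1_given_d1_d2()
```

The second check is only as good as D1 and D2 themselves: if, as in the accident described below, D2 fails to hold, both checks can pass while the real requirement is violated.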
To validate the problem statement, we need to check that the requirement and the domain properties adequately capture what happens in reality. If it is ever possible for the wheels to turn while the aircraft is in the air, or fail to turn when it is moving on the runway, we have a problem. In fact, D2 is not always true. In one accident, an aircraft touched down very lightly on a wet runway, so that reverse thrust would not engage\(^2\). Even if we fix this error in our understanding of the domain properties, we can never be absolutely sure that there are not other circumstances under which our understanding of the world is still wrong. We can probably always think up some circumstances that do break the assumptions about the domain (what if mice stow away in the wheel compartments, and use the wheels as exercise wheels during the flight?), so an important part of requirements analysis is to assess the risk of each such circumstance, and decide whether the specification should be altered to handle them.
### 2.3. Software Problems or System Problems?
In chapter 1, we presented requirements engineering as an essential part of any development of software-intensive systems, and argued that a key distinguishing feature of the design of such systems is that it inevitably involves the design of some of the human activities that the software is to support. Yet, in the previous sections, we defined requirements only in relation to specifying and building a machine, and we had to assume that the people using the machine would behave in a particular way – for example, we had to assume they would do the right thing with passwords. We assumed that our job was only to design the machine, and that the application domain was fixed, along with the boundary between the two domains.
In practice, the requirements analyst must also decide where this boundary lies, whether it can be moved to help redefine the problem, and hence whether parts of the application domain itself should be changed. For example, if we take seriously our argument about designing the human activities, then we should consider the whole question of whether passwords are the right way to detect authorization, and address the design of a process for issuing and protecting passwords, or whatever tokens of authority we eventually decide on.
The issue here is whether we are doing software engineering or systems engineering. Software engineering tends to assume that the hardware platform, and the devices with which it can interact with the world, are fixed, and the job is to write some software to make it all work. In contrast, systems engineering concerns itself with the development of an entire system, which may comprise a variety of components, including software, hardware, mechanical devices, and human operators. The systems engineer must examine how these various subsystems will interact, and how various functions should be allocated to different components. For example, should the selection and allocation of passwords be carried out by a human, by a mechanical device, or by a software algorithm? Should there be redundancy (more than one person, device or computer)? Essentially, the systems engineer must decide where to draw the boundaries.
\(^2\) The incident was Lufthansa flight DLH 2904 from Frankfurt to Warsaw on 14 September 1993. There were several factors involved, but the fact that the reverse thrust and spoilers failed to deploy when the aircraft first touched down was a major factor. The flight computer failed to detect the touchdown, the pilot was unable to engage reverse thrust to slow the aircraft, and the plane overshot the end of the runway, crashed and caught fire with the loss of two lives.
Another way of viewing the distinction between systems and software engineering is offered by considering the 4-variable model suggested by David Parnas (see figure 5). This model was originally conceived as a way of understanding real-time control systems, but the key ideas generalize nicely. A control system continually monitors some environmental variables (e.g. for flight control: altitude, windspeed, groundspeed, etc). In response to changes in the monitored variables, it manipulates some controlled variables (e.g. the angle of wing flaps, thrust from the jets, etc) in order to satisfy some control policy. However, the monitored and controlled variables are not directly accessible to the software, so we rely on input and output devices that map these environmental variables onto shared phenomena for the machine. For control systems, the input and output devices are known as sensors and actuators.
The system requirements are expressed in terms of a desired relationship between monitored and controlled variables. The system is required to maintain this relationship. There may be additional relationships between monitored and controlled variables that are natural properties of the domain (precisely Jackson’s application domain properties). For example, if the thrusters are fired, the aircraft will accelerate, and so groundspeed will change.
The software requirements are expressed in terms of a desired relationship between the input and output variables. Clearly, the software requirements, together with the properties of the input and output devices should guarantee that the system requirements are met. But a systems engineer must also choose what types of sensors and actuators to use, and different choices may make the software easier or harder to design.
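As a rough illustration of the four-variable structure, consider a simple heating controller (a minimal Python sketch with invented names, not drawn from Parnas’s work): the room temperature is a monitored variable, the heat output is a controlled variable, and the software only ever sees the sensor reading and the actuator command.

```python
# Input device (sensor): maps the monitored variable (room temperature) onto
# an input variable the software can read; the offset models measurement error.
def sensor_read(monitored_temperature: float) -> float:
    return monitored_temperature + 0.1


# Output device (actuator): maps the software's output variable (a command)
# onto the controlled variable (heat delivered to the room, in kW).
def actuator_apply(heater_command: bool) -> float:
    return 2.0 if heater_command else 0.0


# Software requirement: a relation between input and output variables only.
def control_step(reading: float, setpoint: float = 20.0) -> bool:
    return reading < setpoint


# System requirement: a relation between monitored and controlled variables.
# Checking it means showing that sensor, software and actuator together
# guarantee the desired relationship.
def system_step(monitored_temperature: float) -> float:
    reading = sensor_read(monitored_temperature)   # monitored -> input
    command = control_step(reading)                # input -> output (software)
    return actuator_apply(command)                 # output -> controlled


if __name__ == "__main__":
    print(system_step(18.5))   # cold room: heater on, 2.0 kW
    print(system_step(22.0))   # warm room: heater off, 0.0 kW
```

The choice of sensor and actuator fixes how faithfully the input and output variables track the monitored and controlled variables, which in turn determines how easy it is for the software alone to meet the system requirement.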
For example, in most elevator systems, the control software has no way of detecting when people actually want to use the elevator – it has to rely on buttons being pressed. It is possible to add additional sensors to detect when there is anyone in the elevator, and whether anyone is actually standing waiting at a floor (whether or not a button has been pressed). These could make the elevator more efficient, by cutting down wasted journeys when people press wrong buttons either accidentally or maliciously. In effect, such sensors take things that were previously private phenomena of the application domain, and make them shared phenomena with the machine. But
the cost (and poor accuracy!) of these extra sensors may well outweigh the added benefit. A systems engineer must weigh up these trade-offs.
Both software engineering and systems engineering are relatively young disciplines, compared to traditional branches of engineering such as electrical engineering. One of the discernible trends in both fields as they have developed over the past decade is a move towards the use of standardized solutions to common types of problem. An example is the trend towards component-based systems, which attempt to use standard, “off-the-shelf” components wherever possible.
Unfortunately, there is often a tendency to assume that any deficiencies in the existing components can be fixed by adding more software during the design process; because software is so flexible, it can make up for problems elsewhere. Yet software is often the least well-understood part of any system. The assumption that such problems can be left to be fixed later using software patches demonstrates an inadequate risk analysis. Instead of fully considering whether a different system architecture would be a better choice, the risk is pushed towards the least understood components, to be addressed late in the design process, once the rest of the system has been created.
Requirements engineering seeks to overcome this problem by providing more detailed system-level analysis early in the design process, so that such risks can be more accurately assessed. Software is always embedded in a larger system, and in some cases this system too must be specified and designed. It is the requirements analyst’s job to decide where the boundaries should be drawn, and which functions should be allocated to which types of component. This includes deciding which activities will be done by the software, and which will be done by people.
### 2.4. Requirements Patterns and Problem Types
Although we have described some general principles about requirements, the actual requirements for different types of system may look very different. This is because different methods for discovering and expressing requirements have been developed for different application domains. It remains an open question how much the experience gained in analyzing the requirements on one project can be re-used on other projects. One possibility is to look for patterns that seem to recur over different projects. Such patterns might offer a way of re-using our experience, as well as insights into similarities and differences between different problem types. Here we will consider the way patterns have been discovered in both designs and in requirements.
#### 2.4.1. Design Patterns
Much of the work on design patterns for software engineering takes its inspiration from the work of the architect Christopher Alexander, who first catalogued patterns in architectural design and invented a pattern language for expressing them. In 1994 the “gang of four” (Gamma, Helm, Johnson and Vlissides) published their book “Design Patterns”, in which they applied this idea to patterns found in object-oriented programming. The idea was immediately appealing because it captured patterns that were familiar to most programmers, but which had not previously been documented in any systematic way. The design patterns were closely tied to programming problems. Around the same time, the first book on software architecture appeared, in which Shaw and Garlan identified a number of common software architecture patterns, and carefully described the qualities of each.
These books beautifully illustrated the benefits of a pattern language. Each pattern captures both a design problem, and a well-known solution. Or, in the words of Alexander, “each pattern is a relationship between a certain context, a certain system of forces which occurs repeatedly in that context, and a certain spatial configuration which allows these forces to resolve themselves”.
Hence, patterns capture a little of both a problem statement and a design solution, including the context in which the pattern is applicable. A typical pattern description includes:
- a name for the pattern,
- a problem statement,
- the context in which it occurs,
- a description of the forces,
- a design solution,
- and cross-references to related patterns.
The emphasis on forces is particularly important – these are the inter-related set of requirements and constraints that surround the design problem. Understanding these forces means understanding the design trade-offs, and hence the rationale for using the pattern.
In a design process, the designer might be aware of a large number of useful patterns, and will apply one whenever she recognizes that some aspect of the current design problem matches the interplay of forces described in one of the patterns. But note that the use of design patterns is not a substitute for requirements analysis. On the contrary, it is only once the requirements are understood that it becomes possible to find good matches between the current problem and any of the patterns in the catalogue. The alternative is to apply patterns blindly in the hope that because they were useful before, they may be useful in the current context. This is a sure recipe for solving the wrong problem!
#### 2.4.2. Requirements Patterns
So, design patterns do not help with analyzing requirements, but they do help in mapping between requirements and designs. However, there is another potential use of patterns in requirements engineering: we can look for re-usable patterns in the requirements themselves. Just as with design, the idea of capturing past experience in the form of typical patterns is appealing. Requirements patterns do not capture design solutions, but capture the “spatial configuration” of particular types of problem, and present a way of understanding and describing that problem. In essence, they describe problematic human activities, or particular arrangements of the world where we can see an opportunity for change. A catalogue of requirements patterns would help us to recognize different types of problem.
The idea of requirements patterns is relatively new, so there is as yet little consensus on what requirements patterns should look like. Here, we will briefly consider two quite different approaches: the analysis patterns identified by Martin Fowler, and the problem frames defined by Michael Jackson.
Fowler’s analysis patterns are intended to be used to help build conceptual models of relevant parts of the application domain. The solution part of each of his patterns is represented as a fragment of the Unified Modelling Language (UML). Consider the example in figure 6. Here, the problem is how to model a transaction in the context of any kind of financial trading. A deal involves a buyer and a seller, but their views of the transaction are different. Technically speaking, a deal involves an exchange of one thing for another, so there should be two instruments involved, but as one of them is usually money, it is not clear whether this should be modelled separately. Fowler’s solution in this pattern is to model the deal as a contract, with the price as an attribute. The annotations on the relationships indicate that a contract must have exactly one party who is the buyer, and exactly one party who is the seller. However, each party can participate in any number of deals as a buyer and any number of deals as a seller. Similarly, each deal involves exactly one instrument (although the quantity can be specified as an attribute of the contract), but each instrument can be involved in any number of deals. The pattern provides a standard solution to the problem of modelling transactions, and also offers suggestions for when this standard solution might not be appropriate. We will meet both UML and Fowler’s patterns again in chapter 9, when we consider the role of modelling in requirements engineering.
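As a rough sketch of the conceptual model this pattern describes, rendered here as Python dataclasses rather than UML (the class and attribute names are our own paraphrase of the pattern, not Fowler’s code):

```python
from dataclasses import dataclass


@dataclass
class Party:
    """A participant in trading; may act as buyer or seller in any number of deals."""
    name: str


@dataclass
class Instrument:
    """The thing being traded (a share, a bond, ...); money is usually left implicit."""
    symbol: str


@dataclass
class Contract:
    """One deal: exactly one buyer, one seller and one instrument, plus price and quantity."""
    buyer: Party
    seller: Party
    instrument: Instrument
    price: float
    quantity: int


# Example: one party buys 100 units of an instrument from another.
deal = Contract(buyer=Party("Alpha Fund"), seller=Party("Beta Bank"),
                instrument=Instrument("XYZ"), price=12.5, quantity=100)
```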
Jackson’s problem frames are quite different in intent: they are designed to help us to sort out different kinds of problem and problem decompositions as a prelude to more detailed requirements analysis. There are so many different types of problem to which software-intensive systems are solutions that it helps to have a high-level classification scheme for different problem classes. A problem frame provides an initial decomposition of a problem into principal parts and a solution task. Although, at first sight, problem frames may look overly simplistic, they provide a useful starting point for understanding what the problem description might be for a given problem.
Figure 7 shows some simple examples of problem frames. In each case, the machine to be built is drawn as a box with a double border, and the requirement it is expected to satisfy is shown as an oval. The remaining boxes describe other relevant parts of the application domain that need to be understood. For example, in the simple information display frame, the problem is to build a machine that will maintain a representation of some part of the real world (e.g. bank accounts), will accept information requests (e.g. account requests) and in response provide information outputs (e.g. account statements). The relationship between the current state of the real world, and the information output expected for each request is determined by the information function (e.g. banking rules), which the machine is required to apply.
As well as providing a starting point for problem analysis, Jackson’s problem frames offer an initial typology of problems to which software-intensive systems might be applied. They demonstrate that information systems have a very different shape than, say, control systems. Here are Jackson’s five basic frames – each represents an entire class of software system, and for each class, a different set of requirements engineering methods is likely to be appropriate:
[Figure 7 consists of four problem frame diagrams, each pairing a frame with an example: (a) a Required Behaviour frame, illustrated by a program sequencer controlling a washing machine according to washing rules; (b) a Simple Information Display frame, illustrated by a banking system answering account requests about the real world according to banking rules; (c) a Simple Workpieces frame, illustrated by an editor tool applying users’ operation requests to text files according to edit operation rules; and (d) a Connection frame, illustrated by a data entry system connecting real-world transactions to an information system according to data modelling rules.]
Figure 7: Some example problem frames (adapted from Jackson 1997)
- **Required behaviour** (figure 7a). The problem is to build a machine to control some part of the real world in accordance with a fixed set of control rules. The solution will be an automated control system.
- **Commanded Behaviour**. The problem is to build a machine that allows some part of the real world to be controlled by an operator by issuing commands. The solution will be a “human-in-the-loop” control system.
- **Information Display** (figure 7b). The problem is to provide information about the current state of some part of the real world in an appropriate form and in an appropriate place, in response to information requests. The solution will be an information system.
- **Simple workpieces frame** (figure 7c). The problem is to keep track of the edits that are performed to some workpiece, for example a text file or a graphical object. The solution is likely to be some kind of application software, such as a word processor.
- **Transformation**. The problem is to take some input data represented in a certain format, and provide a transformation of that data according to a certain set of rules. Solutions to this type of problem include traditional data processing applications, as well as tools such as compilers.
### 2.5. Chapter Summary
TBD
### 2.6. Further Reading
**Requirements vs. Specifications**: One of the best all-round introductions to requirements engineering is Michael Jackson’s “Software Requirements and Specifications: A Lexicon of Practice, Principles, and Prejudices”. It covers many of the same ideas described in this chapter, and is nicely divided up into bite-size essays for easy browsing. We’ve followed several of Jackson’s key ideas in this chapter, most notably on the distinction between requirements and specifications. Note that this distinction is not universally accepted in RE – many other authors suggest that there is no distinction, and that the requirements are what is written in the specification. For example, Suzanne and James Robertson, in their book “Mastering the Requirements Process”, provide a template for documenting requirements as part of a requirements specification. The IEEE standard on requirements (IEEE-Std-830-1998) also makes no distinction. We will revisit this difference of opinion in chapter 14, when we consider how to document the requirements.
**Intertwining problems and solutions**: The idea of separating a description of the problem from a description of the solution appears throughout the requirements literature. The fact that this cannot really be done, however, was quite a radical idea once, perhaps summed up best in Swartout and Balzer’s paper “On the inevitable intertwining of specification and implementation”, Communications of the ACM, vol 25, no 7, 1982. For a more recent account, including the twin peaks model of figure 2, see Nuseibeh’s paper “Weaving the Software Development Process Between Requirements and Architectures”, IEEE Computer, Vol 34, No 2, 2001.
**Application Domains**: Jackson’s distinctions between requirements, domain properties and specifications are best described in his paper “The Meaning of Requirements”, which appeared in the Annals of Software Engineering, vol 3, 1997. This volume was a special issue on Requirements Engineering, and many of the other papers are also worth reading. The ideas are also elaborated in the paper “The Four Dark Corners of Requirements Engineering”, which appeared in ACM Transactions on Software Engineering and Methodology, also in 1997.
**The 4-variable model** was introduced by Parnas and colleagues. It is described in detail in the paper by Parnas and Madey “Functional Documents for Computer Systems”, which appeared in the Science of Computer Programming, vol 25, no 1, 1995.
**Patterns and Problem Frames**: The original work on patterns was due to the architect Christopher Alexander, in his book “Notes on the Synthesis of Form”. The ideas were popularized in the software engineering community by the book “Design Patterns” by Gamma, Helm, Johnson and Vlissides. The original book on software architecture, Garlan and Shaw’s “Software Architecture: Perspectives on an Emerging Discipline” is also a patterns book in all but name. Martin Fowler took patterns to a higher level with his book “Analysis Patterns”, and now of course you can find books on patterns of just about anything in software engineering. For problem frames, read Michael Jackson’s “Problem Frames: Analyzing and Structuring Software Development Problems”.
### Key distinctions in RE
**Problem Description vs. Solution Description**
Requirements Engineering assumes that it is useful to separate a description of the problem being solved from a description of a particular solution. This distinction is useful for communicating with customers, and for weighing up different design solutions. However, the problem and the solution interact, so that it is impossible to make this distinction entirely cleanly.
**What vs. How**
Traditionally, a specification states ‘what’ a system should do, without saying ‘how’ it should do it. The reason for this distinction is to prevent ‘solution bias’ in the statement of the problem – we should not write a problem statement in such a way as to suggest certain solutions, or preclude others. However, there may be good reasons to prefer some solutions over others, and we do need to capture these reasons, so ‘what’ versus ‘how’ is too simplistic for most purposes.
**Application Domain vs. Machine Domain**
It is useful to distinguish between the world in which the problem exists (the ‘application domain’) and the world in which software solutions are developed (the ‘machine domain’). These worlds overlap in a limited way, and it is this overlap that allows us to take the application domain requirements that we care about and translate them into relationships between inputs and outputs that the software can control.
**Functional vs. Non-functional Requirements**
Functional requirements capture the functions that a system must perform, while non-functional requirements capture general properties about the system, such as its speed, usability, safety, reliability, and so on. Non-functional requirements are often also called ‘system qualities’, or ‘ilities’.
**Systems Engineering vs. Software Engineering**
Requirements engineering applies to both systems engineering and software engineering. For software engineering, it is normally assumed that the way in which the software interacts with the world is fixed, using standardized types of input and output device. For systems engineering, no such assumptions are made – the task is to design an entire system, of which the software is just one component.
**Customers vs. Users**
In requirements engineering, we normally use the term ‘stakeholders’ to indicate in the broadest terms all the different groups of people who may be affected by a new system, and therefore who might have requirements that need to be considered. Two important subgroups are customers and users. However, these are distinct roles: the customers are those who are responsible for commissioning a new system, while the users (or ‘end-users’) are the people who will interact with the system once it is installed. Only in special cases are these the same person or people.
**Indicative vs. Optative Descriptions**
In describing a problem, it is often necessary to talk about both the current situation, and the future envisaged situation once we have designed a solution. An indicative statement describes the world as it is now, while an optative statement describes a state of affairs that we would like to bring about. The distinction is important because we need to understand which things can be assumed about the world, and which things the software is required to bring about. Application domain properties are indicative, whereas requirements are optative.
**Verification vs. Validation**
Verification is the process of determining that a program meets its specification, whereas validation is the task of making sure that the system will address the right problem in the real world. Many people remember the distinction as: “verification means are we solving the problem right; validation means are we solving the right problem”. This quote captures the essence of the distinction, but remember that the distinction only makes sense when you consider the role of a specification.
**Capturing vs. Synthesizing Requirements**
Stakeholders often do not know what they want or what is possible. We may not know exactly who the customers and users will be for some systems. Under these circumstances, it is wrong to think of requirements as being ‘out there’ ready to be captured. Rather, they need to be negotiated or even invented. But this does not mean we just make them up. Instead, we synthesize the requirements based on our best understanding of the problem we are trying to address, along with reasonable estimates for the unknowns.
Designing a distributed peer-to-peer file system
Fredrik Lindroth
Abstract
Currently, most companies and institutions are relying on dedicated file servers in order to provide both shared and personal files to employees. Meanwhile, a lot of desktop machines have a lot of unused hard drive space, especially if most files are stored on these servers. This report tries to create a file system which can be deployed in an existing infrastructure, and is completely managed and replicated on machines which normally would hold nothing more than an operating system and a few personal files.
This report discusses distributed file systems, files, and directories, within the context of a UNIX-based local area network (LAN), and how file operations, such as opening, reading, writing, and locking can be performed on these distributed objects.
Contents
1 Introduction
   1.1 Focus of the report
2 Existing approaches
3 Requirements
   3.1 File operation requirements
   3.2 Security requirements
       3.2.1 Messages
       3.2.2 Storage security
   3.3 Availability requirements
4 Proposed solution
   4.1 Overview
   4.2 DHT
       4.2.1 Terminology
   4.3 Messaging
       4.3.1 Message structure
   4.4 Security
   4.5 The node key pair
   4.6 The user key pair
   4.7 Key management
   4.8 Routing
   4.9 Joining and departing
   4.10 Controlled departure
   4.11 Node failure
   4.12 Local storage
   4.13 Replication
       4.13.1 Files
       4.13.2 Directories
   4.14 Creating files
       4.14.1 Analysis of algorithm 4.7
       4.14.2 Analysis of algorithms 4.8 and 4.9
   4.15 Locking files
       4.15.1 Analysis of algorithms 4.10 and 4.11
   4.16 Reading and seeking files
       4.16.1 Analysis of algorithm 4.12
   4.17 Deleting files
       4.17.1 Analysis of algorithm 4.13
5 Differences from existing solutions
   5.1 CFS
   5.2 AndrewFS
   5.3 Freenet
   5.4 OCFS2
6 Future extensions
   6.1 Key management
   6.2 File locking
   6.3 Replication
7 Thoughts on implementation
   7.1 Programming languages
   7.2 Operating System interfaces
   7.3 Implementation structure
8 Conclusion
Chapter 1
Introduction
Currently, most companies and institutions are relying on dedicated file servers in order to provide both shared and personal files to employees. Meanwhile, a lot of desktop machines have a lot of unused hard drive space, especially if most files are stored on these servers. This report tries to create a file system which can be deployed in an existing infrastructure, and is completely managed and replicated on machines which normally would hold nothing more than an operating system and a few personal files.
File sharing networks, such as Gnutella [8], have shown that it is possible to provide files to a vast network of connected machines. Is this possible for a smaller network? There are a number of requirements that must be met before such a solution can be relied on completely, such as data availability, system-wide redundancy, and security.
When designing a multi-user file system, other issues come into play which do not need to be considered in designing a single-user file system. Such an issue is file locking [1, 16]. GNU/Linux [2] has traditionally solved this by making file locks advisory, as provided by the fcntl(2) [16] system call; software needs to check for file locks when relevant, since the system will not deny access to a locked resource. There are two kinds of locks: Read and Write. These are not mutually exclusive; both types of locks may exist simultaneously on the same object.
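As an illustration of what advisory locking means in practice, here is a minimal Python sketch (the file path is invented and a POSIX system is assumed) that takes an exclusive lock with lockf before writing. The lock only constrains other processes that also ask for it; a process that skips the lockf call can still read or write the file.

```python
import fcntl

# Advisory locking via fcntl/lockf: the kernel records the lock, but it only
# affects processes that also call lockf; others can still access the file.
with open("/tmp/shared-data.txt", "a+") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)       # exclusive (write) lock; LOCK_SH would request a read lock
    try:
        f.write("an update made while holding the lock\n")
        f.flush()
    finally:
        fcntl.lockf(f, fcntl.LOCK_UN)   # release; the lock is also dropped when the file is closed
```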
Another multi-user specific issue is that of access permissions: how do we verify that a user may access a certain resource when there is no central location to verify access requests against? This, too, will have to be addressed.
Another issue is that of consistency. We have a set of machines which may be separated into two or more sets at any time as a result of any number of failures. How can any node be certain that it views the latest data for a requested resource?
1.1 Focus of the report
This report will focus on the theoretical considerations and deliberations of implementing such a file system under GNU/Linux and other UNIX-like systems and flavours, due to the author’s familiarity with such implementations. Because of this limitation, this report will only consider advisory file locking, and the
user/group/other access scheme commonly found in such systems. It is certainly possible to consider mandatory locking and other access control schemes in future work, and it is encouraged.
Chapter 2
Existing approaches
Quite a few UNIX-based solutions for distributed file systems exist. The most traditional approach is Network File System, or NFS [9]. This file system has been used for exporting file systems from a centralised server to clients, but this does not provide redundancy.
NFS [10, 9] is designed in a hierarchical manner, meaning that there is a clear difference between clients and servers. NFS servers are designed to be as stateless as possible, meaning that they do not keep track of their clients. This also causes file locking to be implemented as a separate service, as it is inherently stateful. Being stateless also requires the server to check permissions on a file for each read or write call.
Andrew FS (AFS) [11] is somewhat related to NFS, in that AFS file systems may be accessed through NFS. The difference, however, is that AFS is designed to make clients store files locally, and through this mechanism it achieves a much larger degree of scalability, since files do not need to be transferred each time they are accessed, as they are with NFS. For the purpose of this report, AFS can be thought of as a caching version of NFS. Some redundancy is gained by the local storage of requested files.
For actually distributing the file system among the peers of a network, Cooperative File System, or CFS [14], comes a fair bit closer to what is required, but with the major drawback that it is read-only; only the user exporting the file system can add, change, or remove files. It uses the Chord DHT system (see section 4.1: Overview) for routing and communication, and DHash for storage of the data blocks themselves.
Distributed file systems have also been used extensively with clustering, and so there exist both GFS [18] and OCFS (Oracle Cluster File System) [4], which are aimed at providing files to computing clusters and distributed databases. These rely on a network of master and slave servers, which requires certain machines to be powered on permanently.
GFS2 [18] is Red Hat’s approach to a clustered file system. It is designed to work in a SAN (Storage Area Network) rather than a normal LAN. It is classless; all nodes perform an identical function. It does, however, require the Red Hat Cluster Suite, a set of software only available through Red Hat Enterprise Linux. The nodes on which GFS2 runs are also meant to be in a server role, and the file system is designed to cater to those needs, as it assumes all nodes have equal access to the storage, something a LAN cannot provide. GFS2 uses a distributed lock manager, which is based on the VAX DLM API.
Table 2.1: A comparison of file system capabilities. Neither Freenet nor Gnutella works with traditional file system semantics, and neither provides directly writeable objects or file locks. CFS is read-only, and so does not need lock management. The proposed solution aims to have all these capabilities.
| Name           | Read-Write | Class-less | Dynamic | Redundant | Lock managing |
|----------------|------------|------------|---------|-----------|---------------|
| CFS            | No         | Yes        | Yes     | Yes       | N/A           |
| Freenet        | N/A        | Yes        | Yes     | Yes       | N/A           |
| GFS            | Yes        | No         | No      | Yes       | Yes           |
| Gnutella       | N/A        | Yes        | Yes     | Yes       | N/A           |
| NFS            | Yes        | No         | No      | No        | Yes           |
| AFS            | Yes        | No         | No      | Yes       | Yes           |
| OCFS2          | Yes        | No         | No      | Yes       | Yes           |
| Proposed soln. | Yes        | Yes        | Yes     | Yes       | Yes           |
Gnutella [8] is a popular file sharing network which is completely decentralised. It relies on a number of leaf nodes, which are connected to so-called ultrapeers. Ordinary leaf nodes may promote themselves to ultrapeer status if they deem themselves fit. Due to its use of ultrapeers to handle routing, many nodes can be reached in a few hops. Ultrapeers are themselves able to share files, but their primary job is to route messages to and from other ultrapeers, and to their leaf nodes when they are the intended destination.
Freenet [5] is yet another solution, which uses a key-based routing scheme and focuses on anonymous routing and storage. It does not support file system primitives, and should therefore be considered a file sharing solution rather than a file system.
Chapter 3
Requirements
To achieve both goals of data redundancy and lack of permanent servers, any given node must be allowed to crash or leave the network without bringing it down, while still maintaining high availability for all files. The network should react to any such event and reach a state where such an event may reoccur without impeding availability; in other words, the network should continuously keep itself at a sufficient level of data redundancy.
As the goal is to store file metadata much in the same way a traditional, local file system would, the metadata structures should be able to store information about file ownership and access permissions. It is also desirable for the metadata to be extendable in order to fit other attributes, such as access control lists and metadata applicable to additional operating systems, such as Microsoft Windows.
Table 2.1 lists some relevant capabilities of existing distributed storage solutions. Freenet and Gnutella are not file systems but rather file sharing services, and therefore lack facilities that are required of file systems, such as file listing.
3.1 File operation requirements
For file operations, we need to be able to do the following:
- Create - create a new file and associated metadata, and distribute it through the network.
- Delete - remove a file from the network, updating metadata to reflect this.
- Open - open the file for reading and/or writing.
- Close - close the file handle and announce this to all parties concerned. This operation also releases all locks held on the file.
- Read - read bytes from the file.
- Write - write bytes to the file.
- Lock - obtain a lock on the file. Both read and write locks may be requested. Locks are advisory to software and will not deny access.
- Unlock - remove locks on the file.
These requirements reflect what is possible to achieve on a local file system.
3.2 Security requirements
3.2.1 Messages
These two properties must hold for a message routed from one node to another:
- Data integrity - the contents of the message must be proven to be correct in order to protect against tampering and/or failed transmissions.
- Data privacy - the contents of the message should not be readable by nodes along the path between the two nodes.
Data integrity can be achieved by signing messages with a private key, while data privacy can be achieved by encrypting them with a public key. The only information that needs to be left in the clear are the source and destination node identifiers, as these are of vital importance for being able to route a message forward, and to provide a way for intermediate nodes to return a response to the source node in case of failure.
The use of public key encryption also provides a mechanism for authentication upon joining the network, since public keys can be signed by a trusted party, and clients may be configured to discard messages not signed by that party, effectively denying entry for unauthorised clients.
3.2.2 Storage security
Unlike traditional file systems, a distributed file system of this kind will require storage of data originating from other nodes for replication purposes. This data should not be read by local users, much like the personnel of a bank should not be able to access the possessions that their customers are storing in the safe; personnel should be able to bring the box to a customer, but it is the customer who opens the box in private.
3.3 Availability requirements
As this file system is intended to replace a centralised server, the availability of the files is a high priority. However, there are a few things that cannot be sacrificed in order to satisfy this goal.
The CAP theorem [7], or Brewer’s theorem, states that in a distributed computer system, only two of the following three properties may hold at the same time:
- Consistency: all nodes see the same data set.
- Availability: the system always answers requests.
- Partition tolerance: the system continues to operate even when the network is split into disjoint parts.
Availability is a high priority, but should not be considered critical. In a file system that is used for many critical files, consistency is far more important, as lack thereof may cause different processes to see different data, thus making corruption of the file collection possible. Therefore, the service should be able to avoid answering requests if answering them would cause an incorrect reply.
The event of a node departing should never put the network in such a state that a file becomes unavailable. This requires a replication factor of at least two for each file chunk, with mechanisms in place to maintain that factor. If a node departs, its data should be replicated from another node holding that data in order to maintain the replication factor.
Routing should also not be affected; a departing node should not disrupt any communications between other nodes, even though the departing node used to route messages between the two nodes.
For these reasons, it is not unreasonable to sacrifice total availability for consistency and partition tolerance. The system should naturally strive to maintain availability, but not at the expense of consistency, which is mandatory, and partition tolerance, since partitioning is unavoidable given enough time.
Chapter 4
Proposed solution
This chapter will specify a design that attempts to implement the given requirements. It derives its routing mechanism from the one used by the Cooperative File System (CFS) [14], as well as the local caching properties of Andrew FS (AFS) [11], in order to achieve scalability.
4.1 Overview
For this solution, we will use the DHT (Distributed Hash Table) [6, 15] design as a means of routing and replication management, as CFS has previously done using the Chord DHT implementation [14, 15]. Using a DHT network means that we do not have to consider the physical network topology, and the nodes may even be on different subnets or different networks altogether, enabling for file systems to exist across multiple office facilities. This solution recognises the ideas behind the Cooperative File System (CFS) [14] concerning routing and message passing.
A DHT design provides an overlay network and a means to route messages between nodes. It also allows for redundancy, and it is suitable for handling applications with large amounts of churn, which is the term for nodes joining and departing from the network [13].
This solution also builds on the scaling advantage that is provided by local caching, as shown by Andrew FS (AFS) [11].
4.2 DHT
As mentioned above, DHT is an overlay network that is used for creating a logical network topology on top of a physical one, for storing pairs of keys and values. This overlay network is used for passing messages between nodes, storing key-indexed values, and to facilitate the joining of new nodes. Since routing on this network is independent from the physical topology, routing can happen on any scale, from a LAN to a global network.
A DHT is laid out as a ring, called the DHT ring. This comes from the fact that DHT uses a circular keyspace; if $n_1$ is the node with the smallest numerical identifier (key) in the DHT ring, and $n_n$ is the node with the greatest key, the
successor of $n_n$ is $n_1$, since we have gone full circle in the keyspace. This enables a useful property for replication, which will be touched on in section 4.13: Replication.
4.2.1 Terminology
- **Key** - A numerical identifier for an object in the DHT ring. Usually derived from some property of the object, such as creation time, content or name.
- **Keyspace** - An interval from $a$ to $b$ in which a key $o$ may be placed. In circular DHT systems, such as Chord and Pastry, the keyspace wraps around in a circular fashion. In this case, $a$ may be numerically greater than $b$, but still precede $b$ in the ring.
- **Leaf set** - In the Pastry DHT system, the leaf set $L$ is the set of nodes with the $|L|/2$ numerically closest larger and $|L|/2$ numerically closest smaller node IDs, or node keys. These are used at the end of routing to hand the message over to the correct node.
- **Predecessor** - The predecessor of a node or object is the node immediately preceding it in the keyspace of the DHT ring.
- **Root node** - For a given object $O$, its root node is the node, $n$, that has that object in its keyspace, which is $n_p < O \leq n$, for predecessor $n_p$.
- **Routing table** - A table containing entries which describe what path to take to reach a certain destination. In the Pastry DHT system, the routing table for a node is used to address node IDs with a shared prefix to the node in question.
- **Successor** - The successor of a node or object is the node immediately following it in the keyspace of the DHT ring.
The Pastry DHT system [6] uses a 128-bit, circular keyspace for nodes and objects. Every Pastry node keeps a routing table of $\lceil \log_{2^b} N \rceil$ rows, with $2^b - 1$ entries each, where $b$ is a configuration parameter specifying the number of bits in a digit; digits are used to divide the keyspace, yielding a numbering system with base $2^b$. This numbering system is then used in routing, as described in section 4.8: Routing.
The Pastry DHT system is a good choice for this file system since it is relatively simple to derive an implementation from. It is not strictly necessary to use the Pastry software library; it can instead serve as a model for creating a new DHT implementation in whichever language one pleases. Furthermore, much research into DHT-based systems uses Pastry as a base, including research into replication [13].
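To make the notion of object ownership in a circular keyspace concrete, the following is a minimal sketch, assuming a 128-bit Pastry-style keyspace; the node IDs and object keys are arbitrary small numbers chosen for readability.

```python
KEYSPACE_BITS = 128            # Pastry-style keyspace, as assumed above
KEYSPACE = 2 ** KEYSPACE_BITS

def root_node(object_key, node_ids):
    """Return the node responsible for object_key: the first node ID reached
    when walking clockwise from the key, wrapping around the ring."""
    ring = sorted(node_ids)
    key = object_key % KEYSPACE
    for node_id in ring:
        if node_id >= key:
            return node_id
    # Wrapped past the largest ID: the smallest ID is the successor.
    return ring[0]

# Example with three hypothetical nodes and two object keys.
nodes = [10, 90, 200]
print(root_node(150, nodes))   # -> 200
print(root_node(250, nodes))   # -> 10 (wrap-around)
```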
4.3 Messaging
If any node wishes to communicate with another node, it does so by using message passing on the DHT ring. Each node in the ring has a responsibility to pass along messages if it is not the intended recipient.
4.3.1 Message structure
Each message sent in the ring has a few mandatory attributes. These are as follows.
- **Source Node ID** - The identifier of the node which sent the message. Used by the receiving node when replying.
- **Sequence** - A sequence number set by the source node when sending the message. Used to differentiate between replies from other nodes.
- **Operation/Message type** - Used to distinguish between different kinds of messages.
- **Arguments** - Arguments to the message, and supplementary data to the operation defined by the message type.
Messages are delivered by nodes passing them along the ring using routing, which is explained further in section 4.8: Routing.
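One possible in-memory encoding of the mandatory attributes listed above is sketched below; the field names, the dataclass representation, and the example values are illustrative rather than prescribed by the design.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Message:
    source_node: int     # ID of the originating node, used when replying
    sequence: int        # per-source sequence number to match replies
    operation: str       # message type, e.g. "lock_request" or "put"
    arguments: list[Any] = field(default_factory=list)  # operation payload

# A hypothetical lock request as it might travel through the ring.
msg = Message(source_node=0x3F2A, sequence=17,
              operation="lock_request", arguments=["<file-uid>", "wlk"])
```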
4.4 Security
For security purposes, this design makes use of the GNU Privacy Guard (GPG) public key encryption scheme, both for verification purposes by use of signatures, and for data privacy through encryption. This scheme makes use of a public key and a secret private key. For our purposes, two keypairs will be used: one pair for nodes, and one pair for users. Figure 4.1 provides an overview of how encryption and signing is used during message passing.
4.5 The node key pair
The node key pair is used for sending and receiving messages to other nodes in the network. Any message that is to be sent from one node to another is
to be encrypted using the recipient’s public key. The message is also signed with the private key of the sender. The recipient will then verify each message against the sender’s public key in order to verify that the message has not been tampered with, and also that it originates from the alleged node.
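A minimal sketch of this encrypt-and-sign step is given below, assuming the python-gnupg package and a local GnuPG keyring; the key store path and fingerprints are hypothetical, and the passphrase of the signing key is assumed to be handled by an agent or an unprotected key.

```python
import gnupg  # python-gnupg wrapper around the gpg binary

gpg = gnupg.GPG(gnupghome="/var/lib/dfs/keys")   # hypothetical key store path

def send_payload(payload: bytes, recipient_fpr: str, sender_fpr: str) -> bytes:
    """Encrypt for the recipient's public key and sign with the sender's
    private key, mirroring the message-passing rules described above."""
    result = gpg.encrypt(payload, recipient_fpr, sign=sender_fpr)
    if not result.ok:
        raise RuntimeError(result.status)
    return result.data

def receive_payload(ciphertext: bytes) -> bytes:
    """Decrypt with the local private key; gnupg verifies the signature as
    part of decryption when the sender's public key is in the keyring."""
    result = gpg.decrypt(ciphertext)
    if not result.ok or not result.valid:
        raise RuntimeError("tampered or unauthenticated message")
    return result.data
```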
4.6 The user key pair
When a file is created on the file system, it is encrypted using the public key of the user that created the file. Depending on permissions, the file may also be encrypted using the public keys of the users that belong to the group identified by the UNIX group ID (GID). This applies to directories as well, and affects mainly the read (r) permission, which will prevent the listing of directory contents.
The user key pair must be present on the user’s current node, as files and messages intended solely for the user must be delivered to this node only.
4.7 Key management
The node key pair is generated and distributed to machines which are to become nodes in the network, along with a public root key. The public key of each node is signed by the root key, effectively granting the node access to the network, since nodes will look for this signature upon receipt of messages.
When a node is joining the network, it presents its public key as part of the join request. This means that any node routing the request has a way of verifying the authenticity and authorisation of the originating node, effectively blocking any intruders. Upon receipt of a message, the public key of the originating node or user is not guaranteed to be known. Such keys may be requested from any given node by sending a public key request message.
The Cooperative File System (CFS) [14] authenticates updates to the root block by checking that the new block is signed with the same key as the old one, providing an appropriate level of file security for a read-only file system.
Figure 4.2: A DHT ring. Object O has Node B as its root node, while Object P has Node A as root.
4.8 Routing
Figure 4.3: Routing example: The first route is to a node numerically closer to the destination. The next two routes are to nodes with at least one more digit in the prefix shared with the destination. The last route is found in the leaf table.
All routing happens through the overlay network, which is gradually constructed as nodes join. Each node has its own numerical identifier which signifies its position in the DHT ring. Objects which are to be stored on the network are
assigned to the node whose numerical identifier follows the numerical identifier of the object. For two nodes, \( a \) and \( b \), and one object \( o \), with \( a < o \leq b \) holding, \( b \) is chosen as being responsible for storing \( o \) [6].
The process of routing an object \( O \) from node \( n_k \) to destination key \( k \) is as follows:
**Algorithm 4.1** Message routing
1: procedure ROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
2: Reply(LastNode, received, Seq)
3: Result ← LEAFTABLEROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
4: while Result = timeout do
5: Result ← LEAFTABLEROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
6: end while
7: if Result = delivered then
8: return delivered
9: end if
10: Result ← PREFIXROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
11: while Result = timeout do
12: Result ← PREFIXROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
13: end while
14: if Result = delivered then
15: return delivered
16: end if
17: end procedure
Algorithm 4.1: Message routing works like this:
1. Before any routing occurs, the last node in the chain, \( \text{LastNode} \), is notified about the successful receipt of the message (line 2).
2. The local leaf table is searched for two nodes, \( a \) and \( b \), such that \( a < d \leq b \) for destination \( d \). If such a node \( b \) is found, the message is delivered to \( b \) and the procedure ends (lines 3 to 9). This procedure is detailed in algorithm 4.2.
3. If no such node is found, the routing table is searched through for a node which shares a common prefix with \( \text{Dest} \) by at least one more digit than the current node. If such a node is found, the message is forwarded there, and the procedure ends. If no such node is found, the message is forwarded to a node that is numerically closer to \( \text{Dest} \) than the current node (lines 10 to 16).
The Leaf table routing procedure (algorithm 4.2) passes the message on to a suitable leaf node. If no reply is received within the set time, the leaf node is deemed as failed, and the procedure will remove the leaf node from the leaf table, and return \( \text{timeout} \) to the `Route` procedure in order to let it know that it should try again.
Algorithm 4.2 Leaf table routing
1: procedure LEAFTABLEROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
2: PreviousNode ← null
3: for all LeafNode ← LeafTable do
4: if PreviousNode = null then
5: PreviousNode ← LeafNode
6: else if PreviousNode < Dest ≤ LeafNode then
7: Reply ← DELIVER(SourceNode, LeafNode, Seq, Op, Args)
8: if Reply = false then
9: REMOVEFAILEDLEAFNODE(LeafTable, LeafNode)
10: return timeout
11: else
12: return delivered
13: end if
14: else
15: PreviousNode ← LeafNode
16: end if
17: end for
18: return notfound
19: end procedure
Algorithm 4.3 Prefix routing
1: procedure PREFIXROUTE(SourceNode, LastNode, Dest, Seq, Op, Args)
2: PrefixClosestNode ← LocalNode
3: LongestPrefixLength ← SHAREDPREFIXLENGTH(LocalNode, Dest)
4: NumericallyClosestNode ← LocalNode
5: for all Node ← RoutingTable do
6: NodePrefixLength ← SHAREDPREFIXLENGTH(Node, Dest)
7: if NodePrefixLength > LongestPrefixLength then
8: LongestPrefixLength ← NodePrefixLength
9: PrefixClosestNode ← Node
10: else if NodePrefixLength = LongestPrefixLength & |Node − Dest| < |NumericallyClosestNode − Dest| then
11: NumericallyClosestNode ← Node
12: end if
13: end for
14: if PrefixClosestNode ≠ LocalNode then
15: TargetNode ← PrefixClosestNode
16: else
17: TargetNode ← NumericallyClosestNode
18: end if
19: Reply ← FORWARD(SourceNode, TargetNode, Dest, Seq, Op, Args)
20: if Reply = false then
21: REMOVEFAILEDNODE(RoutingTable, TargetNode)
22: return timeout
23: else
24: return delivered
25: end if
26: end procedure
Algorithm 4.3: Prefix routing attempts to find a node that shares at least one more prefix digit with the destination, Dest. It continually records the node that is numerically closest to Dest, NumericallyClosestNode, so that when no node with a longer common prefix is found, the message can be routed there instead. This procedure, like the leaf table routing procedure, will remove a node if it does not respond, and will return a timeout status when that happens, letting the Route procedure know that it should try again.
Due to the circular nature of the DHT ring, finding a closer node will always succeed, since it is possible to walk around the ring towards the destination.
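The shared-prefix comparison at the heart of this scheme can be sketched as follows; node IDs are treated as fixed-length hex strings, and the numeric-distance fallback ignores wrap-around for brevity, both of which are simplifying assumptions made only for this illustration.

```python
def shared_prefix_length(a: str, b: str) -> int:
    """Number of leading digits two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(local_id: str, dest: str, routing_table: list[str]) -> str:
    """Prefer a node that extends the common prefix with dest by at least
    one digit; otherwise fall back to the numerically closest known node."""
    best, best_len = local_id, shared_prefix_length(local_id, dest)
    closest = local_id
    for node in routing_table:
        if shared_prefix_length(node, dest) > best_len:
            best, best_len = node, shared_prefix_length(node, dest)
        if abs(int(node, 16) - int(dest, 16)) < abs(int(closest, 16) - int(dest, 16)):
            closest = node
    return best if best != local_id else closest

# Hypothetical 4-digit hex IDs.
print(next_hop("a1f0", "a1c9", ["a1c2", "b003", "a2ff"]))  # -> "a1c2"
```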
4.9 Joining and departing
This mechanism is used for all messages sent over the DHT ring. Joining the network simply means sending a join message with the ID of the joining node as the destination. Given two successive nodes \( n_a \) and \( n_b \) and a joining node \( n_j \) whose ID is in the keyspace of \( n_b \), \( n_b \) will eventually receive the join message and will place \( n_j \) as its predecessor. It will then pass the join message along to its current predecessor, \( n_a \), which has \( n_b \) in its leaf set and therefore knows to add \( n_j \) as its successor.
An example of object ownership with two objects and two nodes can be seen in figure 4.2.
In order to join the network, a node must know of at least one previously existing node. This node may be retrieved from a list of frequently-active nodes, such as the company web server, or it may be located by scanning the network for appropriate resources, which requires only a network connection.
4.10 Controlled departure
When a node wishes to depart, it sends a departure message to all of its leaf nodes.
**Algorithm 4.4** Node departure: departing node
1: procedure DEPART(LeafNodes, StoredObjects)
2: for all Object ← StoredObjects do
3: Reply ← SENDMESSAGE(Successor, replicate_request, Object)
4: end for
5: for all Node ← LeafNodes do
6: Reply ← SENDMESSAGE(Node, self_depart, "")
7: end for
8: end procedure
First, the departing node asks its successor to replicate all of its data, since its position in the ring relative to the departing node will make it responsible for those objects once the departing node leaves. Then, the departing node sends self_depart messages to its leaf nodes in order to make them remove it from their leaf tables.
**Algorithm 4.5** Node departure handler
1: procedure DEPARTHANDLER(SourceNode, LeafNodes)
2: for all Node ← LeafNodes do
3: if Node.ID ≠ SourceNode then
4: NewLeafNodes[Node.ID] ← Node
5: end if
6: end for
7: LeafNodes ← NewLeafNodes
8: end procedure
Upon receipt of the self_depart message from one of its leaf nodes, say $n_d$, the receiving node, say $n_r$, will then remove that node from its leaf set. If $n_d$ is the predecessor of $n_r$, then $n_r$ would already have received replication requests from $n_d$ regarding all of its stored objects. As such, $n_r$ is now ready to serve requests for objects previously on $n_d$.
4.11 Node failure
There is also a possibility that a node will stop responding to requests. In this case, the unexpected departure will eventually be detected by other nodes during an attempt to route a message through this node, since each node expects a received message back within a given timeout period as a confirmation of receipt in routing (see section 4.8: Routing). When a node detects this, the non-responsive node is removed from its leaf table or routing table, whichever is applicable.
4.12 Local storage
Each node will store file data and metadata for files. As with local file systems, files are stored in a hierarchical manner, with directories being a collection of files, and the root directory containing multiple files and directories.
After a file has been requested and downloaded to the local drive, it remains there, effectively creating a transparent cache, as with Andrew FS (AFS) [11]. Since all files in the network are stored on the participating nodes, this solution scales very well, as the total storage space increases with each joining node. However, space must be put aside to maintain redundancy, and to allow nodes to depart without losing data. This is discussed in section 4.13: Replication.
CFS stores and replicates inode and data blocks separately onto nodes [14].
4.13 Replication
Replication is without a doubt the most important part of this file system, since it would achieve neither redundancy nor decentralisation without it.
For each file in the file system, there is a set of nodes where each node contains a copy of the file. Such a set is called the replication set for that file. In order to maintain a sufficient level of replication, the replication set needs to be sufficiently large. If a node in a replication set departs, a new node will be chosen and included in that set.
The replication factor is an integer value chosen to be larger than or equal to two, and it may be adjusted depending on the amount of churn (nodes joining and departing) the network is expected to handle. A higher factor means more network overhead, since data needs to be replicated to a larger number of nodes. A lower factor means less overhead, but less tolerance for churn. A situation which may call for a higher replication factor is the departure of many nodes at the end of a working day; the replication factor may thus be increased in anticipation of such an event.
For any given replication factor \( rf \) and replication set \( rs \) with regards to file \( f \), these are actions taken by the network to ensure that \( |rs| = rf \).
- If \( |rs| < rf \), find a new node candidate and add it to \( rs \), ask it to replicate \( f \).
- If \( |rs| > rf \), remove a node from \( rs \) after instructing it to remove \( f \) from local storage.
The Cooperative File System (CFS) [14] handles replication by copying blocks onto the \( k \) succeeding nodes following the successor of the node containing the original object. The successor is responsible for making sure that the data is replicated onto the \( k \) succeeding nodes.
With regard to storage space efficiency, this scheme could be improved upon by allowing nodes other than the successors to be used. Kim and Chan-Tin write in their paper [13] about different allocation schemes which could prioritise certain nodes based on available storage space. One could replicate smaller files onto nodes where larger files would not fit.
Choosing candidate nodes may be done using four main allocation criteria, according to Kim and Chan-Tin [13], which vary in suitability for different scenarios.
- **random-fit** - replicate the object onto $rf$ random nodes.
- **first-fit** - replicate the object onto the $rf$ succeeding nodes (after the root node for $f$ with sufficient storage space available).
- **best-fit** - replicate the object onto the $rf$ nodes which have the smallest adequate storage space among the set of nodes.
- **worst-fit** - replicate the object onto the $rf$ nodes which have the largest remaining storage space among the set of nodes.
In theory, each individual node may use its own method of allocation, and additional criteria, such as file size, may be used to determine how replication sets are to be allocated. It would, however, be difficult to predict the allocation behaviour in such a network, and it may be desired that all nodes agree on the allocation scheme.
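A minimal sketch of these four criteria applied to a list of candidate nodes with known free space is given below; the node records, sizes, and the in-memory representation are illustrative assumptions, not part of the design.

```python
import random

def choose_candidates(nodes, object_size, rf, scheme="first-fit"):
    """nodes: list of (node_id, free_space) pairs in ring order, starting at
    the root node's successor. Returns rf node IDs able to hold the object."""
    fitting = [(nid, free) for nid, free in nodes if free >= object_size]
    if scheme == "random-fit":
        picked = random.sample(fitting, rf)
    elif scheme == "first-fit":
        picked = fitting[:rf]                                   # first rf successors that fit
    elif scheme == "best-fit":
        picked = sorted(fitting, key=lambda nf: nf[1])[:rf]     # smallest adequate space
    elif scheme == "worst-fit":
        picked = sorted(fitting, key=lambda nf: nf[1], reverse=True)[:rf]
    else:
        raise ValueError(scheme)
    return [nid for nid, _ in picked]

# Hypothetical ring with free space in megabytes.
ring = [("n1", 50), ("n2", 500), ("n3", 120), ("n4", 80)]
print(choose_candidates(ring, 100, rf=2, scheme="best-fit"))   # -> ['n3', 'n2']
```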

**Figure 4.6:** The replication relationships: the nodes in the replication set monitor the root node so that they can take action if it fails. The root node monitors the replication set in case one of the nodes fails, in which case it will choose a new candidate for the replication set.
Expanding on the CFS solution, each root node has a table over its associated objects and their replication sets. If a node is in a replication set for a file, it keeps an entry pointing towards the root node of the file. This allows for two-way monitoring, and efficient restoration in case of node failure (see fig. 4.6). This makes it possible for the root node itself to include a new node in the replication sets for its files upon receipt of a departure or crash message (for crash messages, see section 4.8: Routing, algorithms 4.2 and 4.3).
In case of root node failure, the remaining nodes will copy their data to the successor of the defunct root node, which will then take over the duties from the old root node.
In order to find candidate nodes for file replication, the root node looks in its leaf set and routing table for nodes manifesting the required traits. If none are found there, a message can be passed along the ring, selecting candidates along the way and decrementing a counter each time a candidate is found; the counter reaches zero when a sufficient number of candidates have been selected.
Intuitively, the first-fit replication heuristic is the easiest to implement, since it continuously jumps from node to node by means of each node passing a probe message along to its successor, eventually ending up where it started, at the root node. This message carries a counter as an argument, which is initially set to the number of candidates needed and decreased each time a receiving node satisfies the storage requirement, that is, the object fits. This search is bounded by \( O(N) \) for \( N \) nodes in the ring, in the case where the last candidate is the predecessor of the root node, which means the probe message has gone a whole lap around the ring. When the file system is sparsely populated in the beginning, this search is more likely to finish early, near the lower bound of \( \Omega(R) \), where \( R \) is the number of candidates sought.
As for the best-fit and worst-fit heuristics, a query limit may be imposed in order to limit the number of messages that are sent while querying for the nodes that best fit those criteria. Otherwise, the entire ring would have to be visited in order to find the most suitable nodes. If a limit, \( n \), is imposed, the algorithm stops once it has found \( n \) nodes that are physically able to accommodate the object, and chooses \( rf - 1 \) of them to satisfy the replication factor \( rf \), excluding the root node. The query starts at the successor of the root node and continues for at least \( n \) nodes, and possibly for all the nodes in the ring, depending on where the suitable nodes are located. With this limit in place, the query runs in the same amount of time as first-fit.
Algorithm 4.6 is a message handler for circular space queries, which is used for the first-fit, best-fit, and worst-fit storage allocation schemes.
**Algorithm 4.6** Handling storage space queries
1: procedure HANDLESPACEQUERY(RemNodes, SpaceRequired, Asker, MessageID)
2: if SpaceRequired ≤ LocalNode.SpaceAvailable then
3: REPLYMESSAGE(MessageID, Asker, space_available, LocalNode.SpaceAvailable)
4: if LocalNode.Successor ≠ Asker then
5: FORWARD(MessageID, LocalNode.Successor, space_query, RemNodes − 1, SpaceRequired)
6: end if
7: else
8: if LocalNode.Successor ≠ Asker then
9: FORWARD(MessageID, LocalNode.Successor, space_query, RemNodes, SpaceRequired)
10: end if
11: end if
12: end procedure
This procedure is executed by each node upon receipt of a space_query message. This message is sent to a node in order to find RemNodes nodes, among the receiving node and its successors, that are able to store an object of size SpaceRequired. For each receiving node, there are two possible cases:
- **case 1**: The object fits - The node sends a positive message to the querying node, informing it that it can indeed store an object of the requested size and reporting the remaining space in its local storage. The node then passes the message on to its successor with a decremented RemNodes, if the successor is not the asking node (lines 2 to 6).
- **case 2**: The object does not fit - In this case, the node passes the query message along to its successor, but with RemNodes left at its current value, if the successor is not the asking node (lines 7 to 11).
### 4.13.1 Files
Files are responsible for storing the data itself, and in this file system they are comprised of the file data structure, and the data chunks.
The role of the file data structure is to contain all the metadata associated with the file, such as the file name, access permission and ownership information. This structure will also contain links to the file chunks which comprise the data itself, and there is also a revision number for handling updates, as well as a field that contains the total file size.
The file chunks are pieces of the complete files, identified by a hash of their content.
The file data structure, as seen in figure 4.4, is referenced by its UID (unique identifier). The referrer can be any type of directory object, including the root node. Each file chunk has a unique identifier which is equal to the SHA1 hash of the chunk data. These identifiers are then stored in the chunk list of the file's data structure.
The identifier for the file structure is generated by concatenating the hashes of the file chunks, appending the full path of the file and the current UNIX time without separators, and then hashing the resulting string using SHA1. This identifier will be used to identify the file throughout its existence. Subsequent file updates do not change the file identifier, but merely increment the revision number. This is to avoid the recursive updates that would otherwise result from having to update the parent directory with the new file identifier, and then moving up the tree to perform the same updates, ending with the root directory. With this solution it is sufficient to just increment the revision number.
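A minimal sketch of this identifier scheme is shown below; the chunk size and the example path are arbitrary choices made for illustration only.

```python
import hashlib
import time

CHUNK_SIZE = 64 * 1024   # illustrative chunk size

def file_uid_and_chunks(path: str, data: bytes):
    """Split data into chunks, hash each with SHA-1, and derive the file UID
    from the concatenated chunk hashes, the full path and the UNIX time."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    chunk_ids = [hashlib.sha1(c).hexdigest() for c in chunks]
    uid_input = "".join(chunk_ids) + path + str(int(time.time()))
    uid = hashlib.sha1(uid_input.encode()).hexdigest()
    return uid, chunk_ids

uid, chunk_ids = file_uid_and_chunks("/home/alice/report.txt", b"x" * 200_000)
```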
Andrew FS (AFS) [11] caches entire files at once. Working with chunks allows for opening large files, and only fetching the required data as seeks occur on the file handle. This allows for more selective caching, with reduced space usage in the case where many large files are opened, read for a few bytes, and closed.
4.13.2 Directories
In this design, a directory is not much different from a file. The main difference is that a directory is a collection of files rather than file chunks, as can be seen in figure 4.5. Directory UIDs are generated in the same way that file UIDs are, although without hashing the files contained, since there are none upon creation. Thus, only the full path and UNIX time will be used.
4.14 Creating files
When a file is written for the first time, the following steps are performed:
**Algorithm 4.7** File creation and upload
1: procedure FileCreate(FileName, FileData)
2: ContainingDir ← StripFileName(FileName)
3: CreationTime ← GetUNIXTime
4: ChunkIterator ← 0
5: UID ← ””
6: file[size] ← 0
7: for all ChunkData, ChunkSize ← Split(FileData) do
8: ChunkHash ← SHA(ChunkData)
9: UID ← UID + ChunkHash
10: file[chunks][ChunkIterator] ← ChunkHash
11: file[size] ← file[size] + ChunkSize
12: STORELOCALLY(ChunkHash, ChunkData)
13: SENDMESSAGE(ChunkHash, ChunkData)
14: ChunkIterator ← ChunkIterator + 1
15: end for
16: UID ← SHA(UID + FileName + CreationTime)
17: file[name] ← FileName
18: file[ctime] ← CreationTime
19: file[revision] ← 0
20: STORELOCALLY(UID, file)
21: SENDMESSAGE(UID, file)
22: DirObjectUID ← GETOBJECTBYPATH(ContainingDir)
23: DirObject ← SENDMESSAGE(DirObjectUID, get)
24: if DirObject = false then
25: abort
26: end if
27: DirObject[files][UID] ← file
28: DirObject[revision] ← DirObject[revision] + 1
29: SENDMESSAGE(DirObjectUID, put, DirObject)
30: end procedure
Line 2 extracts the directory name from the path, and line 3 gets the current system time.
For each file chunk (line 7), the hash of that chunk is calculated (line 8). This hash is then appended to the UID of the created file (line 9), the chunk is inserted into the file data structure, and the file size is updated accordingly (lines 10 and 11). The chunk is then stored locally and sent as a message to the network (lines 12 and 13).
The full file name and path is then appended to the UID, along with the UNIX time (line 16). The file name and creation time are then stored in the file metadata object (lines 17 and 18). The revision number is set to 0, indicating a new object (line 19). This object is then stored locally and uploaded (lines 20 and 21).
As a last step, the file is added to the directory object, and the resulting directory object is uploaded again with its revision number increased to reflect the change (lines 27 to 29).
The receiving node is responsible for replicating the created object.
4.14.1 Analysis of algorithm 4.7
The amount of messages required to be sent are primarily dependent on the amount of chunks, which in turn is dependent on the chunk size and file size. For each chunk, there is one message sent to upload the chunk (line 13).
Except for the chunk uploads, the containing directory needs to be downloaded, updated and uploaded. This makes for another two messages, or four including replies. Furthermore, in order to download the containing directory, it is necessary to navigate the path from the root, through subdirectories, until we get to the containing directory. This work is done by the GetObjectByPath call on line 22, which will return the UID of the containing directory. This operation depends on the length of the path.
We thus have \( \text{NoChunks} \ast (\text{RepFactor} - 1) + 4 \ast (\text{RepFactor} - 1) + \text{PathLength} = (\text{RepFactor} - 1) \ast (4 + \text{NoChunks}) + \text{PathLength} \) messages to be sent, since all file chunks need to be replicated. \( (\text{RepFactor} - 1) \ast (4 + \text{NoChunks}) \) is probably larger than \( \text{PathLength} \), so the final estimate will be \( (\text{RepFactor} - 1) \ast (4 + \text{NoChunks}) \) messages for the entire operation, when replication is taken into account.
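As a purely hypothetical illustration, a file split into 8 chunks, stored with a replication factor of 3 in a directory 3 levels deep, would require roughly \((3 - 1) \ast (4 + 8) + 3 = 27\) messages, with the path traversal contributing only a small share of the total.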
Algorithm 4.8 Handling of file upload by root node
1: procedure UploadHandler(FileUID)
2: RemainingReplications ← RepFactor − 1
3: while RemainingReplications > 0 do
4: RepSet ← GetRepCandidates(RemainingReplications)
5: for all Node ← RepSet do
6: ReplicationResult ← SendMessage(Node, replicate_request, FileUID)
7: if ReplicationResult.status = success then
8: RemainingReplications ← RemainingReplications − 1
9: end if
10: end for
11: end while
12: end procedure
This procedure is run by any receiving node that becomes the root node for a new object. It calls the GetRepCandidates procedure to get RepFactor − 1 candidates, where RepFactor is the replication factor for the network. It then continuously tries to replicate the object onto these nodes, until RemainingReplications reaches zero, which means that the wanted number of copies exist.
The GetRepCandidates procedure may differ from network to network, but its goal is to return a given number of possible candidates that satisfy the replication criteria outlined in section 4.13: Replication.
On the receiving end of the message, the receiving node will handle the replication request.
Algorithm 4.9 Handling of replication request
1: procedure ReplicationHandler(SourceNode, Sequence, ReplicationObject)
2: Object ← SendMessage(ReplicationObject, get)
3: ReplyMessage(SourceNode, Sequence, replicated)
4: end procedure
4.14.2 Analysis of algorithms 4.8 and 4.9
The upload handler defined by algorithm 4.8 is tasked with finding replication candidates for the uploaded file. The theoretical upper bound for such a task is \(O(N)\) messages, where \(N\) is the number of nodes in the network. For practical reasons, this is seldom the case, however. As described in section 4.13: Replication, the GetRepCandidates function is unpredictable when it comes to the actual amount of messages that need to be sent, but a minimum of RepFactor − 1 nodes need to be contacted to satisfy the replication factor. It depends on which method is used to find suitable candidate nodes.
Algorithm 4.9, the replication handler, only sends two messages: one to ask the root node, identified by SourceNode, for the object to be replicated; it then acknowledges the replication by replying to the replicate_request message with a replicated message. Thus, the algorithm will only ever send two messages, which gives an $O(2) = O(1)$ bound on messages sent.
### 4.15 Locking files
As described in the requirements, all locks in this file system are advisory. In other words, the locking system will not enforce locks, and it is up to the software to make sure that locks are honoured. These locks are made to be compatible with the fcntl [16] system call in order to facilitate easy integration into UNIX-like operating systems. The locks are kept in a lock table on each node. In order to maintain a consistent lock state over the replication set for each file, the root node distributes lock changes to its replication set, preserving the current lock state for the file should the node fail. In a traditional UNIX file system, locks are held on a per-process basis. To maintain this in a distributed file system, we must be able to identify a unique process, which can reside on any node, making sure that any process ID is unique. To achieve this, the node ID is concatenated with the process ID number, creating a unique identifier, which is passed along when requesting a lock.
The fcntl system call defines locks for byte ranges, but for now these locks are for the whole file. Any inquiry on the lock status of a byte range will return the lock status for the file.
There exist four modes in which a file may be locked: unlk, rlk, wlk, and rwlk, which are unlocked, read locked, write locked, and read/write locked respectively.
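The sketch below illustrates the globally unique process identifier and a node-local lock table keyed by file UID; the hashing, naming, and example values are illustrative assumptions rather than part of the specification.

```python
import hashlib

LOCK_MODES = {"unlk", "rlk", "wlk", "rwlk"}   # unlocked, read, write, read/write

def process_uid(node_id: str, pid: int) -> str:
    """Concatenate node ID and process ID so the identifier stays unique
    across every node in the ring, then hash the result."""
    return hashlib.sha1(f"{node_id}{pid}".encode()).hexdigest()

# A node-local advisory lock table keyed by file UID.
lock_table = {}
lock_table["<file-uid>"] = {"process": process_uid("node-42", 1337), "mode": "wlk"}
```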
In order for Process $\text{PID}$ on node $\text{SourceNode}$ to establish a lock on a file $\text{FileUID}$, the steps shown in algorithm 4.10: Lock request will occur for the requesting node.
**Algorithm 4.10** Lock request
1: ProcessUID ← SHA(NodeID + PID)
2: reply ← SENDMESSAGE(FileUID, lock_request, ProcessUID, LockMode)
3: if reply = ok then
4: the lock is now considered held by ProcessUID ▷ advisory; recorded locally
5: end if
The corresponding algorithm for the receiving node is outlined in algorithm 4.11: The Lock Request Handler.
#### 4.15.1 Analysis of algorithms 4.10 and 4.11
Algorithm 4.10 is trivial, since it only generates a process UID and asks for the relevant lock, resulting in $O(1)$ messages.
The next algorithm, 4.11, on the other hand, is not trivial. It needs to propagate the lock changes to the replication set of the locked file, to facilitate a smooth failover in case the root node dies. Therefore it is upper-bounded by
Algorithm 4.11 The Lock Request Handler
1: procedure HANDLELOCKREQUEST(SourceNode, Sequence, FileUID, ProcessUID, LockMode, LockTable)
2: ExistingLock ← LockTable[FileUID]
3: if ExistingLock ≠ null then ▷ There is currently a lock held on this file.
4: if ExistingLock.ProcessUID = message.ProcessUID then
5: if LockMode ≠ ExistingLock.LockMode then
6: LockTable[FileUID].LockMode ← LockMode
7: REPLYMESSAGE(SourceNode, Sequence, lock_success)
8: for all Node ← GETREPSET(FileUID) do
9: SENDMESSAGE(Node, update_lock_table, FileUID)
10: end for
11: else
12: REPLYMESSAGE(SourceNode, Sequence, lock_exists)
13: end if
14: else
15: REPLYMESSAGE(SourceNode, Sequence, lock_denied)
16: end if
17: else ▷ No existing lock exists, we are free to lock.
18: LockTable[FileUID] ← (ProcessUID, LockMode)
19: REPLYMESSAGE(SourceNode, Sequence, lock_success)
20: for all Node ← GETREPSET(FileUID) do
21: SENDMESSAGE(Node, update_lock_table, FileUID)
22: end for
23: end if
24: end procedure
the size of the replication set for the given file, which is often \( \text{RepFactor} - 1 \), since the root node is not counted amongst the replication set. Algorithm 4.11 will therefore send approximately \( O(\text{RepFactor} - 1) \) messages.
4.16 Reading and seeking files
Reading and seeking files are implemented in the same way they are in a local file system. The only difference is that reaching the end of a chunk will prompt the download of the next one.
**Algorithm 4.12** File read
1: procedure READ(fd, length)
2: LeftToRead ← length
3: ReadBuffer ← ""
4: while LeftToRead > 0 & fd.filepos ≤ fd.length do
5: if fd.chunkpos + 1 > fd.chunk.length then ▷ Time to move to the next chunk?
6: fd.chunk ← fd.chunk.next
7: fd.chunkpos ← 1
8: end if
9: ReadBuffer ← ReadBuffer + fd.currentbyte
10: fd.chunkpos ← fd.chunkpos + 1
11: fd.filepos ← fd.filepos + 1
12: LeftToRead ← LeftToRead − 1
13: end while
14: return ReadBuffer
15: end procedure
The fd structure represents the file descriptor. It contains everything that can be read from the file data structure (figure 4.4), as well as some state information, which is maintained locally on the node that holds the file descriptor. This extra state information is as follows:
- filepos - the overall seek position in the file.
- chunkpos - the seek position within the current chunk.
- currentbyte - the byte at the current seek position.
- chunk - information regarding the current chunk.
The chunk structure contains:
- next - a pointer to the chunk that immediately follows this one.
- length - the length, in bytes, of the current chunk.
The major advantage of this approach compared to the whole-file-cache approach of Andrew FS [11] is that chunks are downloaded on-demand, potentially saving space when working with large files.
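A minimal sketch of a descriptor that fetches chunks lazily is given below; fetch_chunk stands in for the network request to a chunk's root node and is hypothetical, and the fixed chunk size is an assumption made for brevity.

```python
CHUNK_SIZE = 64 * 1024   # must match the writer's chunk size; illustrative

class ChunkedFile:
    """File descriptor that downloads each chunk only when the read position
    first enters it, mirroring algorithm 4.12."""

    def __init__(self, chunk_ids, size, fetch_chunk):
        self.chunk_ids = chunk_ids      # ordered chunk identifiers from the file metadata
        self.size = size                # total file size, also stored in the metadata
        self.fetch_chunk = fetch_chunk  # callable: chunk_id -> bytes (network fetch)
        self.cache = {}                 # transparent local cache, as with AFS
        self.filepos = 0

    def read(self, length):
        out = bytearray()
        while length > 0 and self.filepos < self.size:
            idx, offset = divmod(self.filepos, CHUNK_SIZE)
            cid = self.chunk_ids[idx]
            if cid not in self.cache:            # crossing into a new chunk: download it
                self.cache[cid] = self.fetch_chunk(cid)
            chunk = self.cache[cid]
            take = min(length, len(chunk) - offset)
            out += chunk[offset:offset + take]
            self.filepos += take
            length -= take
        return bytes(out)

    def seek(self, pos):
        # Seeking only moves the position; no data is fetched until the next read.
        self.filepos = pos
```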
Forward seeking is equivalent to reading, except no data is returned.
4.16.1 Analysis of algorithm 4.12
Each read operation may result in $0..n$ messages, with associated replies, where $n$ is the number of chunks in the file. A new chunk is downloaded if it does not exist locally, and the download is then triggered by reading through the end of a chunk, prompting the fetching of the next one.
If only a few bytes are read at a time, either zero or one download requests will be performed, but if more bytes than the chunk size are read, then it is even possible that two download requests are made as part of the read. In conclusion, it depends on which sort of file is being read, but up to one download request per read seems like a good guess for simple text file operations.
4.17 Deleting files
Deleting a file entails removing it from storage and from the containing directory structure. Upon receiving a delete request, the root node of the file will do the following:
**Algorithm 4.13** Delete handler
1: procedure DELETEHANDLER(FileUID)
2: DirUID ← GetParent(FileUID)
3: Dir ← SendMessage(DirUID, get)
4: Dir.files[FileUID] ← null
5: Dir.revision ← Dir.revision + 1
6: SendMessage(DirUID, put, Dir)
7: File ← SendMessage(FileUID, get)
8: for all chunk ← File.chunks do
9: SendMessage(chunk.id, delete)
10: end for
11: RepSet ← QueryRepSetForUID(FileUID)
12: for all RepNode ← RepSet do
13: SendMessage(RepNode, unreplicate, FileUID)
14: end for
15: end procedure
First, the parent directory of the deleted object is established (line 2). This directory object is then fetched (line 3). The deleted object is then dereferenced from the directory object (line 4), and the revision counter of the directory object is increased to reflect the change (line 5), and the directory is then uploaded (line 6).
After this step, the chunks of the deleted file, and the file structure itself, are still stored on the network, even if they are not referenced to. To remedy this, the file chunks will have to be removed from their root nodes and replication sets, and thus the delete message will be sent for each of the chunks, and the unreplicate message will be sent to the replication set of the deleted file.
First, the file object is downloaded (line 7). Then, a delete message is sent to the root node of each chunk (lines 8 to 10). Following this, the replication set for the file data object is fetched from the replication table; QueryRepSetForUID simply extracts all nodes that replicate the given object (line 11). Finally, these nodes are asked to stop replicating the file object (lines 12 to 14).
4.17.1 Analysis of algorithm 4.13
In order for the root node to delete a file, it needs to advise the file’s replication set of the deletion. This applies to both the file data structure and the file chunks. Thus, it is safe to say that approximately $O(NoChunks \times RepFactor)$ messages, the number of chunks times the replication factor, are sent for a file deletion.
Chapter 5
Differences from existing solutions
5.1 CFS
The most obvious comparison is to the Cooperative File System, or CFS [14]. The major difference is that the proposed solution is read-write, while CFS is read-only. Otherwise, the file system structure is strikingly similar. Directory blocks point to inode blocks (or file blocks), which in turn point to data blocks. Routing-wise, CFS uses the Chord [15] lookup service, which scales logarithmically in regards to the lookup cost over the number of nodes.
CFS uses DHash as a distributed storage layer, in addition to the routing service provided by Chord. The proposed design uses a single DHT for both routing and replication, requiring just one overlay network and the messages that facilitate these tasks, which makes for a simpler approach.
5.2 AndrewFS
AndrewFS [11] caches whole files locally in order to access them, while the proposed solution only stores the necessary chunks, which may well be all of them in some cases. This provides better local storage efficiency.
While AndrewFS relies on workstations for caching, the primary storage still occurs on dedicated servers, while the proposed solution relies on the workstations for metadata, primary storage, and replication. This difference makes the proposed solution the only possible choice of the two for a true server-less environment.
5.3 Freenet
Freenet [5] is another solution which uses key-based routing. Freenet emphasizes anonymous routing and storage, which are goals not required by this design. Routing-wise, Freenet uses its own protocol, which places a TTL (time-to-live) on its messages; this means that any given resource may never be reached because of a large hop distance. This is a desirable compromise for large-scale networks, but it makes little sense otherwise. The circular nature of the DHT keyspace makes it possible to guarantee successful routing, with a worst case of $O(N)$ for $N$ nodes, the cost of following each node’s successor all the way around the keyspace.
### 5.4 OCFS2
OCFS [3] is a file system with a focus on local storage and clustering. It is designed to avoid abstraction levels and to integrate with the Linux kernel. It does use replication, but it is not based on a DHT, and it does not support a dynamic network structure; rather, it relies on a configuration file to describe each file system and its participating nodes. Updating and distributing such a configuration file is impractical in a dynamic network setting. It also defines fixed points in the network, machines that need to be always on for the file system to be accessible. OCFS2 also supports only 255 nodes in total [4]. Furthermore, OCFS2 does not encrypt its local storage, which makes replication impossible without compromising file security. The proposed solution encrypts all objects, file data and metadata, using the public key of the creator, to be decrypted by the corresponding private key.
Chapter 6
Future extensions
6.1 Key management
This solution lacks a method of reliably storing and distributing the cryptographic keys associated with this file system. Some method of distributing private keys to users must be established in a secure manner for this solution to be effective security-wise.
6.2 File locking
The file locking scheme used in this design is thought to be sufficient for its intended purpose, but it is a naive approach which fails to take into account some distinct use cases, such as cases where mandatory locking is required. In order to expand the usability of this file system in other environments than UNIX-like systems, a new file locking scheme may be necessary to achieve local file system semantics on those systems.
6.3 Replication
Algorithms for choosing the replication set for a given file are heavily dependent on the type of data which will be contained within the file system, and developing an efficient algorithm for allocating replication sets is a challenge in itself. Without a doubt, several advances may be made in this area.
Chapter 7
Thoughts on implementation
While the work presented in this report is purely theoretical, an implementation is certainly possible, as the concepts have been established, and pseudocode has helped define the protocol.
7.1 Programming languages
Erlang [21] would be a suitable programming language to use for implementing most of this file system, since it has a high level of fault-tolerance built-in, and is effective at leveraging multi-threaded approaches to problems, such as spawning a new process for each incoming request.
In order to interface with the operating system, a mid-level language, such as C or C++ would be preferred, since most operating system kernels are written in such a language. The FUSE (File System in Userspace) system [20] is written in C, and is a common way to handle file systems in userspace.
7.2 Operating System interfaces
As mentioned under Programming languages, FUSE [20] is a common way to make file systems function as userspace processes. This provides great freedom with regards to size restraints, for instance, and allows for the init system to handle processes, since they are part of the normal userspace processes. A kernel module called fuse.ko is responsible for interfacing with the kernel.
7.3 Implementation structure
In an implementation using Erlang, C and FUSE, the code will be split into essentially two parts. One part will be written in Erlang and handle everything that has to do with DHT, local storage, locks, key management and permissions. The other part will be written in C and use the FUSE libraries to provide callbacks to system calls such as open(), close(), read(), write(), fseek() etc., so that the operating system will know how to use the file system.
The C part will communicate with the Erlang part to provide file chunks, which can then be accessed by normal read() system calls on the FUSE side. A call to write() will cause the C part to create and hash chunks, and tell the Erlang part to upload them to the network and handle the metadata.
These calls are only examples. A complete implementation would therefore implement all system calls that a FUSE file system may be expected to handle, but even a partial implementation may be sufficient, depending on the application.
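As a rough, hypothetical illustration of the C side described above (not part of the original design), a read() callback using the high-level FUSE API could simply forward requests to the Erlang part through a bridge function. The name erlang_fetch_chunk() is an assumption introduced here for illustration and does not belong to any existing library.

```cpp
// Minimal sketch of the FUSE-facing C part; erlang_fetch_chunk() is a
// hypothetical bridge to the Erlang node and must be provided elsewhere.
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/types.h>

// Ask the Erlang part for `size` bytes of `path` starting at `offset`;
// returns the number of bytes written into `buf`, or a negative errno.
int erlang_fetch_chunk(const char *path, char *buf, size_t size, off_t offset);

static int dfs_read(const char *path, char *buf, size_t size,
                    off_t offset, struct fuse_file_info *fi)
{
    (void) fi;
    // The Erlang part resolves the chunk hashes via the DHT, decrypts the
    // data and hands the requested byte range back to the C part.
    return erlang_fetch_chunk(path, buf, size, offset);
}

static struct fuse_operations dfs_ops;

int main(int argc, char *argv[])
{
    dfs_ops.read = dfs_read;  // open(), write(), getattr(), ... omitted
    return fuse_main(argc, argv, &dfs_ops, NULL);
}
```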
Chapter 8
Conclusion
It is the author’s belief that this report has sufficiently demonstrated the viability of a distributed, scalable file system intended for use in a heterogeneous, class-less environment, using workstations as storage nodes. Certainly, there are improvements to be made, and a couple of those have been pointed out in chapter 6: Future Extensions. A real-world test would be the best proof of concept, and as chapter 7: Thoughts on implementation describes, a bilingual implementation combining the suitability of Erlang for highly distributed and parallelised processes with the interoperability and practicality of C is a viable candidate. As with many things, the choice of languages is very much up for debate, as well as many other aspects of the file system design.
It is the author’s hope that this report will inspire a debate that will question the role of the file server in a modern office or school environment.
Bibliography
|
{"Source-Url": "http://uu.diva-portal.org/smash/get/diva2:623331/FULLTEXT01.pdf", "len_cl100k_base": 15388, "olmocr-version": "0.1.49", "pdf-total-pages": 43, "total-fallback-pages": 0, "total-input-tokens": 88858, "total-output-tokens": 18703, "length": "2e13", "weborganizer": {"__label__adult": 0.0002522468566894531, "__label__art_design": 0.00037384033203125, "__label__crime_law": 0.00025582313537597656, "__label__education_jobs": 0.0010519027709960938, "__label__entertainment": 0.00011432170867919922, "__label__fashion_beauty": 0.00011545419692993164, "__label__finance_business": 0.0005598068237304688, "__label__food_dining": 0.00027108192443847656, "__label__games": 0.0005297660827636719, "__label__hardware": 0.0018415451049804688, "__label__health": 0.00023424625396728516, "__label__history": 0.00033855438232421875, "__label__home_hobbies": 9.053945541381836e-05, "__label__industrial": 0.000354766845703125, "__label__literature": 0.00029730796813964844, "__label__politics": 0.0002505779266357422, "__label__religion": 0.0003170967102050781, "__label__science_tech": 0.08172607421875, "__label__social_life": 9.042024612426758e-05, "__label__software": 0.041595458984375, "__label__software_dev": 0.86865234375, "__label__sports_fitness": 0.00015687942504882812, "__label__transportation": 0.0004775524139404297, "__label__travel": 0.0002015829086303711}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 70644, 0.04591]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 70644, 0.41793]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 70644, 0.88514]], "google_gemma-3-12b-it_contains_pii": [[0, 67, false], [67, 67, null], [67, 912, null], [912, 912, null], [912, 3252, null], [3252, 4623, null], [4623, 6888, null], [6888, 7077, null], [7077, 9463, null], [9463, 11548, null], [11548, 13199, null], [13199, 15378, null], [15378, 16764, null], [16764, 18726, null], [18726, 18906, null], [18906, 21321, null], [21321, 22657, null], [22657, 24551, null], [24551, 25137, null], [25137, 27609, null], [27609, 29586, null], [29586, 32070, null], [32070, 34434, null], [34434, 36618, null], [36618, 38612, null], [38612, 41525, null], [41525, 44332, null], [44332, 45954, null], [45954, 48185, null], [48185, 50460, null], [50460, 53151, null], [53151, 54343, null], [54343, 56728, null], [56728, 58882, null], [58882, 59669, null], [59669, 61592, null], [61592, 62629, null], [62629, 63716, null], [63716, 65610, null], [65610, 66002, null], [66002, 66950, null], [66950, 68612, null], [68612, 70644, null]], "google_gemma-3-12b-it_is_public_document": [[0, 67, true], [67, 67, null], [67, 912, null], [912, 912, null], [912, 3252, null], [3252, 4623, null], [4623, 6888, null], [6888, 7077, null], [7077, 9463, null], [9463, 11548, null], [11548, 13199, null], [13199, 15378, null], [15378, 16764, null], [16764, 18726, null], [18726, 18906, null], [18906, 21321, null], [21321, 22657, null], [22657, 24551, null], [24551, 25137, null], [25137, 27609, null], [27609, 29586, null], [29586, 32070, null], [32070, 34434, null], [34434, 36618, null], [36618, 38612, null], [38612, 41525, null], [41525, 44332, null], [44332, 45954, null], [45954, 48185, null], [48185, 50460, null], [50460, 53151, null], [53151, 54343, null], [54343, 56728, null], [56728, 58882, null], [58882, 59669, null], [59669, 61592, null], [61592, 62629, null], [62629, 63716, null], [63716, 65610, null], [65610, 66002, null], 
[66002, 66950, null], [66950, 68612, null], [68612, 70644, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 70644, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 70644, null]], "pdf_page_numbers": [[0, 67, 1], [67, 67, 2], [67, 912, 3], [912, 912, 4], [912, 3252, 5], [3252, 4623, 6], [4623, 6888, 7], [6888, 7077, 8], [7077, 9463, 9], [9463, 11548, 10], [11548, 13199, 11], [13199, 15378, 12], [15378, 16764, 13], [16764, 18726, 14], [18726, 18906, 15], [18906, 21321, 16], [21321, 22657, 17], [22657, 24551, 18], [24551, 25137, 19], [25137, 27609, 20], [27609, 29586, 21], [29586, 32070, 22], [32070, 34434, 23], [34434, 36618, 24], [36618, 38612, 25], [38612, 41525, 26], [41525, 44332, 27], [44332, 45954, 28], [45954, 48185, 29], [48185, 50460, 30], [50460, 53151, 31], [53151, 54343, 32], [54343, 56728, 33], [56728, 58882, 34], [58882, 59669, 35], [59669, 61592, 36], [61592, 62629, 37], [62629, 63716, 38], [63716, 65610, 39], [65610, 66002, 40], [66002, 66950, 41], [66950, 68612, 42], [68612, 70644, 43]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 70644, 0.01786]]}
|
olmocr_science_pdfs
|
2024-11-28
|
2024-11-28
|
34197cbdf0341aa530fa0fa0c9445bb0f71fda37
|
SIFRAN: Evaluating IoT Networks with a No-Code Framework based on ns-3
Samir Si-Mohammed, Malasri Janumporn, Thomas Begin, Isabelle Guérin Lassous, Pascale Vicat-Blanc
To cite this version:
Samir Si-Mohammed, Malasri Janumporn, Thomas Begin, Isabelle Guérin Lassous, Pascale Vicat-Blanc. SIFRAN: Evaluating IoT Networks with a No-Code Framework based on ns-3. Latin America Networking Conference, Oct 2022, Armenia, Colombia. 10.1145/3545250.3560845 . hal-03822142
HAL Id: hal-03822142
https://hal.science/hal-03822142
Submitted on 20 Oct 2022
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
SIFRAN: Evaluating IoT Networks with a No-Code Framework based on ns-3
Samir Si-Mohammed
Univ Lyon, ENS de Lyon, Université Claude Bernard Lyon 1, Inria, CNRS
Stackeo
Lyon, Rhône-Alpes, France
samir.si-mohammed@ens-lyon.fr
Malasri Janumporn
Univ Lyon, ENS de Lyon, Université Claude Bernard Lyon 1, Inria, CNRS
Lyon, Rhône-Alpes, France
malasri.janumporn@etu.univ-lyon1.fr
Thomas Begin
Univ Lyon, ENS de Lyon, Université Claude Bernard Lyon 1, Inria, CNRS
Lyon, Rhône-Alpes, France
thomas.begin@ens-lyon.fr
Isabelle Guérin Lassous
Univ Lyon, ENS de Lyon, Université Claude Bernard Lyon 1, Inria, CNRS
Lyon, Rhône-Alpes, France
isabelle.guerin-lassous@ens-lyon.fr
Pascale Vicat-Blanc
Stackeo
Lyon, Rhône-Alpes, France
pascale@stackeo.io
ABSTRACT
With the tremendous rise of the Internet of Things over the recent years, ns-3 has consolidated its position in terms of popularity among the research community. Indeed, it has become one of the most used open-source network simulators, with an important community of users and contributors. However, the growth of this community is constrained by the networking and programming skills required to use ns-3. This reduces the ns-3 traction within the industrial community, since many IoT specialists lack these skills. In this paper, we present SIFRAN, a no-code framework for IoT network simulation using ns-3. The main objective of SIFRAN is to extend the use of ns-3 to a community of non-programmers by making them able to benefit from its features without writing a single line of code, and to encourage network experts to contribute to this effort. We show how the framework can be used via a simple web interface for simulating Wi-Fi- and LoRaWAN-based IoT setups, and how the programmers’ community of ns-3 can contribute to the framework by adding more IoT network technologies.
CCS CONCEPTS
• Networks → Network simulations.
1 INTRODUCTION
The Internet of Things, or IoT, defined as the convergence of the digital and physical worlds, has become a fundamental trend underlying the digital transformation of enterprises and is becoming the beating heart of their operations. A wide range of connectivity options are now offered to IoT users. New low-power communication technologies like LoRaWAN or Sigfox have emerged. Remarkable advances have been made in network technology and protocols to serve an increasing number of IoT use cases. These technologies differ from each other, whether in terms of inherent parameters or in terms of their targeted applications. The variety of options can be seen as an opportunity to widen the range of possible IoT use cases. However, it often makes it hard for researchers and industrial companies to make the right technology choice and configuration settings, yet these are crucial decisions. Indeed, under- or over-sizing a network has to be avoided to ensure profitability. A good trade-off between cost and QoS has to be found. To address this problem, simulation appears as a key enabler for IoT network technology selection. Indeed, it can provide good insights about the performance of a technology at low cost since no real IoT material is needed.
The network simulator 3, commonly called ns-3, is a discrete-event simulator that has been developed to provide an open and extensible network simulation platform for networking research and education. Due to its highly available documentation and the important set of network technologies it supports, it has become one of the most used simulators in the network community. However, ns-3 is targeting programmers rather than IoT architects and solution vendors. This is due to the fact that it requires network expertise and C++ programming skills, while industrial teams generally lack these combined capabilities. Therefore, having a no-code approach for using ns-3 would be an efficient way of reaching an important community of IoT professionals and making them able to benefit from ns-3 features. No-code [6] is becoming very popular in IoT, as it empowers manufacturers and operation managers to program their IoT applications while reducing the time and expertise needed. A no-code approach implies a careful abstraction effort to hide the technical details while enabling useful projections. In the case of network simulation, the abstraction must also allow the integration of a large diversity of network technologies in the same framework without losing precision.
For that reason, and in order to extend the use of ns-3 to a community of non-programmers, we propose in this work a no-code framework for users to set up and run ns-3 IoT network simulations without writing a single script. We believe it can, on the one hand, expand the community of users and accelerate their IoT journey and, on the other hand, encourage contributions to ns-3 towards further inclusion of more IoT technologies for industrial purposes. The contributions of this work are the following:
- An intuitive web application to setup and run simulations by selecting and tuning scenario parameters.
- A set of relevant KPIs (Key Performance Indicators) for IoT simulations and their automatic calculation.
- A set of generic templates of ns-3 script for IoT use cases, and their implementation for Wi-Fi and LoRaWAN technologies, and guidelines on how they can be modified for other IoT networks.
The remainder of this paper is organized as follows: Related works are discussed in Section 2. The problem formulation is established in Section 3, and an overview of our framework is given in Section 4. Section 5 first describes the developed templates, and then provides some integration guidelines for further contributions. A discussion is provided in Section 6, while the conclusion and the future works are given in Section 7.
2 RELATED WORKS
The no-code [6] initiative has always attracted important interest in the research and development community, because it encourages contributions even from people lacking programming skills. For instance, in the software development field, since workers' demand for mobile applications has grown faster than what IT can deliver, [1] propose an environment where non-developers in charge of business development can develop apps and websites for their work. In [11], the authors propose a low-code platform for automating business processes in manufacturing; indeed, they state that the use of low-code can represent a significant step forward in creating business applications, especially with the rapidly growing number of companies.
In IoT, no-code is becoming very popular, as it empowers manufacturers and operation managers to program their IoT applications while reducing the time and expertise needed. Some initiatives are therefore going in that direction. For example, [9] proposes an end-to-end low-code mechanism for managing the relationship between heterogeneous hardware sensors and an IoT platform. The objective of that mechanism is to overcome the lack of programming experience which burdens the widespread adoption of IoT. On the other hand, [6] affirms that IoT requires system developers to have a deep understanding of the individual devices’ functionalities to achieve a successful integration. Thus, they propose a method to create virtual instances of IoT devices based on their technical description to act as the real device, usable without programming experience. This can also be seen as a form of simulation. In addition to that, several no-code tools exist for IoT development (Node-RED\(^1\), AtmosphericIoT\(^2\), Simplifier\(^3\), etc.) according to [5].
SIFRAN differs from these works in that it uses simulation through an already well-established tool in IoT (ns-3), with the purpose of making the non-programmer community able to run IoT simulations and gather performance indicators from them in a very easy way.
3 PROBLEM FORMULATION
The objective of this section is to propose a comprehensible way of defining an IoT scenario and the targeted output metrics that a user wants to gather using simulation. To do so, we need to clearly state what must be taken into consideration in the simulator when running an IoT network simulation: the input parameters that define a scenario, and the output metrics that need to be gathered for evaluating the performance. Both of these will then be integrated into an ns-3 script that we call a template. To illustrate this, we consider the case wherein an IoT solution provider, offering a smart water management service based on LoRaWAN, needs to deploy a private network for a customer. One of the main questions that could be asked in this case is how robust the network will be, considering the radio parameters and the topology (number of sensors, their location, etc.). In other words, the IoT company has to know what percentage of packets will successfully be transmitted, keeping in mind that the minimum required packet delivery rate for such an application is typically around 90% [3]. A way of answering the question would be to deploy the network and evaluate its performance. However, knowing that one LoRaWAN gateway can handle at least dozens of sensors, deploying them to answer the question can turn out to be very costly. Thus, using simulation instead would make it possible to answer that question while lowering costs (such an application has been studied in [7] using ns-3). As we can see, two aspects need to be defined for running such an IoT scenario simulation: the scenario description, in terms of traffic and topology (e.g., the number, location and data rate of smart water sensors in the previous example), and the KPIs that need to be analyzed and will give insights to answer the question (e.g., the packet delivery ratio). We describe these two aspects in the following.
3.1 Scenario Description
A scenario is defined by a list of parameters representing the network topology, the considered IoT network technology and the traffic specifications. They can be divided as follows: (1) the number of end-devices and their location, (2) the number of gateways and their location, (3) the IoT network technology defined by its physical and mac layers, (4) the low-level parameters related to the radio channel and the propagation model, the frequency and bandwidth of the radio channel and (5) the traffic type and workload (defined by the packet size and the inter-packets period).
The IoT traffic types can be classified according to (i) their direction: upstream (from end-devices to gateways or the cloud) or downstream (from the cloud or gateways to end-devices) and (ii) their profile: periodic or stochastic. We call periodic the traffic with
\(^{1}\)https://nodered.org/
\(^{2}\)https://atmosphericiot.com/
\(^{3}\)https://simplifier.io/en/
a fixed data rate, while the traffic with a variable rate is referred to as stochastic. Although some applications have bidirectional traffic, the majority of IoT applications have upstream traffic. Figure 1 depicts a classical IoT system architecture where the end-devices can either be sensors or actuators, depending on the traffic direction, upstream or downstream respectively.

**3.2 KPIs**
We propose to gather five metrics, which together provide a fair representation of the performance of an IoT network technology for a given scenario. These parameters are: (i) packet throughput, (ii) packet latency, (iii) packet delivery, (iv) energy consumption and (v) battery lifetime.
Packet throughput, packet latency and packet delivery are common parameters in network performance evaluation. Packet throughput represents the data rate delivered to each IoT device or gateway. Packet latency is the time a packet takes to transit from its source to its destination. Packet delivery is the ratio (percentage) of successfully received packets out of all the packets sent.
Energy is extremely important in the IoT industry where end-devices are often equipped with a battery, and thus have a limited power supply. Energy consumption represents the amount of energy consumed during a period of time. It can be measured for the overall network or on each IoT end-device, in joules. The battery lifetime gives an indication on the IoT device’s autonomy without recharging its battery.
**4 FRAMEWORK OVERVIEW**
In this section, we present our no-code simulation framework. We begin by describing its architecture, then we show how to use it through a web platform by providing an example of application.
**4.1 Architecture**
The architecture of our framework (Figure 2) consists of an ns-3 environment where the simulations are executed, a web platform which serves as a user interface for entering scenario parameters and displaying KPIs, and a database to store both scenarios and KPIs. We describe each component in what follows:
- **Web platform**: It is used to enter the scenario parameters via a form, specific to each scenario traffic type and IoT technology. The form contains a complete list of the traffic-related parameters such as the packet size, the distance between gateways and end-devices, the data rate, etc., and low-level parameters such as channel bandwidth, transmitting power, etc. A validation of the entered values, in terms of data ranges and types, is done before moving to the simulation step. The web platform is also used to display the list of KPIs returned from ns-3 after the end of the simulation. The web application has been developed using Flask [4], which is a Python-based web development framework.
- **ns-3**: Once the parameters have been entered by the user and validated by the engine, they are passed from the webapp on to ns-3 (which can be hosted in a virtual machine) through a command line. Depending on the chosen IoT technology, a template will be executed with the passed input parameters. Once the simulation is over, the KPIs returned by the template are passed back to the web platform, to finally be displayed through the user interface to the user. The version of ns-3 which is used in the current SIFRAN software is ns-3.33.
- **Database**: As users may need to get access to their previous simulations, we store both scenario parameters and the resulting KPIs in a database. Note that users have to create an account on the web platform beforehand if they want to store their simulations and KPIs and have access to them. We have opted to use a NoSQL database using MongoDB [2], which is a document-oriented database program.
**4.2 Usage**
We illustrate in the following section the usage of the platform. From the homepage, users have the possibility to create an account through the ‘Register’ button, which will give them access to their previous simulations. After that, they can directly fill a new IoT scenario form, either by assigning values to each parameter, or by selecting a preset which holds a set of predefined ones. As said before, all the parameter values are validated in terms of data type and range before the form is submitted. Once the form is correctly filled, the input parameters are sent to the ns-3 environment to be executed. The results of the simulation (KPIs) are calculated at that level before being sent back to the platform, which finally displays...
them. An example of a scenario form and simulation results is shown in Figure 3.
5 TEMPLATE DESCRIPTION AND INTEGRATION GUIDELINES
In this section, we show how to implement a template for simulating an IoT network scenario, then we give some integration guidelines on how to contribute to this framework for other IoT network technologies.
5.1 Template Description
We call a template the translation of an IoT scenario in the ns-3 environment language. It consists of C++ code globally working as follows: i) take input parameters which define the scenario, ii) create the corresponding nodes and traffic, iii) calculate the KPIs obtained from the simulation.
We considered two IoT technologies in our templates: Wi-Fi and LoRaWAN. The Wi-Fi stack is completely implemented in the official release of ns-3. Even though different Wi-Fi amendments are available, we focused on the 802.11ac and 802.11ax amendments, as they are the most recent ones.
Regarding LoRaWAN, its stack is not implemented in the official release of ns-3. However, a link to a public LoRaWAN module [8] is provided in the official website of NSNAM. The steps for installing this module are provided in the link.
Even though we implemented one template per IoT technology, the structure of both is almost identical. We describe below the template implemented for the Wi-Fi technology, by giving code excerpts:
(1) Input parameters definition: All the scenario parameters mentioned in Section 3 are set here. They take as values the parameters filled by users through the scenario forms shown in the previous section. They include both traffic and low-level parameters. Clearly, the considered parameters will most likely differ depending on the implemented IoT technology.
```cpp
/* Input parameters definition */
// Simulation time in seconds
double simulationTime = 10;
// Number of end-devices
uint32_t nWifi = 10;
// Traffic direction
std::string trafficDirection = "upstream";
// Packet size in bytes
uint32_t payloadSize = 1024;
// Short guard interval
int sgi = 0;
// Distance between AP and end-devices in meters
double distance = 1.0;
// Modulation and Coding Scheme
uint32_t MCS = 0;
// Transmitting power in dBm
uint32_t txPower = 1;
// Number of spatial streams
int spatialStreams = 1;
// Rx current draw in mA
double rxCurrent = 40;
// CCA Busy current draw in mA
double ccaBusyCurrent = 1;
// Idle current draw in mA
double idleCurrent = 1;
```
Listing 1: Input Parameters Definition
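These defaults are presumably overridden by the values coming from the web form. A plausible way to wire that up (a sketch, not taken from the paper's listings) is ns-3's CommandLine facility, which lets the web platform pass each user-supplied value as a command-line option when launching the template:

```cpp
// Sketch: expose the scenario parameters declared above as command-line
// options so that the web platform can override them at launch time.
CommandLine cmd;
cmd.AddValue("simulationTime", "Simulation time in seconds", simulationTime);
cmd.AddValue("nWifi", "Number of end-devices", nWifi);
cmd.AddValue("payloadSize", "Packet size in bytes", payloadSize);
cmd.AddValue("distance", "Distance between AP and end-devices (m)", distance);
cmd.AddValue("MCS", "Modulation and Coding Scheme index", MCS);
cmd.AddValue("txPower", "Transmitting power in dBm", txPower);
cmd.Parse(argc, argv);
```

The web application would then invoke something like `./waf --run "wifi-template --nWifi=20 --payloadSize=256"`, where the program name is purely illustrative.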
(2) Nodes placement: This part of the code creates all the nodes (end-devices and gateways) using the NodeContainer object, and places them in three-dimensional space, using the ConstantPositionMobilityModel object.
```cpp
/* Positioning Nodes */
MobilityHelper mobility;
Ptr<ListPositionAllocator> positionDevices = CreateObject<ListPositionAllocator>();
for (uint32_t i = 0; i < nWifi; i++) {
  positionDevices->Add(Vector(distance, 0.0, 0.0));  // all end-devices at the same distance from the AP
}
mobility.SetPositionAllocator(positionDevices);
mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
mobility.Install(wifiStaNodes);

Ptr<ListPositionAllocator> positionAp = CreateObject<ListPositionAllocator>();
positionAp->Add(Vector(0.0, 0.0, 0.0));  // AP at the origin
mobility.SetPositionAllocator(positionAp);
mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
mobility.Install(wifiApNode);
```
Listing 2: Nodes Creation & Placement
(3) Layers configuration: The technology is defined here by setting the YansWifiPhyHelper, WifiMacHelper and WifiHelper objects as the physical, mac and network layers for Wi-Fi nodes. For LoRaWAN, the LoraPhyHelper, LorawanMacHelper and LorawanHelper objects are used for the physical, mac and network layers. The Wi-Fi amendment is also specified here with the SetStandard() method.
```cpp
/* Layers installation */
YansWifiPhyHelper phy;
phy.SetChannel(channel.Create());
```
Listing 3: Layers Configuration
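The excerpt above only shows the physical-layer helper. The rest of the layer setup, following the usual ns-3 Wi-Fi workflow, would look roughly as follows (a sketch: the exact enum value passed to SetStandard() varies between ns-3 releases, and the SSID is illustrative):

```cpp
// Sketch of the remaining layer configuration, reusing `phy` from Listing 3
// and the node containers from Listing 2.
WifiHelper wifi;
wifi.SetStandard(WIFI_STANDARD_80211ac);   // amendment chosen by the user

WifiMacHelper mac;
Ssid ssid = Ssid("sifran-network");        // illustrative SSID
mac.SetType("ns3::StaWifiMac", "Ssid", SsidValue(ssid));
NetDeviceContainer staDevices = wifi.Install(phy, mac, wifiStaNodes);

mac.SetType("ns3::ApWifiMac", "Ssid", SsidValue(ssid));
NetDeviceContainer apDevice = wifi.Install(phy, mac, wifiApNode);
```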
(4) Low-level parameters configuration: The low-level parameters which have been declared on Listing 1 such as the short guard interval, the bandwidth, the spreading factor for LoRaWAN, etc. are instantiated and set at the nodes level here.
```cpp
/* Low-level parameters configuration */
// Set channel width
Config::Set("/NodeList/*/DeviceList/*/$ns3::WifiNetDevice/Phy/ChannelWidth", UintegerValue(channelWidth));
// Set guard interval
Config::Set("/NodeList/*/DeviceList/*/$ns3::WifiNetDevice/HtConfiguration/ShortGuardIntervalSupported", BooleanValue(sgi));
// Set txPower in the end-devices
for (uint32_t index = 0; index < nWifi; ++index) {
  Ptr<WifiPhy> phy_tx = dynamic_cast<WifiNetDevice *>(GetPointer(staDevices.Get(index)))->GetPhy();
  phy_tx->SetTxPowerEnd(txPower);
  phy_tx->SetTxPowerStart(txPower);
}
// Set txPower in the AP
Ptr<WifiPhy> phy_tx = dynamic_cast<WifiNetDevice *>(GetPointer(apDevice.Get(0)))->GetPhy();
phy_tx->SetTxPowerEnd(txPower);
phy_tx->SetTxPowerStart(txPower);
```
Listing 4: Low-Level Parameters
(5) IP address configuration: In case IP addresses are supported by the nodes (they are not supported in LoRaWAN), we configure them in this part using the Ipv4AddressHelper, in order to make the nodes reachable from each other.
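For Wi-Fi, this typically amounts to installing the Internet stack and assigning an IPv4 subnet, roughly as follows (a sketch; the subnet is arbitrary):

```cpp
// Sketch: install the Internet stack and assign IPv4 addresses (Wi-Fi case).
InternetStackHelper stack;
stack.Install(wifiApNode);
stack.Install(wifiStaNodes);

Ipv4AddressHelper address;
address.SetBase("192.168.1.0", "255.255.255.0");   // arbitrary private subnet
Ipv4InterfaceContainer staInterfaces = address.Assign(staDevices);
Ipv4InterfaceContainer apInterface = address.Assign(apDevice);
```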
(6) Application traffic specification: This part is where the traffic definition is made. Depending on the traffic type, applications are defined and installed on the nodes, fixing the destination address. We detail this process in what follows:
- **Periodic:** For simulating periodic traffic, we install the UdpClient and UdpSocket objects in the sender and the receiver nodes respectively; a sketch of such a setup is given after this list. The needed parameters for the UdpClient object are the packet period and the packet size, which are specified by the user. It is worth noting that we consider UDP as the transport protocol because it is more suited to IoT applications than TCP (less energy consumption). For LoRaWAN, the PeriodicSenderHelper object is used, setting the period and the packet size with the SetPeriod() and SetPacketSize() methods respectively. Since LoRaWAN does not allow communications with big data rates, we can only simulate periodic traffic with relatively low data rates.
- **Constant Bit Rate:** The difference between this traffic and the previous one is that the parameter which is specified is the data rate (in Mbps or bps) instead of the packet period. In some cases, it may be simpler for the end user to express the application needs in terms of data rate than the packet period. The objects used in this case are the OnOff and UdpSocket. We need to specify in this case the application data rate (in Megabits per second) which is a parameter of the OnOff object.
- **Variable Bit Rate:** For this kind of traffic, since packets can have different sizes and periods, we generate them using random variables (\(X\) for the packet size and \(Y\) for the packet period) following Normal laws with mean and variance defined by the user. Thus, we use in this case a function which takes a Socket object as a parameter and which, after every realization of \(Y\), schedules the sending of a packet whose size is a realization of \(X\).
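As an illustration of the periodic case above, one possible wiring is sketched below. It is only a sketch: it uses a PacketSink as the receiving application (a stand-in for the UdpSocket mentioned above), reuses the apInterface container from the addressing sketch, and assumes a user-supplied variable packetPeriod (inter-packet period in seconds).

```cpp
// Receiver (AP/gateway): a UDP sink listening on an arbitrary port.
uint16_t port = 9;
PacketSinkHelper sinkHelper("ns3::UdpSocketFactory",
                            InetSocketAddress(Ipv4Address::GetAny(), port));
ApplicationContainer sinkApp = sinkHelper.Install(wifiApNode.Get(0));
sinkApp.Start(Seconds(0.0));
sinkApp.Stop(Seconds(simulationTime + 1));

// Senders (end-devices): periodic UDP clients with the user-supplied
// packet size and inter-packet period.
UdpClientHelper client(apInterface.GetAddress(0), port);
client.SetAttribute("MaxPackets", UintegerValue(4294967295u));
client.SetAttribute("Interval", TimeValue(Seconds(packetPeriod)));
client.SetAttribute("PacketSize", UintegerValue(payloadSize));
ApplicationContainer clientApps = client.Install(wifiStaNodes);
clientApps.Start(Seconds(1.0));
clientApps.Stop(Seconds(simulationTime + 1));
```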
(7) **Energy configuration**: To keep track of the energy consumed during the simulation, an energy source and a draining model have been configured on the nodes. The energy source can be seen as a battery from which the energy is drained. We use for that the `BasicEnergySourceHelper` object, which drains energy in a linear way; a technology-specific (non-linear) draining model is added for Wi-Fi and LoRaWAN respectively.
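A minimal sketch of such a configuration for the Wi-Fi case, using ns-3's energy framework, is shown below. The initial energy value is illustrative, and the current-draw values reuse the variables from Listing 1, converted from mA to A.

```cpp
// Sketch: battery-like energy source plus a Wi-Fi radio energy model.
BasicEnergySourceHelper basicSourceHelper;
basicSourceHelper.Set("BasicEnergySourceInitialEnergyJ", DoubleValue(10000.0)); // illustrative capacity
EnergySourceContainer sources = basicSourceHelper.Install(wifiStaNodes);

WifiRadioEnergyModelHelper radioEnergyHelper;
radioEnergyHelper.Set("RxCurrentA", DoubleValue(rxCurrent / 1000.0));
radioEnergyHelper.Set("CcaBusyCurrentA", DoubleValue(ccaBusyCurrent / 1000.0));
radioEnergyHelper.Set("IdleCurrentA", DoubleValue(idleCurrent / 1000.0));
DeviceEnergyModelContainer deviceModels = radioEnergyHelper.Install(staDevices, sources);
```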
(8) **Trace files generation**: There is the possibility in ns-3 of generating pcap (Packet Capture) and trace files which contain all the packets that have flowed through the network. It can be done using the `AsciiTraceHelper` object for some IoT technologies. To the best of our knowledge, there is no tracing system (neither pcap nor trace files) proposed using the LoRaWAN module. It is worth noting that pcap files can be opened by software like Wireshark, while the trace files can be read using any text editor.
(9) **KPIs calculation**: At the end of the template, we gather all the wanted KPIs from our simulation, as the following:
- **Packet Throughput**: For this KPI, the `GetTotalRx()` methods are used for Wi-Fi and LoRaWAN respectively. Both methods return the amount of bytes received by a node. This value is converted and divided by the simulation time to get the throughput, in Mbps.
- **Packet Delivery**: The way to get the ratio of successfully received packets over the total amount sent differs according to the traffic type. In case it is periodic, we can simply obtain the number of sent packets by dividing the simulation time by the packet period, and then divide the number of received bytes by the number of sent ones to get the packet delivery. For the CBR case, in order to have a precise metric, we added an attribute in the `OnOff` application that contains the exact number of sent bytes; the value returned by the same `GetTotalRx()` method used for the packet throughput is then divided by it. Finally, if the traffic is VBR, we keep track of the number of sent bytes in a variable incremented with each sending corresponding to a realization of the X random variable during the simulation; the result returned by the `GetTotalRx()` method is then divided by it.
Listing 6: Energy Configuration

Listing 7: KPIs Calculation
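As a sketch of how the throughput and packet delivery described above can be computed, assuming the PacketSink receiver from the earlier periodic-traffic sketch (variable names are illustrative):

```cpp
// Sketch: throughput and packet delivery for the periodic traffic case.
Ptr<PacketSink> sink = DynamicCast<PacketSink>(sinkApp.Get(0));
double rxBytes = static_cast<double>(sink->GetTotalRx());
double throughputMbps = rxBytes * 8.0 / (simulationTime * 1e6);

// Expected number of sent bytes: one packet per period, per end-device.
double txBytes = nWifi * (simulationTime / packetPeriod) * payloadSize;
double packetDelivery = 100.0 * rxBytes / txBytes;   // percentage
```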
• **Packet Latency:** If the traffic is relatively low, we can get it directly using the logging system of the simulator, for each sent packet. In case the traffic is heavy, buffers in the sending nodes may fill up, which increases latency. It is beneficial to avoid this effect, since the latency in this case would depend more on the buffer sizes than on the network state. A way of doing so and getting a representative value of the latency is to add a probing node to the network, which only sends data periodically in the same direction as the other nodes, and to measure the latency only on the packets sent by this node. This allows us to avoid the queueing time in the node buffers. The objects we install at the probing end-device and the gateway respectively are the UdpEchoClient and UdpEchoServer, which print the times of sending and arrival of packets (a sketch follows this list).
• **Energy consumption:** The energy consumption is obtained using the `GetTotalEnergyConsumption()` method of the energy model which returns the total amount of consumed energy, in joules. This method is called at the end of the simulation, for one end-device, since we consider that all of them have the same behaviour.
• **Battery Lifetime:** The battery lifetime is directly derived from the energy consumption, by dividing the capacity of the battery (in joules) by the energy consumed, which gives us the number of simulations of the same length that can be supported by the battery. We then multiply it by the simulation time to get how long the battery will last, in seconds.
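For the probing mechanism described in the latency bullet, a minimal sketch using ns-3's echo applications could look as follows. Here probeNode is an assumed extra station configured like the other end-devices, and the port, packet count and rate are illustrative.

```cpp
// Sketch: a dedicated probing node measuring latency with echo applications.
UdpEchoServerHelper echoServer(7);                       // illustrative port
ApplicationContainer probeServer = echoServer.Install(wifiApNode.Get(0));
probeServer.Start(Seconds(0.0));
probeServer.Stop(Seconds(simulationTime + 1));

UdpEchoClientHelper echoClient(apInterface.GetAddress(0), 7);
echoClient.SetAttribute("MaxPackets", UintegerValue(100));
echoClient.SetAttribute("Interval", TimeValue(Seconds(1.0)));
echoClient.SetAttribute("PacketSize", UintegerValue(64));
ApplicationContainer probeClient = echoClient.Install(probeNode.Get(0));
probeClient.Start(Seconds(1.0));
probeClient.Stop(Seconds(simulationTime + 1));
```

The send and receive timestamps logged by these applications then give a per-packet latency that is largely free of queueing effects.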
```cpp
/* Calculating Energy KPIs */
double energy = 0, battery_lifetime = 0;
DeviceEnergyModelContainer::Iterator iter;
for (iter = deviceModels.Begin(); iter != deviceModels.End(); iter++) {
  double energyConsumed = (*iter)->GetTotalEnergyConsumption();
  NS_LOG_UNCOND("End of simulation (" << Simulator::Now().GetSeconds()
                << "s) Total energy consumed by radio (End-device) = "
                << energyConsumed << " J");
  std::cout << "Total energy consumed by radio (End-device) = " << energyConsumed << " J" << std::endl;
  // Number of simulation periods the battery can sustain, converted to days.
  battery_lifetime = (CapacityJoules / energyConsumed) * simulationTime / 86400;
  std::cout << "Battery lifetime = " << battery_lifetime << " Days" << std::endl;
  energy = energyConsumed;
  break; // Energy is measured on only one station
}
```
**Listing 8: Energy Calculation**
### 5.2 Integration Guidelines: Example with 6LoWPAN
We now present guidelines to the community for contributing to SIFRAN by writing new templates in order to enhance it with more IoT network technologies. Overall, the structure of the templates remains the same but obviously some parts need to be updated due to the peculiarities of the newly considered technology. Table 1 summarizes the guidelines for each defined and labelled portion of code. Table 1 also shows how to implement templates for the short range IoT technology 6LoWPAN (based on 802.15.4 standard).
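As a rough sketch of the 6LoWPAN-specific pieces referred to in Table 1, using the ns-3 LrWpanHelper and SixLowPanHelper (container names and the PAN id are illustrative):

```cpp
// Sketch: 802.15.4 + 6LoWPAN layer setup and IPv6 addressing.
LrWpanHelper lrWpanHelper;
NetDeviceContainer lrwpanDevices = lrWpanHelper.Install(nodes);
lrWpanHelper.AssociateToPan(lrwpanDevices, 10);          // illustrative PAN id

SixLowPanHelper sixlowpan;
NetDeviceContainer sixlowpanDevices = sixlowpan.Install(lrwpanDevices);

InternetStackHelper stack;
stack.Install(nodes);
Ipv6AddressHelper ipv6;
ipv6.SetBase(Ipv6Address("2001:db8::"), Ipv6Prefix(64));
Ipv6InterfaceContainer interfaces = ipv6.Assign(sixlowpanDevices);
```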
### 6 DISCUSSION
We discuss here the positioning with regard to ns-3 and the contribution SIFRAN may bring to the networking industry and research community. First, we would like to emphasize the fact that SIFRAN is inherently limited by ns-3 itself, since the executed simulations are done using it. This means that, on the one hand, the network technologies that can be simulated are those available in ns-3, after having defined their corresponding templates and implemented the user interfaces. As stated in the previous section, this should not require tremendous efforts; the major part is to make the IoT network technology available in ns-3. On the other hand, this also means that the results (in terms of KPIs) provided by SIFRAN are the ones that ns-3 would have provided in a classical way, e.g., by writing and executing C++ scripts. Thus, no additional validation should be needed for SIFRAN itself that is not required in ns-3. Regarding the impact on the community, its design should be very helpful and, with the right exposure to researchers and SMEs in both industry and academia, should be quite impactful as well.
### 7 CONCLUSION AND FUTURE WORKS
In this work, we have presented SIFRAN, a no-code framework with the objective of enabling IoT simulation through ns-3 without coding. We began by clearly identifying the most salient aspects that need to be taken into consideration for simulating an IoT scenario, and the required KPIs for the network performance evaluation. Then, we detailed the architecture of SIFRAN which consists of a web application, a database and ns-3 templates. The latter were illustrated with the example of a Wi-Fi template and a LoRaWAN one. We then provided guidelines to the community in the hope that new IoT network modules will be developed in ns-3 and then incorporated in SIFRAN. An example of how to proceed with the example of 6LoWPAN has also been given.
The next step is to share the SIFRAN framework with IoT user communities such as the ns-3 Group\(^4\) and the FIT IoT-Lab\(^5\) in order to gather feedback from them.
In terms of future works, we plan to provide the following enhancements:
- Refine the web application to make it more user friendly, taking into account feedback from the user community.
- Extend the list of supported technologies with additional ns-3 templates.
- Extend SIFRAN to let it handle scenarios with multiple gateways.
- Explore range of values for a given parameter to appraise its influence over a KPI. One could for instance see the influence of the number of end-devices on the battery lifetime.
A first version of SIFRAN has just been made publicly available at https://sifran.labs.stackeo.io/, while the source code is available at https://github.com/Stackeo-io/SIFRAN. We hope that it will attract contributions from other developers.
### 8 ACKNOWLEDGMENTS
This work was performed within the framework of the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR), and with the technical support of Stackeo (https://stackeo.io).
---
4https://groups.google.com/g/ns-3-users
5https://www.iot-lab.info/community/publications/
<table>
<thead>
<tr>
<th>Code label</th>
<th>Required changes</th>
<th>Example with 6LoWPAN</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Traffic parameters remain the same, while low-level parameters change depending on the chosen IoT technology.</td>
<td>Declare here parameters such as the Personal Area Network (PAN) id.</td>
</tr>
<tr>
<td>2</td>
<td>No changes.</td>
<td>/</td>
</tr>
<tr>
<td>3</td>
<td>The layers must change since it is precisely here that the IoT technology layers are specified.</td>
<td>Use the LrWpanHelper for the phy and mac layers (802.15.4 norm) and the SixLowPanHelper for the network layer.</td>
</tr>
<tr>
<td>4</td>
<td>The low-level parameters which are specific to the IoT technology are set here.</td>
<td>The ID of the PAN can be set here using the AssociateToPAN() method.</td>
</tr>
<tr>
<td>5</td>
<td>This part may remain the same in case the target IoT technology supports IPv4 addresses for the nodes. It can also change if the IP layer is not supported, or in IPv6 must be used instead.</td>
<td>Use the IPv6AddressHelper object for the addressing process.</td>
</tr>
<tr>
<td>6</td>
<td>Depending on the traffic type, the same applications as for Wi-Fi can be used if they are supported.</td>
<td>The OnOffHelper and UdpClientHelper objects can also be used for 6LoWPAN.</td>
</tr>
<tr>
<td>7</td>
<td>The energy source in this part remains the same, but the draining model should correspond to the target IoT technology since each has its own PHY states and corresponding current draw consumption.</td>
<td>No energy model is implemented for 6LoWPAN.</td>
</tr>
<tr>
<td>8</td>
<td>This optional part may be unavailable for some IoT technologies due to the lack of implementation. We advise the community to check whether the tracing or the logging system of ns-3 is sufficient.</td>
<td>The AsciiTraceHelper also provides pcap and trace files for 6LoWPAN.</td>
</tr>
<tr>
<td>9</td>
<td>• Packet throughput & Packet delivery: Since the same application as for Wi-Fi can be used, the way of gathering packet throughput and packet delivery is identical.</td>
<td>• Packet throughput & Packet delivery: Since the same application as for Wi-Fi can be used, the way of gathering packet throughput and packet delivery is identical.</td>
</tr>
<tr>
<td></td>
<td>• Packet latency: As stated before, the probing mechanism can be used to get the packet latency in the case the traffic is not periodic. Otherwise, using the tracing or the logging system is sufficient.</td>
<td>• Packet latency: As stated before, the probing mechanism can be used to get the packet latency in the case the traffic is not periodic. Otherwise, using the tracing or the logging system is sufficient.</td>
</tr>
<tr>
<td></td>
<td>• Energy consumption: The energy source remains the same, while the model should correspond to the IoT technology which has its own PHY states and their current draw consumption.</td>
<td>• Energy consumption: The energy source remains the same, while the model should correspond to the IoT technology which has its own PHY states and their current draw consumption.</td>
</tr>
<tr>
<td></td>
<td>• Battery lifetime: Just as for the energy source, the way of calculating how long the battery will last in the scenario conditions is identical whatever the IoT technology is.</td>
<td>• Battery lifetime: Just as for the energy source, the way of calculating how long the battery will last in the scenario conditions is identical whatever the IoT technology is.</td>
</tr>
</tbody>
</table>
Table 1: Integration Guidelines
REFERENCES
|
{"Source-Url": "https://hal.science/hal-03822142/file/LANC-2022%20%281%29.pdf", "len_cl100k_base": 8580, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 28808, "total-output-tokens": 9979, "length": "2e13", "weborganizer": {"__label__adult": 0.0003995895385742187, "__label__art_design": 0.00040793418884277344, "__label__crime_law": 0.00034499168395996094, "__label__education_jobs": 0.0007872581481933594, "__label__entertainment": 0.00016045570373535156, "__label__fashion_beauty": 0.00019812583923339844, "__label__finance_business": 0.0004525184631347656, "__label__food_dining": 0.0004687309265136719, "__label__games": 0.0009899139404296875, "__label__hardware": 0.0044708251953125, "__label__health": 0.0007762908935546875, "__label__history": 0.0005221366882324219, "__label__home_hobbies": 0.00015044212341308594, "__label__industrial": 0.001094818115234375, "__label__literature": 0.00027942657470703125, "__label__politics": 0.00035262107849121094, "__label__religion": 0.00054931640625, "__label__science_tech": 0.447021484375, "__label__social_life": 0.00012433528900146484, "__label__software": 0.0157470703125, "__label__software_dev": 0.52294921875, "__label__sports_fitness": 0.0004677772521972656, "__label__transportation": 0.0011157989501953125, "__label__travel": 0.00028133392333984375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42026, 0.021]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42026, 0.57806]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42026, 0.87637]], "google_gemma-3-12b-it_contains_pii": [[0, 1090, false], [1090, 5876, null], [5876, 12385, null], [12385, 16922, null], [16922, 20990, null], [20990, 23978, null], [23978, 28158, null], [28158, 34168, null], [34168, 42026, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1090, true], [1090, 5876, null], [5876, 12385, null], [12385, 16922, null], [16922, 20990, null], [20990, 23978, null], [23978, 28158, null], [28158, 34168, null], [34168, 42026, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42026, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42026, null]], "pdf_page_numbers": [[0, 1090, 1], [1090, 5876, 2], [5876, 12385, 3], [12385, 16922, 4], [16922, 20990, 5], [20990, 23978, 6], [23978, 28158, 7], [28158, 34168, 8], [34168, 42026, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42026, 0.05932]]}
|
olmocr_science_pdfs
|
2024-12-01
|
2024-12-01
|
208e0dbbca64b17e35766780d3f54d36a1d2756a
|
BLINDSHOPPING: NAVIGATION SYSTEM
The QR Trail
By
Yoogaraj A/L Vijaya Kumar
13431
Dissertation submitted in partial fulfilment of
The requirements for the
Bachelor of Technology (Hons)
(Information & Communication Technology)
JANUARY 2014
Universiti Teknologi PETRONAS
Bandar Seri Iskandar
31750 Tronoh
Perak Darul Ridzuan
ABSTRACT
The QR Trail is an Android application designed to encourage visually challenged persons to participate in more of the normal activities that a sighted person does. Moreover, this application can be used by sighted persons as well, to navigate around places when they are lost. The main purpose of the project is to provide a navigation system for visually challenged persons to move around autonomously in supermarkets or hypermarkets and do some shopping. The application provides guidance to the visually impaired person through voice commands from the smartphone, as the user scans QR codes on the floor which contain the details of the current location and instructions to move from one point of the shopping mall to another. The development of this application will use the Eclipse development tool. The programming language that will be used in the development process is Java, together with the ZXing library. The rapid application development methodology is applied in the development process of this application, which consists of 4 stages: system design, prototype cycle, system testing and implementation. This system will be further enhanced if necessary to meet the objective of this project.
LIST OF FIGURES
Figure 1: QR Trail SWOT analysis
Figure 2: Navigation using white cane augmented with RFID reader
Figure 3: QR Code
Figure 4: Gantt chart for Final Year Project 1 and 2
Figure 5: Rapid Application Development Cycle
Figure 6: System design of QR Trail App
Figure 7: Process Flow of QR Trail App
Figure 8: Converting process of information into a QR code
Figure 9: Translating the information stored in the QR code image
Figure 10: Variety of Cane tips
Figure 11: Android GPS Architecture
Figure 12: QR code Proposed Interfaces
Figure 13: Layout of QR code in Shopping mall
Figure 14: 11cm x 11cm QR Code
Figure 15: Interfaces of App
Figure 16: Questionnaire: Smartphone and Shopping
Figure 17: Questionnaire: Supermarkets and Hypermarkets
LIST OF TABLES
Table 1: Results of the Questionnaire Conducted
Table 2: Data of crashes per app launch compared between iOS and Android
Table 3: Key Milestone Final Year Project 1
Table 4: Key Milestone Final Year Project 2
Table 5: Final Year Project Milestone
Table 6: Development Tools
Table 7: Graph of respondent group age and profession
Table 8: Results of surveys for the smartphone users category
Table 9: Results of surveys for the shopping category
Table 10: Results of surveys for the supermarkets and hypermarkets category
Table 11: Comparison of price between RFID and NFC Tags Usage
# TABLE OF CONTENTS
**CHAPTER 1: INTRODUCTION**
1.1 Background of Study
1.2 Problem Statement
1.3 Objectives
1.4 Scope of Study
1.5 Limitation
1.6 Feasibility Studies
1.6.1 Technical Feasibility
1.6.2 Economic Feasibility
1.6.3 Organizational Feasibility
1.7 SWOT Analysis
**CHAPTER 2: LITERATURE REVIEW**
2.1 Moving of Visually Impaired People
2.2 Navigation System
2.2.1 RFID
2.2.2 NFC
2.2.3 QR Code
2.3 Interaction between System and User
2.3.1 Interface
2.3.2 Input
2.3.2.1 Buttons
2.3.2.2 Voice Command
2.3.2.3 Gesture Interface
2.3.3 Output
2.4 Usage of Smartphone within Visually Impaired People
2.5 Android as a Mobile Development Platform
**CHAPTER 3: METHODOLOGY**
3.1 Research and Project Development Methodology
3.2 Key Milestone
3.3 Gantt Chart
3.4 Project Activities
CHAPTER 4: RESULT AND NEXT STEP
4.1 Market Survey
4.2 QR Code Experiment
4.3 Interfaces
4.4 Price Differentiation
CHAPTER 5: RECOMMENDATION & DISCUSSION
CHAPTER 6: CONCLUSION
REFERENCES
APPENDICES
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND OF STUDY
Visual impairment is one of the most common disabilities. According to a 2013 World Health Organization (WHO) survey, 285 million people are estimated to be visually impaired worldwide, of which 39 million are blind and 246 million have low vision. Moreover, about 90% of the world's visually impaired live in developing countries.
In Malaysia this disability is well known and there is a large number of people affected by it. As Malaysia goes through a modernization phase, more visually impaired persons are trying to overcome their disability and want to have an ordinary life. This has been a positive change for the visually impaired community, since many measures have been taken by government and non-government organizations to improve their lifestyle. Currently, a growing number of visually impaired persons are moving through the streets like everyone else with the help of a white cane with a red tip, the international symbol of blindness.
However, in Malaysia the most common and major problem faced by visually impaired persons is mobility. Currently, only a few measures have been taken, mainly in the capital city of Malaysia, Kuala Lumpur, to help visually impaired people with mobility. For example, tactile blocks are installed mainly at rail, subway, LRT and monorail stations and the surrounding sidewalks. In some locations warning and directional blocks are installed, while in other locations directional indicators are carved into the pavement and warning blocks are installed where direction markers intersect and where pedestrians are to stop. Due to this architectural improvement, the mobility of visually impaired people is highly concentrated within the Kuala Lumpur streets compared to any other part of Malaysia.
Tactile paving has proved to be one of the effective ways of helping visually impaired people, but it is expensive, and for already built buildings or pathways it is not cost effective to restructure them. Besides that, many devices have been developed by researchers and inventors to help the mobility of visually impaired people, but they have not been widely accepted in Malaysia, either because of the high cost of purchasing them or because they are too technical for the Malaysian context.
Since the smartphone has become a big leap in human culture, many researchers have focused on and developed applications on many platforms for visually impaired people. The smartphone is a common technology device owned by most people, including visually impaired people. It is therefore an effective vehicle for developing an app which helps visually impaired people to move around or improves their mobility. There are already many such apps on the market, with more to come, each with its own unique functionality, scope and method. There have been a few inventions of devices and apps so far for visually challenged people to go shopping and buy things, but they are either too expensive for Malaysians to purchase or not compatible with use in Malaysia.
1.2 PROBLEM STATEMENT
Visually challenged people have a hard time moving around, which prevents them from having the normal lifestyle of an ordinary person or the lifestyle they wish for. This issue relates to many daily activities. The author has focused on solving the problem of visually impaired persons moving around supermarkets to buy things. At the same time, people who are new to a place have difficulty getting to their intended destination and may get lost when moving around on their own. Therefore the author came up with the idea of developing a mobile application on the Android platform called QR Trail, an app for the mobilization of visually impaired persons. This mobile application will help to overcome the problem of moving around, especially within supermarkets and hypermarkets. At the same time, the author planned to develop the app so that it can be implemented easily, at low cost, and compatibly with the Malaysian environment. Besides that, the author will use QR code technology in developing this app, which will be a platform to increase the use of QR code technology in Malaysia.
1.3 OBJECTIVE
The author has set four objectives to be achieved:
1. To enable visually challenged persons to move autonomously in shopping malls without the help of someone else.
2. To help people who are new to a place reach their intended destination without getting lost while moving around on their own.
3. To develop an app that is an inexpensive solution and easily deployable on smartphones in the Malaysian environment.
4. To increase the usage of QR code and NFC technology among Malaysians.
1.4 SCOPE OF STUDY
The scope of the QR Trail mobile application is defined specifically for visually impaired or partially sighted smartphone users in the age range of 13 to 30 years old. The mobile application will be developed on the Android platform. Besides that, the author's app requires some implementation work on the supermarket side. Therefore, the scope of study includes the supermarkets and hypermarkets where this app is going to be used. The focus will be on supermarkets and hypermarkets owned by large organizations, where there is a big space for visually impaired people to explore and where large organizations are keen to implement such practices because social responsibility is part of their business model.
Moreover, the author plans that if this app is successfully implemented and used, the same concept can be applied to the streets of Malaysia. The scope of study therefore concentrates on the capabilities of QR codes and how they can be used to guide the movement of visually impaired people.
1.5 LIMITATION
The limitation of the application is that the smartphone must stay connected to the internet through 3G or 4G, since GPS navigation requires an internet connection, whereas in some places in Malaysia 3G or 4G connections are still unavailable or weak. Besides that, it will be challenging for visually impaired people to find the QR code in order to scan it.
1.6 FEASIBILITY STUDIES
1.6.1 Technical Feasibility
There are a lot of benefits in doing a project based on Android smartphones. This is supported by research done by Gartner, which found that worldwide smartphone sales reached 468 million units in 2011, an increase of 57.7% from 2010. Android is becoming the most popular operating system (OS) worldwide and is building on its strength, accounting for 49% of the smartphone market. Therefore, it makes sense to develop this project for Android, as more people are using it. Besides that, QR code technology is a simple technology with high potential that can be used creatively.
1.6.2 Economic Feasibility
The application is built on the Android platform. All the software and code are open source, so no cost is incurred during development. The application will be uploaded to Google Play and can be downloaded for free onto all smartphones running the Android operating system. Moreover, QR codes can be generated for free from many sites and services. Overall, this project is considered economically feasible.
1.6.3 Organizational Feasibility
This system will be an introduction for supermarkets and hypermarkets in Malaysia. The app is meant to let blind or visually impaired people navigate within the premises and ease their shopping. Besides that, implementation of the app is expected to be accepted and welcomed by most supermarkets and hypermarkets as a form of corporate social responsibility. From the perspective of the supermarkets and hypermarkets, the system is organizationally feasible.
1.7 SWOT ANALYSIS
**STRENGTH**
- Simple and can be deployed easily
- Development and implementation cost is very low
- Opportunity for CSR
- Gives blind people an opportunity to blend into society
**WEAKNESSES**
- Users need to have a smartphone to use this application
- Users need internet access
- GPS embedded in the phone is not always accurate
- Users need practice to identify QR codes on the floor
**OPPORTUNITY**
- Can be a base for creating a more efficient navigation system
- Can be used for tourism purposes
- Can attract blind people to shop
**THREAT**
- Imitation of the application, as Android applications are open to the Android market
- There is a lot of room for improvement
- Supermarkets and hypermarkets must agree to implement or use the system
Figure 1: SWOT Analysis for QR Trail
CHAPTER 2
LITERATURE REVIEW
2.1 MANEUVERING OF VISUALLY IMPAIRED PEOPLE
People who are blind rely on their other senses (smell, touch, hearing, taste) to help them manage in the world. Blind people have to memorize identifying features, like sounds and smells, of the places that they often go. They also have to pay close attention to where things are located in their homes in order to get around safely, always putting objects in the same places after use so that they can be found again.
Some blind people use canes or guide dogs to get around. A white cane indicates that the person using it is visually impaired. Blind people tap their canes on sidewalks, floors, and streets. They learn to identify the locations of things, like steps, walls, or doors, simply by the different sounds that their cane taps make. Various high-tech devices have been invented, including laser canes, that use sound or light waves that bounce off objects and send signals to the user about where these objects are located, what they might be made of, and how big they are. Guide dogs, or seeing-eye dogs, are specially trained to lead blind people about. The dog and the person work as a team, with the dog following commands that help the blind person go about her day. The dog, in turn, signals the person when she is approaching a curb or when it is safe to cross a street.[1]
Besides that, visually challenged people are given Orientation and Mobility (O&M) training to make it easier for them to move around. Visually impaired people who have completed O&M training have special skills and are more capable of moving independently. Orientation is the ability to use one's remaining senses to understand one's location in the environment at any given time, while mobility is the capacity or facility of movement. Orientation and mobility training is defined as teaching the concepts, skills and techniques required by visually challenged people to travel safely, efficiently, and gracefully through any environment and under all circumstances. Many modules are taught during O&M training. The important modules that involve the movement of visually impaired people are the prerequisites to independent mobility, basic long cane and self-familiarization skills, and indoor and outdoor orientation and mobility skills.[2]
Moreover, there is a proper way of using and handling the white cane. Visually impaired people use the cane as they have been taught, and this guidance is universally accepted and practiced. The proper way is for the wrist to settle somewhere between the belly button and the waist, slightly to one side, and for the cane to be gently swung from side to side. The tip always stays in contact with the ground, swinging approximately the width of the shoulders. When walking, the swing alternates with the steps: as the visually impaired person steps with the right foot, the cane goes to the left, and vice versa. If the cane is swinging in the wrong direction, it is stopped in that general direction and corrected over the next few steps. The head is held high and the shoulders are kept relaxed. This allows any remaining vision and hearing to aid mobility.
2.2 NAVIGATION SYSTEM
The navigation system is the most crucial part of this project. It is essential for guiding the visually impaired person throughout the supermarket or hypermarket because of their lack of vision. Navigation makes it easier for them to move and to reach the destination they wish to reach faster. At the same time, it boosts their confidence that they are on the right path. There have been many studies, research projects and inventions in blind shopping.
2.2.1 RFID
Radio-frequency identification (RFID) is the wireless, non-contact use of radio-frequency electromagnetic fields to transfer data for the purpose of automatically identifying and tracking tags attached to objects. The tags contain electronically stored information; some tags are powered by and read at short range via magnetic fields. In past years RFID has been the primary technology used by many researchers developing innovations in blind shopping. At Utah State University, RoboCart, a robotic supermarket assistant in the form of a custom-built market cart with a laptop, laser range finder and RFID reader, was developed. It uses the RFID reader attached to the cart and passive RFID tags scattered at different points in a supermarket for navigation. Navigation has been a challenge for many of these inventions. Even though RFID has been a successful and promising breakthrough, it presents problems: the cost of tags and readers remains prohibitive for tagging all but high-value products.[3] Moreover, technical problems, environmental hazards and consumer perceptions of trust, privacy and risk, mixed with fear, remain significant acceptance barriers to RFID item-level tagging. Most of the inventions still depend on the use of white canes in their products, such as Trinetra. Meanwhile, Carnegie Mellon University presented GroZi, which uses verbal feedback for navigation.[4]
Another successful way of using RFID to help visually impaired persons move around supermarkets or hypermarkets is a white cane augmented with an RFID reader at its tip. Through a headphone connected to the smartphone, it provides simple verbal navigation instructions. It combines a white cane with a portable RFID reader attached to its tip and a set of road-mark-like RFID tag lines distributed throughout the corridors of the supermarkets and hypermarkets. This approach was used by the same Utah State University researchers in a product called ShopTalk. Besides that, iCare is another innovation that relies on an RFID reader embedded in a hand glove to detect the location: as the user moves the hand along the shelf, the system indicates which location the user is passing or in. For more effectiveness, however, iCare still uses a white cane enhanced with an RFID reader.[5]

2.2.2 NFC
Near field communication (NFC) is a set of standards for smartphones and similar devices to establish radio communication with each other by touching them together or bringing them into close proximity. Communication is also possible between an NFC device and an unpowered NFC chip called a "tag". NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards.
With the growing number of NFC-equipped phones, NFC tags are becoming an increasingly popular way to take advantage of this emerging technology.
NFC is another promising technology that can be used to build shopping aids for visually impaired people, or to replace RFID in creating this system. This is because transactions are initialized automatically after touching a reader, another NFC device or an NFC-compliant transponder. This simplicity has allowed many NFC-enabled applications and services to be developed, operating in three different modes: reader/writer, peer-to-peer and card emulation. So far, NFC has mainly been used in payment, ticketing, loyalty services, identification, access control, content distribution, smart advertising, peer-to-peer data/money transfers and set-up services. NFC is a promising and booming technology, and more applications and services can rely on or be developed using it.
However, NFC has some drawbacks compared with RFID: NFC readers work at a maximum range of about 4 inches (10 centimeters). NFC readers are therefore not suitable for RFID-style inventory tracking; their range is too short. NFC is a more up-close-and-personal type of wireless.[6]
2.2.3 QR CODE
QR codes ("QR" is an abbreviation of Quick Response) are a rapidly growing marketing phenomenon. The QR code is a two-dimensional (data matrix) barcode designed to be scanned by a smartphone camera in combination with a barcode-decoding application.

Figure 3: QR Code
Data are translated into a QR code by QR generators, which are available online for free. The decoding software available on smartphones interprets the code, and the phone then displays the text or launches a browser to display the specified web page.
QR code technology is another promising technology that can be used for navigation and can replace RFID. Even though the usage of QR codes is still focused on product tracking, item identification, time tracking, document management and general marketing, the capability of QR code technology is high. This is because a QR code's storage capacity is greater than that of standard UPC barcodes. At the same time, QR codes are free to generate and free to scan. Unfortunately, the drawback is that, to scan a QR code, the smartphone needs to be connected to the internet, i.e. stay online.
2.3 INTERACTION BETWEEN SYSTEM AND USER
Interaction is another crucial part of the system: the communication, interaction and delivery between the user and the system must be smooth and successful. The system must be user friendly for visually impaired persons. It must be easy to navigate within the smartphone and easy to handle. The easier the system is to use and handle, the more user friendly it is.
2.3.1 Interface
The interface designed for the system must focus on the abilities and capabilities of visually challenged people. It is not important for the interface to be attractive with objects, animations or colors; higher priority should be given to functionality and usability. The buttons on the smartphone screen should be of an appropriate size, larger than normally used in apps, so that they can be navigated and identified easily by the user. The hard part of designing the interface is that the smartphones available on the market vary in size, so the optimal design needs to be identified to ensure the created design is suitable for all devices.
2.3.2 Input
Input by the user into the system can be done by clicking the created buttons, through a gesture interface, or by voice command. Each has its own advantages and weaknesses.
2.3.2.1 Buttons
Buttons are fixed elements of the system that a visually impaired user can memorize after several uses. However, there is a possibility of the user making a mistake by clicking the wrong button without knowing it. Besides that, the user can also make a mistake if the smartphone is held in a different orientation. The tendency for the user to make mistakes with button input is therefore very high.
2.3.2.2 Voice Command
Input through voice command is very convenient because it can be done without hand movement, using a headphone with an embedded microphone. It is easy for the user to send the desired commands to the system without even taking the smartphone out of the pocket. However, problems arise when the system cannot identify the command due to interference, such as crowded supermarkets and hypermarkets, announcements made in the supermarket, or songs and advertisements being played. Moreover, there is a chance that the system cannot understand the user's command because of the user's dialect or slang. There is also the possibility that the user needs to speak loudly to give a command, attracting unnecessary attention from other people nearby, which most disabled people are not comfortable with.
2.3.2.3 Gesture Interface
Input through a gesture interface is the most suitable because the user can write anywhere on the screen of the smartphone. There is no need to navigate to find buttons, and it can be done silently without attracting attention. For example, the user can write "F" on the screen to get directions to the fish section of the supermarket. The disadvantage is that the user needs to hold the phone.[5]
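As an illustration of how such gesture input could be wired up on Android, the hedged sketch below uses the platform's gesture API; the activity name, layout and gesture library resources, and the mapping from letters to departments are assumptions for this example, not part of the original design.

```java
import android.app.Activity;
import android.gesture.Gesture;
import android.gesture.GestureLibraries;
import android.gesture.GestureLibrary;
import android.gesture.GestureOverlayView;
import android.gesture.Prediction;
import android.os.Bundle;
import java.util.ArrayList;

// Hypothetical activity: recognizes a letter drawn anywhere on the screen
// and maps it to a department (e.g. "F" for the fish section).
public class GestureInputActivity extends Activity
        implements GestureOverlayView.OnGesturePerformedListener {

    private GestureLibrary gestureLibrary;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_gesture_input); // assumed layout with a GestureOverlayView

        // Pre-recorded gestures ("F", "B", "C", ...) stored in res/raw/gestures (assumed resource)
        gestureLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
        gestureLibrary.load();

        GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gesture_overlay);
        overlay.addOnGesturePerformedListener(this);
    }

    @Override
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        ArrayList<Prediction> predictions = gestureLibrary.recognize(gesture);
        if (!predictions.isEmpty() && predictions.get(0).score > 2.0) {
            String department = predictions.get(0).name; // e.g. "F" for the fish section
            navigateTo(department);                      // hypothetical navigation call
        }
    }

    private void navigateTo(String department) {
        // Placeholder: the real app would start GPS-guided navigation here.
    }
}
```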
2.3.3 Output
Since visually impaired people cannot perceive colors, objects or animations, it is most appropriate to use voice/sound and vibration as output. The system must communicate information to the user through sound, via voice instructions or short tunes. The commands are heard through a headphone for clarity and to avoid unnecessary attention. Vibration can also be a good output for notifying the user discreetly, so that only the user receives the notification.
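The following minimal sketch shows one way such audio and vibration output could be produced with standard Android APIs; the helper class, method names and example phrases are illustrative assumptions only.

```java
import android.content.Context;
import android.os.Vibrator;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Hypothetical helper that speaks navigation instructions and vibrates
// to discreetly confirm events such as a successful QR code scan.
public class AudioFeedback implements TextToSpeech.OnInitListener {

    private final TextToSpeech tts;
    private final Vibrator vibrator;

    public AudioFeedback(Context context) {
        vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        tts = new TextToSpeech(context, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.ENGLISH);
        }
    }

    /** Speak an instruction such as "Turn left towards the bread section". */
    public void speak(String instruction) {
        tts.speak(instruction, TextToSpeech.QUEUE_FLUSH, null);
    }

    /** Short vibration to confirm an event (e.g. QR code scanned) without drawing attention. */
    public void confirm() {
        vibrator.vibrate(200); // duration in milliseconds
    }
}
```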
2.4 USAGE OF SMARTPHONE WITHIN VISUALLY IMPAIRED PEOPLE
A study was conducted by J. Liimatainen on the rate of smartphone usage among visually impaired people and the purposes for which they use them.[7] Questionnaires were given to a group of eleven visually impaired people, consisting of 6 blind and 5 low-vision participants. The questionnaire covered the usage of smartphones for everyday tasks, with special questions about mobile applications for physical activity. Table 1 shows the results of the questionnaire.

Table 1: Results of the Questionnaire Conducted
The results show that most of the participants either owned (36.4%) or had tried (54.5%) smartphones or feature phones with a touch screen. The average experience of using mobile phones was 6 to 9 years. More than half (54.5%) of the participants had experience with mobile game applications. One person from the group had tried a physical-activity or health-related computer or mobile application before. The daily usage of mobile phones was mostly for calling, text messages and listening to music or radio. This study shows that smartphone usage among visually impaired people is high, and so is their eagerness to use smartphones for their daily purposes and tasks.[7]
2.5 ANDROID AS A MOBILE DEVELOPMENT PLATFORM
There are two popular operating systems (OS) that run on almost all smartphones available today: iOS from Apple and Android, an open-source mobile operating system. Android was chosen as the platform on which to build the system for visually impaired people to shop in supermarkets or hypermarkets. One of the main reasons for choosing the Android platform is that Android applications have a lower tendency to crash compared with iOS. Below are data gathered by Crittercism, a mobile app monitoring startup.

Table 2: Data of crashed per app launch compared between iOS and Android
Basically, the data shows that iOS apps crashed more frequently than comparable apps on Android. As can be seen in the data presented, iOS apps on the iPhone, iPad and iPod Touch make up nearly 75% of total crashes in the period in which the data was gathered. The researchers suggest that the reason why Android apps see far fewer crashes than iOS apps is that the Android platform allows developers to send out updates faster, and users are able to set their Android devices to auto-update apps, which allows bugs to be fixed much faster than on iOS. On iOS, developers pushing updates have to go through an approval process that can take weeks, and there is no auto-update for iOS users. On a day-to-day basis, more app crashes are seen on iOS than on Android.
Besides that, Android gains a lot of users because technology platform markets tend to standardize around a single dominant platform, like Windows in PCs, Facebook in social networking, and Google in search. Developers strongly support the Android platform by building their apps and providing them for Android. There are also two popular Android application stores, GetJar and Google Play. Both GetJar and Google Play offer users free and paid Android applications, and users can choose what type of applications they want to download.
CHAPTER 3
METHODOLOGY
3.1 RESEARCH AND PROJECT DEVELOPMENT METHODOLOGY
3.1.1 Methodology
In developing the QR Trail app, the methodology to be used is the Rapid Application Development (RAD) method. As the time given to develop the complete working prototype is only 10 months, RAD is the suitable methodology because it enables the system to be developed faster. RAD is a method that helps develop a system faster and at higher quality.
This methodology was also chosen because RAD allows the developer to do a lot of testing during the development phase. QR Trail is a new type of navigation system that uses a smartphone for visually impaired persons; therefore the developer needs to develop this system from scratch. For this reason, the developer is expected to face some errors while developing the system. By using the RAD methodology, the developer is able to fix the system if an error is found during the testing phase.
Besides, as the budget to develop this system is small, RAD helps reduce the development cost of this project because it provides the flexibility to develop the system completely.[8] In order to satisfy the customer, the developer might need to upgrade the system in the future. Therefore, by applying the RAD method, the developer is able to make changes to the system faster and more efficiently.
Under this methodology, the whole system development is divided into the four main phases of Rapid Application Development listed below:
I. Requirement analysis and System design
II. Prototyping cycles
III. Testing
IV. Implementation
3.2 Key Milestone
1. Final Year Project 1
<table>
<thead>
<tr>
<th>Activities</th>
<th>Week 1</th>
<th>Week 2</th>
<th>Week 3</th>
<th>Week 4</th>
<th>Week 5</th>
<th>Week 6</th>
<th>Week 7</th>
<th>Week 8</th>
<th>Week 9</th>
<th>Week 10</th>
<th>Week 11</th>
<th>Week 12</th>
<th>Week 13</th>
</tr>
</thead>
<tbody>
<tr>
<td>Finding Supervisor</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Research on propose title</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Propose final chosen title</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Project analysis</td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Planning system design</td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Project Testing</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Market Survey</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Analyze data</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td></td>
</tr>
</tbody>
</table>
Table 3: Key Milestone Final Year Project 1
2. Final Year Project 2
<table>
<thead>
<tr>
<th>Activities</th>
<th>Week 1</th>
<th>Week 2</th>
<th>Week 3</th>
<th>Week 4</th>
<th>Week 5</th>
<th>Week 6</th>
<th>Week 7</th>
<th>Week 8</th>
<th>Week 9</th>
<th>Week 10</th>
<th>Week 11</th>
<th>Week 12</th>
<th>Week 13</th>
</tr>
</thead>
<tbody>
<tr>
<td>Development</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Design Interface</td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>System Function</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>System Database</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>System Testing</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Internal Testing</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Review complete prototype with supervisor</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
</tr>
<tr>
<td>Maintenance</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>x</td>
<td>x</td>
</tr>
</tbody>
</table>
Table 4: Key Milestone Final Year Project 2
The table below shows the key milestones that the author needs to achieve during the entire timeline of the Final Year Project (FYP), from September 2013 until April 2014.
<table>
<thead>
<tr>
<th>Key Milestone</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project Proposal</td>
<td>29 September 2013</td>
</tr>
<tr>
<td>Extended Proposal (10%)</td>
<td>30 November 2013</td>
</tr>
<tr>
<td>Proposal Defense (40%)</td>
<td>11 December 2013</td>
</tr>
<tr>
<td>Interim Report (50%)</td>
<td>18 December 2013</td>
</tr>
<tr>
<td>Progress Report (10%)</td>
<td>06 February 2014</td>
</tr>
<tr>
<td>Pre-SEDEX (10%)</td>
<td>24 March 2014</td>
</tr>
<tr>
<td>Dissertation (40%)</td>
<td>30 April 2014</td>
</tr>
<tr>
<td>VIVA (30%)</td>
<td>22 April 2014</td>
</tr>
<tr>
<td>Technical Report (10%)</td>
<td>7 April 2014</td>
</tr>
</tbody>
</table>
Table 5: Final Year Project Milestone
### 3.3 Gantt chart
The Gantt chart covers the period from September 2013 through April 2014 and schedules the following tasks:

1. Propose the project title
2. Plan the project
3. Feasibility analysis
4. Create a work plan
5. Analysis
6. Information and data gathering
7. Requirement gathering and analysis
8. Design
9. Develop UML
10. Develop interface
11. Develop a prototype
12. Implementation
13. Test the system
14. Gather user feedback
15. Iterations 3, 4, 5
16. Deliver the project
Figure 4: Gantt chart for Final Year Project 1 and 2
3.4 Project Activities
![Rapid Application Development Cycle Diagram]
Figure 5: Rapid Application Development Cycle.
1. Requirement analysis and System design
This is the first phase of the system development. Firstly, the author analyzed and identified the important requirements based on the functionality of the system, so that the system is able to achieve its objectives.
For the QR Trail app to be put to use, a QR code must be generated for each location; it must contain (via a link) an audio file describing the current position and the instructions for moving to the next position. The QR code containing the audio link will be stuck on the floor for the user, i.e. the visually impaired person, to track and identify. The audio files that have been created will be stored on free or paid servers. Multiple QR codes will be stuck on the floor in each department.
Tracking and identifying the QR codes stuck on the floor requires a white cane, which the visually impaired person usually possesses and uses for moving around. The white cane, whose use the author believes has been mastered by the visually impaired person, will serve as the identifier for the QR codes stuck on the floor of supermarkets or hypermarkets.
Once a code is identified and scanned, the instructions are played and the user chooses the desired location, to which the app will guide the user using GPS. The app guides the user to the chosen location, where another QR code at the chosen department marks the end point. The user can then either stop and shop, or scan again and continue moving.
1. The user identifies a QR code on the floor using the white cane.
2. The user scans the QR code on the floor using the app.
3. An audio file containing the current location and instructions is played.
4. The user inputs the desired department using gesture input.
5. The app guides the user to the desired location, whose QR code is the end point. The user either stops and shops at that department or scans again and moves on to another department.
Figure 6: System design of QR Trail App
I. Process Flow of the QR Trail App:
- User opens the app manually
- Scan for a QR code to identify the location
- Code scanned
- Audio file plays to indicate the location
- User input by clicking the location
- (New screen opens)
- User chooses the desired destination by clicking the button
- (Click the button twice)
- Desired location reached?
- Continue navigating
- User input by clicking the location
- Finish navigating
- User input by clicking the cashier button
- (Cannot continue navigating)
- Exit app
Figure 7: Process Flow of QR Trail App
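To make the process flow above more concrete, a hedged sketch of the state transitions is given below; the state names, method names and placeholder calls are illustrative assumptions rather than the actual implementation.

```java
// Hypothetical state machine mirroring the QR Trail process flow above.
public class QrTrailFlow {

    enum State { SCANNING, PLAYING_LOCATION_AUDIO, CHOOSING_DESTINATION, NAVIGATING, FINISHED }

    private State state = State.SCANNING;

    /** Called when a QR code has been decoded successfully. */
    public void onCodeScanned(String audioUrl) {
        if (state == State.SCANNING) {
            playAudio(audioUrl);                 // announce the current location
            state = State.PLAYING_LOCATION_AUDIO;
        }
    }

    /** Called when the user taps the screen after the location audio has played. */
    public void onAudioAcknowledged() {
        if (state == State.PLAYING_LOCATION_AUDIO) {
            state = State.CHOOSING_DESTINATION;  // show the gesture-input screen
        }
    }

    /** Called with the department chosen via gesture input. */
    public void onDestinationChosen(String department) {
        if (state == State.CHOOSING_DESTINATION) {
            startGuidance(department);           // GPS-guided walk towards the next QR code
            state = State.NAVIGATING;
        }
    }

    /** Called when the destination QR code is reached; the user may rescan or exit. */
    public void onDestinationReached(boolean continueShopping) {
        state = continueShopping ? State.SCANNING : State.FINISHED;
    }

    private void playAudio(String url) { /* placeholder, see the audio player sketch later */ }
    private void startGuidance(String department) { /* placeholder for navigation */ }
}
```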
II. Technologies to be applied in the QR Trail App:
a) Quick Response (QR) code:
- Quick Response (QR) code is a two-dimensional code designed to encode information. The QR code is a trademark for a type of matrix barcode distinguished by its fast readability and large storage capacity. The information encoded can be made up of four standardized kinds ("modes") of data (numeric, alphanumeric, byte/binary, Kanji) or, through supported extensions, virtually any kind of data. Because a QR code carries information both horizontally and vertically, it is capable of encoding the same amount of data in approximately one-tenth of the space of a traditional bar code.
Process Flow of QR code:
- Process 1: Converting the information into a QR code:
The process of converting the information into a QR code is done by an application called a QR code generator.

The process involved in converting an audio file into a QR code:
1. Recordmp3.org is used to record the instruction online.
2. It then saves the recording to the web.
3. It supplies a URL that will take anyone who has it to the audio file. This link is copied to create the QR code.
4. The URL created for the audio file by Recordmp3.org is pasted into any QR code generator.
5. The QR code is ready to be printed.
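As an illustration, the URL-to-QR-code step could also be automated with the ZXing library that is used later for scanning; this is a minimal sketch assuming the ZXing core and javase modules are on the classpath, and the URL and output file name are placeholders.

```java
import com.google.zxing.BarcodeFormat;
import com.google.zxing.WriterException;
import com.google.zxing.client.j2se.MatrixToImageWriter;
import com.google.zxing.common.BitMatrix;
import com.google.zxing.qrcode.QRCodeWriter;

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

// Generates a printable QR code image that encodes the URL of an audio instruction.
public class QrCodeGenerator {

    public static void main(String[] args) throws WriterException, IOException {
        String audioUrl = "http://example.com/audio/grocery-section.mp3"; // placeholder URL
        int sizePx = 1300; // roughly 11 cm at 300 dpi, in line with the size experiment later

        BitMatrix matrix = new QRCodeWriter().encode(audioUrl, BarcodeFormat.QR_CODE, sizePx, sizePx);
        Path output = Paths.get("grocery-section-qr.png");
        MatrixToImageWriter.writeToPath(matrix, "PNG", output);

        System.out.println("QR code written to " + output.toAbsolutePath());
    }
}
```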
- Process 2: Translating the information in the QR code:
The process of translating the information in the QR code is done by a QR code reader function within the QR Trail app. The device that functions as the QR code translator must be equipped with a camera because it needs to capture the QR code image. After the image has been captured, the QR code reader processes it and then translates all the information stored in the QR code image. In this case, the information is a URL that links to the audio file.
Figure 9: Translating the information stored in the QR code image
b) The White Cane
There are many different kinds of cane tips, and not all of them may be suitable for the kind of cane used for a particular purpose. If using a particular tip is important, the visually impaired person should make sure it is compatible with the cane being considered before making a final decision. Some cane tip options are:
- **The pointer tip.** This is like a finger on the end of the cane. It is tapped over the ground, so it may give less information about the terrain. This tip is traditionally used with a guide cane.
- **The ball tip.** This is a ball the size of a small apple which is rolled over the ground in front of the user. It provides much more information about the terrain and has become a very popular choice for long cane users.

Cane tip types: A = Pencil Tip, B = Bundu Basher Tip, C = Ball Race Overfit Tip, D = Rubber Support Cane Tip, E = Pear Tip, F = Rural Tip, G = Jumbo Roller Tip
c) Global Positioning System
These days most Android smartphones have AGPS (Assisted GPS) chips installed, which use network towers and Wi-Fi hotspots to quickly determine the nearby location and help the GPS-enabled Android smartphone get a lock on GPS satellites. Android smartphones with AGPS chips can also lock onto GPS satellites without the need for a data plan or network, but they require a clear view of the sky and some time to acquire the lock. After the user chooses the desired location or department, this GPS functionality is used to guide the user from one department to another. The starting and end points of each movement are QR codes: the QR code that was scanned acts as the starting point, while the QR code at the chosen location or department is the end point.
Figure 11: Android GPS Architecture
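A hedged sketch of how the app could obtain position updates from the platform location API is shown below; the update interval, distance threshold and callback body are assumptions for illustration only.

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Illustrative listener that receives GPS fixes while guiding the user
// between two QR code way-points.
public class GuidanceLocationListener implements LocationListener {

    /** Register for GPS updates (requires the ACCESS_FINE_LOCATION permission). */
    public static void start(Context context, GuidanceLocationListener listener) {
        LocationManager manager =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Assumed values: a fix at most every 2 seconds or every 1 metre moved.
        manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 2000, 1, listener);
    }

    @Override
    public void onLocationChanged(Location location) {
        double lat = location.getLatitude();
        double lon = location.getLongitude();
        // The real app would compare this fix with the coordinates of the
        // destination QR code and speak a voice instruction accordingly.
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```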
III. QR Trail App requirements:
a. Mobile phone
i. Built on the Android mobile operating system
b. The smartphone must have a back camera.
c. The smartphone must have built-in GPS.
d. The visually impaired user must have a white cane.
Current progress in system design:
The initial interfaces of the QR Trail application have been completed. The development of the real interfaces will be based on this initial design.
<table>
<thead>
<tr>
<th>Interface</th>
<th>Information</th>
</tr>
</thead>
<tbody>
<tr>
<td>Process 1</td>
<td>• Once the App is clicked, the first screen will be the QR code scanner.</td>
</tr>
<tr>
<td></td>
<td>• Using the phone’s camera</td>
</tr>
<tr>
<td></td>
<td>• The interface will be the camera</td>
</tr>
</tbody>
</table>
Process 2:
- After the QR code is scanned, the app goes straight to an audio player to play the current location and the instructions.
- The audio is played until the user taps the screen.
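A minimal sketch of streaming the linked audio file with the platform media API is shown below; the class name, the URL and the completion behaviour are illustrative assumptions.

```java
import android.media.AudioManager;
import android.media.MediaPlayer;
import java.io.IOException;

// Plays the location/instruction audio file whose URL was decoded from the QR code.
public class LocationAudioPlayer {

    private final MediaPlayer player = new MediaPlayer();

    /** Stream the audio file from the URL stored in the scanned QR code. */
    public void play(String audioUrl) throws IOException {
        player.reset();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        player.setDataSource(audioUrl);            // e.g. a Recordmp3.org link
        player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                mp.start();                        // start playback once buffering is ready
            }
        });
        player.prepareAsync();                     // prepare without blocking the UI thread
    }

    /** Stop playback when the user taps the screen to move on to gesture input. */
    public void stop() {
        if (player.isPlaying()) {
            player.stop();
        }
    }
}
```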
Process 3:
- After the user taps the screen on the audio interface, the gesture-input interface appears so that the user can input the desired location.
- For example, if the instructions say 1 for grocery, 2 for bread and 3 for the cashier, the user writes 1, 2 or 3 on this screen.
Process 4:
- Once the user inputs the desired location using gesture input, the app guides the user to the location using voice commands.
- No activity is performed on the screen; the interface just shows an arrow to indicate that the guiding process is going on.

Process 5:
- Once the user reaches the desired location, the app automatically switches to the QR code reader screen.
- The user can now either exit the app or continue using it by scanning a QR code again.

**Interface for QR code on the floor**
![QR Code Image]
**What is this?**
This QR code is for blind people to scan and move around the supermarket.
Kindly, please **Do Not Stand** on this **QR Code**
We would like to thank you for your co-operation
From Supermarket Management
Figure 12: QR Code Proposed Interface
2. Prototyping cycles
In the prototyping cycles there are three main steps: develop, demonstrate and refine. After the system design process has been finalized, the system prototype is built. As planned in the key milestones, prototype development started in Final Year Project 2; however, the initial design of the system interfaces was done during Final Year Project 1.
The development of this system started in the first week of Final Year Project 2. According to the key milestones, the development process took eight weeks to complete. There were three activities in the development process. The first activity was to design the real interfaces for the system. The interfaces were designed based on the initial design done during Final Year Project 1. The real interface design process was expected to be completed in three weeks.
After the interfaces had been completed, the author started to develop the system functions. The development of the system functions took five weeks to complete. The author used the Eclipse IDE as the development tool. In Eclipse, the author used the Java language to code the QR Trail app's functions. The ZXing (Zebra Crossing) library was used to code the QR code scanner. The app can call on the resources in this open-source library, retrieving and processing the returned results. By importing the ZXing integration classes into the QR Trail app, user scans are made easier and the development effort can focus on handling the scan results.
In the demonstration process, the author used the Android emulator to test-run the system prototype. The aim of this process is to check the system prototype. If the prototype is completely functional and meets all the system requirements, then it is ready to proceed to the system testing stage. However, if the prototype fails during the demonstration process, it undergoes the refining process.
In the refining process, the author rebuilds the system prototype and fixes the problems found in the previous prototype. The prototyping cycles continue until a complete working prototype is done and ready for the next stages.
Development Tools:
<table>
<thead>
<tr>
<th>No.</th>
<th>Tool</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Eclipse IDE</td>
</tr>
<tr>
<td>2</td>
<td>Android Emulator</td>
</tr>
</tbody>
</table>
Table 6: Development tools
QR Coder Scanner Application for QR Trail App
In order to read the QR codes placed on the floor, the QR Trail app needs a QR code reader. Rather than creating a new QR code reader inside the QR Trail app, the author decided to integrate an existing one. This is because an existing QR code reader has a vast database of barcode formats, has had very few crashing incidents and has the highest rating compared with other scanners.
The author chose Zebra Crossing, well known as the ZXing barcode and QR code scanner. ZXing allows a user to scan one-dimensional or two-dimensional graphical barcodes with the camera on their Android device. ZXing is an open-source app, which allows the author to integrate it with the QR Trail app without any fees.
Steps involved in integrating the ZXing app into the QR Trail app
- **Step 1**: Obtain the ZXing source code
ZXing source code can be obtained from many resources, since ZXing is commonly used by many programmers and is open source. The author obtained it from the following resource: [http://code.google.com/p/zxing/source/browse/trunk](http://code.google.com/p/zxing/source/browse/trunk).
- **Step 2**: Build the ZXing core using Apache Ant
The author built the core project into a jar file using Apache Ant, which was downloaded from: [http://ant.apache.org/ivy/download.cgi](http://ant.apache.org/ivy/download.cgi)
- **Step 3**: Build ZXing Android using Eclipse
Create a new Android project and name it ZXing. Then add the core.jar file to the project.
- **Step 4**: Include ZXing Android in your project.
These are the steps taken to integrate ZXing into QR Trail for scanning the QR code.
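Once the ZXing project is included, the scan itself is typically triggered through ZXing's IntentIntegrator helper; the sketch below is a hedged illustration of that pattern, with the activity name and the handling of the decoded URL assumed for this example.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import com.google.zxing.integration.android.IntentIntegrator;
import com.google.zxing.integration.android.IntentResult;

// Hypothetical scanning activity: launches the ZXing scanner and receives the decoded URL.
public class ScanActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Launch the ZXing capture screen as soon as the app is opened.
        IntentIntegrator integrator = new IntentIntegrator(this);
        integrator.initiateScan();
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        IntentResult result = IntentIntegrator.parseActivityResult(requestCode, resultCode, data);
        if (result != null && result.getContents() != null) {
            String audioUrl = result.getContents(); // URL of the location audio file
            // Hand the URL to the audio player sketched earlier (assumed wiring).
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```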
3. System Testing
The testing of the QR Trail app is conducted by continuously testing its ease of use. Once a problem is identified, the app undergoes further development to rectify the issues raised. The preliminary test is conducted with sighted users; the test then moves to the next level, where the sighted users are tested with their eyes covered. This helps in understanding which parts of the app need to be improved for the users' ease of use.
Once the app has been tested satisfactorily, testing is done with visually challenged persons. This ensures that the app is first easy to use by sighted people before it is brought to visually impaired persons for testing, which in turn ensures the success of the testing.
4. Implementation
After the prototype has been finalized, the implementation stage starts. Before the system is ready to be implemented, the final prototype version is reviewed by the supervisor. The system is expected to be reviewed by the supervisor during week 11 of Final Year Project 2.
The system prototype is implemented at the chosen target site, which is TESCO Seri Iskandar, Perak. The reason for choosing TESCO Seri Iskandar, Perak is that this hypermarket has a big space, is near the author's university and is less crowded, so the implementation will be easy for a test trial. The author also believes volunteers for the test trials can easily be found around this area.
During the implementation process, the author tests whether the app prototype can deliver the functionality for a visually impaired person to move autonomously around the hypermarket. If the app is able to deliver the functionality successfully and is accepted by the users, the app prototype is considered successful.
Figure 13: Layout of Shopping Complex
Figure 13 above shows the common layout of a typical shopping mall or hypermarket. In the layout, the QR codes are positioned at the ends of each section: one at one end and another at the other end, without regard to which QR code is the starting point or the end point. The distance between QR codes will be within 1 meter for each section, and the distance between QR codes of the same section will depend on the shelf size of each section.
The red arrows indicate the distance, or walking path, that the QR Trail user has to cover with the help of the navigation function within QR Trail. This is when QR Trail guides the user to move from one QR code to another.
The reason for leaving a distance between QR codes is to prevent the user from being confused about which QR code to scan. QR Trail actually uses the distance between QR codes to reduce faults; the distance is used as a fault tolerance factor.
CHAPTER 4: RESULTS AND DISCUSSION
4.1 Market Survey
Market surveys have been conducted to assess the necessity of the QR Trail app in the target market. The surveys are divided into three different categories:
i. Smartphone user
ii. Shopping
iii. Supermarkets or Hypermarkets
1. Smartphone user
For the smartphone user survey category, 5 visually impaired Malaysian citizens were chosen randomly to take part. Malaysian citizens were chosen because the QR Trail application will be implemented in local Malaysian supermarkets or hypermarkets. This survey was conducted in Dato Keramat, Kuala Lumpur, where there is a school for the disabled. Below are the age groups and professions of the selected visually impaired persons who answered this survey:

Table 7: Graph of respondent group age and profession.
The survey conducted among visually impaired persons checked the statistics of smartphone use among visually impaired people. Besides that, a survey was also made on whether visually impaired people are aware or unaware of QR code technology.
There are four questions in the questionnaire under this category. Only the two main questions are included in this documentation.
<table>
<thead>
<tr>
<th>Result</th>
<th>Conclusion</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Question 1:</strong> Are you using a smartphone?</td>
<td>4 out of 5 selected visually impaired people are using a smartphone. We can say that QR Trail is suitable for implementation in Malaysia, as most of the visually impaired people surveyed are using a smartphone. Even though some visually impaired persons are still not using a smartphone, the developer believes that, with the Malaysian government's smartphone subsidy efforts, the percentage of citizens who own a smartphone will increase.</td>
</tr>
<tr>
<td><img src="image1.png" alt="Pie Chart" /></td>
<td>4 YES, 1 NO</td>
</tr>
</tbody>
</table>
| **Question 2:** As a smartphone user, are you aware of QR code technology? | 4 out of 5 selected citizens were not aware of the existence of QR code technology on smartphones. The developer believes that by introducing QR Trail, visually impaired people will become more aware of the potential of QR code technology and its many uses, which can ease their lives. |
|  | 1 YES, 4 NO |
Table 8: Survey results for the smartphone user category.
2. **Shopping**
For the shopping survey category, the same 5 visually impaired persons were asked the questionnaire under this survey. There are four questions in the questionnaire under this category. Only the three main questions are included in this documentation.
<table>
<thead>
<tr>
<th>Result</th>
<th>Conclusion</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Question 1: Do you go for shopping?</strong></td>
<td></td>
</tr>
<tr>
<td>YES</td>
<td>From the result of this survey, 2 out of 5 visually impaired persons said they go shopping, but added that it is only once in a blue moon. The other 3 said they never go, as they obtain the things they need in other ways.</td>
</tr>
<tr>
<td>NO</td>
<td></td>
</tr>
</tbody>
</table>
| **Question 2: Where do you go for shopping?** | |
| **Grocery Store** | 2 |
| **Supermarket** | 1 |
| **Hypermarket** | 0 |
| **Shopping Mall** | 0 |
Since only two respondents could answer this question, both of them have shopped in grocery stores where they are familiar with the owner, and one respondent has shopped in a supermarket a few times. They said they have never been to any shopping mall because of the large area and the crowds.
Question 3: Do you go shopping alone?
<table>
<thead>
<tr>
<th>YES</th>
<th>NO</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
Again, this question could only be answered by the two respondents who have shopped before. Both answered that it depends on the place: for a grocery store they are familiar with, they will go alone, while for a new place or a hypermarket they certainly need a companion.
Table 9: Survey results for the shopping category.
3. **Supermarkets and Hypermarkets**
For the supermarkets and hypermarkets category, the author conducted surveys in Tesco Seri Iskandar, Billion Seri Iskandar, Tesco Seri Alam and Today's Market Masai. Three simple questions were asked of the person in charge of Human Resources at each place. Below is the documentation of the three questions asked.
<table>
<thead>
<tr>
<th>Result</th>
<th>Conclusion</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
</tr>
<tr>
<td>Question 1: Do you usually have visually impaired persons as your customers?</td>
<td></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>YES</th>
<th>NO</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
From the result of this survey, 2 out of the 4 places said they have noticed or had visually impaired persons as their customers, but it is something rare. They added that they have never seen a visually impaired person shopping alone; it has always been with a companion.
Question 2: If there were a system for blind people to do shopping, would you implement it?
<table>
<thead>
<tr>
<th></th>
<th>YES</th>
<th>NO</th>
</tr>
</thead>
<tbody>
<tr>
<td>Count</td>
<td>0</td>
<td>4</td>
</tr>
</tbody>
</table>
All 4 out of 4 places agreed and said they would encourage such an initiative, as it would be a promotion for them. They also expressed the concern that, if implementing the system costs them money, they will rethink the implementation.
Question 3: Do you agree to stick QR codes on the floor?
<table>
<thead>
<tr>
<th></th>
<th>YES</th>
<th>NO</th>
</tr>
</thead>
<tbody>
<tr>
<td>Count</td>
<td>1</td>
<td>3</td>
</tr>
</tbody>
</table>
Tesco Seri Alam agreed to stick the QR codes on the floor, since it is already their practice to stick promotional ads on the floor. The other three places showed some hesitation and said they would have to refer this to higher management.
Table 10: Survey results for the supermarkets and hypermarkets category
4.2 QR Code Experiment
An experiment was conducted to determine the optimal size of the QR code to be printed and stuck on the floor, so that it can easily be detected and scanned by the QR code scanner.
There are two factors that are important in this experiment:
- **The distance between the QR code and the scanning device** – which determines the size of the QR code in the viewport of the phone camera.
- **The size of the dots in the code** – the more data you put into the code the smaller the dots become.
**Scan Distance**
To be scanned effectively, the QR code should appear to be at least 1 cm (0.4 inches) across in the viewport of the scanning device, and as the distance between the camera and the QR code increases, the size of the QR code needs to increase to compensate. For most smartphones, the relationship between scan distance and minimum QR code size is approximately 10:1.
**Simple Formula:**
Minimum QR Code Size = Scanning Distance / 10
**Calculating the size**
The recommended minimum size of the QR code image is determined by the scanning distance and the size of the data dots in the QR code, and can be calculated by first determining:
- **Distance Factor:** Start with a factor of 10, then reduce it by 1 for each of the following: poor lighting in the scan environment, a mid- to light-colored QR code being used, or the scan not being done front-on.
- **Data Density Factor:** Count the number of columns of dots in the QR code image and then divide that by 25 to normalize it back to the equivalent of a Version 2 QR code.
**Better Formula:**
Minimum QR Code Size = (Scanning Distance / Distance Factor) * Data Density Factor
Based on the Formula:
Scanning Distance = 914.63mm (3 ft.)
Distance Factor = 10 – 1 (for poor lighting) = 9
Density Factor = 25/25 = 1.0
Minimum Size = (914.63 mm / 9) * 1.0 = 101.62 mm, rounded up to approximately 11 cm
Figure 14: 11cm x 11cm QR Code
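For completeness, the sizing rule above can also be expressed as a small helper; the class and method names are assumptions for illustration, and the worked example simply reproduces the calculation given in the text.

```java
// Illustrative helper implementing the QR code sizing rule described above.
public final class QrSizing {

    /**
     * Minimum QR code edge length in millimetres.
     *
     * @param scanDistanceMm distance between camera and code, in mm
     * @param distanceFactor starts at 10, minus 1 per adverse condition
     *                       (poor lighting, light-coloured code, angled scan)
     * @param columnsOfDots  number of columns of dots in the QR code image
     */
    public static double minimumSizeMm(double scanDistanceMm, int distanceFactor, int columnsOfDots) {
        double dataDensityFactor = columnsOfDots / 25.0; // normalise to a Version 2 code
        return (scanDistanceMm / distanceFactor) * dataDensityFactor;
    }

    public static void main(String[] args) {
        // Worked example from the text: 3 ft (914.63 mm), poor lighting, Version 2 code.
        double size = minimumSizeMm(914.63, 9, 25);
        System.out.printf("Minimum QR code size: %.2f mm%n", size); // about 101.6 mm, ~11 cm rounded up
    }
}
```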
4.3 Interfaces
Figure 15 App Interfaces
4.4 Price Differentiation Between using RFID Tags and QR Code
1) Using RFID Tags System
<table>
<thead>
<tr>
<th>Requirements</th>
<th>Price Range</th>
</tr>
</thead>
<tbody>
<tr>
<td>RFID Tags</td>
<td>RM 1.00 each (Passive Tags)* (100 Units)</td>
</tr>
<tr>
<td>RFID Reader</td>
<td>RM 150 each device</td>
</tr>
<tr>
<td>White Cane</td>
<td>RM 30 / given free by the Government</td>
</tr>
<tr>
<td>Smartphone</td>
<td>RM 500 and above</td>
</tr>
<tr>
<td>Total</td>
<td><strong>RM 780 (Least)</strong></td>
</tr>
</tbody>
</table>
*Price Varies According to Supplier and Quantity
2) Using QR Code
<table>
<thead>
<tr>
<th>Requirements</th>
<th>Price Range</th>
</tr>
</thead>
<tbody>
<tr>
<td>Printed QR Code</td>
<td>RM 3.00 each (10 Units)</td>
</tr>
<tr>
<td>White Cane</td>
<td>RM 30 / given free by the Government</td>
</tr>
<tr>
<td>Smartphone</td>
<td>RM 500 and above</td>
</tr>
<tr>
<td>Total</td>
<td><strong>RM 560 (Least)</strong></td>
</tr>
</tbody>
</table>
Table 11: Price comparison between RFID tag and QR code usage
CHAPTER 5: RECOMMENDATION
In the previous proposal defense presentation, the developer received both positive and negative comments on the project idea from the internal and external examiners, Mr Izzatdin B Abd Aziz and Prof. Dr. Alan Oxley. Both examiners found the idea of creating an app for blind people to navigate around supermarkets or hypermarkets good and noble.
However, both examiners also found that there would be some technical issues with the QR Trail application. One technical issue is that the QR Trail application uses GPS, which will sometimes not be accurate or precise; the examiners noted that it would be troublesome for a blind user to move around without correct guidance. Another issue is how the blind user will identify the QR code stuck to the floor; the examiners were concerned about whether users would have difficulty finding the QR code. They therefore also suggested looking into the degree of blindness of the users of the QR Trail app.
As the developer, a proposed solution needs to be provided for these issues. The developer has proposed a solution whereby, in the initial stage of implementation, the guidance part that navigates the user will be created in offline mode, so that the smartphone does not need built-in GPS or internet access. This way, the app can function properly with accurate and precise results. Moreover, the developer will study the design of the QR code to be stuck on the floor further, to develop a design that can be identified easily, that no one steps on, that scans easily, and that sticks nicely to the floor. Furthermore, to increase the effectiveness of detecting the QR codes, NFC tags can be placed on each QR code stuck to the floor.
CHAPTER 6: CONCLUSION
This documentation has explained in detail the idea behind this project, QR Trail, an application that provides a navigation system for visually impaired users to move around a supermarket or hypermarket autonomously. Its main parts are the abstract, introduction, literature review, methodology, and results and discussion.
By taking advantage of the latest technologies, such as smartphones and QR codes, this system could help visually impaired persons to bravely come out of their comfort zone and blend into society by doing normal things as anyone else does. This system could also promote Android smartphones among visually impaired Malaysians, as it will be developed as an Android application, and it can lead to creative uses of QR codes.
The development of the system will use the Rapid Application Development (RAD) method. There will be four main phases in RAD: requirement analysis and system design, prototyping cycles, system testing, and implementation. The system will be developed using Eclipse, an application development tool.
In conclusion, this system could perhaps solve the problem of visually impaired people moving around supermarkets or hypermarkets autonomously and doing some shopping.
REFERENCES
Figure 16: Questionnaire on Smartphones and Shopping
Figure 17: Questionnaire on Supermarkets and Hypermarkets
|
olmocr_science_pdfs
|
2024-11-27
|
2024-11-27
|
164e28c2fad46c5182a8c753d4db831a42270b0a
|
[REMOVED]
|
{"Source-Url": "https://web.archive.org/web/20150910214933/http://ti.arc.nasa.gov/m/profile/kyrozier/papers/Rozier_Vardi_Final_STTT_2010.pdf", "len_cl100k_base": 11859, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 55265, "total-output-tokens": 16380, "length": "2e13", "weborganizer": {"__label__adult": 0.0004382133483886719, "__label__art_design": 0.0005168914794921875, "__label__crime_law": 0.000606536865234375, "__label__education_jobs": 0.000988006591796875, "__label__entertainment": 0.0001392364501953125, "__label__fashion_beauty": 0.00023293495178222656, "__label__finance_business": 0.00039124488830566406, "__label__food_dining": 0.0004949569702148438, "__label__games": 0.0010595321655273438, "__label__hardware": 0.0013971328735351562, "__label__health": 0.0007748603820800781, "__label__history": 0.0004315376281738281, "__label__home_hobbies": 0.00013780593872070312, "__label__industrial": 0.0008473396301269531, "__label__literature": 0.000438690185546875, "__label__politics": 0.000545501708984375, "__label__religion": 0.0008025169372558594, "__label__science_tech": 0.18603515625, "__label__social_life": 0.00013458728790283203, "__label__software": 0.00949859619140625, "__label__software_dev": 0.79248046875, "__label__sports_fitness": 0.0004277229309082031, "__label__transportation": 0.0010175704956054688, "__label__travel": 0.0002512931823730469}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 58172, 0.03286]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 58172, 0.33731]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 58172, 0.82818]], "google_gemma-3-12b-it_contains_pii": [[0, 2882, false], [2882, 9055, null], [9055, 14883, null], [14883, 19917, null], [19917, 24975, null], [24975, 28464, null], [28464, 32743, null], [32743, 36730, null], [36730, 39087, null], [39087, 40673, null], [40673, 44433, null], [44433, 47266, null], [47266, 48757, null], [48757, 55677, null], [55677, 58172, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2882, true], [2882, 9055, null], [9055, 14883, null], [14883, 19917, null], [19917, 24975, null], [24975, 28464, null], [28464, 32743, null], [32743, 36730, null], [36730, 39087, null], [39087, 40673, null], [40673, 44433, null], [44433, 47266, null], [47266, 48757, null], [48757, 55677, null], [55677, 58172, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 58172, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 58172, null]], "pdf_page_numbers": [[0, 2882, 1], [2882, 9055, 2], [9055, 14883, 3], [14883, 19917, 4], [19917, 24975, 5], [24975, 28464, 6], [28464, 32743, 7], [32743, 36730, 8], 
[36730, 39087, 9], [39087, 40673, 10], [40673, 44433, 11], [44433, 47266, 12], [47266, 48757, 13], [48757, 55677, 14], [55677, 58172, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 58172, 0.01873]]}
|
olmocr_science_pdfs
|
2024-11-24
|
2024-11-24
|
bc63b3314835ec1637c3a756d6205409c3ec8764
|
Lightweight Adaptive Filtering for Efficient Learning and Updating of Probabilistic Models
Antonio Filieri
University of Stuttgart
Stuttgart, Germany
Lars Grunske
University of Stuttgart
Stuttgart, Germany
Alberto Leva
Politecnico di Milano
Milan, Italy
Abstract—Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. Such systems need reasoning mechanisms to drive their evolution, and these mechanisms are usually based on models capturing relevant aspects of the running software. Keeping these models continuously updated in evolving environments requires learning procedures that have low overhead and are robust to changes. Most available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn time-varying transition probabilities of discrete-time Markov models, which provides robustness to noise and fast adaptation to changes with a very low overhead. A formal assessment of the stability, unbiasedness, and consistency of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.
I. INTRODUCTION
Non-functional properties such as reliability, performance, or energy consumption are a central factor in the design of software systems, moving from the niche of critical systems to everyday software. Probabilistic quantitative properties are able to characterize the uncertainty and unpredictability of external phenomena affecting software behavior, from the interaction with the users to the contention on accessing physical resources. For this reason, significant research effort has been devoted in recent years to the specification and verification of probabilistic quantitative properties [1–7]. These verification approaches commonly build upon convenient formal models able to capture the probabilistic nature of the described phenomena (e.g., Markov models or queuing networks).
However, most of these models are constructed at design time based on initial assumptions about the software and its execution environment. These assumptions might be invalidated by unforeseen changes the software may undergo during its execution [8, 9]. To handle this issue, probabilistic models need to be continually updated during runtime [10–13] to provide a current view on the running systems, supporting also the runtime verification of the desired properties.
In general, designing time-efficient and accurate algorithms to keep a probabilistic model continuously updated during runtime is an open problem, deeply investigated by the Software Engineering community [12–15]. An early example is the Kami approach [12], which uses a Bayesian estimator to learn transition probabilities of Discrete-Time Markov Chains (DTMCs). However, the longer a Kami estimator runs, the higher the effect of the historical data on the estimation. Thus, Kami produces inaccurate results once the probabilities change. The authors of the initial Kami approach have also noticed this and have extended their approach with a change point detection algorithm [16], which resets the estimation once the observed transition probabilities have significantly changed. Adding a change point detection method to Kami significantly increases the robustness towards change; however, it comes at the cost of an increased runtime overhead. The Cove approach [17] enhances Kami’s Bayesian estimator by adding an aging mechanism to forget old information. Cove results are thus more robust to changes; however, the intrinsic noise filtering capability of the original Bayesian estimator is weakened by the aging mechanism, leading to noisier estimates. Cove has been extended with a procedure to automatically set an optimal aging factor [14]. In the area of performance tracking, Kalman filters are configured and used to estimate performance measures and to keep them updated at runtime [15]. Kalman filters are well known for their ability to reduce input noise and to provide smooth estimates. However, this comes at the price of slower responses to abrupt changes. A good trade-off between these two aspects usually requires a non-trivial tuning of the algorithm’s parameters.
Trading off noise-rejection, prompt reaction to changes, and computational overhead remains an open problem. In this paper we propose a novel lightweight adaptive filter to learn and continuously update the transition probabilities of a DTMC that:
- is specifically designed to improve the trade-off between smooth estimation and prompt reaction to changes
- is equipped with an online auto-tuning procedure to robustly discriminate between actual changes and outliers
- is provably stable, unbiased, and consistent, with a formal quantification of its convergence time and its noise filtering strength
- requires a negligible runtime computational overhead.
We implemented our approach in Python [18] and formally proved (cp. Section IV) that the algorithm satisfies the desired properties. We further performed a preliminary experimental evaluation (cp. Section VI) with common input data patterns to highlight the strengths and weaknesses. Additionally, we applied the algorithm to learn the operational profile of a large case study [19] to underpin these results (cp. Section VII).
II. BACKGROUND
This section briefly recalls essential background concepts for our approach. In Section II-A a formal definition of Discrete-Time Markov Chains (DTMCs) is provided. In Section II-B we will introduce basic definitions and assumptions about statistical inference for DTMCs.
A. Discrete Time Markov Models
A Discrete-Time Markov Chain is a state-transition system where the choices among successor states are governed by a probability distribution. Formally, a DTMC is a tuple $(S, s_0, P, L, AP)$ [20], where $S$ is a (finite) set of states, $s_0 \in S$ is the initial state, $P : S \times S \rightarrow [0, 1]$ is a stochastic matrix, $AP$ is a set of atomic propositions, and $L : S \rightarrow 2^{AP}$ is a labeling function that associates to each state the set of atomic propositions that are true in that state. An element $p_{ij}$ of the matrix $P$ represents the transition probability from state $s_i$ to state $s_j$, i.e., the probability of going from state $s_i$ to state $s_j$ in exactly one step.
The probability of moving from $s_i$ to $s_j$ in exactly two steps can be computed as $\sum_{s_{k} \in S} P_{i s_k} \cdot P_{s_k j}$, that is the sum of the probabilities of all the paths originating in $s_i$, ending in $s_j$, and having exactly one intermediate state. The previous sum is, by definition, the entry $(i, j)$ of the power matrix $P^2$. Similarly, the probability of reaching $s_j$ from $s_i$ in exactly $k$ steps is the entry $(i, j)$ of matrix $P^k$. As a natural generalization, the matrix $P^0 \equiv I$ represents the probability of moving from state $s_i$ to state $s_j$ in zero steps, i.e., 1 if $s_i = s_j$, 0 otherwise.
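As an illustration of the matrix-power view, the following minimal NumPy sketch (the 3-state chain is invented for the example and is not part of the approach) computes the $k$-step transition probability as entry $(i, j)$ of $P^k$.

```python
import numpy as np

# Hypothetical 3-state DTMC: each row of P is a categorical distribution
# over the successor states, so every row sums to 1.
P = np.array([
    [0.2, 0.5, 0.3],
    [0.1, 0.8, 0.1],
    [0.4, 0.4, 0.2],
])

def k_step_probability(P, i, j, k):
    """Probability of moving from state i to state j in exactly k steps,
    i.e., entry (i, j) of the matrix power P^k (P^0 is the identity)."""
    return np.linalg.matrix_power(P, k)[i, j]

print(k_step_probability(P, 0, 2, 1))  # one-step probability p_02
print(k_step_probability(P, 0, 2, 2))  # two-step probability, sum over intermediate states
```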
Since $P$ is a stochastic matrix, the sum of the elements for each of its rows has to be 1. Formally, each row $i$ of $P$ identifies a categorical distribution [21]. Furthermore, thanks to the Markov property [22], these categorical distributions are pairwise probabilistically independent, since the choice of the next state only depends on the current one. This property will be exploited in the next section to support the definition of a localized learning approach of the transition matrix $P$.
B. Statistical Learning for DTMCs
The identification of DTMC models from the observation of a running system is a well-known statistical problem [23, 24], with relevant applications in many disciplines [22], including software engineering [25–32].
In this paper we focus on learning the transition probability matrix $P$ of a DTMC, assuming its structure does not change, i.e., only transition probabilities may be unknown or subject to change [29, 33, 34]. Thanks to the Markov property, this problem can be reduced to the learning of $n$ independent categorical distributions, where $n$ is the number of states composing the DTMC. This simplifies both the monitoring and the learning tasks.
Several approaches have been proposed in the literature, including maximum likelihood estimators [23] and Bayesian estimators [13, 35]. The latter have recently gained more relevance for online learning thanks to their (usually) faster convergence and the ability to embed expert knowledge in the form of an assumed prior next state distribution [12–14, 17]. Despite their ability to estimate the actual transition probabilities of time-invariant processes, even in the presence of noisy observations, most statistical approaches fail to promptly react to changes in the transition probabilities. This leads to slow convergence after a change and, consequently, poor accuracy and reliability of the estimates.
III. LEARNING THROUGH FILTERS
In this section we will introduce our online learning approach for DTMCs based on filtering. The input to our system is a sequence of measures representing the average transition frequency from a state $s_i$ to each state $s_j$ over a period of observation. Denoting by $k$ the index of the observation period (also referred to as time step), the average transition frequencies $p_{ij}^m(k)$ at time step $k$ are defined as $n_{ij}(k)/\sum_x n_{ix}(k)$, where $n_{ij}(k)$ is the number of transitions from $s_i$ to $s_j$ observed during time step $k$. Since those counts are obtained by monitoring the system for a limited time, we assume the observed frequencies to include an additive zero-mean noise component, accounting for both the uncertainty of the sampling procedure and possible issues with the monitoring infrastructure (e.g., communication delays). The values of the noise for each time step are assumed to be independent of one another and, approximately, normally distributed, with unknown variance [36, 37].
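As an illustration of how these input measurements can be obtained, the sketch below (a hypothetical helper, not the monitoring infrastructure used by the approach) computes the per-window frequencies $p_{ij}^m(k) = n_{ij}(k)/\sum_x n_{ix}(k)$ from a list of observed transitions.

```python
from collections import Counter

def observed_frequencies(transitions, n_states):
    """Average transition frequencies for one observation window.

    `transitions` is the list of (source, destination) pairs observed in the
    window; the result maps each source state to the frequency vector
    p_m[i][j] = n_ij / sum_x n_ix (states with no outgoing observation get None).
    """
    counts = [Counter() for _ in range(n_states)]
    for src, dst in transitions:
        counts[src][dst] += 1
    freqs = []
    for i in range(n_states):
        total = sum(counts[i].values())
        if total == 0:
            freqs.append(None)  # no evidence for state i in this window
        else:
            freqs.append([counts[i][j] / float(total) for j in range(n_states)])
    return freqs

# Example window: 6 observed transitions of a hypothetical 3-state chain.
window = [(0, 1), (0, 1), (0, 2), (1, 1), (1, 0), (2, 0)]
print(observed_frequencies(window, 3))
```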
For the considerations stated in Section II-B, we will instantiate a filter for each state, aiming at learning its next state distribution. A similar approach is followed by most state-of-the-art approaches for learning DTMCs [12–14].
To describe our approach, let us first focus in Section III-A on the estimation of a scalar parameter not subject to any constraint. In Section III-B, we will extend the approach to cope with multiple dependent variables, whose sum has to be equal to a given value. This extension is needed to handle the structural dependencies among transitions of a DTMC originating from the same state. Finally, in Section III-C, we will introduce an online auto-tuning procedure to automatically adapt the change point detection mechanism of the filter to cope with changing and unpredictable operation scenarios.
A. Learning a Scalar Measure
The goal of our learning procedure is to estimate an unknown, time-varying probability $p(k)$ from the (noisy) measurements $p^m(k)$. The output of the filter will be an estimate $\hat{p}(k)$ of $p(k)$. The simplest viable filter for our purpose is a unity-gain, first-order discrete-time filter [38], whose dynamics is described in Equation (1):
$$\hat{p}(k) = a \cdot \hat{p}(k-1) + (1-a) \cdot p^m(k-1), \quad 0 < a < 1$$
(1)
For this filter high values of $a$ (i.e., close to 1) provide good noise filtering and smoothing, which is desirable to estimate a stationary probability from noisy observation. However, the tracking of abrupt (e.g., stepwise) variations of $p(k)$ would be very slow. On the other hand, small values of $a$ (i.e., close to 0) would promptly follow abrupt variations of $p(k)$ but at the price of poor noise filtering. An example of such behavior is shown in Figure 1. Ideally, we would like to have
$a$ close to 1 when the estimated probability is stationary and close to 0 when it is undergoing a change. Based on these considerations on the behavior of $a$, we now introduce our strategy for its dynamic adaptation, driven by the discrepancy between the latest measurement and the current estimate. Let us define the value of $a(k)$ (the introduction of the adaptation mechanism makes it time-dependent) as:
$$a(k) = a_0 + \Delta a f_a(e(k) - e_{thr})$$
(2)
where $a_0 = 0.5(a_{hi} + a_{lo})$, $\Delta a > 0$ tunes the adaptation speed (the larger the faster; default $a_{hi} - a_{lo}$), $e(k) = |p^m(k) - \hat{p}(k-1)|$ is the current estimation error, and $f_a(\cdot)$ is a continuous, differentiable, strictly monotonically decreasing function such that
$$\lim_{x \to -\infty} f_a(x) = 0.5, \quad f_a(0) = 0, \quad \lim_{x \to \infty} f_a(x) = -0.5.$$
(3)
To ensure that $a_{lo} < a(k) < a_{hi}$ we choose:
$$f_a(x) = -\frac{\arctan(\mu \cdot x)}{\pi}$$
(4)
where $\mu$ is a design parameter determining the gradient of $f_a(\cdot)$ around the origin, and, in turn, the “speed” of transition between the two asymptotic values $\pm 0.5$. Notice that the definition of $f_a(\cdot)$ in terms of the $\arctan(\cdot)$ function satisfies all the requirements stated above. Furthermore, the selection of $\arctan(\cdot)$ is typical [39] when arbitrarily steep transitions between two values have to be obtained through a continuous and continuously differentiable function. Alternative definitions of $f_a(\cdot)$ are possible, but their analysis is beyond the scope of this paper. Higher-order filters can also provide a finer specification of the transition function, though at the cost of increased computational complexity and, in general, weaker provable stability results. Our solution aims at achieving the simplest strategy suitable for solving our problem, and with the lowest possible computational overhead.
Combining Equations (1) and (2), under the assumption that we have properly quantified $e_{thr}$, results in the nonlinear discrete-time dynamic equations of our learning filter:
$$\begin{align*}
e(k) &= |p^m(k) - \hat{p}(k-1)| \\
a(k) &= a_0 + \Delta a \cdot f_a(e(k) - e_{thr}) \\
\hat{p}(k) &= a(k) \cdot \hat{p}(k-1) + (1 - a(k)) \cdot p^m(k-1)
\end{align*}$$
(5)
where in a mathematical sense $p^m$ is the input, and $\hat{p}$ both the state and the output of the dynamic system [37].
The filter in (5) is the core of our learning approach. In the next section we will extend it to estimate categorical distributions instead of scalar values, while in Section III-C we will formally describe the online adaptation mechanism that allows for automatically adjusting the value of $e_{thr}$, and, consequently, of $a(k)$.
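For concreteness, the following minimal Python sketch implements the scalar filter of Equation (5), with $e_{thr}$ kept fixed (the online adaptation of Section III-C is omitted) and with the measurement applied at the current step rather than with the one-step lag of Equation (5); parameter values are illustrative only.

```python
import math
import random

class ScalarAdaptiveFilter(object):
    """Minimal sketch of the scalar filter of Equation (5): a first-order
    filter whose pole a(k) moves between a_lo and a_hi depending on how far
    the new measurement lies from the current estimate."""

    def __init__(self, p0=0.5, a_lo=0.3, a_hi=0.95, mu=1000.0, e_thr=0.05):
        self.p_hat = p0                      # current estimate
        self.a0 = 0.5 * (a_hi + a_lo)        # midpoint of the admissible poles
        self.delta_a = a_hi - a_lo           # adaptation range
        self.mu = mu                         # steepness of f_a around the origin
        self.e_thr = e_thr                   # fixed here; adapted online in Section III-C

    def _f_a(self, x):
        # Equation (4): continuous, strictly decreasing, bounded in (-0.5, 0.5).
        return -math.atan(self.mu * x) / math.pi

    def update(self, p_meas):
        e = abs(p_meas - self.p_hat)                              # estimation error e(k)
        a = self.a0 + self.delta_a * self._f_a(e - self.e_thr)    # adaptive pole a(k)
        self.p_hat = a * self.p_hat + (1.0 - a) * p_meas          # filter update
        return self.p_hat

# Illustrative run: a step change of the true probability from 0.2 to 0.4 at step 50.
random.seed(1)
f = ScalarAdaptiveFilter(p0=0.2)
for k in range(100):
    true_p = 0.2 if k < 50 else 0.4
    estimate = f.update(true_p + random.gauss(0.0, 0.02))
print(round(estimate, 3))
```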
B. Learning Categorical Distributions
The filter defined in the previous section can be used to estimate each single probability $p_{ij}$ individually. However, the obtained estimates for each row of $P$ would most likely not constitute correct categorical distributions (sum is not 1).
In order to ensure the estimation of a correct categorical distribution for each state $s_i$, we first estimate each probability $p_{ij}$ individually and then apply a convenient “correction” procedure. This procedure minimizes the Euclidean norm of the distance between the vector $\hat{p}$ of the uncorrected estimates and the vector $p^c$ of the corrected ones, subject to a unity sum constraint for the latter (and only positive probabilities by construction). Formally, for a given state, let $\hat{p}(k) = [\hat{p}_1(k), \ldots, \hat{p}_n(k)]'$ and $p^c(k) = [p^c_1(k), \ldots, p^c_n(k)]'$. Our correction procedure requires solving the following optimization problem:
$$\begin{align*}
\min_{p^c(k)} \quad & (p^c(k) - \hat{p}(k))' (p^c(k) - \hat{p}(k)) \\
\text{subject to} \quad & \sum_{i=1}^n p^c_i(k) = 1
\end{align*}$$
(6)
Using the Lagrange multipliers method to solve this optimization problem [40], the Lagrangian of the problem is
$$L(k) = \sum_{i=1}^n (p^c_i(k) - \hat{p}_i(k))^2 + \lambda \left( \sum_{i=1}^n p^c_i(k) - 1 \right)$$
(7)
and solving the corresponding Karush, Kuhn, and Tucker (KKT) equations [41] leads to the (affine) correction formula
$$\hat{p}^c(k) = F_c \cdot \hat{p}(k) + H_c$$
(8)
where $F_c = I_n - \frac{1_{n \times n}}{n}$ and $H_c = \frac{1_{n \times 1}}{n}$, with the symbol $1_{p \times c}$ denoting a $p \times c$ matrix with unity elements and $I_n$ the identity matrix of order $n$.
In the remainder of this paper, we will always refer to the corrected estimator for the transition probabilities of a DTMC (i.e., $\hat{p}_{ij} = \hat{p}_{ij}^c$), unless otherwise specified.
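A minimal NumPy sketch of the affine correction of Equation (8) (the example vector is invented for illustration) could look as follows.

```python
import numpy as np

def correct_distribution(p_hat):
    """Affine correction of Equation (8): the closest vector (in Euclidean
    norm) to the uncorrected estimates whose entries sum to 1,
    p_c = (I_n - 1_{n x n}/n) p_hat + 1_{n x 1}/n."""
    p_hat = np.asarray(p_hat, dtype=float)
    n = p_hat.size
    F_c = np.eye(n) - np.ones((n, n)) / n
    H_c = np.ones(n) / n
    return F_c.dot(p_hat) + H_c

# Uncorrected per-transition estimates for one state (they sum to 0.94, not 1).
print(correct_distribution([0.18, 0.55, 0.21]))   # corrected vector sums to 1
```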
C. Online Adaptation of the Detection Threshold
The core element of the online adaptation mechanism is the dynamic correction of the parameter $e_{thr}$. This parameter has to capture the dispersion of an input measurement $p^m(k)$ around the actual value it is measuring, $p(k)$.
An effective and sequentially computable index of the dispersion of a probability distribution is its variance. An efficient and numerically stable algorithm for the sequential estimation of the variance has been proposed by Knuth [42] and reported in Algorithm 1. The input distribution is assumed to be Gaussian centered around the actual value it is measuring $p(k)$.
We adapt Knuth’s algorithm by executing the body of the loop for each incoming measurement $p^m(k)$, and updating the input variance ($\sigma^2$). This way we can efficiently keep our dispersion index updated after every new measurement is gathered.
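A possible rendering of this sequential variance estimator (in the spirit of Knuth's Algorithm 1, which is not reproduced in this text) is sketched below.

```python
class OnlineVariance(object):
    """Sequential mean/variance estimator in the style of Knuth's Algorithm 1
    (Welford's recurrence): one update per incoming measurement, constant
    memory, numerically stable."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # sample variance; returns 0 with fewer than two measurements
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

ov = OnlineVariance()
for x in [0.21, 0.19, 0.22, 0.18, 0.20]:
    ov.update(x)
print(ov.mean, ov.variance())
```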
In order to compute $e_{thr}$, we assume the input measurements to have Gaussian distribution centered around the actual measurement and with variance $\sigma^2$ [36, 37]. We then want to decide for each incoming measurement if it can be reasonably explained under this assumption or not. To do so, we operate similarly to a statistical hypothesis test on the mean of the input distribution. Our hypothesis is that the actual mean of the input distribution is $\bar{p}(k-1)$, i.e., the latest estimate from our filter, and try to decide whether the measurement $p^m(k)$ is far enough from $\bar{p}(k-1)$ to contradict our hypothesis. Since the variance of the input distribution has been estimated, we should refer in this case to a $t$-test [21]. However, assuming enough measurements have been retrieved (as a rule of thumb, at least 30), we can safely use a $z$-test [21]. After deciding a confidence value $\alpha$, we can decide whether the last measurement retrieved should be considered “explainable” with the current estimation or the outcome of a change in the process (it may also be the case of an outlier, but we will discuss how to handle this situation later on). In particular, we assume a change point occurred if
$$\frac{|p^m(k) - \bar{p}(k-1)|}{\sigma} \geq z_{1-\frac{\alpha}{2}} \tag{10}$$
where $\sigma = \sqrt{\sigma^2}$ is the standard deviation of the input distribution. Equation (10) can be straightforwardly used to adjust the value of $e_{thr}$ after a new sample has been gathered:
$$e_{thr}(k) = \sigma(k) \cdot z_{1-\frac{\alpha}{2}}(k) \tag{11}$$
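The change point test of Equation (10) and the threshold of Equation (11) reduce to a few lines; the sketch below uses $\alpha = 0.01$ (i.e., $z_{1-\alpha/2} \approx 2.576$) purely as an example value.

```python
def change_detected(p_meas, p_prev_est, sigma, z=2.576):
    """Change point test of Equation (10): flag a change when the new
    measurement lies more than z standard deviations away from the latest
    estimate (z = 2.576 corresponds to alpha = 0.01 for a two-sided z-test)."""
    return abs(p_meas - p_prev_est) / sigma >= z

def threshold(sigma, z=2.576):
    """Detection threshold of Equation (11): e_thr(k) = sigma(k) * z_{1-alpha/2}."""
    return sigma * z

# With sigma = 0.02, a jump from 0.20 to 0.31 is flagged, a wiggle to 0.23 is not.
print(change_detected(0.31, 0.20, 0.02), change_detected(0.23, 0.20, 0.02))
print(threshold(0.02))
```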
The decision about $\alpha$ constitutes a trade-off between a quick response to smooth changes in the process and robustness to measurement outliers. Particularly relevant for this decision is the (average) number of transitions observed in each time step: a small number will likely lead to noisy measurements and increases the likelihood of outliers; in such case a very small value of $\alpha$ is recommended (see also Section VI).
D. Configuration of the Filter
The filter introduced in Section III-A has four configuration parameters: $\mu$, $a_{lo}$, $a_{hi}$, and $\alpha$. While the role of $\alpha$ has already been discussed in the previous section, we briefly (and informally) discuss here how the values of the others affect the performance of the filter. Some additional formal considerations will be provided in Section IV.
Intuitively, there are two extreme operating conditions for a filter: the probability to be estimated is stationary, or it is undergoing an abrupt change. In the first case, we aim at obtaining a “clean” and smooth estimate. This means we would like $a(k)$ to be close to 1. If the input measurements come from a stationary distribution they will most likely be recognized as “compatible” with the current estimate (see Section III-C). This will drive the value of $a(k)$ (asymptotically) to $a_{hi}$. Thus, setting $a_{hi}$ close to 1 will provide smooth estimates of a stationary probability. Analogous considerations hold for the case of abrupt changes, when the value of $a(k)$ will be moved toward its lower bound $a_{lo}$.
However, while it is quite safe to set $a_{hi}$ very close to 1, bringing $a_{lo}$ close to 0 reduces the filter’s robustness to outliers. Indeed, $\alpha$ allows setting a threshold to decide whether an input measurement is incompatible with the current estimate. However, there is always the chance that an extreme measurement exceeds the threshold and brings $a(k)$ towards $a_{lo}$. Thus, a more conservative choice of $a_{lo}$ is recommended if the input measures are known to have a high variance. A rule of thumb for this value is that $1 - a_{lo}$ can be considered the maximum degree of trust placed in a new input measurement. However, setting $a_{lo} > .3$ may produce a noticeable slowdown in change point tracking, and should be done carefully.
The value of $\mu$ determines how “fast” to move from $a_{lo}$ to $a_{hi}$ and vice versa. High values of $\mu$ are recommended ($\geq 1000$) to obtain prompt switches when a change occurs.
Finally, a hidden parameter to consider is the duration of a time step. Indeed, longer time steps may allow to collect more events within the period, which in turn improves the quality of the input measurements (which are the transition frequencies computed over each time step). However, since the filter updates its estimates every time step, extending their duration may slow down the filter reaction to changes.
IV. FORMAL ASSESSMENT
The proposed adaptive filter requires a constant number of operations per time step, as can easily be observed from Equation (5) and the filter adaptation procedure in Algorithm 1. Furthermore, the operations are mostly elementary floating point operations, and only a minimal amount of memory is required, making our approach suitable for execution even on low-power devices (e.g., embedded systems). An empirical estimation of the actual execution time on a general purpose computer will be shown in Section VI.
However, the low computational overhead was only one of our goals. In this section we will formally prove several other critical properties of our approach. The proofs will provide a theoretical guarantee about its applicability and the quality of its results. An empirical assessment based on selected case studies and the comparison with state of the art alternatives will also be provided in Section VI.
In Section IV-A, we will prove the stability of our system, i.e., its ability to converge to a steady state equilibrium for every constant value of the input. In Section IV-B we will prove that our filter is an unbiased and consistent estimator of the transition probabilities we aim at learning. In Section IV-C, we will precisely quantify the ability of our filter to reduce the noise in the input measurements (i.e., their variance, under the weak assumption of Gaussian distribution) as a function of the filter’s configuration parameters. Finally, in Section IV-D we will assess the settling time required for our estimates to converge after a change as occurred.
A. Stability
A dynamic system is asymptotically stable if there exists an equilibrium point to which the system tends; i.e., for any given constant input, the output converges to a specific value (within a convenient accuracy) regardless of the initial state [37]. As time tends to infinity, the distance to the equilibrium point has to tend to zero.
Our filter is formally defined by the dynamic system of Equation (5). In the following we will prove that the filter always converges to an equilibrium value, regardless of its initial conditions and for all the (valid) values of the input measures.
Let (5) be subject to the constant input \( p^m(k) = \bar{p}^m \). The corresponding equilibrium value \( \bar{p} \) can be obtained by computing the fixed point solutions of the dynamic system [37]:
\[
\bar{p} = \bar{a} \cdot \bar{p} + (1 - \bar{a}) \cdot \bar{p}^m
\]
(12)
where \( \bar{a} \) is the equilibrium value of \( a(k) \). This yields the unique solution
\[
\bar{p} = \bar{p}^m
\]
(13)
since the case \( \bar{a} = 1 \) is excluded by construction (Equation (1)). Also, since at the computed equilibrium the estimation error is \( \bar{e} = 0 \) (because of Equation (13)), we can compute the value of \( \bar{a} \) as follows:
\[
\bar{a} = a_0 + \Delta a \cdot f_a(-e_{thr})
\]
(14)
Hence, for any \( \bar{p}^m \) there is one equilibrium with \( \bar{p} = \bar{p}^m \), and \( \bar{a} \) given by (14). To prove the stability of all equilibria (i.e., the stability of the filter in every operating condition), we can analyze the response of the system to a deviation from such equilibrium [37]. We define the system output variation with respect to the equilibrium value as
\[
\bar{p}^d(k) \triangleq \hat{p}(k) - \bar{p}
\]
(15)
Combining Equation (15) with the last equation in (5):
\[
\bar{p}^d(k) = a(k) \cdot \bar{p}^d(k-1) + (1 - a(k)) \cdot (p^m(k-1) - \bar{p})
\]
(16)
Defining now the input variation with respect to the equilibrium value as
\[
p^m^d(k) \triangleq p^m(k) - \bar{p}^m
\]
(17)
we have
\[
\bar{p}^d(k) = a(k) \cdot \bar{p}^d(k-1) + (1 - a(k)) \cdot p^m^d(k-1).
\]
(18)
Furthermore, exploiting again the equilibrium,
\[e(k) = |p^{m^d}(k) + \bar{p}^m - (\bar{p}^d(k-1) + \bar{p})| = |p^{m^d}(k) - \bar{p}^d(k-1)|
\]
(19)
and the estimator can be rewritten in input/output variational form [37] as
\[
\begin{cases}
a(k) &= a_0 + \Delta a \cdot f_a(|p^{m^d}(k) - \bar{p}^d(k-1)| - \epsilon_{thr}) \\
\bar{p}^d(k) &= a(k) \cdot \bar{p}^d(k-1) + (1 - a(k)) \cdot p^{m^d}(k-1)
\end{cases}
\]
(20)
For the purpose of the equilibrium stability, it is required to study the motion of (20) under the constant input corresponding to the equilibrium, i.e., with \( p^{m^d}(k) = 0 \). This means analyzing the system
\[
\begin{cases}
a(k) &= a_0 + \Delta a \cdot f_a(|\bar{p}^d(k-1)| - \epsilon_{thr}) \\
\bar{p}^d(k) &= a(k) \cdot \bar{p}^d(k-1)
\end{cases}
\]
(21)
Given the bounds on \( a(k) \) inherent to \( f_a(\cdot) \), \( \bar{p}^d(k) \) in (21) eventually converges to zero irrespective of its initial value. Thus, all the equilibria of the dynamic system in Equation (5) are globally asymptotically stable.
The stability proof guarantees the applicability of our learning approach under any possible (valid) input measurements, in particular to learn the transition probability of any DTMC.
B. Unbiasedness and Consistency
Denoting with \( p^o \) the true (constant) value of the measure to estimate, from Equation (13) and the stability proof provided in the previous section it follows that
\[
\lim_{k \to \infty} \hat{p}(k) = p^o \quad \forall \hat{p}(0)
\]
(22)
Hence, we can state
\[
\lim_{k \to \infty} E[\hat{p}(k)] = p^o \quad \text{and} \quad \lim_{k \to \infty} E[(\hat{p}(k) - p^o)^2] = 0
\]
(23)
Thus the estimator is (asymptotically) unbiased and consistent.
C. Variance of the Estimate
Consider the case where the input measurements provided to our filter are a realization of a white Gaussian noise input, i.e., a Gaussian distribution with mean 0 and variance \( \sigma_{w}^2 \) (i.e. the simplest distribution allowing to arbitrarily set its variance). For the sake of simplicity, assume \( a \) to be a constant value.
The ratio of the output variance over the input variance is determined by the transfer function of the filter, $G(z) = \frac{1-a}{z-a}$: for a white noise input, the variance of the estimate is $\|G\|_2^2 \cdot \sigma_{w}^2$, where $\|G\|_2^2 = \frac{(1-a)^2}{1-a^2} = \frac{1-a}{1+a}$ is the squared $\mathcal{H}_2$ norm of $G$.
Note that $\|G\|_2$ tends to $1^-$ and $0^+$ when $a$ tends to $0^+$ and $1^-$, respectively. This means that the output variance is never greater than that of the input, and is reduced by higher values of $a$. This is in line with the intuitive considerations on the impact of large and small values of $a$ provided in Section III-A.
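The attenuation can also be checked empirically; the following simulation (not part of the paper's evaluation, with invented values of $a$) feeds unit-variance white Gaussian noise through the fixed-pole filter and compares the measured output variance with the first-order attenuation factor $\frac{1-a}{1+a}$.

```python
import random

def empirical_variance_ratio(a, n=100000, seed=0):
    """Output/input variance ratio of the fixed-pole filter
    p_hat(k) = a*p_hat(k-1) + (1-a)*w(k-1) driven by unit-variance white noise."""
    rng = random.Random(seed)
    p_hat, outputs = 0.0, []
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)
        p_hat = a * p_hat + (1.0 - a) * w
        outputs.append(p_hat)
    mean = sum(outputs) / n
    return sum((x - mean) ** 2 for x in outputs) / n   # input variance is 1

for a in (0.1, 0.5, 0.9):
    predicted = (1.0 - a) / (1.0 + a)                  # first-order attenuation factor
    print(a, round(empirical_variance_ratio(a), 3), round(predicted, 3))
```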
D. Convergence Time
For an intuitive analysis, assume $p(k)$ undergoes a step from zero to one, and that the convergence time $k_c$ is taken as the number of steps required to drive the estimation error magnitude below the same threshold $e_{thr}$ used for switching between the “fast tracking” and the “sharp filtering” modes of the system (i.e., $a$ close to $a_{lo}$ and $a_{hi}$, respectively). This immediately leads to determining $k_c$ as the minimum value of $k$ such that $a_{lo}^k < e_{thr}$:
$$k_c = \left\lceil \frac{\log e_{thr}}{\log a_{lo}} \right\rceil$$
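For example, with the illustrative values $a_{lo} = 0.3$ and $e_{thr} = 0.05$ (not taken from the paper's experiments), the bound evaluates to three steps:

```python
import math

def convergence_steps(a_lo, e_thr):
    """Settling time bound k_c = ceil(log(e_thr) / log(a_lo)): the number of
    steps needed for a unit step error to fall below e_thr while the filter
    operates with pole a_lo."""
    return int(math.ceil(math.log(e_thr) / math.log(a_lo)))

print(convergence_steps(0.3, 0.05))   # -> 3
```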
V. RELATED WORK
Learning Markov models and inferring the transition probabilities of a DTMC have been widely studied in different domains [23, 24]. Two additional requirements for Software Engineering applications are the possibility of embedding expert or domain knowledge and the ability to perform the estimation online, continuously improving on the prior knowledge initially assumed.
One of the first approaches providing these features is Kami [12, 13]. This approach implements an established Bayesian estimator to learn the transition probabilities of a DTMC online. It requires a low computational overhead and provides high accuracy and noise filtering for the estimation of stationary processes. However, it provides slow responses in the presence of changes [13]. The same authors [16] faced the problem of change point detection, again following a Bayesian approach. The resulting technique is designed to operate offline on recorded execution traces. It is quite accurate in identifying change points; however, it involves the use of a Gibbs sampling technique to compute the posterior change point distribution [43]. Such a randomized method requires a large number of operations for each change point probe. This requires a relatively high computational power, which might be too expensive to deploy on many embedded systems. The execution time is orders of magnitude higher than that of the original approach.
In [15], the authors propose an approach for the continuous tracking of time-varying parameters of performance models. The approach is based on (Extended) Kalman filters [37] and is able to estimate also correlated parameters, to take into account nonlinear constraints among their values, and to embed a prior knowledge about the distribution to estimate. However, as reported by the authors, Kalman filters provide their optimal performance when the model describing the temporal evolution and the dependencies among parameters is linear. Despite having been proposed in the domain of performance, the approach of [15] can be easily adapted to also learn the (time-varying) transition probabilities of a DTMC. The simplest way is to use a Kalman filter to estimate each single transition probability and then to apply the correction strategy we introduced in Section III-B for the transitions originating from the same state. The configuration of the Kalman filters requires specifying two parameters: the measurement error covariance $R$ and the disturbance error covariance $Q$ [15]. If we estimate each single transition probability independently, the two matrices reduce to two scalar values $r$ and $q$, representing the variance of the measurement error and the variance of the disturbance error. Informally, a high value of $r$ means a poor information from measures, while a high value of $q$ means high drift expected for the parameters’ estimates. By tuning these two parameters it is possible to define a tradeoff between noise filtering (thus smoother estimates) and quick reaction to changes in the estimated probabilities.
Another approach based on Bayesian estimation that aims at overcoming the limitations of Kami in the presence of changing transition probabilities is the Cove approach [10, 14, 17]. The basic intuition is to scale each input measurement with an aging factor that gives more relevance to recent observations [17]. In the presence of a change, this input aging allows to quickly discard old information and to give more relevance to the new one. The configuration of this approach requires the specification of a prior knowledge for the distribution to estimate, the confidence $c_0 > 0$ on such an initial prior, and the value of a parameter $\alpha_c$ which determines the aging factors: an input measurement observed $t$ time steps ago will be discounted by a factor in the order of $\alpha_c^{-t}$. Cove has been extended with a procedure to automatically adjust the values of $c_0$ and $\alpha_c$ [14]. In the special case of the aging factor $\alpha_c = 1$, Cove reduces to Kami.
VI. EXPERIMENTAL EVALUATION
In this section we report on the experimental evaluation of our lightweight adaptive filtering approach to learn transition probabilities of a DTMC. We will benchmark it against related approaches with respect to (i) the estimation accuracy and (ii) the time required for the estimations. Following from the discussion in Section V, we selected for comparison the two algorithms by Calinescu et al. [14] and Zheng et al. [15]. They will be referred to as Cove and Kalman respectively.
The three approaches will be compared in this section on six different change patterns. In the first part of the comparison we will consider a selected case of each pattern, for which we visualize the behavior of the estimates and assess their accuracy. The scope of the comparison will then be extended to a set of 7,000 execution traces composed of both randomized realizations of each pattern and combinations thereof. Finally, we will report on the computation time required by the three approaches and discuss possible threats to validity.
Accuracy metrics. As accuracy metrics the Mean Average Relative Error (MARE) [44] (similar to [15, 45]) will be used:
$$MARE = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{p(i) - \hat{p}(i)}{p(i)} \right|.$$
where \(\hat{p}(i)\) represents the estimate at time \(i\), \(p(i)\) the actual value to estimate, and \(n\) is the number of points.
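A direct implementation of this metric is straightforward; the sketch below (with an invented pair of series) illustrates it.

```python
def mare(actual, estimated):
    """Mean Average Relative Error: the mean of |p(i) - p_hat(i)| / p(i)
    over all time points."""
    assert len(actual) == len(estimated)
    return sum(abs(p - p_hat) / p for p, p_hat in zip(actual, estimated)) / len(actual)

# Illustrative series: the estimate lags a step change of the actual probability.
actual    = [0.2, 0.2, 0.4, 0.4, 0.4]
estimated = [0.21, 0.19, 0.30, 0.38, 0.40]
print(round(mare(actual, estimated), 4))
```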
**Experimental settings.** We implemented all the approaches in Python (v2.7) and executed the experiments on a quad-core Intel(R) Xeon(R) CPU E31220 @3.10GHz with 32Gb of memory and running an Ubuntu Server 12.04.4 64bit. The memory consumption of the three approaches was negligible. Our implementation is available at [18].
The three algorithms are compared on their performance in estimating the next state distribution of a state of a DTMC (i.e., a row of its transition matrix). The input traces are composed of sequences of 30,000 events. Each event is an integer number identifying the destination state of the taken transition. This destination state is randomly selected, according to known (time-varying) transition probabilities, among a set of reachable states whose number has been set to 3. We defined a time step to occur every 75 events. This value is relatively small, since a larger time step may slow down the reaction to changes (see Section III-D).
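For illustration, the following sketch shows how such a trace and the corresponding per-window measurements could be generated (the probabilities, the step pattern, and the helper names are invented for the example; this is not the exact benchmark generator).

```python
import random

def generate_trace(prob_fn, n_events=30000, seed=42):
    """Generate a synthetic event trace: each event is the index of the
    destination state, drawn from the (time-varying) categorical distribution
    prob_fn(event_index)."""
    rng = random.Random(seed)
    trace = []
    for k in range(n_events):
        probs = prob_fn(k)
        r, acc = rng.random(), 0.0
        dest = len(probs) - 1            # fallback against floating point rounding
        for state, p in enumerate(probs):
            acc += p
            if r < acc:
                dest = state
                break
        trace.append(dest)
    return trace

def window_frequencies(trace, window=75, n_states=3):
    """Per-window destination frequencies, i.e., the filter's input
    measurements for the single observed source state."""
    freqs = []
    for start in range(0, len(trace), window):
        chunk = trace[start:start + window]
        freqs.append([chunk.count(s) / float(len(chunk)) for s in range(n_states)])
    return freqs

# Step pattern: the probability of reaching state 0 jumps from 0.2 to 0.4 halfway.
step = lambda k: [0.2, 0.5, 0.3] if k < 15000 else [0.4, 0.3, 0.3]
measurements = window_frequencies(generate_trace(step))
print(measurements[0], measurements[-1])
```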
**Experimental results.** We implemented and compared the three approaches over six input data patterns that are commonly used to evaluate the related approaches [14, 15]: Noisy, Step, Ramp, Square wave, Triangle wave and Outlier. By covering these input data patterns we stress different aspects of the learning problem.
<table>
<thead>
<tr>
<th colspan="2">TABLE I: SIX-PATTERNS BENCHMARK (MARE).</th>
</tr>
<tr>
<th>Pattern</th>
<th>LAF</th>
</tr>
</thead>
<tbody>
<tr><td>Noisy</td><td>4.41%</td></tr>
<tr><td>Step</td><td>4.19%</td></tr>
<tr><td>Ramp</td><td>4.28%</td></tr>
<tr><td>Square</td><td>6.54%</td></tr>
<tr><td>Triangle</td><td>7.79%</td></tr>
<tr><td>Outlier</td><td>3.68%</td></tr>
</tbody>
</table>
The accuracy results (MARE) for a single run of all the input patterns are reported in Table I. For readability, we report in Figures 3a to 3f only the estimates of the probability of moving toward one target state. For all the plots, a dashed grey line represents the actual transition probability to estimate, while the continuous black line represents its estimate. In the following we discuss each of the six transition patterns and the performance of the three approaches.
**Noisy.** For this case (Figure 3a), we do not sample from the actual stationary transition probabilities but add to them a white noise with standard deviation 0.01. For LAF and Kalman this white noise adds to the unavoidable measurement noise. After an initial transitory, both LAF and Kalman converge to the actual mean value of the estimated probability (0.2) and provide similar values for the MARE index. In this situation, LAF has not perceived any significant change in the measures and is thus operating as a low pass filter with pole in \(a_{hi}\). Hence, increasing \(a_{hi}\) would lead to a slower initial convergence, but a smoother estimate, as it is for Kalman, whose optimality properties in this scenario are well studied [38] (the worse accuracy of Kalman is mostly due to the initial slow convergence). Cove converges almost immediately, but with a poor filtering of the input noise.
**Step.** In the step change pattern the estimated probability suddenly changes from 0.2 to 0.4. Kalman provides the smoothest estimates, though at the price of a slower reaction to the change. On the other hand, LAF promptly reacts to the change. Notice on Figure 3b the exponential convergence toward the new estimate, as expected from the stability proof and the settling time assessment of Section IV. The settling
time can be improved by reducing the value of $a_{lo}$. However, too small values of it may lead to overshooting due to an overreaction. While Cove reacts immediately to the change, its estimates keep being noisy. Notably, in this case LAF is about two times more accurate than the others.
**Ramp.** The estimated transition probability moves here linearly from .2 to .4 in 10,000 time points. This situation is particularly stressful for the change-point detection mechanism of LAF, whose $e_{thr}$ gets continuously updated until the variance estimator converges to the new steady value. The accumulation of the errors of the internal variance estimator and the main LAF filter might lead to false positives during or right after the ramp (as in Figure 3c around time 26,000). In these cases, a not too small value of $a_{lo}$ (as a rule of thumb between .2 and .35) may reduce the deviation after the false detection and allow for a faster recovery, as evident from the figure. As expected from [15, 38, 45], Kalman can cope reasonably well with smooth changes; however, it is slower than LAF, which leverages its change reaction to perform step-shaped cuts of the estimation error (see Figure 3c around time 15,000 and 19,000). Cove reacted immediately to the change and has been able to follow the ramp, though with the usual noise. Also in this case LAF is about two times more accurate than the others.
**Square wave.** The square wave amplifies the issues related to the step change, by allowing a shorter learning time before each change. While Kalman suffers from a slow convergence rate, LAF and Cove follow the changes, again with Cove producing a noisier output. Under this scenario, the accuracy of LAF is about three times higher than Kalman’s, while producing a smoother estimate than Cove.
**Triangle wave.** In this case, smooth changes between two probability values alternate periodically. With respect to the case of the ramp, the slopes are steeper, requiring a faster convergence from the estimators. Cove can follow the changes as quickly as for the ramp case. Kalman provides smooth estimates, but its convergence time is too long and it fails to follow the repeated changes. On the other hand, LAF copes with the continuous changes by combining the shorter convergence time of the adaptive low-pass filter with occasional step-shaped error cuts triggered by the change point detection mechanism (as already observed for the ramp). However, as for the case of the ramp, the continuously changing distribution may lead to the accumulation of internal estimation errors that increase the chance of false positives (see time 20,000 of Figure 3e), whose effects are recovered by $a_{lo} = .3$.
**Outlier.** In the last case we artificially introduced an outlier with a duration of 25 events. LAF and Kalman show a negligible reaction to the outlier. This is due both to the filtering actions of the two and to the fact that, operating on a time window, the impact of the outlier is already reduced by the preliminary computation of the window’s transition frequencies. Despite the triggering of a false change detection, $a_{lo} = .3$ keeps the filter robust to outliers, making it achieve an accuracy slightly higher than Kalman, whose effectiveness in filtering outliers is well known [38]. Notice that, as for the case of Noisy, the main loss of accuracy of Kalman is due to the initial convergence. The very fast reaction to change of Cove made it quickly follow the outlier, though it recovered to the correct estimate right after.
**General comments.** As final remarks, we noticed that Cove provides a very fast reaction to changes, which makes their presence almost irrelevant as for the impact on the MARE. This comes, however, at the price of noisy estimates. To obtain a similar behavior with LAF, both $a_{lo}$ and $a_{hi}$ have to be set to very low values: this way LAF will approximate a low pass filter with a very small pole, which, looking at the equations, would behave similarly to Cove. A well-known problem of statistical estimation for probability values is the difficulty of catching rare events, i.e., events with probability close to 0 (or close to 1, since this implies another transition probability has to be very small). This issue is present for all three approaches. The stability proof of the two filters LAF and Kalman guarantees that they will eventually converge to the estimated probability; however, for such extreme cases the convergence time might be longer.
**Randomized pattern instances.** To further investigate the accuracy of our filter, we generated for each pattern a set of 1,000 random instances and analyzed the performance of the three approaches on this broader set of problems. Concerning the generation of the random instances: Noisy and Outlier require generating a baseline stationary distribution and, respectively, the standard deviation of the noise (sampled between .001 and .1) and the duration of the outlier (we take as amplitude half of the maximum gap allowed by the baseline distribution); all the other patterns require defining two distributions and, for the square and triangle wave patterns, the period of the wave (30,000/n with n ∈ [2, 15]). The MAREs of the three approaches are reported in the first six boxplots in Figure 3g. Notably, the results for all the patterns resemble the accuracies reported in Table I for the exploratory study; thus the behavior of the three approaches in each of the six change patterns does not depend, on average, on the characteristics of the specific instance of such pattern.
Finally, the last box of Figure 3g (VarMix) shows the accuracy of the three approaches on long traces (from 50,000 to 500,000 events) obtained by sequentially combining multiple random instances of the six patterns. The duration of each instance and their order are randomized as well. The results of VarMix confirm the earlier results of the single patterns. In particular, Kalman suffers from the presence of fast changes, while the MARE of Cove is not significantly affected by these changes, but by the high noise of its estimates.
**Runtime overhead.** On average over 6,000 runs, LAF, Kalman, and Cove required 50, 80, and 126 ms, respectively, to process 30,000 events. Consequently, LAF reduces the runtime overhead with respect to both Kalman and Cove. Notice that LAF and Kalman update their estimates every time window, while Cove updates with every new measure.
**Threats to validity.** A threat to external validity is the use of predefined input data patterns for the comparison of
the approaches and the ability to generalize these results to traces of realistic software systems. We selected these inputs in line with common practice in control theory for stressing the response of dynamic systems (Step, Ramp) and of filters in particular (Noisy, Periodic, Outlier), and with the related approaches [14, 15, 37]; moreover, we observed similar patterns in QoS data sets of web services [46] and web systems (cf. Section VII). Consequently, we argue that a good performance on these basic input data types will also result in a good general performance.
The threats to internal validity obviously include the selection of the parameters $c_0$, $c_\epsilon$, $q$ and $r$ for the related approaches and the number of events per time step. We selected these parameters using the defaults defined in the original papers. Furthermore, parameter sweeps over the range of these parameters confirmed that they were good choices for our experiment. Another threat to internal validity is the implementation of the related approaches and the measurement environment. As can be seen from the experimental setup, we tried to avoid systematic measurement errors, and for the implementation we followed the instructions for the algorithms provided in the original literature.
VII. APPLICATION TO A REALISTIC PROBLEM
A common application of DTMC learning is learning the behavior of the users of a software system (e.g., [47]). This problem can be refined to the scenario of estimating the probability of a user browsing from one webpage to another by capturing log events online. To evaluate the suitability of LAF in this scenario we took the open logs of the World Cup 98 website [19].
The logs span a period of about 3 months. The website is composed of a total of over 32,000 pages. We mapped every webpage to a unique integer identifier. Each line of the log includes a unique client id. To identify a client session we set a timeout of 30 minutes after the last occurrence of the client id, after which the corresponding user is considered disconnected. A session is thus described by the sequence of pairs [time, pageId] visited by a client. During a session, the client is expected to move from one page to another following navigation links. For demonstration purposes, we show in Figure 2 the online estimation of the transition probability from the page “/english/teams/teambio160.htm” to the page “/english/competition/statistics.htm”.
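A minimal sketch of the sessionization step just described is shown below (30-minute inactivity timeout per client id). The tuple layout of the parsed log records is an assumption; the actual World Cup 98 logs have their own format and would need a dedicated parser.

```python
SESSION_TIMEOUT = 30 * 60  # seconds of inactivity after which a client is considered disconnected

def sessionize(requests):
    """Group (client_id, timestamp, page_id) requests into per-client sessions.

    Each session is a list of (timestamp, page_id) pairs; a new session starts when
    more than SESSION_TIMEOUT seconds elapse since the client's previous request.
    """
    sessions = []
    last_seen = {}     # client_id -> timestamp of the previous request
    current = {}       # client_id -> session currently being built
    for client_id, ts, page_id in sorted(requests, key=lambda r: r[1]):
        if client_id in current and ts - last_seen[client_id] > SESSION_TIMEOUT:
            sessions.append(current.pop(client_id))   # close the expired session
        current.setdefault(client_id, []).append((ts, page_id))
        last_seen[client_id] = ts
    sessions.extend(current.values())                 # close whatever is still open
    return sessions
```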
The monitored webpage was active for about 33 days, for a total of 2,835,186 recorded transitions. We reduced the granularity of the observations by applying a sliding window of 500 seconds. Excluding the windows in which no events occurred, 14,852 windows were processed.
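The granularity reduction can be sketched as follows. The window is treated here as a simple tumbling 500-second window keyed by timestamp, which is an assumption about how the observations were grouped; windows with no outgoing transitions from the source page are skipped, as stated above.

```python
WINDOW = 500  # seconds

def windowed_frequencies(transitions, src, dst, window=WINDOW):
    """Per-window frequency of the transition src -> dst.

    `transitions` is an iterable of (timestamp, from_page, to_page) tuples;
    windows in which no transition leaves `src` are skipped.
    """
    buckets = {}
    for ts, s, t in transitions:
        if s != src:
            continue
        key = int(ts // window)
        total, hits = buckets.get(key, (0, 0))
        buckets[key] = (total + 1, hits + (1 if t == dst else 0))
    # return (window start time, observed frequency) pairs in chronological order
    return [(key * window, hits / total) for key, (total, hits) in sorted(buckets.items())]
```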
The gray line represents the transition frequency observed during the corresponding window reported on the x-axis. The thicker black line represents the LAF estimate.
Since we do not know the real value to be estimated, it is hard to evaluate the ability of LAF to capture the average transition probability other than by visual inspection (indeed, a proper computation of the MARE would require knowing the actual transition probability of which the observed transition frequencies are realizations). However, it is easy to recognize in Figure 2 the occurrence of several of the patterns analyzed earlier in this section, for which a deep quantitative investigation has been provided. The execution times (over 5 runs) to estimate the transition probabilities from the observed state (5 destination states) were 1703, 2177, and 15953 ms for LAF, Kalman, and Cove, respectively. Since Cove operates per transition, its execution time is higher than that of LAF and Kalman, which instead update their estimates every 75 transitions and performed in line with the results on the benchmark patterns. Overall, with the new algorithm LAF we were able to process the data for this realistic case and could confirm the results of the experimental evaluation.
VIII. CONCLUSIONS
In this work we presented a lightweight adaptive filter for online learning of the transition matrix of a DTMC. We proved it is stable and provides an unbiased and consistent estimate of the transition probabilities. We also quantified its ability to reduce the variance of noisy input measurements and its convergence time after a change has occurred. The filter introduces a minimal computational overhead, being able to process 30,000 events in about 0.05 seconds on a general-purpose computer. Its memory demand does not depend on the number of events to be processed and is fairly negligible. The experimental results show a high accuracy of the obtained estimates.
We plan to extend this work along several directions. First, we will extend the developed algorithm to other probabilistic quality evaluation models, including queuing networks for performance analysis, and integrate it into a general framework for continual verification [10]. Second, we plan to increase the order of the filter to further improve its ability to trade off reaction to changes against robustness to outliers. Finally, we plan to investigate the combination of LAF with forecasting techniques [46, 48] for proactive problem detection.
ACKNOWLEDGEMENTS
This work has been partially supported by the DFG (German Research Foundation) under the Priority Programme SPP1593: Design For Future - Managed Software Evolution.
(a) Stationary signal with white noise ($\sigma = .001$)
(b) Step (gap=.2)
(c) Ramp (gap=.2, duration=10000 steps)
(d) Square wave (gap=.2, period=10000 steps)
(e) Triangle wave (gap=.2, period=10000 steps)
(f) Outlier (gap=4, duration=25 steps)
(g) Boxplots of the relative error over 1000 random instances of the six change patterns and combinations thereof.
Fig. 3. First six rows: estimates of the probability of moving toward the first state obtained by LAF (left column), Kalman (central column), and Cove (right column). Last row: IQR boxplots of the relative errors obtained over 1000 random instances of the six patterns and combinations thereof.
Software Inspections are a set of formal technical review procedures held at selected key points during software development for the purpose of finding defects in software documents. Inspections are a Quality Assurance tool and a Management tool. Their primary purposes are to improve overall software system quality while reducing lifecycle costs and to improve management control over the software development cycle. The Inspections process can be customized to specific project and development type requirements and is specialized for each stage of the development cycle.
For each type of Inspection, materials to be inspected are prepared to predefined levels. The Inspection team follows defined roles and procedures and uses a specialized checklist of common problems in reviewing the materials. The materials and results from the Inspection have to meet explicit completion criteria before the Inspection is finished and the next stage of development proceeds. Statistics, primarily time and error data, from each Inspection are captured and maintained in a historical database. These statistics provide feedback and feedforward to the developer and manager and longer term feedback for modification and control of the development process for most effective application of design and quality assurance efforts.
HISTORY
Software Inspections were developed in the early-to-mid 1970s at IBM by Dr. Mike Fagan, who was subsequently named software innovator of the year. Fagan also credits IBM members O. R. Kohli, R. A. Radice, and R. R. Larson for their contributions to the development of Inspections. In the IBM Systems Journal [1], Fagan described Inspections and reported that in controlled experiments at IBM with equivalent systems software development efforts, significant gains in software quality and a 23% gain in development productivity were made by using Inspections-based reviews at the end of design and end of coding (clean compile) rather than structured walkthroughs at the same points. Fagan reported that the Inspections caught 82% of development cycle errors before unit test, and that the inspected software had 38% fewer errors from unit test through seven months of system testing compared to the walkthrough sample with equivalent testing. Fagan also cites an applications software example where a 25% productivity gain was made through the introduction of design and code inspections. As further guidelines for using Inspections, IBM published an Installation Management Manual [2] with detailed instructions and guidelines for implementing Inspections.
Inspections were introduced to NASA/Ames Research Center in 1979 by Informatics General Corporation on the Standardized Wind Tunnel System (SWTS) and other pilot projects. The methods described by IBM were adapted to meet the less repetitious character of Ames applications and research/development software as compared to that of IBM's systems software development. Though we were not able to duplicate IBM's controlled environments and experiments, our experience at Ames of gains in quality and productivity through using Inspections has been similar. For a developed Wind Tunnel software application which had been reviewed in structured walkthroughs and then later was rewritten and reviewed using
Inspections, the Inspected version had 35-65% less debug and test time and about 40% fewer post-release problems. Inspections implemented prior to unit test have been shown to detect over 90% of software’s lifetime problems. Inspection results have been sufficiently productive in terms of increased software quality, decreased development times, and management visibility into development progress, that Inspections have been integrated into Informatics’ development methodology as the primary Quality Assurance defect removal method.
When Inspections were first implemented at Ames, only design and code Inspections were introduced. Their scope and usage have expanded so that currently Inspections are used to review both system-level and component-level Goals (requirements) Specifications, Preliminary Design, Detailed Design, Code, Test Plans, Test Cases, and modifications to existing software. Inspections are used on most Informatics-staffed development tasks where the staff level and environment are appropriate. Inspections implementation and usage at Ames are described in NASA Contractor Report 166521 [3]. Within Informatics contracts outside of the Ames projects, Inspections are also used to review Phase Zero (initial survey and inventory of project status), Project Goals, and Requirements Specifications generated through structured analysis.
PARTICIPANTS
The Inspectors operate as a team and fill five different types of roles. The Author(s) is the primary designer, developer, or programmer who prepares the materials to be inspected. The author is a passive Inspector, answering questions or providing clarification as necessary. The Moderator directs the flow of the meetings, limiting discussion to finding errors and focusing the sessions to the subject. The moderator also records the problems uncovered during the meetings. A Reader paraphrases the materials, to provide a translation of the materials different from the authors' viewpoint. One or more additional Inspectors complete the active components of the team. A limited number of Observers, who are silent non-participants, may also attend for educational or familiarizing purposes. Of the team members, the moderator and a reader are the absolute minimum necessary to hold an Inspection.
Team composition and size are important. Composing the team of knowledgeable designers and implementors who have similar backgrounds or who work on interfacing software enables cross-training of group members; understanding is enhanced and startup time is lessened. However, team members must be sufficiently different so that alternate viewpoints are present. Fagan recommends a four-member team composed of a moderator and the software's designer, implementor, and tester. Our experience is that the most effective team size seems to be three to five members, exclusive of author and observers; more than this is a committee, fewer may not have critical mass for the process. We also try to keep the team together for all of the software's Inspections.
TOOLS
Written tools are used by the participants during the Inspections process to assist in the preparation, the actual sessions, and the completion of the Inspection. Standards are necessary as guidelines for preparing both design and coding products. The Entrance Criteria for inspection materials define what materials are to be inspected at each type of Inspection, the level of detail of preparation, and other prerequisites for an Inspection to occur. Checklists of categories (Data Area Usage, External Linkages, etc.) of various types of problems to look for are used during the sessions to help locate errors and focus attention on areas of project
concern. The Checklists are also used by the author during his preparation of materials and by the inspectors while they are studying the materials. Exit Criteria define what must be done before the Inspection is declared complete and the materials can proceed to the next stage of development. Each of these tools will have been customized for each project's type of development work, language, review requirements, and emphasis that will be placed on each stage of the development process.
PROCEDURES
An Inspection is a multi-step sequential process. Prior to the Inspection, the Author prepares the materials to the level specified in the Entrance Criteria (and to guidelines detailed in the project development or coding standards). The moderator examines the materials and, if they are adequately prepared, selects team members and schedules the Inspection. (IBM lists these preparations as the Planning step.) The Inspection begins with a short educational Overview session of the materials presented by the author to the team. Between the overview and the first Inspection session, Preparation of each Inspector by studying the materials occurs outside of the meetings. In the actual Inspection sessions, the Reader paraphrases while the Inspectors review the materials for defects; the Moderator directs the flow of the meetings, ensures the team sticks only to problem finding, and records problems on a Problem Report form along with the problem location. Checklists of frequent types of problems for the type of software and type of Inspection are used during the preparation and Inspections sessions as a reminder to look for significant or critical problem areas. After the Inspection sessions, the moderator labels errors as major or minor, tabulates the Inspection time and error statistics, groups major errors by type, estimates the rework time, prepares the summaries, and gives the error list to the author. The author Reworks the materials to correct problems on the problem list. Follow-up by the moderator (or re-inspection, if necessary) of the problems ensures that all problems have been resolved.
In certain cases, a desk Inspection or "desk check" may be a more effective use of time than a full Inspection. Desk Inspections differ from normal Inspections in that during the preparation period each inspector individually records errors found and a single Inspection session is held to resolve ambiguities in the problems. The moderator compiles all collected error reports to produce a single report. All other Inspection steps proceed normally. Desk Inspections can be appropriate for code or design that the team is familiar with and that has already been through previous Inspections. Desk Inspections do not have the group synergy generated during "normal" Inspections. The SWTS Inspections database for FORTRAN code Inspections indicates that the desk check has an 80% error detection rate but only takes 40% of the time required of a full Inspection.
STATISTICS
The statistics captured from the Inspection and tabulated by the moderator consist of time and error values. The time statistics are the average per-person preparation time (excluding the author) and the Inspection session meeting time, both normalized to a thousand lines of code (KLOC). The error statistics are the numbers of major and minor errors detected, also normalized per KLOC. As part of the tabulating and summarizing process, distributions of major errors by Checklist heading are recorded and summarized for the Inspection as a whole. The tabulated statistics are entered into a database as weighted averages by size in lines of design or code and keyed by expected implementation language and type of Inspection. The SWTS Inspections database currently contains almost 250 entries of data for FORTRAN and Assembler languages for the Goals (Functional Requirements), Preliminary
Design, Detailed Design, and Code (desk and non-desk check) types of Inspections held on developed Wind Tunnel System software from 1980 through 1985. Over half of the entries are for code Inspections. Figure 1 contains summary figures from the database. The database summaries provide guidelines from which general conclusions and assumptions can be drawn. The database was generated as a development and management tool from the Inspections of several related SWTS projects and not from tightly controlled experiments. As such, when comparing individual Inspection figures to the database figures, variances from one-half to twice the average amounts summarized from the database are not considered extraordinary.
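The size-weighted averaging used for the database entries can be expressed compactly as below; the record fields are hypothetical names for the quantities mentioned above (lines inspected, major and minor errors, preparation and meeting hours), not the actual schema of the SWTS database.

```python
def weighted_database_summary(inspections):
    """Summarize a list of inspection records as size-weighted per-KLOC averages.

    Each record is a dict with hypothetical keys: 'lines', 'major', 'minor',
    'prep_hours', 'meeting_hours'.  Summing totals and dividing by the total lines
    inspected weights every inspection by its size in lines of design or code.
    """
    total_lines = sum(r['lines'] for r in inspections)
    if total_lines == 0:
        return {}

    def per_kloc(key):
        return 1000.0 * sum(r[key] for r in inspections) / total_lines

    return {
        'inspections': len(inspections),
        'lines': total_lines,
        'major_per_kloc': per_kloc('major'),
        'minor_per_kloc': per_kloc('minor'),
        'prep_hours_per_kloc': per_kloc('prep_hours'),
        'meeting_hours_per_kloc': per_kloc('meeting_hours'),
    }
```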
STATISTICS USE
The Inspections statistics in their raw and weighted forms can be used by the author, the design team and manager, the project manager, and Software Engineering as feedback, feedforward, and control mechanisms for individual, team, project and Inspections process behavior modification for future work to achieve better results. In addition, the statistics can be used in the current project and for future work and projects for tracking, estimating, planning, and scheduling of development and QA work.
The author uses the statistics to determine immediately what is deficient in inspected design or code and, over the longer term, patterns and general problem areas on which to focus attention for future work. The problem list, besides providing a working list of detected problems, includes locations of what needs to be fixed before the next development stage can proceed. Additionally, a distribution of major errors by checklist category across each module provides warning signals of error prone modules and high or higher density error rates by error type. A history of high error rates of certain error types also provides a pointer to design areas which need more work or training to develop or better understand.
The programming team and manager use error distribution by type and module from individual Inspections and Inspections of related software to locate common problem areas and thus focus future work and communication to diminish these. Error rates higher than normal for the group as a whole or error distributions in particular areas may indicate a group misunderstanding or a misstatement of the requirements. Higher error densities in modules interfacing to existing (or new) software, for example, can alert and direct effort to understanding the interface or provide warning to another group to clarify or improve that interface. For the designer and the team manager, lines of design (or lines of code, depending on development stage) and complexity per module give immediate feedback for design considerations of module size, cohesion, and coupling; this additionally provides an opportunity to ensure that modules are not proliferating from one design stage to the next. The completion of any individual Inspection along with module quantity and sizing gives quantitative and qualitative feedback for validity of component estimating, scheduling, and tracking information.
The Project Manager utilizes the statistics to help locate trends in various problem categories and help the team improve performance through group meetings or education. The statistics provide a quantitative evaluation of software correctness and allow prediction, based on Inspections held, of error prone sections of design or code, in order to concentrate development, QA, and testing resources on the most important areas. Additionally, each Inspection's results can be "validated" to ensure proper procedures were followed and the results are legitimate as compared to the project database. As an example, for a FORTRAN detailed design inspection, time
## SUMMARY OF INFORMATICS SWTS PROJECT INSPECTIONS STATISTICS
<table>
<thead>
<tr>
<th>Type of Inspect’n</th>
<th>Lang.</th>
<th>Total Number Held</th>
<th>Total No "Lines" Inspected</th>
<th>DENSITY-OF-PROBS. Per 1000 Lines</th>
<th>TIME-PER-PERSON Per 1000 Lines</th>
<th>Meet’g Prep’n</th>
</tr>
</thead>
<tbody>
<tr>
<td>CODE - ALL Lang</td>
<td>94</td>
<td>51186</td>
<td>22.0</td>
<td>59.9</td>
<td>81.9</td>
<td>4.6</td>
</tr>
<tr>
<td>NON-DESK Only</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>FORTRAN</td>
<td>90</td>
<td>49389</td>
<td>22.4</td>
<td>60.4</td>
<td>82.8</td>
<td>4.6</td>
</tr>
<tr>
<td>ASSEMBLY</td>
<td>4</td>
<td>1797</td>
<td>10.1</td>
<td>44.5</td>
<td>54.6</td>
<td>5.0</td>
</tr>
<tr>
<td>CODE - ALL Lang</td>
<td>47</td>
<td>23206</td>
<td>21.0</td>
<td>51.3</td>
<td>72.3</td>
<td>3.9</td>
</tr>
<tr>
<td>DESK</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>FORTRAN</td>
<td>43</td>
<td>21308</td>
<td>19.1</td>
<td>48.1</td>
<td>67.2</td>
<td>3.7</td>
</tr>
<tr>
<td>ASSEMBLY</td>
<td>4</td>
<td>1898</td>
<td>42.6</td>
<td>87.6</td>
<td>130.3</td>
<td>6.3</td>
</tr>
<tr>
<td>DETAILED DESIGN</td>
<td>ALL Lang</td>
<td>44</td>
<td>10349</td>
<td>76.74</td>
<td>144.6</td>
<td>221.3</td>
</tr>
<tr>
<td>FORTRAN</td>
<td>40</td>
<td>9205</td>
<td>83.1</td>
<td>143.4</td>
<td>226.5</td>
<td>14.5</td>
</tr>
<tr>
<td>ASSEMBLY</td>
<td>4</td>
<td>1144</td>
<td>25.3</td>
<td>153.9</td>
<td>179.2</td>
<td>14.3</td>
</tr>
<tr>
<td>PRELIMINARY DESIGN</td>
<td>ALL Lang</td>
<td>43</td>
<td>13268</td>
<td>68.1</td>
<td>107.5</td>
<td>175.7</td>
</tr>
<tr>
<td>FORTRAN</td>
<td>41</td>
<td>12570</td>
<td>54.3</td>
<td>89.8</td>
<td>144.1</td>
<td>9.1</td>
</tr>
<tr>
<td>ASSEMBLY</td>
<td>2</td>
<td>698</td>
<td>316.6</td>
<td>426.8</td>
<td>743.4</td>
<td>39.8</td>
</tr>
</tbody>
</table>
This chart summarizes the statistics from Informatics inspections on the NASA Ames SWTS project. The statistics are weighted averages, each inspection being weighted by its size, in lines of design or code.
Figure 1
SWTS Inspections Database Summaries
guidelines are 23 hrs/KLOD (Thousand Lines of Design) per person for preparation plus meeting time, and the team can expect to find 83 major and 143 minor problems per KLOD. Meeting times and error rates that differ significantly from these should be examined to determine their cause. A trend toward increasing error rates may mean that not enough attention is being directed to proper design. A decreasing error rate may mean design is becoming more effective or, when accompanied by decreasing preparation and meeting times, may mean Inspections are becoming less effective.
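A sketch of the "validation" of an individual Inspection against the database, mentioned earlier, is given below: per-KLOC figures falling outside the one-half to twice range around the database averages are flagged for examination. The metric names are hypothetical.

```python
def validate_inspection(observed, database_avg, low=0.5, high=2.0):
    """Flag per-KLOC figures that fall outside one-half to twice the database average.

    `observed` and `database_avg` are dicts of hypothetical per-KLOC metrics,
    e.g. {'major_per_kloc': 95, 'prep_plus_meeting_hours_per_kloc': 30}.
    """
    flags = {}
    for metric, avg in database_avg.items():
        value = observed.get(metric)
        if value is None or avg == 0:
            continue
        ratio = value / avg
        if ratio < low or ratio > high:
            flags[metric] = ratio   # outside the 'not extraordinary' range; examine its cause
    return flags
```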
The statistics are also used to modify the Inspection process itself or its application. At the beginning of the project, the entrance and exit criteria, the checklists, and the methodology and standards are specialized to the project's particular development environment, languages, and review requirements. As statistics are compiled, evaluations of the data may lead to modifications to the entrance criteria to change the level of materials preparation, to the checklists to alter the attention given to certain design or code areas, and to the project standards to remove ambiguity or set new standards as necessary. Removing software components from an Inspection requirement or adding or deleting an Inspection as a quality gate at a particular design stage to more optimally use available time are options made more apparent by the statistics.
DATABASE ANALYSIS
Examination and analysis of the SWTS Inspection database indicate correlations between preparation time, meeting time, inspection rate, and errors detected. These correlations and others allow the overall Inspections procedures to be modified and guidelines established for the optimal conduct of Inspections within a project.
For FORTRAN code Inspections, errors detected are related to the inspection rate (LOC inspected per hour); see Figure 2. Most sessions inspected code at a rate of 100 to 300 LOC per hour and detected between 10 and 80 major errors/KLOC. When the Inspection rate is too rapid, the error detection rate falls gradually. When the Inspection rate is excessively slow, there is a wide range of error densities. For excessively slow Inspection rates, we believe this wide range of error densities results from Inspecting two types of materials: "Difficult Materials," which are complex and require a slower Inspection rate to evaluate but result in a normal to above-normal error density; and "Poorly Prepared Materials," which were not ready for Inspection but were inspected anyway and thus generated a large number of errors, were difficult to understand, and were slow to inspect. The inspection of "Poorly Prepared Materials" represents an abnormal situation which the moderator is supposed to prevent prior to scheduling or holding an Inspection. To this end, there are also cut-off limits before and within the Inspection: if the Inspected materials are too hard to understand and/or are producing too many errors (that is, they are probably not ready to be Inspected), the Inspection is stopped and the materials are returned to the author to be properly prepared.
There is a linear correlation between inspection rate and preparation rate (LOC/hr); see Figure 3. Materials requiring a slower preparation rate also experience a slower Inspection rate, and vice versa. We believe the correlating factor is the complexity of the materials: more "difficult" code takes more inspector preparation time and more inspection time (a lower inspection rate).
Of all the Inspections, we believe the Preliminary Design Inspection is the most critical to hold, as it helps find modularization errors and data definition errors, and can help to emphasize software re-usability before unit development begins. Based upon the major error detection rate and translating preliminary and detailed design lines of design (LOD) to implemented lines of code (LOC), the preliminary design Inspection detects (and removes) a greater number of errors. The translation from lines of design to lines of code is based on a development methodology that requires a preliminary design modularization with logic development where 1 LOD can eventually be coded by 15 to 20 LOC; detailed design logic development is where 1 LOD can be coded by 3 to 10 LOC. Using major errors normalized to estimated implemented LOC, the preliminary design Inspection finds and fixes about 1000 errors per KLOC, the detailed design Inspection locates about 600 errors per KLOC, while the code Inspection is least effective, detecting a mere 20 errors per KLOC. Using the generally accepted order-of-magnitude cost to repair errors between successive development steps further emphasizes these figures for cost-saving purposes: a few ounces of prevention are worth pounds of cure. The SWTS environment uses walkthroughs for reviewing functional requirements specifications; for environments that uniformly use Structured Analysis to generate specifications, the Requirements Specification Inspection would undoubtedly supersede the Preliminary Design Inspection in importance.
Experience in performing Inspections is cumulative and if applied can have an effect on the Inspections process. Over the first two years on the SWTS project, the error rates were widely scattered. In the second year, an examination of the Inspections process resulted in changes in error definition, Inspections procedures, and staff education. Consequently error rates dropped significantly and today remain in a much smaller range.
CONCLUSION
Inspections are not a panacea for Quality Assurance defect removal. They are technical review procedures and may not be appropriate for some situations such
as those needing heavy user interaction (such as user interface definition). They should be used in conjunction with (but probably not as a substitute for) large military PDR/CDR reviews. In appropriate situations, they have been proven to be effective and efficient error detection methods which have extremely important and beneficial "side effects" of accurate planning, scheduling, and tracking for project management and control. The primary effect of Inspections is to move error detection and correction to the earlier (and less costly) development stages. As such, this front-loads the project schedule, but the time is more than recovered during the coding and implementation phases. Consequently, Inspections usage on a project requires proper education, scheduling, and implementation, and Inspections should not be used on schedule-driven projects where the customer understands only two development phases: code and test.
At NASA Ames, based on experience gained using the original IBM model on pilot projects, Inspections have been modified and specialized for numerous projects, development phases, and environments. At Ames, Inspections are expected to play an increasingly major role as a Quality Assurance tool in software development. Some of the directions this can be expected to take are expansion to cover new software languages, incorporation of new structured development methodologies, and modification of the methodologies for the Ames environment based on information gained during Inspections of software developed using those methodologies. Inspections are a significant Quality Assurance tool in their own right and flexible enough to be integrated and implemented with other tools, especially defect prevention, to provide a comprehensive Quality Assurance environment to approach zero defect products.
REFERENCES
THE VIEWGRAPH MATERIALS
for the
G. WENNESON PRESENTATION FOLLOW
SOFTWARE INSPECTIONS AT NASA AMES
METRICS FOR
FEEDBACK
AND
MODIFICATION
GREG WENNESON
INFORMATICS GENERAL CORPORATION
WHAT THEY ARE (AND ARE NOT)
INSPECTIONS:
FORMAL REVIEW PROCEDURES
FOR ERROR DETECTION ONLY
DEFINED TEAM MEMBER ROLES
SPECIFICALLY DEFINED TOOLS
HELD AT SELECTED POINTS IN DEVELOPMENT CYCLE
DEFINED INPUT
DEFINED OUTPUT
INSPECTIONS ARE NOT:
DESIGN SESSIONS
WALKTHROUGHS
EVALUATIONS OF THE AUTHOR
RUBBER STAMP PROCEDURES
HISTORY
AT IBM
MIKE FAGAN, PUBLISHED - 1976
ALSO - O.R.KOHLI, R.R.LARSON, R.A.RADICE
FORMAL GUIDELINES - 1977, 1978
PRODUCTIVITY GAIN 23%
ERROR DETECTION 82%
ERROR REDUCTION 38%
AT NASA AMES
PILOT PROJECTS BY INFORMATICS - 1979
(ALSO COMMERCIAL PILOT PROJECTS)
STANDARDIZED WIND TUNNEL SYSTEM (SWTS)
PRODUCTIVITY GAIN 40%
ERROR DETECTION 90%
ERROR REDUCTION 40%
(* - INCLUDES MAJOR METHODOLOGY CHANGES)
NOW USED ON MOST INFORMATICS AMES PROJECTS
INSPECTION COMPONENTS
DEFINED TOOLS
STANDARDS
CRITERIA FOR MATERIALS PREPARATION
CHECKLISTS FOR ERRORS
EXIT CRITERIA
WRITTEN RECORDS AND STATISTICS
TEAM MEMBERS
MODERATOR
READER
INSPECTORS
AUTHOR
INSPECTION PROCESS
TEAM SELECTION (PLANNING)
OVERVIEW
PREPARATION
INSPECTIONS SESSIONS
REWORK
FOLLOW-UP
DESK INSPECTION
PROBLEM AND STATISTICS RECORDING
PROBLEM RECORDING
MODULE INSPECTION PROBLEM REPORT
"GENERAL" PROBLEMS REPORT
PROBLEM STATISTICS
MODULE PROBLEM SUMMARY
MODULE TIME AND DISPOSITION REPORT
INSPECTION STATISTICS
INSPECTOR TIME REPORT
INSPECTION GENERAL SUMMARY
OUTLINE OF REWORK SCHEDULE
### INSPECTIONS DATA BASE FOR SWTS
- **SUMMARIES** -
**SUMMARY OF INFORMATICS SWTS PROJECT INSPECTIONS STATISTICS**
<table>
<thead>
<tr>
<th>Type of Inspect'n</th>
<th>Lang.</th>
<th>Total No. Held</th>
<th>Total No. "Lines" Inspected</th>
<th>DENSITY-OF-PROBLEMS Per Thousand Lines</th>
<th>TIME-PER-PERSON Per Thousand Lines</th>
<th>Meet'g Prep'n Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>CODE - NON-DESK</td>
<td>ALL Lang</td>
<td>94</td>
<td>51186</td>
<td>22.0</td>
<td>59.9</td>
<td>81.9</td>
</tr>
<tr>
<td></td>
<td>FORTRAN</td>
<td>90</td>
<td>49389</td>
<td>22.4</td>
<td>60.4</td>
<td>82.8</td>
</tr>
<tr>
<td></td>
<td>ASSEMBLY</td>
<td>4</td>
<td>1797</td>
<td>10.1</td>
<td>44.5</td>
<td>54.6</td>
</tr>
<tr>
<td>CODE - DESK</td>
<td>ALL Lang</td>
<td>47</td>
<td>23206</td>
<td>21.0</td>
<td>51.3</td>
<td>72.3</td>
</tr>
<tr>
<td></td>
<td>FORTRAN</td>
<td>43</td>
<td>21308</td>
<td>19.1</td>
<td>48.1</td>
<td>67.2</td>
</tr>
<tr>
<td></td>
<td>ASSEMBLY</td>
<td>4</td>
<td>1898</td>
<td>42.6</td>
<td>87.6</td>
<td>130.3</td>
</tr>
<tr>
<td>DETAILED DESIGN</td>
<td>ALL Lang</td>
<td>44</td>
<td>10349</td>
<td>76.74</td>
<td>144.6</td>
<td>221.3</td>
</tr>
<tr>
<td></td>
<td>FORTRAN</td>
<td>40</td>
<td>9205</td>
<td>83.1</td>
<td>143.4</td>
<td>226.5</td>
</tr>
<tr>
<td></td>
<td>ASSEMBLY</td>
<td>4</td>
<td>1144</td>
<td>25.3</td>
<td>153.9</td>
<td>179.2</td>
</tr>
<tr>
<td>PRELIMINARY DESIGN</td>
<td>ALL Lang</td>
<td>43</td>
<td>13268</td>
<td>68.1</td>
<td>107.5</td>
<td>175.7</td>
</tr>
<tr>
<td></td>
<td>FORTRAN</td>
<td>41</td>
<td>12570</td>
<td>54.3</td>
<td>89.8</td>
<td>144.1</td>
</tr>
<tr>
<td></td>
<td>ASSEMBLY</td>
<td>2</td>
<td>698</td>
<td>316.6</td>
<td>426.8</td>
<td>743.4</td>
</tr>
</tbody>
</table>
This chart summarizes the statistics from Informatics inspections on the NASA Ames SWTS project. The statistics are weighted averages, each inspection being weighted by its size, in lines of design or code.
STATISTICS USE
AUTHOR
PROBLEM REPORTS
MODULE PROBLEM SUMMARY
PREVIOUS INSPECTION STATISTICS
DESIGN TEAM AND MANAGER
PROBLEM REPORTS
MODULE PROBLEM SUMMARY
OUTLINE OF REWORK SCHEDULE
MODULE TIME AND DISPOSITION
INSPECTION GENERAL SUMMARY
PREVIOUS INSPECTION STATISTICS
PROJECT MANAGER; TEST GROUP; QA GROUP
MODULE PROBLEM SUMMARY
INSPECTION GENERAL SUMMARY
PREVIOUS INSPECTION STATISTICS
SOFTWARE ENGINEERING
MODULE PROBLEM SUMMARY
INSPECTION GENERAL SUMMARY
PREVIOUS INSPECTION STATISTICS
# CODE INSPECTION SUMMARIES
## NEW FORTRAN CODE, MODIFICATIONS, AND BOTH
### SUMMARY OF INFORMATICS SWTS PROJECT INSPECTIONS STATISTICS
<table>
<thead>
<tr>
<th>Type of Inspect'n</th>
<th>Total Lang. Held</th>
<th>Total No "Lines" Inspected</th>
<th>DENSITY-OF-PROBLEMS Per Thousand Lines</th>
<th>TIME-PER-PERSON Per Thousand Lines</th>
<th>Meet'g Prep'n Total</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>CODE - NON-DESK CHECK</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>FORTRAN</td>
<td>90</td>
<td>49389</td>
<td>22.4</td>
<td>60.4</td>
<td>82.8</td>
</tr>
<tr>
<td>/New</td>
<td>46</td>
<td>25981</td>
<td>26.3</td>
<td>68.3</td>
<td>94.6</td>
</tr>
<tr>
<td>/Mods</td>
<td>13</td>
<td>7019</td>
<td>17.2</td>
<td>42.4</td>
<td>59.6</td>
</tr>
<tr>
<td>/Both</td>
<td>31</td>
<td>16389</td>
<td>18.5</td>
<td>55.6</td>
<td>74.1</td>
</tr>
<tr>
<td><strong>CODE - DESK CHECK</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>FORTRAN</td>
<td>43</td>
<td>21308</td>
<td>19.1</td>
<td>48.1</td>
<td>67.2</td>
</tr>
<tr>
<td>/New</td>
<td>8</td>
<td>4121</td>
<td>26.3</td>
<td>51.7</td>
<td>78.0</td>
</tr>
<tr>
<td>/Both</td>
<td>25</td>
<td>14453</td>
<td>18.6</td>
<td>50.1</td>
<td>68.7</td>
</tr>
<tr>
<td>/Mods</td>
<td>10</td>
<td>2734</td>
<td>10.6</td>
<td>32.2</td>
<td>42.8</td>
</tr>
</tbody>
</table>
This chart summarizes the statistics from Informatics inspections on the NASA Ames SWTS project. The statistics are weighted averages, each inspection being weighted by its size, in lines of design or code.
# INSPECTIONS DATA BASE
## "MAJOR" PROBLEM DISTRIBUTION, BY PERCENT
### PRELIMINARY DESIGN
<table>
<thead>
<tr>
<th>Category</th>
<th>FORTRAN ASSEMBLER</th>
</tr>
</thead>
<tbody>
<tr>
<td>SPECIFICATION</td>
<td>10%</td>
</tr>
<tr>
<td>CLARIFICATION</td>
<td>17</td>
</tr>
<tr>
<td>DATA</td>
<td>18</td>
</tr>
<tr>
<td>LOGIC</td>
<td>21</td>
</tr>
<tr>
<td>I/F</td>
<td>5</td>
</tr>
<tr>
<td>LINKAGES</td>
<td>20</td>
</tr>
<tr>
<td>PERFORMANCE</td>
<td>4</td>
</tr>
</tbody>
</table>
### DETAILED DESIGN
<table>
<thead>
<tr>
<th>Category</th>
<th>FORTRAN ASSEMBLER</th>
</tr>
</thead>
<tbody>
<tr>
<td>DETAIL</td>
<td>9</td>
</tr>
<tr>
<td>LOGIC</td>
<td>29</td>
</tr>
<tr>
<td>DATA</td>
<td>20</td>
</tr>
<tr>
<td>LINKAGES</td>
<td>22</td>
</tr>
<tr>
<td>RETURN CODES</td>
<td>5</td>
</tr>
</tbody>
</table>
### CODE
<table>
<thead>
<tr>
<th>Category</th>
<th>FORTRAN ASSEMBLER</th>
</tr>
</thead>
<tbody>
<tr>
<td>FUNCTIONALITY</td>
<td>9</td>
</tr>
<tr>
<td>DATA</td>
<td>19</td>
</tr>
<tr>
<td>CONTROL</td>
<td>18</td>
</tr>
<tr>
<td>LINKAGES</td>
<td>24</td>
</tr>
<tr>
<td>READABILITY</td>
<td>17</td>
</tr>
<tr>
<td>REG. USE</td>
<td>12</td>
</tr>
</tbody>
</table>
# PREVIOUS INSPECTIONS EFFECT ON MAJOR ERROR RATES
<table>
<thead>
<tr>
<th>STAGE OF DEVELOPMENT</th>
<th>NUMBER OF PREVIOUS INSPECTIONS</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0</td>
</tr>
<tr>
<td>CODE NON-DESK</td>
<td>17.7</td>
</tr>
<tr>
<td>CODE DESK</td>
<td>15.1</td>
</tr>
<tr>
<td>DETAIL DESIGN</td>
<td>95</td>
</tr>
<tr>
<td>PRELIM. DESIGN</td>
<td>58</td>
</tr>
</tbody>
</table>
**Major Errors Per KLOC**
---
# AND ON PREPARATION AND MEETING TIME
<table>
<thead>
<tr>
<th>STAGE OF DEVELOPMENT</th>
<th>NUMBER OF PREVIOUS INSPECTIONS</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0</td>
</tr>
<tr>
<td>CODE NON-DESK</td>
<td>8.2</td>
</tr>
<tr>
<td>CODE DESK</td>
<td>4</td>
</tr>
<tr>
<td>DETAIL DESIGN</td>
<td>27.7</td>
</tr>
<tr>
<td>PRELIM. DESIGN</td>
<td>14.7</td>
</tr>
</tbody>
</table>
**HOURS of Preparation plus Meeting time Per KLOC**
An important area of consideration is the amount of preparation time required in order to allow the participants to proceed at a reasonable rate in the inspection meeting. The graph below, based on the individual inspections to date, suggests that preparation times of 4-7 hours per 1,000 lines may allow the team to proceed at an optimum rate in the meetings. Less preparation time will cause the meeting to slow down because of poor understanding and many questions. More preparation time may have a negative impact on the rate because of over-emphasizing minor problems or discussing the functionality or goals during code or design inspections.
**Upper and Lower Ranges of Rates Achieved in Inspections with Various Preparation Times** (graph of Inspection Rate, in lines per hour, against Preparation Time, in hours per person per thousand lines)
INSPECTIONS AS A PROJECT COORDINATION TOOL
INSPECTIONS CAN INTEGRATE THE FOUR MAJOR PROJECT FACTORS:
PROJECT MANAGEMENT
METHODOLOGY
QUALITY ASSURANCE
STAFF PERFORMANCE
THRU:
REINFORCEMENT OF METHODOLOGY AND STANDARDS
MAJOR MILESTONE TRACKING INFORMATION MATCHING WBS
DETAILED TRACKING AND ESTIMATING INFORMATION MATCHING WBS
DETAILED ERROR AND DESIGN NEEDS AT EACH DEVELOPMENT STAGE
EASY EXTRACTION OF TECHNICAL INFORMATION ABOUT COMPONENTS
INDICATIONS OF TRAINING AREAS NEEDING ATTENTION ACROSS THE PROJECT
INDICATIONS DIRECTLY TO INDIVIDUAL STAFF MEMBERS OF THEIR TRAINING NEEDS
ALMOST THE END
CAUTIONS
DOESN'T SUBSTITUTE FOR THINKING
MUST BE SCHEDULED AT BEGINNING - CAN'T BE "TACKED" ON
PARTICIPANTS MUST BE PROPERLY TRAINED
NEED CUSTOMER UNDERSTANDING AND SUPPORT
MANAGEMENT DIRECTION AND SUPPORT CRUCIAL
STATISTICS ARE FOR BETTER SOFTWARE AND MANAGEMENT,
NOT A NUMBERS EXERCISE
WHERE TO GO FROM HERE
EXPAND TO NEW LANGUAGES AND DESIGN TECHNIQUES
EXPAND TO NEW METHODOLOGIES AND SUPPORT TOOLS
FEEDBACK TO CURRENT METHODOLOGIES
EXPAND TO OTHER APPLICABLE COMPANY/CONTRACT AREAS
Evaluating Automatic Spreadsheet Metadata Extraction on a Large Set of Responses from MOOC Participants
Sohon Roy, Felienne Hermans, Efthimia Aivaloglou, Jos Winter, Arie van Deursen
Dept. of Software and Computer Technology
Delft University of Technology
Delft, Netherlands
{S.Roy-1, F.F.J.Hermans, E.Aivaloglou}@tudelft.nl, J.Winter@student.tudelft.nl, Arie.vanDeursen@tudelft.nl
Abstract—Spreadsheets are popular end-user computing applications, and one reason behind their popularity is that they offer a large degree of freedom to their users regarding the way they can structure their data. However, this flexibility also makes spreadsheets difficult to understand. Textual documentation can address this issue, yet for supporting the automatic generation of textual documentation, an important prerequisite is to extract the metadata inside spreadsheets. It is challenging, though, to distinguish between data and metadata due to the lack of universally accepted structural patterns in spreadsheets.
Two existing approaches for automatic extraction of spreadsheet metadata have not been evaluated on large datasets of user inputs. Hence, in this paper we describe the collection of a large number of user responses regarding the identification of spreadsheet metadata from the participants of a MOOC. We use this large dataset to understand how users identify metadata in spreadsheets and to evaluate the two existing approaches for automatic metadata extraction from spreadsheets. The results, together with insights about how users perceive metadata, give us directions for improving metadata extraction approaches. We also learn on what types of spreadsheet patterns the existing approaches perform well and on what types they perform poorly, and thus which problem areas to focus on in order to improve.
I. INTRODUCTION
Spreadsheets are popular and widely used in industry across all domains. Panko [1] estimates that 95% of US firms use spreadsheets for financial reporting. One of the reasons for the popularity of spreadsheets is that they offer a large degree of freedom to their users regarding the way they can structure their data. However, this flexibility can be a double-edged sword that makes it very difficult for spreadsheet users to comprehend spreadsheets. From previous research [2] we know that spreadsheet comprehension poses a difficulty when users transfer spreadsheets to each other, to auditors for error-checking, and to software developers for migration. That final scenario is especially difficult, as the developers responsible for migration usually do not have extensive domain knowledge.
We assert that in such cases of spreadsheet transfer, a deep understanding of the spreadsheet at hand can be helpful. In previous work, this has been addressed by, for example, extracting class diagrams and dataflow diagrams [3], [2], but to comprehend a class diagram or a dataflow diagram, a user still needs knowledge of those formalisms. Hence, we would prefer to support them with a simple way of comprehending spreadsheets: natural language. Ultimately, our goal is to extract documentation from spreadsheets automatically.
As a first step in the automatic extraction of documentation from spreadsheets, we aim to extract metadata: determine which cells in a spreadsheet are metadata (or: labels) and which cells they describe. Contrary to software systems or databases, where metadata is structured, spreadsheets do not have universally accepted structural patterns, which increases the difficulty of distinguishing between data and metadata and of retrieving the corresponding mappings between them.
In previous work, two approaches have been developed that perform metadata extraction from spreadsheets: the UCheck approach developed by Abraham et al. [4] with the goal of error checking in spreadsheets, and the GyroSAT approach developed by Hermans et al. [2] with the goal of dataflow visualization in spreadsheets. It is difficult, however, to determine the usefulness of these two approaches for the goal of documentation generation, since neither approach has been evaluated on a large dataset and the evaluations did not include user inputs.
In this paper, we address those shortcomings by collecting a large number of responses from the participants of a popular Massive Open Online Course (MOOC) conducted by the second author of this paper. As part of an optional exercise included in the MOOC, the participants were asked to identify metadata in spreadsheets. We analyze this data and compare the performance of both approaches against the participant responses. As such, this paper addresses the following research questions:
RQ1: How do users perceive and identify metadata in spreadsheets? Insights about this can be used to improve or train automatic extraction approaches.
RQ2: How well do the two existing automatic approaches perform compared to the users? An empirical evaluation can be used to assess whether the approaches can be reliably used for the purpose of documentation generation.
RQ3: In what type of spreadsheets do the approaches perform well, and in what type of spreadsheets do they have difficulties compared to users? An analysis can be used to improve the approaches.
The results of our analysis show that:
1) Identification of metadata by users is characterized by traits or patterns. For example, groups of commonly used words, like Name, Description, Name of Country, and Name of day, frequently get identified as metadata by users. Also, data located in specific positions inside tables of spreadsheets, like column headers and row headers, tends to get identified as metadata.
2) Compared to the users, the two approaches yield recall values of 34% and 45%, indicating that they need to be improved further in order to be practically reliable.
3) Specific types of spreadsheet structures pose challenges to both approaches, like nested block structures sharing metadata, and data blocks separated by blank rows. These challenges need to be overcome in order to make automatic documentation generation feasible.
The contributions of this paper are:
• A dataset with over 100,000 user-identified pairs of spreadsheet cells and the metadata that describe them.
• Insights from this dataset about how users identify metadata in spreadsheets.
• An empirical evaluation of two existing spreadsheet metadata extraction approaches on this dataset.
• An analysis of situations in which the two approaches perform well and poorly.
II. BACKGROUND AND MOTIVATING EXAMPLE
In this section, we illustrate the concept of spreadsheet metadata, present a definition, and provide summaries of the UCheck and GyroSAT approaches.
A. Example
As an example, consider the spreadsheet shown in Figure 1. The selected cell E2 is described by the column header “Interest Due”, for customer “John”. This example illustrates a simple case in which identification of the metadata is relatively easy.
However, spreadsheets offer a large degree of freedom regarding the spatial arrangement of data, and this can result in a more complicated example, as shown in Figure 2.
The selected cell G14, outlined in red, represents the “(Projected)” values, but in addition to this, it is also described by the hierarchical label “CURRENT YEAR”, and has the row header “Fees”.
In this case, the cell has multiple cells acting as metadata for it, and the metadata is hierarchical (defined in the next subsection).
Hierarchical metadata is metadata of metadata, as shown in Figure 2, where the hierarchical order of metadata is PROGRAM BUDGET - REVENUE - Earned Revenue - Fees.
In the next subsection we describe the two approaches of metadata extraction evaluated in this paper.
C. Two Approaches for Spreadsheet Metadata Extraction
1) UCheck approach: The UCheck approach [4], [6] was developed by Abraham et al. for supporting error checking in spreadsheets based on their unit reasoning system [7]. In order to achieve this, the approach performs spreadsheet metadata extraction. The metadata extraction system developed for this approach, referred to by its authors as the header inference system, is an integration framework for four different strategies that are used to classify spreadsheet cells into the categories Header, Core, Footer, and Filler, as described in Table I. The system classifies the cells following each of the four strategies. However, since the authors of the system believed that the strategies are not equally accurate in identifying cell types, they allocated confidence levels ranging from 0 (low) to 10 (high) to the classifications, based on the respective strategy followed. Therefore, after the classifications are completed, if one particular cell gets classified into different categories, the system selects the most suitable category by summing up the respective confidence levels and picking the highest sum. For example, if a cell is classified as Header by strategy S1 with confidence level 5, and as Core by strategies S2 and S3 with confidence levels 4 and 2 respectively, then it is classified as a Core cell.
The four strategies used for cell classification are as follows.
- **Content-Based Cell Classification**: Cells are classified based on their contents. For example, cells with aggregation formulas are classified as footer cells, cells with numerical values are classified as core cells, and cells with string values are classified as header cells.
- **Fence Identification** and **Region-Based Cell Classification**: First, ‘fences’ or boundaries of tables are identified, and thereafter cells lying on these boundaries are classified with increased levels of confidence due to their position. For example, top-most or left-most cells are classified as headers and lower-most cells as footers.
- **Footer to Core Expansion**: First, cells with aggregate formulas are identified and marked as footers. Next, the cells that are referenced by these footers are marked as core cells, and so are their immediate neighbours if they have the same type of content. In this manner the core region is expanded. Thereafter, the leftover cells are classified as filler if they are empty and as header otherwise.
Once classification of all the cells of a spreadsheet is completed, the header inference system assigns the core cells a row header and a column header. For any particular core cell, the nearest header cell to the left of it and the nearest header cell above it are assigned as the row and column headers respectively. Apart from this, the header cells themselves are assigned hierarchical second and higher level headers, which are inferred based on a set of rules in a recursive fashion. As the end result, core cells (data) in a spreadsheet get associated with at most two header cells (metadata), and header cells themselves get associated with higher level header cells (metadata of metadata), except for those for which headers could not be found.
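The header assignment step can be sketched as follows (again our own illustration, not the UCheck code), scanning left and upwards from a core cell for the nearest Header cell:

```haskell
import Data.Maybe (listToMaybe)

data CellType = Header | Core | Footer | Filler deriving Eq

type Pos = (Int, Int)                  -- (row, column), 1-based
type Classification = Pos -> CellType  -- outcome of the cell classification step

-- For a given core cell, the nearest Header cell to its left becomes the row
-- header and the nearest Header cell above it becomes the column header.
rowHeader, colHeader :: Classification -> Pos -> Maybe Pos
rowHeader cellType (r, c) =
  listToMaybe [ (r, c') | c' <- [c-1, c-2 .. 1], cellType (r, c') == Header ]
colHeader cellType (r, c) =
  listToMaybe [ (r', c) | r' <- [r-1, r-2 .. 1], cellType (r', c) == Header ]
```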
2) GyroSAT Approach: Hermans et al. [2] developed the GyroSAT approach for the purpose of aiding spreadsheet comprehension through dataflow visualizations. This approach extracts metadata because it is necessary for labelling the diagrams with the names of the entities they represent. In this approach the algorithm for metadata extraction first performs classification of spreadsheet cells into the categories Formula, Data, Label, and Empty as described in Table I. However, unlike the UCheck approach, the cell classification process in this case is based on one single strategy. The strategy is inspired by the Footer to Core Expansion strategy of the UCheck approach: the algorithm first identifies all cells containing formulas, marking them as Formula. Next, based on the contents of the formulas, it marks cells that are referenced by the formulas as Data, unless they were already typed as Formula in the previous step. Thereafter, it types the remaining cells either as Empty if they are empty, or else as Label.
Once the classification of cells is completed, the algorithm proceeds to determine data blocks. A data block is defined as a rectangle containing a connected group of cells of type Data, Label, or Formula. The algorithm identifies a data block by starting with the left-most and top-most non-empty cell in a spreadsheet and successively expanding it to include the horizontal, vertical, and diagonal neighbours until a point is reached when all immediate neighbours of a cell have either been already included in the block or are empty. Thus, in its purpose, this bears some similarity to the Fence Identification strategy of the UCheck approach, as both try to determine the boundaries of the tables inside a spreadsheet.
After identification of data blocks, the algorithm assumes that any data cell, say C12, can have two associated labels, one from its column ‘C’ and one from its row ‘12’, which we refer to as column label and row label respectively. The algorithm also assumes that these labels can be found on the borders of the data block that the cell C12 is contained in. The algorithm starts by inspecting the first cell in column C and, if it is of type Label, it assigns that cell as the column label for C12. Otherwise, the algorithm moves down along the column, cell by cell, until it finds a Label type cell. If it encounters cells of type Formula or Data before it finds a label, then it quits the search without returning any cell as column label. It employs a similar strategy starting with the first cell in row 12 in order to identify the row label for C12. As the end result, data cells in the spreadsheet get associated with at most two labels (metadata), except for those whose labels could not be found.
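As a sketch (our own reconstruction, not the GyroSAT source), the column-label search just described can be expressed as a walk down the column from the top row of the enclosing data block:

```haskell
data CellKind = FormulaCell | DataCell | LabelCell | EmptyCell deriving Eq

-- Walk down one column, starting at the top row of the enclosing data block.
-- Return the row of the first Label cell; give up as soon as a Formula or
-- Data cell is met, and stop when the target cell itself is reached.
columnLabel :: (Int -> CellKind)  -- cell kind at (row, fixed column)
            -> Int                -- top row of the data block
            -> Int                -- row of the target cell
            -> Maybe Int          -- row of the label cell, if any
columnLabel kindAt top target = go top
  where
    go r
      | r >= target = Nothing
      | otherwise   = case kindAt r of
          LabelCell -> Just r
          EmptyCell -> go (r + 1)
          _         -> Nothing    -- Formula or Data: quit the search
```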
3) Comparison of the approaches: An important step in both the approaches is to classify cells into different categories based on the nature of their contents. This classification serves as a basis for the distinction between data and metadata. The categories defined in the two approaches are similar to each other, but the respective authors use different nomenclature, as shown in Table I [4], [3].
We observe that two different terms have been used to refer to spreadsheet metadata. In the UCheck approach the term *Header* is used to indicate spreadsheet metadata. On the other hand, in the GyroSAT approach the term *Label* is used. In this paper, we use the term *Label* to indicate spreadsheet metadata, except when referring specifically to the UCheck approach.
We also observe that in the UCheck approach, the elaborate, multi-strategy cell classification mechanism is central. In contrast, the GyroSAT approach uses a single cell classification strategy without confidence levels. Also, the GyroSAT approach concentrates on the determination of data blocks, and assigns labels to data cells starting from the boundaries of the data blocks. In the UCheck approach, however, assignment of the headers is done by moving outwards from the core cells instead of starting at the boundaries. Nevertheless, the approaches are similar in that they both try to retrieve two labels or headers for data cells, one from the row and one from the column.
D. Existing Empirical Evaluations
Abraham *et al.* tested their UCheck approach [4] on two sets of spreadsheets; the first set consisted of 10 spreadsheet examples from a book by Filby [8] and the second set consisted of 18 spreadsheets developed by undergraduate Computer Science students.
Hermans *et al.* performed an empirical evaluation of their GyroSAT approach on 50 spreadsheets [3] and compared the results to a benchmark manually created by the authors themselves.
III. EXPERIMENTAL SETUP
As demonstrated by the above approaches, there is research interest in extracting metadata from spreadsheets, with the aim of supporting comprehension or performing validation. However, a clear limitation of both papers is that the approaches have never been validated against a large set of data. In this paper, we address this shortcoming by creating a large, user-generated benchmark of labeling data and comparing both approaches against it. To gather labeling data on real-life spreadsheets, we designed an online game in which subjects were asked to select labels for a given cell in a spreadsheet.
A. Participants
To recruit participants for the labeling game, we included a link to it in the coursework of a popular Excel MOOC: *EX101x: Data Analysis: Take it to the Max*\(^1\). The second author of this paper heads the instructor team of this course. The primary goal of the course is to teach participants to perform data analysis in general, and to work with Excel in particular. The course covers topics like conditional formulas, pivot tables, array formulas and named ranges. It does not, however, provide any guidance about the interpretation or selection of labels for spreadsheet cells, and as such should have no influence on the decisions of the labeling game participants. The course is free and open to everyone, though the target audience are practitioners from various fields who often work with spreadsheets in their daily work. The course also has an optional paid mode offering certificates for identity-verified participants.
To lower the threshold for participating, we did not ask for demographic information from those playing the game; however, we do have the demographics of the entire MOOC: in the two times the MOOC ran, almost 60,000 students participated.
\(^1\)https://www.edx.org/course/data-analysis-take-it-max-delftx-ex101x
The first run of the MOOC started in April 2015, and as shown in Figure 3, the median age of students was 32, and most students (56.7%) fell in the 26 to 40 age group. Almost half of the participants (45.6%) had an advanced degree (MSc or PhD), and a large majority (73.2%) were male. The top countries represented were US (29%), India (11%) and UK (6%).
The rerun in September 2015 was a bit smaller with 23,739 students. The demographics however were similar, with a median age of 30 and 53.5% of students between 26 and 40 years of age; 41.1% with an advanced degree; 72.3% male students and again US (21%), India (20%) and UK (4%) as top countries.
B. Spreadsheets
As a source of spreadsheets for the game we used spreadsheets from the EUSES corpus [9]. We split up all spreadsheets into separate worksheets, and disregarded worksheets with fewer than 15 non-empty cells, leading to a test set of 1200 spreadsheets. When a user plays the game, they get a random spreadsheet and a random cell to label.
C. The Labelling Game
1) Description: Dubbed ‘The Labelling Game’, our experiment is presented to users as a game in the browser, as depicted in Figure 4. When playing the game, the user is presented with a spreadsheet in which one cell is highlighted (orange in Figure 4), which we refer to as the target cell.
Once the participant has studied the spreadsheet, they can select all the cells that they think describe the target cell, simply by clicking on them. The clicked cells then also get highlighted (green in Figure 4) and their contents are recorded as the participant’s responses. The participant also has the choice to decline to answer or ‘skip’ a challenge, with the option to record their reasons for skipping. Once the participant is satisfied that they have identified all labels for the target cell, they can proceed to the next challenge for a new target cell, and repeat this process for as long as they like.
We attempted to make the labeling game fun by using a smiley displayed on the user’s screen, as shown on the right of Figure 4. As such there was no ‘end’ from the perspective of the participants; however, to encourage the participants to attempt multiple challenges, the ‘happiness’ of the smiley was increased with each new target cell they responded to without skipping. An overview of the number of cells labeled in the past day, week, month, and year was also displayed. The exercise was entirely optional for the course participants and no benefits were promised in return.
2) Implementation: We implemented the web interface using JavaScript and jQuery. For the backend, we use a .NET aspx page accepting JSON data and writing the results into text files. We use Microsoft’s OneDrive and Excel Services JavaScript API to present the spreadsheets to the participants and to collect the participants’ cell selections. This API does not, however, support changing the color of cells, which is essential for highlighting the target cell and its user-selected labels. To provide this functionality, we used conditional formatting rules, which we included inside hidden worksheets in the spreadsheet workbooks during the pre-processing described in Section III.B.
D. Phases
We ran the Labelling Game in two phases, referred to as the Pilot phase and the Evaluation phase. The Pilot phase was run in April 2015 during the first run of the *Data Analysis: Take it to the Max* course, in order to explore the possibility of using such a game for empirical studies. The Evaluation phase was run during the rerun in September 2015 with some modifications, as explained below, in order to make it suitable for the evaluation of the UCheck and GyroSAT approaches.
The Pilot phase was intended as a trial in order to gauge the level of involvement from the participants and to assess whether such a game could yield sufficient data for a study. In this phase the target cells were selected randomly at runtime from the set of 1200 spreadsheets used for the game. Thus, the chance of the same target cell being offered to multiple participants was low and we seldom got responses from multiple participants for the same target cell. We realized this was a limitation, as we wanted to establish the correctness of the identified labels through the number of participants identifying them, i.e., a voting mechanism.
We therefore redesigned the experiment slightly for the Evaluation phase, which was intended for the evaluation of the UCheck and GyroSAT approaches. For this phase we manually pre-selected 384 target cells and modified the implementation to randomly pick target cells only from this set of pre-selected cells. Thus, the probability of the same target cell reappearing to multiple participants increased substantially.
IV. THE DATASET
The above described Labelling Game resulted in a large set of data which we describe in this section.
TABLE II
THE DATASET
<table>
<thead>
<tr>
<th>Description</th>
<th>Pilot phase</th>
<th>Evaluation phase</th>
</tr>
</thead>
<tbody>
<tr>
<td>Total no. of responses</td>
<td>97,526</td>
<td>39,155</td>
</tr>
<tr>
<td>Total no. of responses used for study</td>
<td>77,169</td>
<td>30,728</td>
</tr>
<tr>
<td>Total no. of target cells in study</td>
<td>26,497</td>
<td>355</td>
</tr>
<tr>
<td>Total no. of participants</td>
<td>3040</td>
<td>1183</td>
</tr>
</tbody>
</table>
A. Description
Table II gives an overview of the data, divided into the Pilot phase and the Evaluation phase. The total numbers of user-generated responses in the Pilot and Evaluation phases were 97,526 and 39,155 respectively. However, for our analysis we had to discard 21.3% and 21.5% of the responses from the two phases respectively, for the following reasons:
1) For our study we analyze the originating spreadsheets from the EUSES corpus, which was not possible for all the cases due to technical limitations, and thus 7.5% and 7.8% of responses in each phase were discarded.
2) The Labelling Game gives the participants the choice to decline to identify a label by ‘skipping’ a challenge. Since for our present study we wanted to focus on positively identified labels, we decided to discard such ‘skipped’ responses, amounting to 3.8% and 1.9% from the two phases respectively. However, for a future study this subset of responses may be a good candidate for further investigation into understanding what makes it difficult for users to identify labels.
3) To lower the threshold for participating in the game, we did not request the identities of the participants, and therefore for practical purposes we decided to treat IP addresses as unique identifiers for unique participants. In some of the cases the IP address was not obtained, resulting in the removal of 0.5% of responses in the Pilot phase and 1 response in the Evaluation phase.
4) Lastly, in certain cases we observed a single participant identifying an abnormal number of labels for a single target cell. We assumed this type of behavior was due to either misunderstanding the game’s objective, or insincerity on the part of the participant, and therefore we decided to discard responses tied to all instances where a single user had selected more than 10 labels for one target cell, which amounted to 9.5% and 11.7% of responses in the two phases respectively.
The total number of responses used for our present study are therefore 77,169 and 30,728 from the Pilot phase and the Evaluation phase respectively.
The total number of randomly picked target cells occurring in Pilot phase was 26,497. The number of target cells randomly selected from the set of 384 pre-selected target cells in Evaluation phase was 355.
The number of participants in Pilot phase and Evaluation phase were 3040 and 1183 respectively.
B. Top-3 Ranking and Majority Voting in the Evaluation Phase
A concern regarding the evaluation of the UCheck and GyroSAT approaches was to ascertain the ‘correctness’ of the labels identified by the participants. One way to address this is to let multiple participants identify labels for the same target cell and then do a majority analysis on the set of labels for each target cell. In order to achieve this in the Evaluation phase, the target cells were randomly selected by the Labelling Game from a set of 384 pre-selected cells. Consequently, for each target cell T we obtained a set of labels \( L = \{l_1, l_2, l_3, \ldots, l_n\} \) along with their frequencies or votes, based on how many participants selected each of them. Therefore it was possible to rank the labels in L based on the number of votes.
However, the groups of participants who selected each of the labels in L were not disjoint and had overlaps between them. Therefore, we analyzed the Evaluation phase dataset and, for each target cell, retrieved the subset of L, say \( L' = \{l_1', l_2', l_3', \ldots\} \), which more than 50% of unique participants had voted for, or in other words, the subset of L that had a majority. We found that, for the 355 target cells in the Evaluation phase, in over half of the cases three labels sufficed to obtain a majority. Therefore we decided to evaluate the approaches, as described in Section V, on the top-3 voted labels from the Evaluation phase dataset.
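For illustration, ranking the labels for one target cell by the number of distinct participants that voted for them, and keeping the top-3, can be sketched as follows (a simplification of our analysis, with illustrative names):

```haskell
import Data.List (nub, sortBy)
import qualified Data.Map as Map
import Data.Ord (comparing, Down(..))

type Participant = String
type Label       = String

-- For one target cell: count how many distinct participants selected each
-- label, rank the labels by vote count, and keep the three most voted ones.
top3 :: [(Participant, Label)] -> [Label]
top3 responses = take 3 (map fst ranked)
  where
    votes  = Map.fromListWith (+) [ (l, 1 :: Int) | (_, l) <- nub responses ]
    ranked = sortBy (comparing (Down . snd)) (Map.toList votes)
```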
C. User Perception of Labels
We gathered insights about how users identify labels from the dataset we obtained, described in the previous section. These insights could be used to improve existing, or develop new, metadata extraction approaches. First, we obtained a characterization of the type of words that frequently get identified as labels. Second, we obtained knowledge about the spatial localization of identified labels.
1) Frequently used words as labels: In Table III, we show the 25 most frequently occurring words across multiple spreadsheets that users have identified as labels. This data comes from the Pilot phase where, as shown in Table II, the number of target cells was much larger, and consequently came from a larger variety of spreadsheets than in the Evaluation phase. This gives an idea of the words that are commonly identified as labels across the 443 EUSES corpus spreadsheets that occurred in the Pilot phase dataset. We observe that:
1. Numbers are often identified as labels or metadata. This is a key finding, as the automatic approaches usually neglect this aspect; for example, in the UCheck approach the Content-Based classification strategy classifies cells containing numbers as data.
2. Some common words, marked in bold in Table III, like Title, Description, Total, year, Name, Year, 2000, 2001, often get identified as labels. A probable way to use this insight could be the creation of a library of common terms to train the automatic extraction approaches, which none of the current extraction approaches does.
2) Role of Row and Column in Labelling: An important aspect to investigate is the spatial location of the labels chosen by the users, as this can give automatic approaches clues about where to search with more emphasis when attempting to extract labels from spreadsheets. This concept is already exploited in both the UCheck and GyroSAT approaches. In Section II.C.3 we have reiterated how the two approaches assume that labels can be found in tuples of two, one coming from the row and one coming from the column on which the target cell is located. However, this assumption in both approaches was not based upon empirical observation, or validated against user inputs, and thus we wanted to investigate it through analysis of our large dataset from the Labelling Game.
Since both the approaches use this concept as an assumption, we wanted to compare the insight we obtained with the results of the evaluation of the approaches. Hence, we used the set of top-3 voted responses from the Evaluation phase for this analysis, which is the same set used for evaluation of the approaches as described in the next section (Section V) of the paper.
As seen in Figure 5, 38.5% and 44.5%, i.e. 83% in total, of the set of top-3 labels identified by users occur in the same row and same column respectively, as that of their corresponding target cells.
In Section II.C, we have observed how both the UCheck and GyroSAT approaches already use this insight as an assumption, since both try to retrieve labels in pairs, one coming from the row and one coming from the column. This result thus validates that assumption against user perception of labels. However, it does not validate the actual results the approaches yield, which are the combined outcome of all their assumptions and strategies, and which we examine in the next section (Section V).
V. EVALUATION
After having investigated how users perceive labels in spreadsheets, we turn our attention to how the two existing approaches perform compared to users.
A. Accuracy Measures
We compare the two approaches by calculating precision and recall. As explained in Section IV.B, since in over half of the cases in the Evaluation phase three labels sufficed to obtain the majority vote, we decided to calculate precision and recall against the top-3 voted answers for each target cell.
In other words, we are calculating whether the two approaches correctly identify the most popular three labels selected by users. The results are shown in Table IV.
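While the formulas are not spelled out above, precision and recall against the top-3 labels are presumably computed per target cell in the usual way and then averaged over all target cells: with \( E(T) \) the set of labels extracted by an approach for target cell T and \( U_3(T) \) the set of top-3 user-voted labels for T,

\[ \mathit{precision}(T) = \frac{|E(T) \cap U_3(T)|}{|E(T)|}, \qquad \mathit{recall}(T) = \frac{|E(T) \cap U_3(T)|}{|U_3(T)|}. \]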
1) Precision: As seen in Table IV, both the UCheck and GyroSAT approaches perform fairly well in terms of Precision, with average precision of 70% and 71% respectively.
A possible explanation of this can be found in the fact that both the approaches assume rectangular table structures and assume that labels can be found in tuples of two, one coming from the column and one coming from the row. Both the approaches, thus, usually retrieve at most two labels per target cell and therefore evidently are fairly good in terms of precision as compared to users. This is also partly explained by the result (Figure 5) that 83% of the top-3 user selected labels occur in either the same row or the same column of their corresponding target cell.
2) Recall: The value of Recall measures how many of the top-3 labels are selected by the approaches for each target cell. The average recall is calculated over the whole dataset. As seen in Table IV, the UCheck and GyroSAT approaches do not perform well in terms of Recall, with average recall of 34% and 45% respectively. This result indicates that the approaches are not sufficiently capable of retrieving labels, compared to users, when applied to a large dataset comprising real-life spreadsheets. A question that arises at this point is: even though the approaches assume that labels are found either in the same row or column as the target cell, which is a valid assumption based on Figure 5, why are they still unable to achieve a higher average recall? The answer lies in the fact that the approaches retrieve labels from within the innermost immediately enclosing data blocks in which the target cell is contained, and do not travel across the boundaries of the innermost data block in search of labels. Yet our results show, as detailed in Section V.B.2, that nested block structures which share a common set of labels across all the blocks are quite common, and are a type of spreadsheet structure that largely hinders the performance of both the UCheck and GyroSAT approaches. Further information follows in the next subsection, where we explore in detail what types of spreadsheet structures pose difficulties for the approaches.
B. Performance vs. Spreadsheet Structures
In the previous subsection, we evaluated the two approaches in general terms of their accuracy. In this subsection we zoom in further, and explore two different types of spreadsheet structures: those where the approaches perform well, and those where they perform poorly. We are not interested in investigating cases where one approach performs better than the other, as we believe that in such cases one of them has already overcome the other's shortcomings, and by combining the two approaches such scenarios can be effectively tackled.
1) Structure Type-I: Both the approaches perform well: Table V shows the top 5 files on which the UCheck approach performed best. We see that on all of them the GyroSAT approach has also fared similarly well, and has performed better in 4 out of the 5 cases. Figure 6 shows the top one in the list, 02YEFinSAMPLE.xls, and we see that it has a relatively simple structure, similar to the spreadsheet shown in Figure 1, with only one table, no nested data blocks, and no hierarchical headers.
<table>
<thead>
<tr>
<th>Spreadsheet Filename</th>
<th>UCheck Match Percentage</th>
<th>GyroSAT Match Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>02YEFinSAMPLE.xls</td>
<td>60.82%</td>
<td>69.22%</td>
</tr>
<tr>
<td>free-excel-tutorial.xls</td>
<td>58.43%</td>
<td>48.34%</td>
</tr>
<tr>
<td>Brocade%20OSF%20Comments.xls</td>
<td>51.91%</td>
<td>72.13%</td>
</tr>
<tr>
<td>databaseleonerev.xls</td>
<td>46.63%</td>
<td>55.74%</td>
</tr>
<tr>
<td>bb5-list.xls</td>
<td>45.93%</td>
<td>59.28%</td>
</tr>
</tbody>
</table>
The approaches perform well with spreadsheets having:
- only one table per sheet
- no nested data blocks
- no hierarchical headers

Therefore, in order to create spreadsheets from which automatic extraction of metadata is easier, users can try to adhere to the above-mentioned characteristics.
2) Structure Type-II: Both the approaches perform poorly: Table VI shows the 5 spreadsheet files on which both the approaches performed poorly.
<table>
<thead>
<tr>
<th>Spreadsheet Filename</th>
<th>UCheck Match Percentage</th>
<th>GyroSAT Match Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>2003FinalPopAgeStruct#A857A.xls</td>
<td>0.00%</td>
<td>6.59%</td>
</tr>
<tr>
<td>amendment2SectionJ01a.xls</td>
<td>0.00%</td>
<td>6.78%</td>
</tr>
<tr>
<td>Funded%20-%20February#A835C.xls</td>
<td>9.41%</td>
<td>2.25%</td>
</tr>
<tr>
<td>lesson%20planner-soli#A840C.xls</td>
<td>12.33%</td>
<td>0.00%</td>
</tr>
<tr>
<td>DCMA.xls</td>
<td>0.00%</td>
<td>13.26%</td>
</tr>
</tbody>
</table>
We see that, except for one, either one of the approaches has completely failed to retrieve results for the rest. Thus for illustration we choose Funded%20-%20February#A835C.xls, on which both approaches have managed to retrieve results, but very poorly. As shown in Figure 7, we see that this spreadsheet has similarities with Figure 2 and is characterized by nested vertical data blocks, blank rows separating the blocks, and all the vertical blocks sharing one single set of column headers.
The approaches perform poorly with spreadsheets having:
- repeated or nested vertical blocks
- blank rows used to separate the blocks
- vertical blocks all sharing the same column headers
The poor performance of the approaches on spreadsheets having the above characteristics stems from the fact that both approaches depend heavily on the determination of block structures. The UCheck approach uses the fence identification strategy and the GyroSAT approach follows the determination of data blocks based on connected cells. When such blocks are identified, the approaches look for labels on the borders of the blocks, or move outwards from the target cell looking for labels until the border is reached. However, as shown in Figure 7, several vertical blocks are repeated, yet they share the same column headers, acting as metadata, at the top of the spreadsheet. This set of metadata at the top is missed by the extraction approaches when the target cell is located in any block below the first one from the top. This is also the reason why, as reported in Section IV.C.2, despite the assumption that labels occur in the same row or same column being valid with respect to user perception, the approaches still fail to perform reliably: the emphasis they put on the immediately surrounding data blocks stops their search at the boundaries of such blocks. Therefore, for improvement, automatic approaches need to overcome the challenge imposed by such block structures with shared metadata across their boundaries.
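One possible direction suggested by this observation (a sketch of our own, not an algorithm evaluated in this paper) is to let the label search continue past block boundaries, skipping blank rows and other non-label cells until a label is found higher up in the same column:

```haskell
data Kind = LabelK | DataK | FormulaK | EmptyK deriving Eq

-- A relaxed column-label search: instead of stopping at the boundary of the
-- innermost data block, keep walking upwards from the target cell, skipping
-- Data, Formula and Empty cells (including the blank rows separating nested
-- blocks), until a Label cell is reached.
sharedColumnLabel :: (Int -> Kind)  -- cell kind at (row, fixed column)
                  -> Int            -- row of the target cell
                  -> Maybe Int      -- row of the first Label cell above it
sharedColumnLabel kindAt target = go (target - 1)
  where
    go r
      | r < 1              = Nothing
      | kindAt r == LabelK = Just r
      | otherwise          = go (r - 1)
```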
VI. Research Questions Revisited
After the analysis of the dataset in Section IV, and the evaluation of the UCheck and GyroSAT approaches in Section V, in this section we revisit our research questions and reflect on the answers.
RQ1: How do users perceive and identify metadata in spreadsheets?
From the results presented in Section IV.C, we can conclude that the way users identify metadata in spreadsheets can be characterized.
Firstly, as shown in Section IV.C.1, we note that numbers are often identified as metadata, a fact which the automatic approaches tend to overlook, as for example the UCheck approach classifies cells containing numbers as data cells based on its Content-Based Classification strategy.
We also observe that certain generic words like *Title*, *Description*, *Total*, *year*, *Name*, *Year*, 2000, 2001 frequently get identified as metadata across multiple spreadsheets. Since it is difficult for automatic approaches to derive any semantic information from the contents of a spreadsheet, this finding can prove useful if a library of words frequently used as metadata is created. For example, approaches could classify cells containing such words as metadata with a higher level of confidence, when using the confidence level technique of the UCheck approach (Section II.C.1). Such libraries can also be created for domain-specific terminology to make them more fine grained.
Secondly, as shown in Section IV.C.2, a large majority (83\%) of the top-3 labels identified by the users are located in the same row or same column as that of their corresponding target cells. This characteristic is however already utilized as both the UCheck and GyroSAT approaches assume this, and their assumptions are thus validated compared to users.
RQ2: How well do two existing automatic approaches perform compared to the users?
From the results shown in Section V.A, we observe that the UCheck and GyroSAT approaches have average precision of 70% and 71%. We also observe that they have average recall of 34% and 45% respectively. From these results we can conclude that although the approaches are fairly precise, they are not practically reliable in terms of retrieval capability. To be reliably used for documentation generation, a higher recall value is desired. Since both approaches limit the number of metadata retrieved to 2, one proposition could be to raise this limit to higher values, accepting the risk of decreasing precision, which is already fairly high.
RQ3: In what type of spreadsheets do both the approaches perform well, and in what type of spreadsheets they have difficulties identifying metadata as compared to the users?
As shown in Section V.B.1, we observe that for relatively simple spreadsheet structures with one single table per sheet the approaches perform well. However, as shown in Section V.B.2, for complex spreadsheet structures the approaches fail to perform well. It is observed that nested block structures that share a common set of metadata pose a problem for the approaches, as they only search for metadata within their innermost enclosing data blocks. Therefore it is necessary to develop algorithms that do not limit their search to the boundaries of the innermost enclosing data blocks, but can traverse across boundaries in order to reach the borders of the outermost block or table as well.
VII. Related Work
There are several works related to this research direction. The two approaches under consideration, UCheck [4], [6] and GyroSAT [2], are the most closely related; for a more extensive overview, see Section II.C.1.
Furthermore there is our own work on the extraction of class diagrams [3] and dataflow diagrams [2]. Cunha *et al.* also worked on extracting information from spreadsheets, with the goal of transforming them into relational databases [10].
Specifically focusing on the spreadsheets made by scientists, de Vos *et al.* [11] have designed a methodology to extract ontologies in the form of class diagrams from spreadsheets. While their described method is currently manual, they state it could be automated in the future, leading to an interesting new test set for our current work.
Most related is the work by Chatvichienchai, who proposed a spreadsheet layout based metadata extraction approach [5], [12] for the purpose of searching spreadsheets over the web or in document repositories. While the approach shares the goal of extracting metadata, their overarching goal is to return better search results of relevant spreadsheets, causing their metadata to be more high-level than ours.
Chen et al. too presented an approach for extracting information from spreadsheets on the web [13] with the goal of integrating spreadsheets with relational database management systems. This approach also performs metadata extraction, but only supports spreadsheets with a simple, data frame structure.
While their goals differ, these two final approaches are interesting related works, and in future work we plan to study these too, as they also have not been evaluated against a large number of user responses.
Another group of related works concerns the usage of MOOC data by researchers. Vihavainen et al. used data collected from MOOC participants to successfully introduce techniques for improving participant approval and engagement in a MOOC on programming [14]. Huang et al. used data collected from MOOCs to understand the behavior of students with an increased inclination to post in the MOOC forums [15]. However, this type of research is intended to utilize MOOC data for the field of education and online education. In this paper, we have used MOOC data to address the need for large-scale user participation in the context of empirical software engineering research.
VIII. DISCUSSION
A. Covering Other Approaches of Metadata Extraction
In this paper we evaluated two existing approaches for metadata extraction from spreadsheets. We also created a user generated benchmark to evaluate approaches on. Subsequently, we can evaluate other approaches like [13] against this benchmark as discussed in Section VII.
B. Threats to Validity
1) Threats to External Validity: A threat to the external validity of our results concerns the representativeness of the EUSES corpus [9]. However, it is a large set, the spreadsheets have been collected from practice, and it has been used in several works of spreadsheet research [16]. In his work, Jansen [17] shows that the EUSES corpus is also similar to the more recent ENRON corpus [18], which is a collection of spreadsheets obtained from the e-mail archives of Enron Corporation, disclosed during the trials related to its bankruptcy.
2) Threats to Internal Validity: A threat to internal validity of our results is caused by the manual pre-selection of target cells used for the Labelling Game, during the Evaluation phase. However, completely random pre-selection of target cells results in irrelevant cells being selected, for example blank or empty cells, for which participants tend to ‘skip’ answering. Thus to obtain more meaningful results, this was a necessary trade-off we opted for.
IX. CONCLUDING REMARKS
The objective of this work is to understand how users identify spreadsheet metadata, and how two existing approaches perform compared to the users. The goal is to assess if the approaches can be reliably used as an initial step in automatic generation of documentation from spreadsheets.
In this paper, we have described an experimental setup which consists of an online game included as part of a MOOC. From the large resulting dataset consisting of responses from the MOOC participants, we have learned how users identify spreadsheet metadata, and obtained insights that could be used to improve or develop automatic metadata extraction approaches. In addition, we have performed an evaluation of two existing metadata extraction approaches on the dataset. We observe that the UCheck and GyroSAT approaches to spreadsheet metadata extraction yield average Precision of 70% and 71%, and average Recall of 34% and 45%, over the whole dataset, indicating that they need to be improved further in order to be practically reliable. Specific types of spreadsheet structures pose challenges to both approaches, like nested block structures sharing the same set of metadata, and data blocks separated by blank rows. The results also show that the identification of metadata by users is characterized by traits or patterns. For example, groups of commonly used words, and data located in specific positions of tables inside spreadsheets - like column headers and row headers - frequently get identified as metadata by users.
For future work, using all the results and insights obtained from this paper, we aim to develop a spreadsheet metadata extraction approach that can yield better recall compared to the baseline we have obtained for the UCheck and GyroSAT approaches in this study. Addressing the problem of nested block structures, and using a library of frequently used terms as labels for training our extraction approach, are two directions we would like to explore next, towards our ultimate goal of automatic generation of documentation from spreadsheets.
REFERENCES
Functional Programming
Olaf Chitil
University of Kent, United Kingdom
Abstract
Functional programming is a programming paradigm like object-oriented programming and logic programming. Functional programming comprises both a specific programming style and a class of programming languages that encourage and support this programming style. Functional programming enables the programmer to describe an algorithm on a high-level, in terms of the problem domain, without having to deal with machine-related details. A program is constructed from functions that only map inputs to outputs, without any other effect on the program state. Thus a function will always return the same output, regardless of when and in which context the function is used. These functions provide clear interfaces, separate concerns and are easy to reuse. A small and simple set of highly orthogonal language constructs assists in writing modular programs.
1 Introduction
Functional programs are written by composing expressions that can have values of any type, including functions and large unbounded data structures. The functional programming paradigm avoids the complications of imperative programming language features such as mutable variables and statements in favour of a small set of highly orthogonal language constructs. The simplicity of the functional computation model assists in writing modular programs, that is, programs that separate concerns, reuse code and provide clear interfaces between modules.
A functional programming language encourages and supports the functional programming style. Languages such as Haskell [1, 2] and Clean [3] are called purely functional. Many widely used functional languages such as Lisp [4], Scheme [5, 6], ML [7, 8] and Erlang [9] still include imperative language features that conflict with functional programming ideals. Other languages such as APL and Python include constructs that support functional programming but are not considered functional programming languages, because they do not strongly encourage a functional programming style. The programming language of the Mathematica system and the XML transformation language XSLT are functional languages. Popular spreadsheet languages such as Microsoft Office Excel are restricted functional languages.
Like logic programming, functional programming is a declarative programming paradigm. This paradigm comprises programming on a high level, expressing tasks directly in terms of the problem domain, without having to deal with implementation details such as memory allocation. Nonetheless functional programming is not about specifying a problem without knowing a constructive solution; instead functional programming allows the programmer to describe algorithms without distraction by unnecessary, machine-related details.
Historically functional languages have been used intensively for artificial intelligence and symbolic computations. More generally, functional languages are often chosen for rapid prototyping and the implementation of complex algorithms, possibly working on complex data structures, that are hard to “get right” in other programming paradigms.
2 Characteristic Features
Functional programming is characterised by a small set of language and programming style features.
2.1 Expressions have values — no side-effects
In imperative programming every procedure or method is defined by a sequence of statements. A computation consists of sequential execution of one statement after the other; each statement changes the global state of the computation, that is, changes the values of variables, writes output or reads input. Such changes of a global state are called side-effects.
In contrast, a functional program consists of a set of function definitions. Each function is defined by an expression. Expressions are formed by applying functions to other expressions; otherwise constants, variables and a few special forms are also expressions. Every expression has a value. A computation consists of determining the value of an expression, that is, evaluating it. This evaluation of an expression only determines its value and has no side-effects. A variable represents a fixed value, not a memory location whose content can be modified.
Unless otherwise stated, all examples in this chapter will be written in Haskell [1, 2], a purely functional programming language. The following program defines two functions, isBinDigit and max. Each definition consists of a line declaring the type of the function, that is, its argument and result types, and the actual definition is given in the form of a mathematical equation. The function isBinDigit takes a character as argument and decides whether this character is a binary digit, that is, whether it is the character ‘0’ or (operator ||) the character ‘1’. The function max takes two integer arguments and returns the greater of them. In contrast to imperative languages, the conditional if then else is an expression formed from three expressions, here $x > y$, $x$ and $y$, not a statement with statements after then and else.
```haskell
isBinDigit :: Char -> Bool
isBinDigit x = (x == '0') || (x == '1')

max :: Integer -> Integer -> Integer
max x y = if x > y then x else y
```
In standard mathematics and most (also functional) programming languages the arguments of a function are surrounded by parentheses and separated by commas. In Haskell, however, functions and their arguments are separated by blanks. So isBinDigit 'a' evaluates to False and max 7 4 evaluates to 7, whereas max (7,4) is not a valid expression. Parentheses are needed to group subexpressions; for example, max 7 (max 4 9) evaluates to 9 whereas max 7 max 4 9 is an invalid expression. This syntax is convenient when higher-order functions are used (Section 2.4).
2.2 Iteration through recursion
Functional programming disallows or at least discourages the modification of the value of a variable. So how shall an iterative process be programmed? Imperative languages use loops to describe iterations. Loops rely on mutable variables so that both the loop condition changes its value and the desired result is accumulatively obtained. For example, the following program in the imperative language C computes the product of all numbers from 1 to a given number n.
```c
int factorial(int n) {
    int res = 1;
    while (n > 1) {
        res = n * res;
        n = n - 1;
    }
    return res;
}
```
Functional programming implements iteration through recursion. The following functional program is a direct translation of the C program. The iteration is performed by the recursively defined function facWork, that is, the function is defined in terms of itself, it calls itself.
```haskell
factorial :: Integer -> Integer
factorial n = facWork n 1
facWork :: Integer -> Integer -> Integer
facWork n res = if n > 1 then facWork (n-1) (n*res) else res
```
In every recursive call of the function facWork the two parameter variables have new values; the value of a variable is never changed, but every function call yields a new instance of the parameter variables. The parameter variable res is called an accumulator. In general a parameter variable is an accumulator if in recursive calls it accumulates the result of the function, which is finally returned by the last, non-recursive call.
The evaluation of an expression can be described as a sequence of reduction steps. In each step a function is replaced by its defining expression, or a primitive function is evaluated:
```
factorial 3
= facWork 3 1
= if 3 > 1 then facWork (3-1) (3*1) else 1
= if True then facWork (3-1) (3*1) else 1
= facWork (3-1) (3*1)
= facWork 2 (3*1)
= facWork 2 3
= if 2 > 1 then facWork (2-1) (2*3) else 3
= if True then facWork (2-1) (2*3) else 3
= facWork (2-1) (2*3)
= facWork 1 (2*3)
= facWork 1 6
= if 1 > 1 then facWork (1-1) (1*6) else 6
= if False then facWork (1-1) (1*6) else 6
= 6
```
For the same expression several different reduction step sequences exist, as Section 4 will show, but all finite sequences yield the same value.
In imperative languages programmers usually avoid recursion because of its high performance costs, including its use of space on the runtime stack. The lack of side-effects enables compilers for functional programming languages to easily translate simple recursion schemes as present in `facWork` into code that is as efficient as that obtained from the imperative loop.
The following is a simpler definition of the `factorial` function that does not use an accumulator. It resembles the common mathematical definition of the function.
```
factorial :: Integer -> Integer
factorial n = if n > 1 then factorial (n-1) * n else 1
```
### 2.3 Data structures
Functional programming languages directly support unbounded data structures such as lists and trees. Such data structures are first-class citizens, that is, they are used like built-in primitive types such as characters and numbers. They do not require any explicit memory allocation or indirect construction via pointers or references.
A list is a sequence of elements, for example `[4,2,2,5]`. It can have any length. In statically typed languages all elements must be of the same type; `[Integer]` is the type of lists whose elements are of type `Integer`. The function `enumFromTo` constructs a list:
```haskell
enumFromTo :: Integer -> Integer -> [Integer]
enumFromTo m n = if m > n then [] else m : enumFromTo (m+1) n
```
The value of the expression `enumFromTo 3 7` is the list of integers [3,4,5,6,7]. In the function definition [] denotes the empty list and : is an operator that combines a value and a list into a list, such that the value is the first element. [] and : are constants and operators for lists, similar to 0 and + for numbers.
The list is the most frequently used data structure in functional programming. Lists can be used for representing many other data structures such as sets and bags. Lists are also frequently used as intermediate data structures that replace and modularise iterative processes:
```haskell
factorial :: Integer -> Integer
factorial n = product (enumFromTo 1 n)
```
This definition expresses clearly that the factorial of n is the product of the numbers from 1 to n. Both functions `product` and `enumFromTo` have clear meanings and are likely to be reused elsewhere. Some optimising compilers will remove the intermediate list and produce the same efficient code as for the imperative loop (cf. Chapter 7.6 of [2]).
In several functional languages the definition of some tree-structured data type looks similar to a context free grammar:
```haskell
data Expr = Val Bool
| And Expr Expr
| Or Expr Expr
```
`Expr` is a new type whose values are built from the **data constructors** `Val`, `And` and `Or`. Hence `And (Val True) (Or (Val False) (Val True))` is an expression that constructs the syntax tree of `True && (False || True)`.
Many functional languages also provide **pattern matching** as a mechanism that simultaneously tests the top data constructor of a value and gives access to its components:
```haskell
eval :: Expr -> Bool
eval (Val b) = b
eval (And e1 e2) = eval e1 && eval e2
eval (Or e1 e2) = eval e1 || eval e2
```
So the value of `eval (And (Val True) (Or (Val False) (Val True)))` is `True`. Data structures as first-class citizens and pattern matching together enable clear and succinct definitions of complex algorithms on unbounded data structures, for example standard algorithms on balanced ordered trees [10]. Functional programming encourages the programmer to view a large data structure as a single value instead of concentrating on its many constituent parts.
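As a further illustration (our own example, not from the original text), pattern matching makes it just as easy to define other functions over the same data type, for instance counting the Boolean constants in a formula:

```haskell
-- Count the number of Boolean constants (Val nodes) in an expression tree.
size :: Expr -> Int
size (Val _)     = 1
size (And e1 e2) = size e1 + size e2
size (Or  e1 e2) = size e1 + size e2
```

For example, `size (And (Val True) (Or (Val False) (Val True)))` evaluates to 3.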
Besides data constructors and variables, patterns may also contain values of built-in types such as numbers. If the patterns of several defining equations overlap, then the first matching equation defines the function result. In the next definition of the function `factorial` the first equation defines the result value for the argument 0 and the second equation defines it for all other arguments.
```haskell
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n-1)
```
Because functional languages are often used for symbolic computations, many functional languages provide a large set of numeric types, including arbitrary size integers, rationals and complex numbers, and aim for precise and efficient implementations of basic numeric operations.
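For instance (an illustration of ours, not from the original text), Haskell's built-in Integer type has arbitrary size, so exact results far beyond the machine word range can be computed directly:

```haskell
-- 2 to the power of 100, computed exactly with arbitrary-size integers.
big :: Integer
big = 2 ^ 100   -- evaluates to 1267650600228229401496703205376
```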
### 2.4 Higher-order functions
In functional programming functions are first-class citizens. The value of an expression may be a function and functions can be passed as arguments to other functions and returned as results from functions. A function that takes another function as argument or that returns a function is called a **higher-order function**.
A standard higher-order function is the function `map`:
```haskell
map :: (a -> b) -> [a] -> [b]
```
It takes a function and a list as arguments and applies the function to all list elements, returning the list of the results. For example, the value of `map even [1,2,3,4]` is `[False,True,False,True]`. The type variables `a` and `b` in the type of `map` will be discussed in Section 3.
Even though the function `map` is usually defined recursively and hence iteratively consumes its argument list and produces its result list, the programmer can view a higher-order function such as `map` as processing a whole large data structure in a single step.
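For reference, the usual recursive definition of `map` is:

```haskell
map :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = f x : map f xs
```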
Many traditional imperative programming languages such as C also allow passing
functions as arguments and results, and hence the definition of a higher-order function
such as `map`, but the definition of new functions through composing existing functions is
rather limited. For example, such limitations make it impossible to define the function
`scale` by composing the existing functions `map` and `*` (or a multiplication function; in
C operators are different from functions). Here the function `scale` shall take a list of
numbers (e.g. prices) and multiply all of them by the same given factor; for example,
`scale 1.25 [2,0,4]` yields `[2.5,0,5]`. The difficulty in defining the function `scale` in
terms of `map` and `*` lies in that the function to be mapped over the list is not `*`, which
requires two arguments, but a function that takes only one argument and multiplies it
with the given factor. A functional language provides at least one of two ways of solving
this task:
```haskell
scale :: Float -> [Float] -> [Float]
scale factor prices = map scaleOne prices
  where
    scaleOne :: Float -> Float
    scaleOne p = factor * p
```
The preceding, first solution defines a function `scaleOne` locally, so that the local definition can use the variable `factor`, because it is in scope. The second definition below builds the function that is to be mapped over the list by partially applying the function `(*)` to one argument. So `(*) factor` is an expression denoting the function that takes one argument and multiplies it with `factor`.
```haskell
scale :: Float -> [Float] -> [Float]
scale factor prices = map ((*) factor) prices
```
Essential for functional programming is not just the presence of higher-order functions, but also language support for composing arbitrary functions, so that an unbounded number of new functions can be generated at execution time.
Many higher-order functions are included in the definitions of functional languages or their standard libraries. For lists, besides the function map, the function foldr (or reduce) is the most commonly used higher-order function. This function combines all list elements with a given binary function, using a given constant for processing the empty list:
```haskell
product :: [Integer] -> Integer
product xs = foldr (*) 1 xs
```
So
```
product [3,2,4]
= 3 * (2 * (4 * 1))
= 24
```
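For reference, `foldr` itself has a short recursive definition; the following is the standard textbook version (the Prelude's own `foldr` is hidden so the sketch stands alone):

```haskell
import Prelude hiding (foldr)

-- Combine all list elements with f, starting from the constant z for the empty list.
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f z []     = z
foldr f z (x:xs) = f x (foldr f z xs)
```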
Although our examples only show higher-order functions that take simple (first-order) functions as arguments, functions that take functions as arguments which take functions as arguments and so forth are used frequently [11].
### 2.5 Point-free programming
There exists a shorter definition of the function product as the functional value of the expression foldr (*) 1:
```haskell
product :: [Integer] -> Integer
product = foldr (*) 1
```
The factorial function can also be defined using the function composition operator (.):
```haskell
factorial :: Integer -> Integer
factorial = product . (enumFromTo 1)
```
Expressions or function definitions without argument variables are called point-free. Often they are shorter and simplify program transformation, but they can be harder to understand and to modify. Most functional programs are written in a mixture of point-free and “point-full” style.
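For example, the scale function from Section 2.4 can also be written point-free by composing map with the multiplication operator; this small sketch reuses the earlier type signature:

```haskell
-- Equivalent to: scale factor prices = map ((*) factor) prices
scale :: Float -> [Float] -> [Float]
scale = map . (*)
```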
### 2.6 Embedded Domain Specific Languages
Identifying the right abstractions is a key component of designing a program. In functional programming the reuse of existing, mostly higher-order functions or, especially for new data structures and problem domains, the identification of new higher-order functions is central. Several functional programming languages such as Lisp [4] and Scheme [5, 6] also provide an elaborate macro mechanism for extending the language by new constructs. Thus the design of a solution to a problem and especially the design of a general library for a problem domain often leads to the design of an embedded domain specific language. This is a collection of higher-order functions or new language constructs that together substantially simplify programming solutions in a given domain. The embedded language hides domain specific algorithms and data structures behind an easy-to-use interface.
As a simple example for an embedded domain specific language the following interface outlines an embedding of propositional logic. The implementation of the type of propositional formulae, Formula, is hidden.
```haskell
true :: Formula
false :: Formula
variable :: String -> Formula
(&) :: Formula -> Formula -> Formula
(|) :: Formula -> Formula -> Formula
negate :: Formula -> Formula
satisfiable :: Formula -> Bool
tautology :: Formula -> Bool
```
Logical formulae can be constructed and checked for whether they are satisfiable or even tautologies. For example, `tautology (negate (variable "a") | variable "a")` yields `True`.
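As an illustration of what might sit behind such an interface, the following is a minimal sketch of one possible implementation; it is not taken from a particular library, the disjunction operator is written `(\/)` because a bare `|` is not a legal Haskell operator name, and satisfiability is decided by naive enumeration of all assignments:

```haskell
import Prelude hiding (negate)
import Data.List (nub)

-- One possible hidden representation of propositional formulae.
data Formula = T | F | Var String
             | And Formula Formula | Or Formula Formula | Not Formula

true, false :: Formula
true  = T
false = F

variable :: String -> Formula
variable = Var

(&), (\/) :: Formula -> Formula -> Formula   -- (\/) stands in for the (|) of the interface
(&)  = And
(\/) = Or

negate :: Formula -> Formula
negate = Not

-- Free variables of a formula.
vars :: Formula -> [String]
vars (Var v)   = [v]
vars (And p q) = nub (vars p ++ vars q)
vars (Or p q)  = nub (vars p ++ vars q)
vars (Not p)   = vars p
vars _         = []

-- Evaluate a formula under an assignment of truth values to variables.
eval :: [(String, Bool)] -> Formula -> Bool
eval _   T         = True
eval _   F         = False
eval env (Var v)   = maybe False id (lookup v env)
eval env (And p q) = eval env p && eval env q
eval env (Or p q)  = eval env p || eval env q
eval env (Not p)   = not (eval env p)

-- Naive satisfiability: enumerate all assignments to the free variables.
satisfiable :: Formula -> Bool
satisfiable f = any (`eval` f) (assignments (vars f))
  where
    assignments []     = [[]]
    assignments (v:vs) = [ (v,b) : env | b <- [False, True], env <- assignments vs ]

tautology :: Formula -> Bool
tautology f = not (satisfiable (negate f))
```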
More complex are the widely studied and used embedded domain specific languages for describing parsers through context-free grammars. The following example is a simple parser for fully bracketed Boolean expressions, using Swierstra’s parser interface [12].
```haskell
pExpr :: Parser Expr
pExpr =     Val True  <$ pStr "True"
        <|> Val False <$ pStr "False"
        <|> And <$ pSym '(' <*> pExpr <* pStr "&&" <*> pExpr <* pSym ')'
        <|> Or  <$ pSym '(' <*> pExpr <* pStr "||" <*> pExpr <* pSym ')'
```
The definition of the parser pExpr looks like a context-free grammar. The operator `<|>` combines alternative parsers. The operators `<*>` and `<*` concatenate two parsers. Only `<$` does not relate to a construct of a context-free grammar; it turns a function for constructing the desired result into a parser and concatenates it with another parser. All operators associate to the left. For the operators `<*` and `<$` only the left argument yields the parser's result, whereas for the operator `<*>` both arguments contribute to the parser's result. The function pStr constructs a parser that accepts the given string, returning a required but superfluous empty tuple (). Parsing "(True&&(False||True))" with the parser pExpr will yield And (Val True) (Or (Val False) (Val True)).
Simple implementations use backtracking and define the parser type as a function that maps the input string to a list of possible parse results and remaining input:
```haskell
data Parser a = P (String -> [(a,String)])
```
Here a is a type variable as will be discussed in Section 3. More efficient parser implementations use more sophisticated parser representations [13].
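To make the list-of-successes representation concrete, the following is a minimal illustrative sketch of a few of the combinators used above; it is not Swierstra's actual library, and in a real module the operator names would clash with the Prelude's Functor and Applicative operators and would have to be hidden or renamed:

```haskell
-- The Parser type repeated from above.
data Parser a = P (String -> [(a, String)])

-- Accept exactly the given character.
pSym :: Char -> Parser Char
pSym c = P (\inp -> case inp of
                      (x:xs) | x == c -> [(c, xs)]
                      _               -> [])

-- Accept exactly the given string, returning ().
pStr :: String -> Parser ()
pStr s = P (\inp -> if take (length s) inp == s
                      then [((), drop (length s) inp)]
                      else [])

-- Choice: try both alternatives; backtracking comes from keeping all results.
(<|>) :: Parser a -> Parser a -> Parser a
P p <|> P q = P (\inp -> p inp ++ q inp)

-- Sequencing: the left parser yields a function applied to the right parser's result.
(<*>) :: Parser (a -> b) -> Parser a -> Parser b
P pf <*> P px = P (\inp -> [ (f x, rest') | (f, rest) <- pf inp, (x, rest') <- px rest ])

-- Sequencing that keeps only the left result.
(<*) :: Parser a -> Parser b -> Parser a
P p <* P q = P (\inp -> [ (x, rest') | (x, rest) <- p inp, (_, rest') <- q rest ])

-- Replace a parser's result by a fixed value.
(<$) :: a -> Parser b -> Parser a
x <$ P p = P (\inp -> [ (x, rest) | (_, rest) <- p inp ])
```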
An embedding of the logical language Prolog in Haskell is described in [14]. Pretty printing, graphics, simulation and music composition are further domain examples [15].
There is no clear boundary between an abstract data type and an embedded domain specific language, but the latter gives the programmer the illusion of a new programming language for a specific domain. An embedded domain specific language strives to hide some features of the host language, give domain specific compiler errors and enable domain specific debugging.
### 2.7 Program Algebra
Because in pure functional programming evaluation of an expression only determines its value but does not cause any side-effects, functional programs have a rich algebra. For example the law
```
map f . map g = map (f . g)
```
holds for any expressions f and g (whose values must be functions). If the functions f and g modified a common variable, this equation would be unlikely to hold. Hence in imperative programming languages hardly any non-trivial semantic equalities hold. The term *referentially transparent* is often used synonymously with *side-effect free* in functional programming. By definition a language is referentially transparent if a subexpression can be replaced by an equal subexpression without changing the meaning of
the whole expression or program. This, however, is just the definition of what it means
for two subexpressions to be equal. Relevant and useful is that many equations with
arbitrary unknown subexpressions hold, that is, the equational algebra is rich.
Standard higher-order functions such as map and foldr come with well-known laws.
In a new problem domain functional programmers strive for identifying functions with
rich algebraic properties. Such functions are highly versatile and thus reusable.
Program algebra has already been used to describe the evaluation of factorial 3 as
a sequence of single reduction steps. So evaluation can be described within the language,
without any reference to, for example, the memory locations of a computer.
Functional programming cultivates a school of program development by algebraic derivation. The programmer starts with a set of desired properties expressed as equations or a highly inefficient implementation. These are then transformed step by step using equational reasoning until an efficient implementation is obtained. Only using program algebra guarantees that specification and implementation are semantically equal. Reaching an efficient implementation is not automatic but requires the ingenuity of the programmer. However, many strategies and heuristics for deriving programs have been developed [2, 16].
Compilers for functional programming languages use program algebra for optimisations. For example, standard evaluation of map (f . g) is more efficient than the evaluation of map f . map g, because the latter produces an intermediate list. A compiler optimisation may hence replace the latter by the former expression. Compilers usually perform a large number of very simple transformations, but altogether they may change a program substantially [17]. In contrast, optimising compilers for imperative languages require sophisticated analyses to detect side-effects that invalidate most optimisations.
Algebraic laws also prove to be useful for testing. A law such as
```
reverse (reverse xs) = xs
```
is a partial specification of the function reverse, which returns a list with all elements
in reverse order. A correct implementation of reverse should meet this property for all
finite lists xs. A simple tool can automatically test the law for a large number of lists
[18].
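One widely used tool of this kind is the QuickCheck library; assuming it is installed, the law can be written as an executable property:

```haskell
import Test.QuickCheck

-- The law as an executable property over lists of Ints.
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

-- Running this checks the property on 100 randomly generated lists by default.
main :: IO ()
main = quickCheck prop_reverse
```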
In a language without side-effects, program components can be tested separately and
test cases can be set up more easily. Equational properties are both documentation and
expressive test cases. They encourage the programmer to identify functions that meet
non-trivial equational properties.
## 3 Types
Functional programming languages support both avoidance and localisation of program
faults by having strong type systems. The type systems guarantee that all execution
errors such as the application of a function to unsuitable arguments are trapped before
they occur. There exist both functional languages with dynamic type systems (e.g. Lisp,
Erlang) that provide flexibility by performing all type checks at run-time and that often
do not include a fixed syntax for types, and functional languages with static type systems (e.g. ML, Haskell).
Most static type systems of functional languages are based on the Hindley-Milner type system [19]. This type system is flexible in that it allows (parametrically) polymorphic functions and data structures. That is, a function may take arguments of arbitrary type if its definition does not depend on that type. For example, the function `reverse` that takes a list and returns a list of all elements in reversed order has the type `[a] -> [a]`. Here `a` is a type variable that represents an arbitrary type. The function `reverse` can be applied to a list with elements of any type. The re-occurrence of `a` in the type of the result states clearly that the elements of the result list are of the same arbitrary type as the elements of the argument list. A more complex type is that of `map` given before in Section 2.4:
```haskell
map :: (a -> b) -> [a] -> [b]
```
The repeated occurrences of the type variables `a` and `b` clearly state that (a) the type of the argument list elements has to agree with the argument type of the function, (b) the result type of the function has to agree with the type of the result list elements, but these two types can be different, as in the case of `map even [1,2,3,4]` evaluating to `[False,True,False,True]`.
Another feature of the Hindley-Milner type system is that types can be inferred automatically. Hence type declarations such as
```haskell
reverse :: [a] -> [a]
```
are optional and many programmers only add them when program development has stabilised after an initial phase of rapid prototyping.
Several functional languages extend the Hindley-Milner type system substantially. ML [7] is renowned for its expressive module system. Types describe the interfaces of modules, how modules can be combined and how abstract data types can be defined. The Haskell [1, 2] class system uses classes, which are similar to types and reminiscent of the object-oriented paradigm, to describe interfaces of smaller pieces of code than modules (e.g. a few functions that express an ordering) and to enable their combination with little syntactic overhead. OCaml [20, 8] has a subtyping relationship between its class types to enable object-oriented programming. Clean [3] annotates standard types with uniqueness information to express that certain values are used only in a single-threaded way, which enables a form of purely functional input/output (see Section 5) and compilation to more efficient code. Clean also supports generic, also called polytypic, programming. Polytypic language features enable the programmer to define a function by induction on the structure of data types. Like a parametrically polymorphic function such a function works on all types, but its definition depends on the structure of the values. Example applications are pretty printers, parsers and equality functions.
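To make the remark about classes concrete, here is a small illustrative Haskell class describing an ordering interface; the names are invented for the example (the real Haskell class for this purpose is Ord):

```haskell
-- An invented class expressing an ordering interface.
class MyOrd a where
  lessEq :: a -> a -> Bool

data Colour = Red | Green | Blue deriving (Eq, Show)

instance MyOrd Colour where
  lessEq x y = index x <= index y
    where
      index :: Colour -> Int
      index Red   = 0
      index Green = 1
      index Blue  = 2

-- Works for any type in the class, e.g. smallest [Blue, Red, Green] yields Red.
smallest :: MyOrd a => [a] -> a
smallest = foldr1 (\x y -> if lessEq x y then x else y)
```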
Further extensions of type systems in many other directions are a major topic of research. A type describes a property of an expression or a piece of code. Types can describe non-standard properties such as how much time or space evaluation of the
expression will cost (mainly for applications in embedded systems), or whether evaluation
of the expression may raise an exception or cause a side effect. Type inference is then
a form of automatic program analysis [21]. Dependent type systems allow types to be
parameterised not just by other types but also by values. For example, such a type
system can express that a vector addition function takes two vectors of any size and
returns a vector, but the sizes of all these vectors have to be the same. Dependent
type systems realise the Curry-Howard isomorphism which states that types are logical
formulae and the typed expressions are proofs of the formulae. Thus a program and
proofs of its properties can be written within the same advanced programming language.
The type systems of current functional programming languages already allow a limited
amount of dependent typing, usually based on non-trivial encodings of values in types.
The development of functional languages with dependent type systems that are easy
to use is a long-standing research topic [22]. In general most research on type systems
concentrates on functional programming languages with their simple and well-defined
semantics [23].
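As a flavour of the vector-addition example above, sizes can be encoded in types even in current Haskell using a few language extensions; the following sketch is illustrative only, and the names Nat, Vec and vadd are not taken from any particular library:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers at the type level, encoding vector lengths.
data Nat = Zero | Succ Nat

-- A vector indexed by its length n.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- Element-wise addition; the type checker rejects vectors of different lengths,
-- e.g. vadd (Cons 1 Nil) (Cons 1 (Cons 2 Nil)) does not type-check.
vadd :: Num a => Vec n a -> Vec n a -> Vec n a
vadd Nil         Nil         = Nil
vadd (Cons x xs) (Cons y ys) = Cons (x + y) (vadd xs ys)
```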
## 4 Non-Strict vs. Strict Semantics
A function is strict, if its result is undefined (error or evaluation does not terminate)
whenever any of its arguments is undefined. Like imperative languages many functional
programming languages (e.g. Lisp, ML, Erlang) have a strict semantics, that is, allow
only the definition of strict functions. This follows directly from their eager evaluation
order: in a function application first the arguments are fully evaluated and then the
function is applied to the argument values.
In contrast, languages with a non-strict semantics (e.g. Haskell, Clean) allow the
definition of non-strict functions and infinite data structures. A function enumFrom
yields an infinitely long list and is used in the definition of the infinite list of factorial
numbers, factorials. The factorial function then just takes the $n$-th element of this
list (list index numbers start at 0):
```haskell
enumFrom :: Integer -> [Integer]
enumFrom n = n : enumFrom (n+1)

factorials :: [Integer]
factorials = 1 : zipWith (*) factorials (enumFrom 1)

factorial :: Integer -> Integer
factorial n = genericIndex factorials n
```
The expression zipWith (*) takes two lists and combines their elements pairwise by
multiplication (*). The idea underlying the recursive definition of the list of factorial
numbers is expressed by the following table:
```
factorials          1     1     2     6     24    120   ...
                    *     *     *     *     *     *
enumFrom 1          1     2     3     4     5     6     ...

factorials    1     1     2     6     24    120   720   ...
```
Even though semantically several infinite lists are defined, the evaluation of a factorial number is finite:
```
factorial 3
= genericIndex factorials 3
= genericIndex (1 : zipWith (*) (1 : ...) (enumFrom 1)) 3
= genericIndex (1 : (1*1) : zipWith (*) (1 : ...) (1 : enumFrom (1+1))) 3
= genericIndex (1 : (1*1) : zipWith (*) ((1*1) : ...) ((1+1) : enumFrom ((1+1)+1))) 3
= genericIndex (1 : (1*1) : ((1*1)*(1+1)) : zipWith (*) ... (enumFrom ((1+1)+1))) 3
= genericIndex (1 : 1 : (1*(1+1)) : zipWith (*) ... (enumFrom ((1+1)+1))) 3
= genericIndex (1 : 1 : (1*2) : zipWith (*) ... (enumFrom (2+1))) 3
= genericIndex (1 : 1 : 2 : zipWith (*) ... (enumFrom (2+1))) 3
= genericIndex (1 : 1 : 2 : zipWith (*) (2 : ...) ((2+1) : enumFrom ((2+1)+1))) 3
= genericIndex (1 : 1 : 2 : (2*(2+1)) : zipWith (*) ... (enumFrom ((2+1)+1))) 3
= 2*(2+1)
= 2*3
= 6
```
Implementations of non-strict functional languages usually use lazy evaluation, passing arguments in unevaluated form to functions but avoiding duplicated evaluation through sharing of unevaluated expressions. Non-strict semantics enables modular solutions to many programming problems [24]. Recursive definitions of constants are important for defining parsers [12]. The programmer can also define new control structures like `if then else`, which is non-strict in its last two arguments in all programming languages:
```
isPositive :: Integer -> a -> a -> a
isPositive n yes no = if n > 0 then yes else no
factorial :: Integer -> Integer
factorial n = isPositive n (n * factorial (n-1)) 1
```
Non-strict languages have a simpler program algebra than strict languages, because in the latter many equations do not hold for expressions with undefined values. However, the time and especially the space behaviour of non-strict functional programs is much harder to predict than that of strict ones.
## 5 Necessary Side-Effects
Functional programming aims to minimise or eliminate side-effects. However, an executing program usually does not just transform an input into an output but also has to communicate with users, other processes, the file system etc.; in short, it has to perform I/O. Many functional languages such as Lisp and ML use simple side-effecting functions for I/O, but some languages use I/O models that perform the side-effects required by I/O such that the program algebra remains unaffected, as if no side-effects were present.
Non-strict languages such as Miranda [25] and early versions of Haskell use the lazy stream model. The program transforms a list of input events into a list of output events. The non-strict semantics ensures that part of the output list can already be produced after processing only part of the input list and hence earlier output events can influence later input events [26]. Using this I/O model strengthens the intuition for non-strict semantics. All other I/O models work for both strict and non-strict languages.
The uniqueness model is used in Clean [3]. This model is based on the idea that there exists a special token, the world value, which every I/O function requires as an argument and returns as part of its result. The world value can be used only in a single-threaded way, that is, the world value cannot be duplicated or an old value be used twice. A uniqueness type system ensures single-threaded use of the world value (cf. Section 3).
Early versions of Haskell also used the continuation model [26]. The idea of the continuation model is that a function that performs I/O never returns; instead it takes an additional argument, the continuation function, and after performing the side-effecting I/O operation calls this continuation function, passing any result of the I/O operation as argument to the continuation function. In general a program written such that functions do not return but instead pass their results to other functions is said to be in continuation passing style. Continuation passing style enables the programmer to tightly control the evaluation order [27] and thus ensure the required sequential execution of I/O operations.
Later versions of Haskell use the monad model. The monad model is similar to the continuation model but allows easier composition of I/O computations. Every I/O operation returns an element of the abstract monad type and monadic values can only be composed by a sequence operator, thus enforcing the sequential order of I/O operations. The following Haskell I/O operation reads characters from standard input until the newline character is read and returns the list of read characters.
```haskell
readLine :: IO [Char]
readLine = do
  c <- getChar
  if c == '\n'
    then return []
    else do
      rest <- readLine
      return (c:rest)
```
IO is the monad and the type of readLine is IO [Char] because this operation returns a list of characters, just as the type of getChar is IO Char. The do construct is
syntactic sugar that makes monadic programs look very similar to imperative ones. The keyword `do` is followed by a number of I/O operations, all of monadic IO type, which are executed sequentially. The `<-` notation gives access to normal values computed by monadic operations.
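As a small illustrative use, assuming the readLine operation defined above is in scope, a complete program that echoes one input line in reverse might look like this:

```haskell
main :: IO ()
main = do
  line <- readLine            -- perform the I/O operation defined above
  putStrLn (reverse line)     -- output the characters in reverse order
```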
In general monads are useful for embedding various operations that must be executed in a specific order. For example, they can be used to add mutable references to a pure functional language or to implement backtracking as used by many parser libraries [28].
The algebra for monadic expressions is more complex and, for arbitrary monads, more limited compared to non-monadic expressions; by definition the compositionality of monadic code is restricted.
Programmers use side-effects also for other purposes than I/O. Many well-known algorithms rely on the modification of data structures to achieve their efficiency, especially those that transform graph-structured data. In principle a mutable memory can be simulated by a balanced tree in a functional program with a logarithmic loss of time complexity. Nicholas Pippenger showed [29] that there are problems that can be solved in linear time in an imperative language but that can be solved in a strict eagerly evaluated functional programming language only with a logarithmic slowdown. However, this theoretical argument does not apply to non-strict languages using lazy evaluation [30].
In practice many efficient purely functional algorithms exist [10]. Arrays are most efficiently processed by operations that construct whole new arrays from existing ones instead of emphasising individual elements [26]. Finally mutable references can be embedded into pure functional languages using monads, but most functional programmers prefer to use the expressibility of functional programming to develop new algorithms or tackle problems that are too complex for imperative languages.
## 6 Implementation Techniques
In contrast to imperative languages functional languages are not based on standard computer architecture and hence many different implementation models have been explored. Backus [31] suggested that functional languages could inspire new computer architectures and during the 1970s specially designed computers for running Lisp, Lisp machines, were popular. However, Backus also noted that only when functional languages “have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them”. The speed of mass produced processors grew far faster than that of specially designed hardware. Backus still saw the efficient and correct implementation of the lambda calculus as a major obstacle [31] and graph reduction machines reducing combinators (top-level functions) were devised to circumvent this problem. Nowadays the compilation of functional programs into code on standard hardware that is comparable in speed to that of imperative programs is well understood and, although there exist many variations, compilation is surprisingly similar to compilation of imperative languages [32, 33]. The two main additional issues are: First, a functional language allocates most data objects on the heap and has to use
a garbage collector [34], because the lifetimes of data objects are not determined by the
program structure. Second, to implement functions as first class citizens they have to
be represented as closures. The standard representation of a closure is a pointer to the
function code plus an environment, a data structure that maps variables to their values.
Additionally implementations of non-strict functional languages have to pass unevaluated
expressions as arguments; these are represented as thunks that can be implemented
identically to closures. Strictness analysis is used to reduce the number of unnecessary
and costly thunks. Compiler optimisations mostly work on the level of the functional
language, using the rich program algebra for program transformations (cf. Section 2.7).
The implementation model of a functional language is usually described by an abstract
machine. The first and best known, but not the most simple or most efficient, is Peter
Landin’s SECD machine.
Pure functional languages lend themselves naturally to parallel evaluation. In principle all arguments of a function could be evaluated in parallel. Hence especially the 1980s saw substantial research into parallel implementations of functional languages. The main problem proved to be that the implicit parallelism of functional languages is of fine granularity and hence process creation and communication overheads are high.
## 7 Theoretical Foundations
The main theoretical foundation of functional programming is the lambda calculus [23, 35] which was developed by Alonzo Church in the 1930s, not as a programming language but as a small mathematical calculus for describing the operational behaviour of mathematical functions. The syntax of the lambda calculus consists of only three different kinds of expressions: variables, applications and abstractions. An abstraction, written λx.e, where x is a variable and e an expression, denotes a function with parameter variable x and body e. An application (e1 e2) applies a function e1 to its argument e2.
To evaluate expressions only a single reduction rule called β-reduction is needed:
\[(\lambda x.e_1)e_2 \rightarrow e_1[e_2/x]\]
All occurrences of the parameter variable x in the function body e1 are replaced by the
argument e2. β-reduction can be applied anywhere in an expression. Evaluation is a
sequence of β-reduction steps:
\[(\lambda x.x)\;((\lambda y.y)\;(\lambda z.z)) \rightarrow (\lambda x.x)\;(\lambda z.z) \rightarrow \lambda z.z\]
Usually there are many ways to evaluate an expression. An alternative to the previous
one is
\[(\lambda x.x)\;((\lambda y.y)\;(\lambda z.z)) \rightarrow (\lambda y.y)\;(\lambda z.z) \rightarrow \lambda z.z\]
An important property of the lambda calculus is its confluence, which ensures that
all evaluation sequences for an expression that terminate will yield the same final value.
The restriction of the lambda calculus to functions with one argument is not a limitation, because the result of an application can be another function that is then applied to its argument. For example, in \((e_1 e_2) e_3\) the expression \(e_1\) can be viewed as a function that takes two arguments, namely \(e_2\) and \(e_3\). The function \(e_1\) is said to be *curried*. We usually write \((e_1 e_2 e_3)\). Many functional languages have adopted this notation for function application instead of the more familiar \(e_1(e_2, e_3)\).
The power of the lambda calculus stems from the fact that functions can be applied to themselves. This allows functions that are usually defined recursively to be defined in a non-recursive form. Hence the lambda calculus is Turing-complete even without having a recursion construct. However, nearly all functional programming languages include explicit recursion for convenience. There exist many typed variants of the lambda calculus; without an additional recursion construct most of them are strongly normalising, that is, the evaluation of every expression terminates, and thus they are not Turing-complete but can still be very expressive.
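One standard illustration of how self-application yields recursion is the fixed-point combinator, usually written Y:

\[ Y \equiv \lambda f.\,(\lambda x.\,f\,(x\,x))\,(\lambda x.\,f\,(x\,x)) \]

Applying Y to a function g gives \(Y\,g \rightarrow (\lambda x.\,g\,(x\,x))\,(\lambda x.\,g\,(x\,x)) \rightarrow g\,((\lambda x.\,g\,(x\,x))\,(\lambda x.\,g\,(x\,x)))\), so Y g and g (Y g) are β-equal.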
All data structures such as natural numbers, Booleans and lists can be represented in the lambda calculus via their *Church encodings*. For most practical purposes these Church encodings are too inefficient, but they prove that built-in data structures are not strictly required.
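For illustration, the idea behind Church encodings can be rendered directly in Haskell, representing Booleans and numerals as functions; the names below are illustrative:

```haskell
-- Church Booleans: a Boolean selects one of two alternatives.
ctrue, cfalse :: a -> a -> a
ctrue  x _ = x
cfalse _ y = y

-- Church numerals: the number n is represented by n-fold application of a function.
czero, cone, ctwo :: (a -> a) -> a -> a
czero _ x = x
cone  f x = f x
ctwo  f x = f (f x)

csucc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
csucc n f x = f (n f x)

-- Convert a Church numeral back to an ordinary Integer, e.g. toInt (csucc ctwo) == 3.
toInt :: ((Integer -> Integer) -> Integer -> Integer) -> Integer
toInt n = n (+1) 0
```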
The lambda calculus forms the core of most functional programming languages and thus also provides the foundation for their semantics and implementation. The theory of *term rewriting systems* [36] provides a similar foundation. A term rewriting system is basically a functional program, but most of the theory of term rewriting systems does not cover higher-order functions.
Besides the operational semantics given by sequences of reduction steps, functional programs also have useful *denotational semantics* [37]. First, denotational semantics associates every type with a set, the set of values of this type. For example, the set of type `Int` is the set of integers and the set of type `Int -> Int` is a set of functions that take an integer and return an integer. Second, each expression is interpreted as an element of the set of values of its type. This interpretation is defined by a simple induction on the structure of the expression. For example, from knowing that the semantic value of a function identifier `add` is the addition function and knowing the values of the expressions `3` and `4` we conclude that the expression `add 3 4` has the value 7, without any reduction sequence expanding the definition of `add`. Thus denotational semantics is compositional and also less dependent on the syntax of the programming language than operational semantics. Denotational semantics proved particularly useful as foundation for numerous static program analysis methods [21].
## 8 Combinations with Other Programming Paradigms
Most functional programming languages are impure and thus include an *imperative programming* language. Input and output are realised by side-effects and the values of variables can be modified. In some languages such as ML [7] and Caml [8] mutable variables have different types from non-mutable ones. So these languages enable and
encourage the functional programming style but do not require it.
The object-oriented programming paradigm comprises a number of features which can be combined with a functional programming language in various ways. OCaml [20] and some Lisp dialects provide features familiar to object-oriented programmers. Most functional programming languages achieve the modularity and code-reuse aimed for by object-oriented programming by related but different means, often through their flexible module and type systems.
Both functional and logic programming languages are declarative, that is, they abstract from many implementation details and concentrate on describing the problem. Several functional logic research languages combine both paradigms: Mercury [38] augments logic programming with functional programming and Curry [39] augments a Haskell-like functional language with logic programming features.
Several extensions of standard functional programming languages with constructs for concurrent programming exist. Erlang [9] was designed from the start as a concurrent functional programming language where any non-trivial program defines numerous processes. Processes do not share data but communicate via message passing. Process creation and communication are the only side-effects in the language. Limitation of side-effects simplifies the language and enables an Erlang system to provide code updating at runtime.
## 9 A Brief History of Functional Languages
Lisp [4] was the first functional programming language and is one of the oldest programming languages still in use. John McCarthy started developing Lisp in the late 1950s as an algebraic list-processing language for artificial intelligence research. A central feature of Lisp is the construction of dynamic lists from simple cons cells and the use of a garbage collector for reclaiming unused cells. Lisp provides many higher-order functions over lists and further higher-order functions can easily be defined. Lisp is not a pure functional language: list structures can be modified and already defining a function is implemented through side-effects. Lisp has a very simple prefix syntax that represents both program and data alike. Thus Lisp programs require numerous parentheses, but it is very simple to extend the language within itself. The development of Lisp and Lisp applications thrive on this easy extensibility. Although Lisp adopted the lambda abstraction for defining functions from the lambda calculus, otherwise it was originally little influenced by the lambda calculus. Hence most Lisp dialects still use dynamic binding, where the scope of local identifiers is based on the call structure of the program, instead of static binding, where local identifiers are bound by their enclosing definitions in the program text. Scheme is a small modern dialect of Lisp (with static binding) that has become particularly popular in teaching functional and imperative programming concepts [5, 6]. The following definition in Scheme demonstrates its simple syntax:
```scheme
(define (factorial n) (if (> n 1) (* (factorial (- n 1)) n) 1))
```
One of the most cited papers on functional programming is John Backus’ 1977 Turing Award lecture [31]. Backus’ arguments have particular authority, because he received the Turing Award for his pioneering work on developing Fortran and significant influence on Algol. Backus criticises existing imperative programming languages as being too tightly bound to the conventional von Neumann machine architecture. The assignment statement directly reflects memory access in the von Neumann architecture. Thus programming is dominated by a word-at-a-time sequential programming style instead of thinking in terms of larger conceptual units. Furthermore, Backus attacked the “division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs”. Backus argues that an algebra of programs is far more useful than the logics designed for reasoning about imperative programs. Backus identifies two main problems of functional languages existing at that time: First, the substitution operation required for implementing the lambda calculus was difficult to efficiently implement; therefore his language FP is completely point-free, defining new functions by combining existing ones. Second, functional languages are not history sensitive, they cannot easily store data beyond the runtime of a single program; hence he defines a traditional state transition system on top of his functional FP system.
The language ML was originally developed at the end of the 1970s as a command language for a theorem prover but soon developed into a popular stand-alone language. Its main new feature is its advanced static type system, based on the Hindley-Milner type system, and an expressive system for defining and combining modules. ML is not pure because its I/O system is based on side-effects, but modification of variables is limited to the use of separate reference types. Besides Standard ML [7] the Caml dialect [8] is used widely. The following definition of the factorial function in Standard ML leaves it to the system to infer the function type:
```sml
fun factorial x = if x = 0 then 1 else x * factorial (x-1)
```
In the 1970s and 1980s David Turner developed a series of influential functional languages, SASL [40], KRC [41] and Miranda [25], which in contrast to previous languages have purely non-strict semantics. Similar to ML a program is a system of equations but the syntax is even closer to common mathematical notation. Miranda also uses the Hindley-Milner type system. Miranda is purely functional, the I/O system uses lazily evaluated lists. In the late 1980s and early 1990s Miranda was widely used in university teaching.
In the late 1970s and the 1980s a large number of similar non-strict purely functional languages appeared and hence at the end of the 1980s a committee was formed to define a common language: Haskell. Its main novelties are the class system that extends its Hindley-Milner type system and in later revisions the use of a monad to support purely functional I/O. Haskell is widely used in teaching and its application outside the academic community is growing [1, 2]. The purely functional language Clean [3] is similar to Haskell but has a uniqueness type system to enable purely functional I/O and generation of efficient code.
In the late 1980s Ericsson started the development of Erlang, a concurrent functional programming language [9]. Erlang was designed to support the development of distributed, fault-tolerant, soft-real-time systems.
The proceedings of the three ACM SIGPLAN conferences on the History of Programming Languages (HOPL I, II, III) give historical details about many functional programming languages.
## 10 Summary
Functional programs are built from simple but expressive expressions. User-defined unbounded data structures substantially simplify most symbolic applications. Features such as higher-order functions and the lack of side-effects support writing and composing reusable program components. Program components cannot interact via hidden side-effects but only via their visible interface. Thus all aspects of program development from rapid prototyping, testing and debugging to program derivation and verification are simplified. Ideas developed within functional programming, such as garbage collection and several type system features, have been adopted by many other programming languages. Several modern compilers produce efficient code. The abstraction from machine details allows short and elegant formulation of algorithms. The regular Programming Pearls in the Journal of Functional Programming [42] provide numerous small examples. Writing solutions that are both elegant and efficient for applications that perform substantial I/O or transform graph-structured data is still a challenge.
## References
# D²STM: Dependable Distributed Software Transactional Memory
Maria Couceiro
INESC-ID/IST
maria.couceiro@ist.utl.pt
Paolo Romano
INESC-ID
romanop@gsd.inesc-id.pt
Nuno Carvalho
INESC-ID/IST
nonius@gsd.inesc-id.pt
Luis Rodrigues
INESC-ID/IST
ler@ist.utl.pt
May 2009
## Abstract
Software Transactional Memory (STM) systems have emerged as a powerful paradigm to develop concurrent applications. To date, however, the problem of how to build distributed and replicated STMs to enhance both dependability and performance is still largely unexplored. This paper fills this gap by presenting D²STM, a replicated STM that makes use of the computing resources available at multiple nodes of a distributed system. The consistency of the replicated STM is ensured in a transparent manner, even in the presence of failures. In D²STM transactions are autonomously processed on each node, avoiding any replica inter-communication during transaction execution, and without incurring deadlocks. Strong consistency is enforced at transaction commit time by a non-blocking distributed certification scheme, which we name BFC (Bloom Filter Certification). BFC exploits a novel Bloom Filter-based encoding mechanism that significantly reduces the overheads of replica coordination at the cost of a user-tunable increase in the probability of transaction abort. Through an extensive experimental study based on standard STM benchmarks we show that the BFC scheme achieves remarkable performance gains even for negligible (e.g. 1%) increases of the transaction abort rate.
Keywords: Dependability, Software Transactional Memory, Replication, Bloom Filters.
## 1 Introduction
Software Transactional Memory (STM) systems have emerged as a powerful paradigm to develop concurrent applications [23, 21, 17]. When using STMs, the programmer is not required to deal explicitly with concurrency control mechanisms. Instead, she has only to identify the sequence of instructions, or transactions, that need to access and modify concurrent objects atomically. As a result, the reliability of the code increases and the software development time is shortened.
*This work was partially supported by the Pastramy (PTDC/EEA/72405/2006) project.
While the study of STMs has garnered significant interest, the problem of architecting distributed STMs has started to receive the required attention only very recently [31, 8, 28]. Furthermore, the solutions proposed so far have not addressed the important issue of how to leverage replication not only to improve performance, but also to enhance dependability. This is however a central aspect of distributed STM design, as the probability of failures increases with the number of nodes and becomes impossible to ignore in large clusters (composed of hundreds of nodes [8]). Strong consistency and fault-tolerance guarantees are also essential when STMs are used to increase the robustness of classic service-oriented applications. This is the case, for instance, of the FenixEDU system [13], a complex web-based Campus activity management system that is currently used in several Portuguese universities. FenixEDU extensively relies on STM technology for transactionally manipulating the in-memory state of its (J2EE compliant) application server. Providing critical services (such as students’ grading or research funds management) to a population of more than 14000 users, the FenixEDU system deployed at the IST Campus of Lisbon is one of the main drivers of our research in the quest for efficient and scalable replication mechanisms [10].
This paper addresses the problems above by introducing D²STM, a Dependable Distributed Software Transactional Memory that allows programmers to leverage on the computing resources available in a cluster environment, using a conventional STM interface, transparently ensuring non-blocking and strong consistency guarantees even in the case of failures.
The replica synchronization scheme employed in D²STM is inspired by recent database replication approaches [35, 26, 34], where replica consistency is achieved through a distributed certification procedure which, in turn, leverages the properties of an Atomic Broadcast [16] primitive. Unlike classic eager replication schemes (based on fine-grained distributed locking and atomic commit), which suffer from large communication overheads and fall prey to distributed deadlocks [18], certification based schemes avoid any onerous replica coordination during the execution phase, running transactions locally in an optimistic fashion. The consistency of replicas (typically, 1-Copy serializability) is ensured at commit-time, via a distributed certification phase that uses a single Atomic Broadcast to enforce agreement on a common transaction serialization order, avoiding distributed deadlocks, and providing non-blocking guarantees in the presence of (a minority of) replica failures. Furthermore, unlike classic read-one/write-all approaches that require the full execution of update transactions at all replicas [6], only one replica executes an update transaction, whereas the remaining replicas are only required to validate the transaction and to apply the resulting updates. This allows high scalability levels to be achieved even in the presence of write-dominated workloads, as long as the transaction conflict rate remains moderate [35].
For the reasons above, certification based replication schemes appear attractive to apply in the STM context. Unfortunately, as previously observed in [38] (and confirmed by the experimental results presented later in this paper), the overhead of previously published Atomic Broadcast based certification
schemes can be particularly detrimental in STM environments. In fact, unlike classical database systems, STMs incur neither disk access latencies nor the overheads of SQL statement parsing and plan optimization. This makes the execution time of typical STM transactions normally much shorter than in database settings [38] and leads to a corresponding amplification of the overhead of inter-replica coordination costs. To tackle this issue, D²STM leverages a novel transaction certification procedure, named BFC (Bloom Filter Certification), which takes advantage of a space-efficient Bloom Filter-based encoding to significantly reduce the overhead of the distributed certification scheme at the cost of a marginal, and user configurable, increase of the transaction abort probability.
D²STM is built on top of JVSTM [12], an efficient STM library that supports multi-version concurrency control and, as a result, offers excellent performance for read-only transactions. D²STM takes full advantage of JVSTM's multi-versioning scheme, sheltering read-only transactions from the possibility of aborts due to either local or remote conflicts. Through an extensive experimental evaluation, based on both synthetic micro-benchmarks and complex STM benchmarks, we show that D²STM achieves significant performance gains at the cost of a marginal growth of the abort rate.
The rest of this paper is organized as follows. Section 2 discusses related work. A formal description of the considered system model and of the consistency criteria ensured by D²STM is provided in Section 3, whereas Section 4 overviews the whole architecture of the D²STM system and discusses the issues related to the integration of JVSTM within D²STM. The BFC scheme is presented in Section 5 and Section 6 presents the results of our experimental evaluation study. Finally, Section 7 concludes the paper.
## 2 Related Work
In this section we briefly survey related research. We begin by analyzing the main design choices of existing distributed STM systems, critically highlighting their main drawbacks from both the fault-tolerance and performance perspectives. Next we review recent literature on database replication schemes, discussing pros and cons of these approaches when adopted in a distributed STM context. Finally, we discuss other works related to D²STM in a wider sense.
### 2.1 Distributed STMs
The only distributed STM solutions we are aware of are those in [28, 8, 31]. As already noted in the introduction, unlike D²STM, none of these solutions leverages on replication in order to ensure cluster-wide consistency and availability in scenarios of failures, or failure suspicions. While it could be possible to somehow extend the distributed STM solutions proposed in these works with orthogonal fault-tolerance mechanisms, this is far from being a trivial task and, perhaps more importantly, the overhead associated
with these additional mechanisms could seriously hamper their performance. In D²STM, on the other hand, dependability is seen as a first class design goal, and the STM performance is optimized through a holistic approach that tightly integrates low level fault-tolerance schemes (such as Atomic Broadcast) with a novel, highly efficient distributed transaction certification scheme.
In the following, we critically highlight the most relevant differences, from a performance oriented perspective, of the replica coherency schemes adopted by the aforementioned schemes with respect to D²STM during failure-free runs. The work in [31] exploits the simultaneous presence of different versions of the same transactional dataset across the replicas, to implement a distributed multi-versioning scheme (DMV). Like centralized multi-version concurrency control schemes [6] (including JVSTM [12]), DMV allows read-only transactions to be executed in parallel with conflicting updating transactions. This is achieved by ensuring that the former is able to access older, committed snapshots of the dataset. However, in DMV each replica maintains only a single version of each data granule, and explicitly delays applying (local or remote) updates to increase the chance of not having to invalidate the snapshot of currently active read-only transactions (and to consequently abort them). This allows DMV to avoid maintaining multiple versions of the same data at each node, unlike in conventional multi-version concurrency control solutions (although DMV requires buffering the updates of not yet applied transactions). On the other hand, while multi-version concurrency control solutions provide deterministic guarantees on the absence of aborts for read-only transactions, the effectiveness of the DMV scheme depends on the timing of the concurrent accesses to data by conflicting transactions (actually, with DMV a read-only transaction may be aborted also due to the concurrent execution of “younger”, local read-only transaction). Optimizing the performance of read-only transactions, which largely dominate in many realistic workloads, is an important design goal common to both DMV and D²STM. However, D²STM relies on a multi-versioned STM, namely JVSTM, which maintains a sufficient number of versions of each transactionalized data item in order to guarantee that no read-only transaction is ever aborted. Further, this is done in an autonomous manner by the local STM, in a transparent manner for the replication logic, greatly simplifying the design and implementation of the whole system. Another significant difference between D²STM and DMV is in that the latter requires each committing transaction to acquire a cluster-wide unique token, which globally serializes the commit phases of transactions. Unfortunately, given that committing a transaction imposes a two communication step synchronization phase (for updates propagation), the token acquisition phase can introduce considerable overhead and seriously hamper performance [28]. Conversely, in D²STM the Atomic Broadcast-based replica coordination phase can be executed in full concurrency by the various replicas, which are required to sequentially execute only the local transaction validation phase aimed at verifying whether a committing transaction must be aborted due to some conflict.
The work in [28] does not rely on multi-versioning schemes but, analogously to [31], relies on a distributed mutual exclusion scheme. Mutual exclusion is aimed at ensuring that at any time no two replicas attempt to simultaneously commit conflicting transactions. The use of multiple leases, based on the actual datasets accessed by transactions, partially alleviates the performance problems incurred by the serialization of the whole (distributed) commit phase. However, this phase may still become a bottleneck with conflict-intensive workloads. As already discussed, this problem is completely circumvented in D²STM thanks to the use of an Atomic Broadcast-based certification procedure. Additionally, in [28] the lease establishment mechanism is coordinated by a single, centralized node which is likely to become a performance bottleneck for the whole system as the number of replicas increases; in fact, the experimental evaluation in [28] relies on a dedicated node for lease management and does not report results for more than four replicas.
Finally, Cluster-STM, presented in [8], focuses on the problem of how to partition the dataset across the nodes of a large-scale distributed Software Transactional Memory. This is achieved by assigning to each data item a home node, which is responsible for maintaining the authoritative version (and the associated metadata) of the data item. The home node is also in charge of synchronizing the accesses of conflicting remote transactions. In [8] any caching or replication scheme is entirely delegated to the application level, which must then explicitly deal with data fetching and distribution, with an obvious increase in the complexity of application development. Currently, D²STM only provides support for total replication of the transactional dataset (even though leveraging transparent, selective replication of data across the nodes is part of our future work). On the other hand, D²STM provides programmers with the powerful abstraction of a single system image, which makes it possible to port applications previously running on top of non-distributed STMs with minimal modifications. Further, Cluster-STM treats the processors as a flat set, neither distinguishing between processors within a node and processors across nodes, nor exploiting the availability of shared memory between multiple cores/processors on each replica to speed up intra-node communication. Finally, Cluster-STM does not exploit a multi-versioned local concurrency control to maximize the performance of read-only transactions, and is constrained to run only a single thread per processor. Being layered on top of a fully fledged, multi-version STM, D²STM overcomes all of the above limitations.
2.2 Database Replication
The problem of replicating an STM is naturally closely related to the problem of database replication, given that both STMs and DBs share the same key abstraction of atomic transactions. The fulcrum of modern database replication schemes [35, 34, 15, 2, 26] is the reliance on an Atomic Broadcast (ABcast) primitive [16, 20], typically provided by some Group Communication System (GCS) [33, 4]. ABcast plays a key role in enforcing, in a non-blocking manner, a global transaction serialization order without incurring the scalability problems affecting classical eager replication mechanisms based on distributed locking and atomic commit protocols, which require much finer-grained coordination and fall prey to deadlocks [18]. The existing ABcast-based database replication literature can be coarsely classified into two main categories, depending on whether transactions are executed optimistically [35, 26] or conservatively [27].
In the conservative case, which can be seen as an instance of the classical state machine/active replication approach [39], transactions are serialized through ABcast prior to their actual execution and are then deterministically scheduled on each replica in compliance with the ABcast-determined serialization order. This prevents aborts due to the concurrent execution of conflicting transactions on different replicas and avoids the cost of broadcasting the transactions’ read-sets and write-sets. On the other hand, the need to enforce deterministic thread scheduling at each replica requires a careful identification of the conflict classes to be accessed by each transaction, prior to its actual execution. Unfortunately, this requirement represents a major hurdle for the adoption of these techniques in STM systems which, unlike relational DBMSs with SQL-like interfaces, allow users to define arbitrary, and much less predictable, data layouts and transaction access patterns (e.g., determined through direct pointer manipulations). In practice, it is very hard or simply impossible to predict the datasets that will be accessed by a newly generated transaction. This is particularly troublesome, given that a labeling error can lead to inconsistency, whereas coarse overestimations can severely limit concurrency and hamper performance.
Optimistic approaches, such as [35], avoid these problems and hence appear better suited for adoption in STM contexts. In these approaches, transactions are locally processed on a single replica and validated after their execution through an ABcast-based certification procedure aimed at detecting remote conflicts between concurrent transactions. Certification-based approaches can be further classified into voting and non-voting schemes [26, 37]. Voting schemes, unlike non-voting ones, need to atomically broadcast only the write-set (which is typically much smaller than the read-set in common workloads), but on the other hand incur the overhead of an additional uniform broadcast [20] along the critical path of the commit phase. As highlighted in our previous work [38], the replica coordination latency has an amplified cost in STM environments when compared to conventional database environments, given that the average transaction execution time in STM settings is typically several orders of magnitude shorter than in database applications. This makes voting certification schemes, which introduce an additional latency of at least two extra communication steps with respect to non-voting protocols, unattractive in replicated STM environments. On the other hand, as will be demonstrated by our experimental study, and as one could intuitively expect, the actual efficiency of non-voting certification protocols is, in practical settings, profoundly affected by the size of read-sets.
The replica coordination scheme employed in D²STM, namely BFC (Bloom Filter Certification), can be classified as a non-voting certification scheme that exploits a Bloom filter-based encoding of the transactions’ read-sets to achieve the best of both the voting and non-voting approaches, requiring only a single ABcast while avoiding flooding the network with large messages, at the cost of a small, user-tunable increase in the transaction abort rate.
2.3 Other Related Works
The large body of literature on Distributed Shared Memories (DSM) is clearly related to our work, sharing our base goal of providing developers with the simple abstraction of a single system image that transparently leverages the resources available across distributed nodes. To overcome the strong performance overheads introduced by straightforward DSM implementations [30] ensuring strong consistency guarantees at the granularity of a single memory access [29], several DSM systems have been developed that achieve better performance by relaxing memory consistency guarantees [25]. Unfortunately, developing software for relaxed DSM consistency models can be challenging, as programmers are required to fully understand sometimes complicated consistency properties to maximize performance without endangering correctness. Conversely, the simplicity of the atomic transaction abstraction, at the core of STMs and of our D²STM platform, increases programmers’ productivity [11] with respect to both locking disciplines and relaxed memory consistency models. Further, the strong consistency guarantees provided by atomic transactions can be supported through efficient algorithms that, like in D²STM, incur only a single synchronization phase per transaction, effectively amortizing the unavoidable communication overhead across a (possibly large) set of memory accesses.
Finally, the notion of atomic transaction also plays a key role in the recent Sinfonia [3] platform, where such transactions are referred to as “mini-transactions”. However, unlike conventional STM settings and D²STM, Sinfonia assumes transactions to be static, i.e., that their datasets and operations are known in advance, which limits the generality of this solution.
3 System Model
We consider a classical asynchronous distributed system model [20] consisting of a set of processes \( \Pi = \{p_1, \ldots, p_n\} \) that communicate via message passing and can fail according to the fail-stop (crash) model. We assume that a majority of processes is correct and that the system ensures a sufficient level of synchrony (e.g., the availability of a \( \Diamond S \) failure detector) to permit implementing an Atomic Broadcast (ABcast) service with the following properties [16]:

- Validity: if a correct participant broadcasts a message, then all correct participants eventually deliver it.
- Uniform Agreement: if a participant delivers a message, then all correct participants eventually deliver it.
- Uniform Integrity: any given message is delivered by each participant at most once, and only if it was previously broadcast.
- Uniform Total Order: if some participant delivers message A after message B, then every participant delivers A only after it has delivered B.
D²STM preserves the weak atomicity [32] and opacity [19] properties of the underlying JVSTM. The former property implies that atomicity is guaranteed only for conflicting pairs of transactional accesses; conflicts between transactional and non-transactional accesses are not protected. Weak atomicity is less composable than strong atomicity (which protects all pairs of accesses in which at least one is transactional). It also raises subtle problems, e.g., granular lost updates. However, the runtime overhead of strong atomicity can be prohibitively high in the absence of hardware support [32]. Opacity [19], on the other hand, can be informally viewed as an extension of the classical database serializability property with the additional requirement that even non-committed transactions are prevented from accessing inconsistent states.
Finally, concerning the consistency criterion for the state of the replicated (JV)STM instances, D²STM guarantees 1-copy serializability of reads and writes to transactional data [6], which ensures that the transaction execution history across the whole set of replicas is equivalent to a serial transaction execution history on a non-replicated (JV)STM.
4 D²STM Architecture
4.1 Node Components
Each node of the D²STM platform, depicted in Figure 1, is structured into four main logical layers. The bottom layer is a Group Communication Service (GCS) [16] which provides two main building blocks: view-synchronous membership [20] and an Atomic Broadcast service. Our implementation relies on a generic group communication service interface [14], which supports multiple GCS implementations (all the experiments described in this paper were performed using the Appia GCS [33]). The core component of D²STM is the Replication Manager, which implements the distributed coordination protocol required to ensure replica consistency (i.e., 1-copy serializability); this component is described in detail in Section 5. The Replication Manager interfaces, on one side, with the GCS layer and, on the other side, with a local instance of a Software Transactional Memory, more precisely JVSTM [11]. A detailed discussion of the integration between the Replication Manager and JVSTM, along with a summary of the most relevant JVSTM internal mechanisms, is provided in Section 4.2. Finally, the top layer of D²STM is a wrapper that intercepts the application-level calls for transaction demarcation (i.e., to begin, commit or abort transactions), without interfering with the application accesses (read/write) to the VBoxes, which are managed directly by the underlying JVSTM layer. This approach allows D²STM to transparently extend the classic STM programming model, while requiring only minor modifications to pre-existing JVSTM applications.
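To make this layering concrete, the sketch below shows one possible shape for such a demarcation wrapper: reads and writes never pass through it, while commit is the only call that may involve the other replicas. The class and method names (TransactionWrapper, certifyAndCommit, and the collaborator interfaces) are illustrative assumptions, not the actual D²STM or JVSTM API.

```java
// Minimal sketch of the top-layer wrapper described above.
public class TransactionWrapper {
    private final ReplicationManager replicationManager; // coordinates with the other replicas
    private final LocalSTM stm;                           // the local (JV)STM instance

    public TransactionWrapper(ReplicationManager rm, LocalSTM stm) {
        this.replicationManager = rm;
        this.stm = stm;
    }

    public Transaction begin()        { return stm.beginTransaction(); }
    public void abort(Transaction tx) { stm.abort(tx); }

    // Read-only transactions commit locally; update transactions go through certification.
    public boolean commit(Transaction tx) {
        if (stm.isReadOnly(tx)) {
            stm.commit(tx);
            return true;
        }
        return replicationManager.certifyAndCommit(tx);
    }

    // Placeholder collaborators (assumptions for this sketch).
    public interface Transaction { }
    public interface LocalSTM {
        Transaction beginTransaction();
        boolean isReadOnly(Transaction tx);
        void commit(Transaction tx);
        void abort(Transaction tx);
    }
    public interface ReplicationManager {
        boolean certifyAndCommit(Transaction tx);
    }
}
```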
4.2 Integration with JVSTM
JVSTM implements a multi-version scheme which is based on the abstraction of a versioned box (VBox) to hold the mutable state of a concurrent program. A VBox is a container that keeps a tagged sequence of values - the history of the versioned box. Each of the history’s values corresponds to a change made to the box by a successfully committed transaction and is tagged with the timestamp of the corresponding transaction. To this end, JVSTM maintains an integer timestamp, $commitTimestamp$, which is incremented whenever a transaction commits. Each transaction stores its timestamp in a local $snapshotID$ variable, which is initialized at the time of the transaction activation with the current value of $commitTimestamp$. This information is used both during transaction execution, to identify the appropriate values to be read from the VBoxes, and, at commit time, during the validation phase, to determine the set of concurrent transactions to check against possible conflicts. JVSTM relies on an optimistic approach which buffers transactions’ writes and detects conflicts only at commit time, by checking whether any of the VBoxes read by a committing transaction $T$ was updated by some other transaction $T'$ with a larger timestamp value. In this case $T$ is aborted. Otherwise, the $commitTimestamp$ is increased, $T$’s $snapshotID$ is set to the new value of $commitTimestamp$, and the new values of all the VBoxes it updated are atomically stored within the VBoxes.
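As an illustration of the versioned-box abstraction just described, the sketch below keeps a map from commit timestamps to committed values and serves reads from the most recent version whose tag does not exceed the reader’s snapshotID. It is a conceptual sketch only, not the actual JVSTM VBox code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal versioned box: a tagged history of committed values.
public class VersionedBox<T> {
    // version tag (commitTimestamp at commit time) -> committed value
    private final ConcurrentSkipListMap<Integer, T> history = new ConcurrentSkipListMap<>();

    // Called when the transaction that wrote this box commits with the given timestamp.
    public void commitValue(int commitTimestamp, T value) {
        history.put(commitTimestamp, value);
    }

    // Read the value visible to a transaction started with the given snapshotID:
    // the most recent committed version not newer than the snapshot.
    public T read(int snapshotID) {
        Map.Entry<Integer, T> e = history.floorEntry(snapshotID);
        return (e == null) ? null : e.getValue();
    }
}
```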
To minimize performance overheads, the D²STM replica coordination protocol, namely BFC, is tightly integrated with JVSTM’s transaction timestamping mechanism. The integration of JVSTM within D²STM required the following (non-intrusive) modifications to JVSTM, extending its original API in order to allow the Replication Manager layer to (a sketch of the resulting interface is given after the list below):
1. extract information concerning the internals of the transaction execution, i.e., its read-set, write-set, and snapshotID timestamp. In the remainder, we refer to the methods providing these services for a transaction $T_x$ as $\text{getReadset}(\text{Transaction } T_x)$, $\text{getWriteset}(\text{Transaction } T_x)$ and $\text{getSnapshotID}(\text{Transaction } T_x)$, respectively.
2. explicitly trigger the transaction validation procedure (method $\text{validate}(\text{Transaction } T_x)$), that aims at detecting any conflict raised during the execution phase of a transaction $T_x$ with any other (local or remote) transaction that committed after $T_x$ started.
3. atomically apply, through the $\text{applyRemoteTransaction}(\text{Writeset } WS)$ method, the write-set WS of a remotely executed transaction (i.e., atomically updating the VBoxes of the local JVSTM with the new values written by a remote transaction) while simultaneously increasing JVSTM’s commitTimestamp.
4. permit cluster-wide unique identification of the VBoxes updated by (remote) transactions, as well as of any object, possibly dynamically generated within a (remote) transaction, whose reference could be stored within a VBox. This is achieved by tagging each JVSTM VBox (and each object, mutable or immutable, assigned to a VBox within a transaction) with a unique identifier. A variety of different schemes may be used to generate universally unique identifiers (UIDs), as long as it is possible to guarantee the cluster-wide uniqueness of UIDs and to enable the independent generation of UIDs at each replica. The current implementation of D²STM relies on a widely recognized international standard, namely ISO/IEC 11578:1996$^1$, which uses a 128-bit encoding scheme$^2$ that includes the identifier of the generating node and a local timestamp based on 100-nanosecond intervals.
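A possible shape for this extended interface, using the method names introduced above, is sketched below. The Transaction and WriteSet types are placeholders standing in for the corresponding JVSTM classes, and the exact signatures are assumptions made for illustration.

```java
import java.util.Set;
import java.util.UUID;

// Illustrative view of the JVSTM extensions exposed to the Replication Manager.
public interface ReplicatedSTM {

    // 1. Expose the internals of a locally executed transaction.
    Set<UUID> getReadset(Transaction tx); // UIDs of the VBoxes read by tx
    WriteSet getWriteset(Transaction tx); // VBox UIDs and the new values written by tx
    int getSnapshotID(Transaction tx);    // timestamp of the snapshot read by tx

    // 2. Explicitly trigger validation of tx against transactions (local or remote)
    //    that committed after tx started.
    boolean validate(Transaction tx);

    // 3. Atomically apply the write-set of a remotely executed transaction and
    //    advance the local commitTimestamp.
    void applyRemoteTransaction(WriteSet ws);

    // Placeholder types (assumptions for this sketch).
    interface Transaction { }
    interface WriteSet { }
}
```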
5 Bloom Filter Certification
Bloom Filter Certification (BFC) is a novel non-voting certification scheme that exploits a space-efficient Bloom filter-based encoding [7], which drastically reduces the overhead of the distributed certification phase at the cost of a small (but controlled) increase in the risk of transaction aborts.
Before delving into the details of the BFC protocol, we review the fundamentals of Bloom filters (the interested reader may refer to [9] for further details). A Bloom filter for representing a set $S = \{x_1, x_2, \ldots, x_n\}$ of $n$ elements from a universe $U$ consists of an array of $m$ bits, initially all set to 0. The filter uses $k$ independent hash functions $h_1, \ldots, h_k$ with range $\{1, \ldots, m\}$, where it is assumed that these hash functions map each element in the universe to a random number uniformly over the range.
---
$^1$Also ITU-T Rec. X.667 - ISO/IEC 9834-8:2005, and integrated within the official Java library since version 1.5.
$^2$The standard Leach-Salz variant layout encoding was used.
For each element $x \in S$, the bits $h_i(x)$ are set to 1 for $1 \leq i \leq k$. To check whether an item $y$ is in $S$, we check whether all $h_i(y)$ are set to 1. If not, then clearly $y$ is not a member of $S$. If all $h_i(y)$ are set to 1, $y$ is assumed to be in $S$, although this may be wrong with some probability. Hence a Bloom filter may yield a false positive, where it suggests that an element is in $S$ even though it is not. The probability of a false positive $f$ for a single query to a Bloom filter depends on the number of bits used per item, $m/n$, and on the number of hash functions $k$, according to the following equation:

$$f = \left(1 - e^{-kn/m}\right)^k \qquad (1)$$
where the optimal number $k$ of hash functions that minimizes the false positive probability $f$, given $m$ and $n$, can be shown to be:

$$k = \left\lceil \ln 2 \cdot \frac{m}{n} \right\rceil \qquad (2)$$
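To make the data structure concrete, a minimal Bloom filter along the lines just described might look as follows. This is an illustrative sketch (with an arbitrary seeded hash), not the filter implementation used by D²STM.

```java
import java.util.BitSet;
import java.util.Objects;

// Minimal Bloom filter: m bits, k hash functions; add() sets k bits,
// mightContain() reports "possibly present" only if all k bits are set.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m; // number of bits
    private final int k; // number of hash functions

    public SimpleBloomFilter(int m, int k) {
        this.m = m;
        this.k = k;
        this.bits = new BitSet(m);
    }

    // i-th hash of an item, derived here by simply mixing the item with the index i.
    private int hash(Object item, int i) {
        return Math.floorMod(Objects.hash(item, i), m);
    }

    public void add(Object item) {
        for (int i = 0; i < k; i++) {
            bits.set(hash(item, i));
        }
    }

    // May return a false positive, never a false negative.
    public boolean mightContain(Object item) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(hash(item, i))) {
                return false;
            }
        }
        return true;
    }
}
```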
We now describe BFC in detail, with the help of the pseudo-code depicted in Figure 2. Read-only transactions are executed locally and committed without incurring any additional overhead. Leveraging the JVSTM multi-version scheme, D²STM read-only transactions are always provided with a consistent committed snapshot and are spared the risk of aborts (due to both local and remote conflicts).
A committing transaction with a non-null write-set (i.e., one that has updated some VBox) is first locally validated to detect any local conflicts. This prevents the execution of the distributed certification scheme for transactions that are already known to abort using only local information. If the transaction passes the local validation phase, the Replication Manager encodes the transaction read-set (i.e., the set of identifiers of all the VBoxes read by the transaction) in a Bloom filter, and ABcasts it along with the transaction write-set (which is not encoded in the Bloom filter). The size of the Bloom filter encoding the transaction’s read-set is computed to ensure that the probability of a transaction abort due to a Bloom filter false positive is less than a user-tunable threshold, which we denote as $\text{maxAbortRate}$. The logic for sizing the Bloom filter is encapsulated by the $\text{estimateBFSize()}$ primitive, which is detailed later in the text.
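The commit-side steps just described (local validation, read-set encoding, Atomic Broadcast) could be organized as in the sketch below. It reuses the SimpleBloomFilter sketched earlier and a sizing helper sketched after the formulas below; all other names (Stm, Abcast, CertificationRequest) are placeholders for the corresponding D²STM components, not their actual code.

```java
import java.util.Set;
import java.util.UUID;

// Illustrative commit path for an update transaction under BFC.
public class BfcCommitRequest {

    public static boolean requestCommit(Stm stm, Abcast abcast, Object tx,
                                        double maxAbortRate, double estimatedQueries) {
        // 1. Abort early on conflicts that are already visible locally.
        if (!stm.validate(tx)) {
            return false;
        }

        // 2. Encode the read-set (VBox UIDs) into a Bloom filter sized so that the
        //    expected abort probability due to false positives stays below maxAbortRate.
        Set<UUID> readSet = stm.getReadset(tx);
        int m = BloomFilterSizing.estimateBFSize(readSet.size(), maxAbortRate, estimatedQueries);
        int k = BloomFilterSizing.optimalHashCount(m, readSet.size());
        SimpleBloomFilter bf = new SimpleBloomFilter(m, k);
        for (UUID uid : readSet) {
            bf.add(uid);
        }

        // 3. Atomically broadcast the filter, the (plain) write-set and the snapshot
        //    timestamp; the outcome is decided deterministically at delivery time.
        abcast.broadcast(new CertificationRequest(bf, stm.getWriteset(tx), stm.getSnapshotID(tx)));
        return true;
    }

    // Placeholder collaborators (assumptions for this sketch).
    public interface Stm {
        boolean validate(Object tx);
        Set<UUID> getReadset(Object tx);
        Object getWriteset(Object tx);
        int getSnapshotID(Object tx);
    }
    public interface Abcast {
        void broadcast(Object message);
    }
    public record CertificationRequest(SimpleBloomFilter readSetFilter,
                                       Object writeSet, int snapshotID) { }
}
```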
As in classical non-voting certification protocols, update transactions are validated upon their ABcast delivery. At this stage, it is checked whether $T_x$’s Bloom filter contains any item updated by transactions with a $\text{snapshotID}$ timestamp larger than $T_x$’s. If no match is found, then $T_x$ can be safely committed. Committing a transaction $T_x$ consists of the following steps. If $T_x$ is a local transaction, it suffices to request the local JVSTM to commit it. If, on the other hand, $T_x$ is a remote transaction, its write-set is atomically applied using the $\text{applyRemoteTransaction(WS}_{T_x})$ method.
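A matching sketch of the validation performed at delivery time is shown below: the delivered Bloom filter is queried for every item written by transactions that committed after the delivered transaction’s snapshot was taken. CommittedXact stands for one entry of the write-set history described next; names and types are again illustrative assumptions.

```java
import java.util.List;
import java.util.Set;
import java.util.UUID;

// Illustrative delivery-time certification under BFC.
public class BfcDeliveryHandler {

    // One entry per committed update transaction still relevant for validation.
    public record CommittedXact(int commitTimestamp, Set<UUID> updatedUids) { }

    // Returns true if the delivered transaction can commit on this replica.
    public static boolean certify(SimpleBloomFilter readSetFilter, int snapshotID,
                                  List<CommittedXact> committedXacts) {
        for (CommittedXact cx : committedXacts) {
            if (cx.commitTimestamp() <= snapshotID) {
                continue; // committed before the delivered transaction's snapshot: no conflict possible
            }
            for (UUID uid : cx.updatedUids()) {
                if (readSetFilter.mightContain(uid)) {
                    return false; // conflict, or a Bloom filter false positive: abort
                }
            }
        }
        // No conflict detected: commit locally, or applyRemoteTransaction() for remote transactions.
        return true;
    }
}
```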
Given that the validation phase of a transaction $T_x$ requires the availability of the write-sets of previously committed concurrent transactions, the Replication Manager locally buffers the UIDs of the VBoxes updated by any committed transaction in the $\text{CommittedXacts}$ set. To avoid an unbounded growth of this data structure, we rely on a distributed garbage collection scheme (analogous to the one employed in [36]), in which each replica exchanges (as a piggyback on the ABcast transaction validation message) the minimum $\text{snapshotID}$ of all the locally active update transactions. This allows each replica to gather global knowledge of the oldest timestamp among those of all the update transactions currently active on any replica. This information is used to garbage collect the $\text{CommittedXacts}$ set by removing the information associated with any committed transaction whose execution can no longer invalidate any of the active transactions.
We now describe how the size of the Bloom filter (BF) of a committing transaction is computed. The reader should note that for a transaction $T_x$ to be aborted due to a false positive, it suffices to incur a false positive for any of the items updated by transactions concurrent with $T_x$. In other words, sizing the Bloom filter of a committing transaction so as to guarantee that a target $\text{maxAbortRate}$ is never exceeded would require knowing exactly the number $q$ of queries that will be performed against the Bloom filter once the transaction gets validated (i.e., once it is ABcast-delivered). However, at the time $T_x$ enters the commit phase, it is not possible to foresee exactly how many transactions will commit before $T_x$ is ABcast-delivered, nor the size of the write-sets of each of these transactions. On the other hand, any error in estimating $q$ does not compromise safety, but may only lead to (positive or negative) deviations from the target $\text{maxAbortRate}$ threshold. Hence, BFC uses a simple and lightweight heuristic, which exploits the fact that each replica can keep track of the number of queries performed against the BF of any locally ABcast-delivered transaction. In detail, we rely on the moving average of the number of BF queries performed during the validation phase of the last $\text{recComXacts}$ transactions as an estimator of $q$. Once $q$ is estimated, we can determine the number $m$ of bits in the Bloom filter by observing that the false positives for distinct queries are independent and identically distributed events, forming a Bernoulli process. In light of this observation, the probability of aborting a transaction because of a false positive in the Bloom filter-based validation procedure, $\text{maxAbortRate}$, can be expressed as:
$$\text{maxAbortRate} = 1 - (1 - f)^q$$
which, combined with Equations 1 and 2, allows us to estimate $m$ as:
$$m = \left\lceil -n \frac{\log_2(1 - (1 - \text{maxAbortRate})^{\frac{1}{q}})}{\ln 2} \right\rceil$$
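The sizing logic above translates directly into code. The sketch below is a hypothetical estimateBFSize helper (not the actual D²STM implementation): it derives the per-query false positive target from maxAbortRate and the estimated number of queries q, computes m and k, and also prints the resulting compression factor with respect to the 128-bit UID encoding discussed next.

```java
// Illustrative Bloom filter sizing, following the formulas above.
public final class BloomFilterSizing {

    // Per-query false positive probability f needed so that q independent
    // queries keep the overall abort probability below maxAbortRate:
    // f = 1 - (1 - maxAbortRate)^(1/q)
    static double targetFalsePositiveRate(double maxAbortRate, double q) {
        return 1.0 - Math.pow(1.0 - maxAbortRate, 1.0 / q);
    }

    // m = ceil( -n * log2(f) / ln 2 ), with f as above and n the read-set size.
    static int estimateBFSize(int n, double maxAbortRate, double q) {
        double f = targetFalsePositiveRate(maxAbortRate, q);
        double log2f = Math.log(f) / Math.log(2.0);
        return (int) Math.ceil(-n * log2f / Math.log(2.0));
    }

    // Optimal number of hash functions: k = ceil(ln 2 * m / n) (Equation 2).
    static int optimalHashCount(int m, int n) {
        return (int) Math.ceil(Math.log(2.0) * m / n);
    }

    public static void main(String[] args) {
        int n = 1000;               // read-set size (number of VBox UIDs)
        double maxAbortRate = 0.01; // 1% target abort probability
        double q = 150;             // estimated number of validation queries
        int m = estimateBFSize(n, maxAbortRate, q);
        int k = optimalHashCount(m, n);
        double compression = (128.0 * n) / m; // vs. the 128-bit UID encoding
        System.out.printf("m = %d bits, k = %d, compression factor = %.1fx%n", m, k, compression);
    }
}
```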

**Figure 3:** Compression Factor achieved by BFC considering the ISO/IEC 11578:1996 UUID encoding.
The striking reduction in the amount of information exchanged achievable by the BFC scheme is clearly highlighted by the graph in Figure 3, which shows BFC’s compression factor (defined as the ratio between the number of bits needed to encode a transaction’s read-set with the ISO/IEC 11578:1996 standard UID encoding and with BFC) as a function of the target maxAbortRate parameter and of the number \( q \) of queries performed during the validation phase. The plotted data shows that, even for marginal increases of the transaction abort probability in the range \([1\%-2\%]\), BFC achieves a [5x-12x] compression factor, and that the compression factor grows up to 25x for a 10% probability of transaction aborts induced by a false positive of the Bloom filter.
The correctness of the BFC scheme can be (informally) proved by observing that i) replicas validate all write transactions in the same order (the one determined by the Atomic Broadcast primitive), and ii) the validation procedure, despite being subject to false positives, is deterministic, given that all replicas rely on the same set of hash functions to encode data items in, and query for their presence in, the Bloom filter. Hence, as already highlighted, the occurrence of false positives results in an increase of the transaction abort rate, but can never lead to inconsistencies of the replicas’ states.
As a final note, in order to speed up the construction of the Bloom filter (more precisely, the insertion of items within it), D²STM exploits a recently proposed optimization [1] which generates the \( k = \lceil \ln 2 \cdot m/n \rceil \) hash values required for encoding a data item within the Bloom filter via a plain (and very efficient) linear combination of the outputs of only two independent hash functions. The choice of the hashing algorithm to be employed within D²STM was based on an experimental comparison of a spectrum of different hash functions trading off complexity, speed, and collision resistance. The one that exhibited the best performance while matching the analytically forecast false positive probability turned out to be MurmurHash2 [5], a simple, multiplicative hash function whose excellent performance has also been confirmed by recent benchmarking results [24].
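A sketch of that optimization is shown below: the i-th hash g_i(x) = h1(x) + i·h2(x) (mod m) is derived from only two base hashes, so inserting an item costs two hash evaluations regardless of k. The base hashes used here are arbitrary stand-ins; as noted above, D²STM uses MurmurHash2 for this purpose.

```java
import java.util.Arrays;

// Illustrative double-hashing scheme for Bloom filter insertions.
public final class DoubleHashing {

    // Returns the k bit positions for an item in a filter of m bits.
    static int[] positions(byte[] item, int k, int m) {
        int h1 = Arrays.hashCode(item);            // stand-in for the first base hash
        int h2 = Integer.reverse(h1) ^ 0x7F4A7C15; // stand-in for the second base hash
        int[] pos = new int[k];
        for (int i = 0; i < k; i++) {
            pos[i] = Math.floorMod(h1 + i * h2, m); // g_i(x) = h1(x) + i * h2(x) mod m
        }
        return pos;
    }
}
```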
6 Evaluation
We now report the results of an experimental study aimed at evaluating the performance gains achieved by the BFC scheme in a real distributed STM system, namely our D²STM prototype, in face of a variety of both synthetic and more complex STM workloads. These results allow us to assess the practical impact of the benefits estimated in the previous section using the analytical model. The target platform for these experiments is a cluster of 8 nodes, each equipped with an Intel QuadCore Q6600 at 2.40 GHz and 8 GB of RAM, running Linux 2.6.27.7 and interconnected via a private Gigabit Ethernet. The Atomic Broadcast implementation used is based on a classic sequencer-based algorithm [20, 16].
We start by considering a synthetic workload (obtained by adapting the Bank Benchmark originally used for evaluating DSTM2 [22]) which serves the sole purpose of validating the analytical model introduced in Section 5 for determining the Bloom filter’s size as a function of a target maxAbortRate factor. In detail, we initialize the STM at each replica with a vector of numThreads·numMachines·10,000 items. Each thread $i \in [0, numThreads - 1]$ executing on replica $j \in [0, numMachines - 1]$ accesses a distinct fragment (of indexes $[(i + j \cdot numThreads) \cdot 10,000, (1 + i + j \cdot numThreads) \cdot 10,000 - 1]$) of 10,000 elements of the array, reading all these elements and randomly updating a number of elements uniformly distributed in the range [50-100]. For example, with numThreads = 4, thread $i = 1$ on replica $j = 2$ accesses the indexes [90,000, 99,999]. Given that the fragments of the array accessed by different threads never overlap, any transaction abort is necessarily due to false positives in the Bloom filter-based validation.
The plots in Figure 4 show the percentage of aborted transactions when using the BFC scheme with a target maxAbortRate of 1%, 5%, and 10% as we vary the number of active replicas from 1 to 8 (with 4 threads executing on each replica), highlighting the tight match between the analytical forecast and the experimental results in the presence of heterogeneous load conditions.
Next we consider a more complex micro-benchmark, namely a Red-Black tree (again obtained by adapting the implementation originally used for evaluating DSTM2 [22]). In this case we consider a mix of three different transactions: i) a read-only transaction, performing a sequence of searches, ii) a write transaction performing a sequence of searches and insertions, and iii) a write transaction performing a sequence of searches and removals. More in detail, the tree is pre-populated with 50,000 (randomly determined) integer values in the range $[-100,000, 100,000]$. Read-only transactions consist of 200 range queries, each spanning 5 of the tree's entries around a randomly chosen integer value. The insertion, resp. removal, write transactions first perform 20 range queries, each spanning 50 of the tree's entries, aimed at identifying at least one value $v$ which is absent, resp. present, in the tree. If the sequence of range queries fails to identify any such element, the tree is sequentially scanned starting from a randomly chosen value until $v$ is found or the maximum value storable in the tree, namely 100,000, is reached (though this case is in practice extremely rare). Finally, if $v$ was found, it is inserted in, resp. removed from, the tree. Note that this logic is aimed at ensuring that the insertion/removal transactions actually perform an update of the tree without, in the case of insertions, introducing duplicate keys. Also, the initial size of the data structure is sufficiently large to yield a light/moderate contention level.
In Figures 5, 6 and 7 we depict the throughput of the system (i.e., the number of committed transactions per second) for the three considered workloads when using BFC with the maxAbortRate parameter set to 1%. Each plot shows the system throughput for a different combination of the number of replicas and the number of server threads in each replica. The number of replicas is varied from 2 to 8 and the number of threads in each replica is varied from 1 to 4. One interesting aspect of these results is that one can observe linear speedups as the number of replicas increases, even in the scenario where 90% of the transactions are write transactions (Figure 5). The latter is, naturally, the scenario with the worst performance, given that almost all transactions require the write-set to be ABcast and applied everywhere. Still, even in this case, we can double the throughput of the system when moving from 2 to 6 replicas. As expected, when the percentage of update transactions is smaller, the system's performance improves remarkably. For instance, for 10% updates (Figure 7) a configuration with 8 replicas and 4 threads achieves a throughput above 8000 tps (against 1600 tps for the 90% update case). Also, when considering the workload with 10% updates, the configuration with 8 replicas and 4 threads per replica almost triples the performance of the same system with only 2 replicas (more precisely, throughput grows from 3000 tps to more than 8000 tps).
In Figure 8 we show the improvement in the execution time of write transactions obtained by the use of Bloom filters in the scenario with 90% write transactions, with respect to a standard non-voting certification algorithm requiring the whole transaction read-set to be atomically broadcast, e.g., [2]. As before, the Bloom filters are configured to induce less than 1% of aborts due to false positives. As can be observed in the plot, our optimization reduces the execution time of write transactions by up to approximately 37% in scenarios with a large number of replicas and threads. This is due to the 10x compression of the messages achieved thanks to the Bloom filter encoding and to the corresponding reduction of the ABcast latency, which represents a dominant component of the whole transaction execution time. Note that since the cost of the multicast grows with the number of replicas, the reduction also grows proportionally.
Figure 5: Throughput - Red Black Tree, maxAbortRate=1%, 90% writes
Figure 6: Throughput - Red Black Tree, maxAbortRate=1%, 50% writes
Figure 7: Throughput - Red Black Tree, maxAbortRate=1%, 10% writes
Figure 8: Reduction of the Execution Time of Write Transactions - Red Black Tree, maxAbortRate=1%
| Parameter | Value |
| --- | --- |
| NumAtomicPerComp | 100 |
| NumConnPerAtomic | 3 |
| DocumentSize | 20000 |
| ManualSize | 100000 |
| NumCompPerModule | 250 |
| NumAssmPerAssm | 3 |
| NumAssmLevels | 7 |
| NumCompPerAssm | 3 |
| NumModules | 1 |
Table 1: Parameters used to build the initial data structure of the STMBench7 benchmark.
We finally show results obtained using the STMBench7 benchmark. This benchmark features a number of operations with different levels of complexity which manipulate an object graph comprising millions of heavily interconnected objects, and it supports three types of workload (read-dominated, read-write and write-dominated). It can generate very demanding workloads which include, for instance, heavy-weight write transactions performing long traversals of the object graph and generating huge read-sets. In order to avoid an excessive growth of the size of the messages exchanged when using a standard non-voting certification algorithm (which would lead to the saturation of the network even with a small number of replicas), we found it necessary to reduce the size of some of the benchmark’s data structures with respect to their default configuration. The exact settings of the benchmark’s scale parameters are reported in Table 1 in order to ensure the reproducibility of our experiments.
Figure 9 depicts the performance of the system under the “read dominated with long traversals” workload. As before, each plot shows the system throughput for a different combination of the number of replicas (from 2 to 8) and threads per replica (from 1 to 4). The speedup results are consistent with those obtained with the Red-Black tree benchmark. Looking at the throughput numbers in Figure 9(a), we can also observe linear speedups with the increase in the number of replicas. For instance, by moving from 2 to 8 replicas, the system performance increases by a factor of 4x, independently of the number of threads per replica. Figure 9(b) highlights the performance gains achievable thanks to the use of Bloom filters with respect to a classic non-voting certification scheme. To this purpose, we report the reduction of the execution time of write transactions (namely, the only ones requiring a distributed certification), which fluctuates in the range from around 20% to around 40%. These gains were achieved, in this case, thanks to the 3x message compression factor permitted by the use of Bloom filters.
An interesting finding highlighted by our experimental analysis is that, in realistic settings, the BFC scheme achieves significant performance gains even for a negligible (i.e., 1%) additional increase of the transaction abort rate. This makes the BFC scheme viable, in practice, even for abort-sensitive applications.
In conclusion, the Bloom Filter Certification procedure implemented in D²STM provides fault-tolerance, makes it possible to use additional replicas to improve the throughput of the system (mainly in the presence of read-dominated workloads) and, last but not least, permits the use of (faster) non-voting certification approaches in the presence of workloads with large read-sets.
7 Conclusions
In this work we introduced D²STM, which is, to the best of our knowledge, the first Distributed Software Transactional Memory ensuring both strong consistency and high availability despite the failure of (a minority of) replicas.
The replica consistency mechanism at the core of D²STM, namely the BFC protocol, leverages a novel Bloom filter-based encoding scheme which achieves striking reductions of the overhead associated with the transaction certification phase. Further, thanks to a tight integration with a multi-versioned STM, D²STM can process read-only transactions locally, without incurring the risk of aborts induced by local or remote conflicts and without any communication overhead.
References
Figure 9: STMBench7, read dominated with long traversals, maxAbortRate=1%
GT 4.2.0 GridFTP: User's Guide
Introduction
# Table of Contents
1. Managing Files on a Grid (GridFTP Quickstart)
   1. Basic procedure for using GridFTP (globus-url-copy)
   2. Accessing data from other data interfaces
   3. Pipelining
   4. GridFTP Where There Is FTP (GWTFTP)
   5. Multicasting
2. GridFTP Client Tool
   globus-url-copy
3. Graphical User Interface
4. Security Considerations
   1. Security Considerations
5. Troubleshooting
   1. Error Codes in GridFTP
   2. Establish control channel connection
   3. Try running globus-url-copy
   4. If your server starts...
6. Usage statistics collection by the Globus Alliance
   1. GridFTP-specific usage statistics
Glossary
Index
List of Figures
2.1. Effect of Parallel Streams in GridFTP
List of Tables
2.1. URL formats
5.1. GridFTP Errors
Chapter 1. Managing Files on a Grid (GridFTP Quickstart)
1. Basic procedure for using GridFTP (globus-url-copy)
If you just want the "rules of thumb" on getting started (without all the details), the following options using globus-url-copy will normally give acceptable performance:
```
globus-url-copy -vb -tcp-bs 2097152 -p 4 source_url destination_url
```
where:
- **-vb** specifies verbose mode and displays:
- number of bytes transferred,
- performance since the last update (currently every 5 seconds), and
- average performance for the whole transfer.
- **-tcp-bs** specifies the size (in bytes) of the TCP buffer to be used by the underlying ftp data channels. This is critical to good performance over the WAN (see "How do I choose a value for the TCP buffer size (-tcp-bs) option?" below).
- **-p** specifies the number of parallel data connections that should be used. This is one of the most commonly used options (see "How do I choose a value for the parallelism (-p) option?" below).
The source/destination URLs will normally be one of the following:
- **file:///path/to/my/file** if you are accessing a file on a file system accessible by the host on which you are running your client.
- **gsiftp://hostname/path/to/remote/file** if you are accessing a file from a GridFTP server.
1.1. Putting files
One of the most basic tasks in GridFTP is to "put" files, i.e., moving a file from your file system to the server. So for example, if you want to move the file /tmp/foo from a file system accessible to the host on which you are running your client to a file name /tmp/bar on a host named remote.machine.my.edu running a GridFTP server, you would use this command:
```
globus-url-copy -vb -tcp-bs 2097152 -p 4 file:///tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
```
**Note**
In theory, remote.machine.my.edu could be the same host as the one on which you are running your client, but that is normally only done in testing situations.
1.2. Getting files
A get, i.e., moving a file from a server to your file system, would just reverse the source and destination URLs:
Tip
Remember file: always refers to your file system.
```
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://remote.machine.my.edu/tmp/bar file:///tmp/foo
```
1.3. Third party transfers
Finally, if you want to move a file between two GridFTP servers (a third party transfer), both URLs would use gsiftp: as the protocol:
```
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://other.machine.my.edu/tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
```
1.4. For more information
If you want more information and details on URLs and the command line options, the Key Concepts gives basic definitions and an overview of the GridFTP protocol as well as our implementation of it.
2. Accessing data from other data interfaces
2.1. Accessing data in a non-POSIX file data source that has a POSIX interface
If you want to access data in a non-POSIX file data source that has a POSIX interface, the standard server will do just fine. Just make sure it is really POSIX-like (out of order writes, contiguous byte writes, etc).
2.2. GridFTP and DSIs
The following information is helpful if you want to use GridFTP to access data in DSIs (such as HPSS and SRB), and non-POSIX data sources.
Architecturally, the Globus GridFTP server can be divided into 3 modules:
- the GridFTP protocol module,
- the (optional) data transform module, and
- the Data Storage Interface (DSI).
In the GT 4.2.0 implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.3. Data Storage Interface (DSI) / Data Transform module
The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN).
The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc..
Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured with a specific DSI, i.e., it knows how to interact with a single class of storage system, or, particularly useful in combination with the ESTO/ERET functionality mentioned above, it can load and configure a DSI on the fly.
[TODO: pointer to DSI development docs]
2.3. Latest information about HPSS
Last Update: August 2005
Working with Los Alamos National Laboratory and the High Performance Storage System (HPSS) collaboration (http://www.hpss-collaboration.org), we have written a Data Storage Interface (DSI) for read/write access to HPSS. This DSI would allow an existing application that uses a GridFTP compliant client to utilize HPSS data resources.
This DSI is currently in testing. Due to changes in the HPSS security mechanisms, it requires HPSS 6.2 or later, which is due to be released in Q4 2005. Distribution for the DSI has not been worked out yet, but it will *probably* be available from both Globus and the HPSS collaboration. While this code will be open source, it requires underlying HPSS libraries which are NOT open source (proprietary).
Note
This is a purely server side change, the client does not know what DSI is running, so only a site that is already running HPSS and wants to allow GridFTP access needs to worry about access to these proprietary libraries.
2.4. Latest information about SRB
Last Update: August 2005
Working with the SRB team at the San Diego Supercomputing Center, we have written a Data Storage Interface (DSI) for read/write access to data in the Storage Resource Broker (SRB) (http://www.npaci.edu/DICE/SRB). This DSI will enable GridFTP compliant clients to read and write data to an SRB server, similar in functionality to the sput/sget commands.
This DSI is currently in testing and is not yet publicly available, but will be available from both the SRB web site (here) and the Globus web site (here). It will also be included in the next stable release of the toolkit. We are working on performance tests, but early results indicate that for wide area network (WAN) transfers, the performance is comparable.
When might you want to use this functionality:
• You have existing tools that use GridFTP clients and you want to access data that is in SRB
• You have distributed data sets that have some of the data in SRB and some of the data available from GridFTP servers.
3. Pipelining
Pipelining allows the client to have many outstanding, unacknowledged transfer commands at once. Instead of being forced to wait for the "Finished response" message, the client is free to send transfer commands at any time.
Pipelining is enabled by using the -pp option:
```
globus-url-copy -pp
```
4. GridFTP Where There Is FTP (GWTFTP)
GridFTP Where There Is FTP (GWTFTP) is an intermediate program that acts as a proxy between existing FTP clients and GridFTP servers. Users can connect to GWTFTP with their favorite standard FTP client, and GWTFTP will then connect to a GridFTP server on the client’s behalf. To clients, GWTFTP looks much like an FTP proxy server. When wishing to contact a GridFTP server, FTP clients instead contact GWTFTP.
Clients tell GWTFTP their ultimate destination via the FTP USER <username> command. Instead of entering their username, client users send the following:
```
USER <GWFTP username>::<GridFTP server URL>
```
This command tells GWFTP the GridFTP endpoint with which the client wants to communicate. For example:
```
USER bresnaha::gsiftp://wiggum.mcs.anl.gov:2811/
```
Note
Requires GSI C security.
5. Multicasting
To transfer a single file to many destinations in a multicast/broadcast, use the new -mc option.
Note
To use this option, the admin must enable multicasting. Click here for more information.
The `filename` must contain a line-separated list of destination urls. For example:
```
gsiftp://localhost:5000/home/user/tst1
gsiftp://localhost:5000/home/user/tst3
gsiftp://localhost:5000/home/user/tst4
```
For more flexibility, you can also specify a single destination url on the command line in addition to the urls in the file. Examples are:
```
globus-url-copy -mc multicast.file gsiftp://localhost/home/user/src_file
```
```
globus-url-copy -mc multicast.file gsiftp://localhost/home/user/src_file gsiftp://localhost/home/user/dest_file1
```
### 5.1. Advanced multicasting options
Along with specifying the list of destination urls in a file, a set of options for each url can be specified. This is done by appending a ? to the resource string in the url followed by semicolon-separated key value pairs. For example:
```
gsiftp://dst1.domain.com:5000/home/user/tst1?cc=1;tcpbs=10M;P=4
```
This indicates that the receiving host `dst1.domain.com` will use 4 parallel streams, a TCP buffer size of 10 MB, and will select 1 host when forwarding on data blocks. This url is specified in the `-mc` file as described above.
The following is a list of key=value options and their meanings:
- `P=integer` The number of parallel streams this node will use when forwarding.
- `cc=integer` The number of urls to which this node will forward data.
- `tcpbs=formatted integer` The TCP buffer size this node will use when forwarding.
- `urls=string list` The list of urls that must be children of this node when the spanning tree is complete.
- `local_write=boolean: y|n` Determines if this data will be written to a local disk, or just forwarded on to the next hop. This is explained more in the Network Overlay section.
- `subject=string` The DN name to expect from the servers this node is connecting to.
### 5.2. Network Overlay
In addition to allowing multicast, this function also allows for creating user-defined network routes.
If the `local_write` option is set to `n`, no data is written to the local disk; the data is only forwarded on.
If `local_write=n` is combined with the `cc=1` option, the data is forwarded on to exactly one location.
This allows the user to create a network overlay of data hops using each GridFTP server as a router to the ultimate destination.
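As an illustration only, a route through one intermediate server that keeps no local copy could be described by an entry such as the following in the -mc file; the hosts and paths are hypothetical, and the exact layout should be checked against your multicasting configuration:

```
gsiftp://hop.example.org:5000/dev/null?local_write=n;cc=1;urls=gsiftp://final.example.org:5000/home/user/dest_file
```

Here hop.example.org forwards every block to exactly one child (final.example.org) without writing it to its own disk.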
Chapter 2. GridFTP Client Tool
Name
globus-url-copy -- Multi-protocol data movement
globus-url-copy
Tool description
globus-url-copy is a scriptable command line tool that can do multi-protocol data movement. It supports gsiftp:// (GridFTP), ftp://, http://, https://, and file:/// protocol specifiers in the URL. For GridFTP, globus-url-copy supports all implemented functionality. Versions from GT 3.2 and later support file globbing and directory moves.
Before you begin
Command syntax
Command line options
• Informational options
• Utility options
• Reliability options
• Performance options
• Security-related options
Default usage
MODES in GridFTP
If you run a GridFTP server by hand
• How do I choose a value for the TCP buffer size (-tcp-bs) option?
• How do I choose a value for the parallelism (-p) option?
Limitations
Interactive clients for GridFTP
Before you begin
⚠️ Important
To use gsiftp:// and https:// protocols, you must have a certificate to use globus-url-copy. However, you may use ftp:// or http:// protocols without a certificate.
1. First, as with all things Grid, you must have a valid proxy certificate to run globus-url-copy with certain protocols (gsiftp:// and https://, as noted above). If you are using the ftp:// or http:// protocols, security is not mandatory and you may skip the rest of this section.
If you do not have a certificate, you must obtain one.
If you are doing this for testing in your own environment, the SimpleCA provided with the Globus Toolkit should suffice.
If not, you must contact the Virtual Organization (VO) with which you are associated to find out whom to ask for a certificate.
One common source is the DOE Science Grid CA, although you must confirm whether or not the resources you wish to access will accept their certificates.
Instructions for proper installation of the certificate should be provided from the source of the certificate.
Please note when your certificates expire; they will need to be renewed or you may lose access to your resources.
2. Now that you have a certificate, you must generate a temporary proxy. Do this by running:
grid-proxy-init
Further documentation for grid-proxy-init can be found here.
3. You are now ready to use globus-url-copy! See the following sections for syntax and command line options and other considerations.
**Command syntax**
The basic syntax for globus-url-copy is:
```
globus-url-copy [optional command line switches] Source_URL Destination_URL
```
where:
<table>
<tbody>
<tr>
<td>[optional command line switches]</td>
<td>See Command line options below for a list of available options.</td>
</tr>
<tr>
<td><strong>Source_URL</strong></td>
<td>Specifies the original URL of the file(s) to be copied. If this is a directory, all files within that directory will be copied.</td>
</tr>
<tr>
<td><strong>Destination_URL</strong></td>
<td>Specifies the URL where you want to copy the files. If you want to copy multiple files, this must be a directory.</td>
</tr>
</tbody>
</table>
**Note**
Any url specifying a directory must end with `/`.
**URL prefixes**
As of GT 3.2, we support the following URL prefixes:
- file:// (on a local machine only)
- ftp://
- gsiftp://
- http://
- https://

---
1 http://www.doegrids.org/pages/cert-request.htm
By default, **globus-url-copy** expects the same kind of host certificates that **globusrun** expects from gatekeepers.
**Note**
We do **not** provide an interactive client similar to the generic FTP client provided with Linux. See the Interactive Clients section below for information on an interactive client developed by NCSA/NMI/TeraGrid.
### URL formats
URLs can be any valid URL as defined by RFC 1738 that have a protocol we support. In general, they have the following format: *protocol://host:port/path*.
**Note**
If the path ends with a trailing / (i.e. /path/to/directory/) it will be considered to be a directory and all files in that directory will be moved. If you want a recursive directory move, you need to add the `-r/-recurse` switch described below.
<table>
<thead>
<tr>
<th>Format</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="http://myhost.mydomain.com/mywebpage/default.html">http://myhost.mydomain.com/mywebpage/default.html</a></td>
<td>Port is not specified; therefore, GridFTP uses protocol default (in this case, 80).</td>
</tr>
<tr>
<td>file:///foo.dat</td>
<td>Host is not specified; therefore, GridFTP uses your local host.</td>
</tr>
<tr>
<td>file://foo.dat</td>
<td>This is also valid but is not recommended: while many servers (including ours) accept this format, it is <strong>not</strong> RFC conformant.</td>
</tr>
</tbody>
</table>
**Important**
For GridFTP (gsiftp://) and FTP (ftp://), it is legal to specify a user name and password in the URL as follows:
```
gsiftp://myname:[mypassword]@myhost.mydomain.com/foo.dat
```
If you are using GSI security, then you may specify the username (but you may **not** include the : or the password) and the grid-mapfile will be searched to see if that is a valid account mapping for your distinguished name (DN). If it is found, the server will setuid to that account. If not, it will fail. It will NOT fall back to your default account.
If you are using anonymous FTP, the username **must** be one of the usernames listed as a valid anonymous name and the password can be anything.
If you are using password authentication, you must specify both your username and password. **THIS IS HIGHLY DISCOURAGED, AS YOU ARE SENDING YOUR PASSWORD IN THE CLEAR ON THE NETWORK.** This is worse than no security; it gives a false sense of security.
Command line options
Informational Options
- **-help | -usage** Prints help.
- **-version** Prints the version of this program.
- **-versions** Prints the versions of all modules that this program uses.
- **-q | -quiet** Suppresses all output for successful operation.
- **-vb | -verbose** During the transfer, displays:
- number of bytes transferred,
- performance since the last update (currently every 5 seconds), and
- average performance for the whole transfer.
- **-dbg | -debugftp** Debugs FTP connections and prints the entire control channel protocol exchange to STDERR. Very useful for debugging. Please provide this any time you are requesting assistance with a globus-url-copy problem.
- **-list <url>** This option will display a directory listing for the given url.
Utility Ease of Use Options
- **-a | -ascii** Converts the file to/from ASCII format to/from local file format.
- **-b | -binary** Does not apply any conversion to the files. This option is turned on by default.
- **-f filename** Reads a list of URL pairs from a filename.
Each line should contain:
`sourceURL destURL`
Enclose URLs with spaces in double quotes (""). Blank lines and lines beginning with the hash sign (#) will be ignored. (A sample file is sketched after this list.)
- **-r | -recurse** Copies files in subdirectories.
- **-notpt | -no-third-party-transfers** Turns third-party transfers off (on by default).
Site firewall and/or software configuration may prevent a connection between the two servers (a **third party transfer**). If this is the case, globus-url-copy will "relay" the data. It will do a GET from the source and a PUT to the destination.
This obviously causes a performance penalty but will allow you to complete a transfer you otherwise could not do.
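The following is a sketch of a transfer list for the -f option described above; the hosts and paths are hypothetical. Each line names a source and a destination, URLs containing spaces are quoted, and the file would be used as globus-url-copy -vb -f transfers.txt:

```
# hypothetical nightly copies
gsiftp://src.example.org/data/run01.dat gsiftp://dst.example.org/archive/run01.dat
"gsiftp://src.example.org/data/run 02.dat" gsiftp://dst.example.org/archive/run02.dat
ftp://ftp.example.org/pub/README file:///tmp/README
```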
Reliability Options
- **-rst | -restart** Restarts failed FTP operations.
- **-rst-retries <retries>** Specifies the maximum number of times to retry the operation before giving up on the transfer. Use 0 for infinite. The default value is 5.
- **-rst-interval <seconds>** Specifies the interval in seconds to wait after a failure before retrying the transfer. Use 0 for an exponential backoff. The default value is 0.
- **-rst-timeout <seconds>** Specifies the maximum time after a failure to keep retrying. Use 0 for no timeout. The default value is 0.
Performance Options
- **-tcp-bs <size> | -tcp-buffer-size <size>** Specifies the size (in bytes) of the TCP buffer to be used by the underlying FTP data channels.
  ⚠️ Important
  This is critical to good performance over the WAN. How do I pick a value?
- **-p <parallelism> | -parallel <parallelism>** Specifies the number of parallel data connections that should be used.
  🔍 Note
  This is one of the most commonly used options. How do I pick a value?
- **-bs <block size> | -block-size <block size>** Specifies the size (in bytes) of the buffer to be used by the underlying transfer methods.
- **-pp** (New starting with GT 4.1.3) Allows pipelining. GridFTP is a command response protocol. A client sends one command and then waits for a "Finished response" before sending another. Adding this overhead on a per-file basis for a large data set partitioned into many small files makes the performance suffer. Pipelining allows the client to have many outstanding, unacknowledged transfer commands at once. Instead of being forced to wait for the "Finished response" message, the client is free to send transfer commands at any time.
- **-mc <filename> source_url** (New starting with GT 4.2.0) Transfers a single file to many destinations. Filename is a line-separated list of destination urls. For more information on this option, click here. Multicasting must be enabled for use on the server side.
Security Related Options
- **-s <subject> | -subject <subject>** Specifies a subject to match with both the source and destination servers.
  **Note**
  Used when the server does not have access to the host certificate (usually when you are running the server as a user). See the section called “If you run a GridFTP server by hand...”.
- **-ss <subject> | -source-subject <subject>** Specifies a subject to match with the source server.
  **Note**
  Used when the server does not have access to the host certificate (usually when you are running the server as a user). See the section called “If you run a GridFTP server by hand...”.
- **-ds <subject> | -dest-subject <subject>** Specifies a subject to match with the destination server.
  **Note**
  Used when the server does not have access to the host certificate (usually when you are running the server as a user). See the section called “If you run a GridFTP server by hand...”.
- **-nodcau | -no-data-channel-authentication** Turns off data channel authentication for FTP transfers (the default is to authenticate the data channel).
  **Warning**
  We do not recommend this option, as it is a security risk.
- **-dcsafe | -data-channel-safe** Sets data channel protection mode to SAFE, otherwise known as integrity or checksumming. Guarantees that the data channel has not been altered, though a malicious party may have observed the data.
  **Warning**
  Rarely used as there is a substantial performance penalty.
- **-dcpriv | -data-channel-private** Sets data channel protection mode to PRIVATE. The data channel is encrypted and checksummed. Guarantees that the data channel has not been altered and, if observed, it won't be understandable.
  **Warning**
  VERY rarely used due to the VERY substantial performance penalty.
Default globus-url-copy usage
A globus-url-copy invocation using the gsiftp protocol with no options (i.e., using all the defaults) will perform a transfer with the following characteristics:
• binary
• stream mode (which implies no parallelism)
• host default TCP buffer size
• encrypted and checksummed control channel
• an authenticated data channel
MODES in GridFTP
GridFTP (as well as normal FTP) defines multiple wire protocols, or MODES, for the data channel.
Most normal FTP servers only implement stream mode (MODE S), i.e. the bytes flow in order over a single TCP connection. GridFTP defaults to this mode so that it is compatible with normal FTP servers.
However, GridFTP has another MODE, called Extended Block Mode, or MODE E. This mode sends the data over the data channel in blocks. Each block consists of 8 bits of flags, a 64 bit integer indicating the offset from the start of the transfer, and a 64 bit integer indicating the length of the block in bytes, followed by a payload of length bytes. Because the offset and length are provided, out of order arrival is acceptable, i.e. the 10th block could arrive before the 9th because you know explicitly where it belongs. This allows us to use multiple TCP channels. If you use the -p | -parallelism option, globus-url-copy automatically puts the servers into MODE E.
Note
Putting -p 1 is not the same as no -p at all. Both will use a single stream, but the default will use stream mode and -p 1 will use MODE E.
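For example, the following invocation (hypothetical hosts, file, and buffer size) requests four parallel streams, which implicitly switches both servers into MODE E:

```
globus-url-copy -vb -p 4 -tcp-bs 2097152 gsiftp://src.example.org/data/big.dat gsiftp://dst.example.org/data/big.dat
```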
If you run a GridFTP server by hand...
If you run a GridFTP server by hand, you will need to explicitly specify the subject name to expect. The subject option provides globus-url-copy with a way to validate the remote servers with which it is communicating. Not only must the server trust globus-url-copy, but globus-url-copy must trust that it is talking to the correct server. The validation is done by comparing host DNs or subjects.
If the GridFTP server in question is running under a host certificate then the client assumes a subject name based on the server's canonical DNS name. However, if it was started under a user certificate, as is the case when a server is started by hand, then the expected subject name must be explicitly stated. This is done with the -ss, -ds, and -s options.
-ss Sets the sourceURL subject.
-ds Sets the destURL subject.
-s Sets the subject for both URLs. If you use this option alone, it will set both URLs to be the same. You can see an example of this usage under the Troubleshooting section.
**Note**
This is an unusual use of the client. Most times you need to specify both URLs.
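A sketch of such an invocation against two servers started by hand under the same user certificate; the DN, hosts, and paths are hypothetical:

```
globus-url-copy -vb \
    -ss "/DC=org/DC=example/OU=People/CN=Jane Doe" \
    -ds "/DC=org/DC=example/OU=People/CN=Jane Doe" \
    gsiftp://host1.example.org:5000/tmp/src_file \
    gsiftp://host2.example.org:5000/tmp/dst_file
```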
**How do I choose a value?**
**How do I choose a value for the TCP buffer size (-tcp-bs) option?**
The value you should pick for the TCP buffer size (-tcp-bs) depends on how fast you want to go (your bandwidth) and how far you are moving the data (as measured by the Round Trip Time (RTT) or the time it takes a packet to get to the destination and back).
To calculate the value for -tcp-bs, use the following formula (this assumes that Mega means 1000^2 rather than 1024^2, which is typical for bandwidth):
\[-tcp-bs = \text{bandwidth in Megabits per second (Mbs)} \times \text{RTT in milliseconds (ms)} \times 1000 / 8\]
As an example, if you are using fast ethernet (100 Mbs) and the RTT was 50 ms it would be:
\[-tcp-bs = 100 \times 50 \times 1000 / 8 = 625,000 \text{ bytes}.\]
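As another hypothetical example, a 1 Gbs path with a 70 ms RTT gives 1000 × 70 × 1000 / 8 = 8,750,000 bytes, which can be passed directly on the command line (hosts and file names are made up):

```
globus-url-copy -vb -tcp-bs 8750000 gsiftp://src.example.org/data/big.dat gsiftp://dst.example.org/data/big.dat
```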
So, how do you come up with values for bandwidth and RTT? To determine RTT, use either ping or traceroute. They both list RTT values.
**Note**
You must be on one end of the transfer and ping the other end. This means that if you are doing a third party transfer you have to run the ping or traceroute between the two server hosts, not from your client.
The bandwidth is a little trickier. Any point in the network can be the bottleneck, so you either need to talk with your network engineers to find out what the bottleneck link is or just assume that your host is the bottleneck and use the speed of your network interface card (NIC).
**Note**
The value you pick for -tcp-bs limits the top speed you can achieve. You will NOT get bandwidth any higher than what you used in the calculation (assuming the RTT is actually what you specified; it varies a little with network conditions). So, if for some reason you want to limit the bandwidth you get, you can do that by judicious choice of -tcp-bs values.
So where does this formula come from? Because it uses the bandwidth and the RTT (also known as the latency or delay) it is called the bandwidth delay product. The very simple explanation is this: TCP is a reliable protocol. It must save a copy of everything it sends out over the network until the other end acknowledges that it has been received.
As a simple example, if I can put one byte per second onto the network, and it takes 10 seconds for that byte to get there, and 10 seconds for the acknowledgment to get back (RTT = 20 seconds), then I would need at least 20 bytes of storage. Then, hopefully, by the time I am ready to send byte 21, I have received an acknowledgement for byte 1 and I can free that space in my buffer. If you want a more detailed explanation, try the following links on TCP tuning:
- [http://www.psc.edu/networking/perf_tune.html](http://www.psc.edu/networking/perf_tune.html)
How do I choose a value for the parallelism (-p) option?
For most instances, using 4 streams is a very good rule of thumb. Unfortunately, there is not a good formula for picking an exact answer. The shape of the graph shown here is very characteristic.
Figure 2.1. Effect of Parallel Streams in GridFTP
You get a strong, nearly linear, increase in bandwidth, then a sharp knee, after which additional streams have very little impact. Where this knee is depends on many things, but it is generally between 2 and 10 streams. Higher bandwidth, longer round trip times, and more congestion in the network (which you usually can only guess at based on how applications are behaving) will move the knee higher (more streams needed).
In practice, between 4 and 8 streams are usually sufficient. If things look really bad, try 16 and see how much difference that makes over 8. However, anything above 16, other than for academic interest, is basically wasting resources.
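One practical way to find the knee is to repeat the same transfer with increasing stream counts and compare the rates reported by -vb. A sketch with hypothetical hosts, using /dev/zero and /dev/null to take disks out of the picture (each run continues until you stop it with control-c):

```
globus-url-copy -vb -p 4 gsiftp://src.example.org/dev/zero gsiftp://dst.example.org/dev/null
globus-url-copy -vb -p 8 gsiftp://src.example.org/dev/zero gsiftp://dst.example.org/dev/null
```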
Limitations
There are no limitations for globus-url-copy in GT 4.2.0.
Interactive clients for GridFTP
The Globus Project does not provide an interactive client for GridFTP. Any normal FTP client will work with a GridFTP server, but it cannot take advantage of the advanced features of GridFTP. The interactive clients listed below take advantage of the advanced features of GridFTP.
There is no endorsement implied by their presence here. We make no assertion as to the quality or appropriateness of these tools; we simply provide this list for your convenience. We will not answer questions, accept bug reports, or in any way, shape, or form be responsible for these tools, although they should have mechanisms of their own for such things.
UberFTP was developed at the NCSA under the auspices of NMI and TeraGrid:
- NCSA UberFTP download: http://dims.ncsa.uiuc.edu/set/uberftp/download.html
Chapter 3. Graphical User Interface
Globus does not provide any interactive client for GridFTP, either GUI or text based. However, NCSA, as part of their TeraGrid activity, produces a text based interactive client called UberFTP, which you may want to check out. See the section called “Interactive clients for GridFTP” for more information.
Chapter 4. Security Considerations
1. Security Considerations
1.1. Ways to configure your server
As discussed in Section 2, "Types of configurations", there are three ways to configure your GridFTP server: the default configuration (like any normal FTP server), separate (split) process configuration, and striped configuration. The latter two provide greater levels of security as described here.
1.2. New authentication option
There is a new authentication option available for GridFTP in GT 4.2.0:
- SSH Authentication: Globus GridFTP now supports SSH-based authentication for the control channel. In order for this to work:
- Configure server to support SSH authentication,
- Configure client (globus-url-copy) to support SSH authentication,
- Use sshftp:// urls in globus-url-copy
For more information, see Section 4, "SSHFTP (GridFTP-over-SSH)".
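Once both sides are configured, SSH authentication is selected simply by the URL scheme. A sketch with a hypothetical user, host, and path:

```
globus-url-copy -vb sshftp://user@host.example.org/home/user/data.dat file:///tmp/data.dat
```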
1.3. Firewall requirements
If the GridFTP server is behind a firewall:
1. Contact your network administrator to open up port 2811 (for GridFTP control channel connection) and a range of ports (for GridFTP data channel connections) for the incoming connections. If the firewall blocks the outgoing connections, open up a range of ports for outgoing connections as well.
2. Set the environment variable GLOBUS_TCP_PORT_RANGE:
export GLOBUS_TCP_PORT_RANGE=min,max
where min,max specify the port range that you have opened for the incoming connections on the firewall. This restricts the listening ports of the GridFTP server to this range. A recommended range size is 1000 ports (e.g., 50000,51000), but it really depends on how much use you expect.
3. If you have a firewall blocking the outgoing connections and you have opened a range of ports, set the environment variable GLOBUS_TCP_SOURCE_RANGE:
export GLOBUS_TCP_SOURCE_RANGE=min,max
where min,max specify the port range that you have opened for the outgoing connections on the firewall. This restricts the outbound ports of the GridFTP server to this range. Recommended range is twice the range used for GLOBUS_TCP_PORT_RANGE, because if parallel TCP streams are used for transfers, the listening port would remain the same for each connection but the connecting port would be different for each connection.
Note
If the server is behind NAT, the `--data-interface <real ip/hostname>` option needs to be used on the server.
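Putting the server-side pieces together, a setup behind a firewall and NAT might look like the following sketch; the port ranges, address, and exact server invocation are hypothetical and depend on your installation:

```
export GLOBUS_TCP_PORT_RANGE=50000,51000
export GLOBUS_TCP_SOURCE_RANGE=50000,52000
$GLOBUS_LOCATION/sbin/globus-gridftp-server -S -p 2811 --data-interface 192.0.2.10
```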
If the GridFTP client is behind a firewall:
1. Contact your network administrator to open up a range of ports (for GridFTP data channel connections) for the incoming connections. If the firewall blocks the outgoing connections, open up a range of ports for outgoing connections as well.
2. Set the environment variable GLOBUS_TCP_PORT_RANGE
```
export GLOBUS_TCP_PORT_RANGE=min,max
```
where min,max specify the port range that you have opened for the incoming connections on the firewall. This restricts the listening ports of the GridFTP client to this range. A recommended range size is 1000 ports (e.g., 50000,51000), but it really depends on how much use you expect.
3. If you have a firewall blocking the outgoing connections and you have opened a range of ports, set the environment variable GLOBUS_TCP_SOURCE_RANGE:
```
export GLOBUS_TCP_SOURCE_RANGE=min,max
```
where min,max specify the port range that you have opened for the outgoing connections on the firewall. This restricts the outbound ports of the GridFTP client to this range. Recommended range is twice the range used for GLOBUS_TCP_PORT_RANGE, because if parallel TCP streams are used for transfers, the listening port would remain the same for each connection but the connecting port would be different for each connection.
Additional information on Globus Toolkit Firewall Requirements is available [here](http://www.globus.org/toolkit/security/firewalls/)
Chapter 5. Troubleshooting
If you are having problems using the GridFTP server, try the steps listed below. If you get an error, check the server logs if you have access to them. By default, the server logs to stderr; if it is running from inetd or its execution mode is detached, logging is disabled by default.
The command line options -d, -log-level, -L, and -logdir can affect where logs are written, as can the configuration file options log_single and log_unique. See the globus-gridftp-server(1) man page for more information on these and other configuration options.
You should also be familiar with the security considerations.
For a list of common errors in GT, see Error Codes.
1. Error Codes in GridFTP
Table 5.1. GridFTP Errors
<table>
<thead>
<tr>
<th>Error Code</th>
<th>Definition</th>
<th>Possible Solutions</th>
</tr>
</thead>
<tbody>
<tr>
<td>globus_ftp_client: the server responded with an error 530 530-globus_xio: Authentication Error 530-OpenSSL Error: s3_srvr.c:2525: in library: SSL routines, function SSL3_GET_CLIENT_CERTIFICATE: no certificate returned 530-globus_gsi_callback_module: Could not verify credential 530-globus_gsi_callback_module: Can't get the local trusted CA certificate: Untrusted self-signed certificate in chain with hash d1b603c3 530 End.</td>
<td>This error message indicates that the GridFTP server doesn't trust the certificate authority (CA) that issued your certificate.</td>
<td>You need to ask the GridFTP server administrator to install your CA certificate chain in the GridFTP server's trusted certificates directory.</td>
</tr>
<tr>
<td>gss_init_sec_context failed OpenSSL Error: s3_clnt.c:951: in library: SSL routines, function SSL3_GET_SERVER_CERTIFICATE: certificate verify failed globus_gsi_callback_module: Could not verify credential globus_gsi_callback_module: Can't get the local trusted CA certificate: Untrusted self-signed certificate in chain with hash d1b603c3</td>
<td>This error message indicates that your local system doesn't trust the certificate authority (CA) that issued the certificate on the resource you are connecting to.</td>
<td>You need to ask the resource administrator which CA issued their certificate and install the CA certificate in the local trusted certificates directory.</td>
</tr>
</tbody>
</table>
2. Establish control channel connection
Verify that you can establish a control channel connection and that the server has started successfully by telnetting to the port on which the server is running:
```
% telnet localhost 2811
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 GridFTP Server mldev.mcs.anl.gov 2.0 (gcc32dbg, 1113865414-1) ready.
```
If you see anything other than a 220 banner such as the one above, the server has not started correctly.
Verify that there are no configuration files being unexpectedly loaded from /etc/grid-security/gridftp.conf or $GLOBUS_LOCATION/etc/gridftp.conf. If those files exist, and you did not intend for them to be used, rename them to .save, or specify -c none on the command line and try again.
If you can log into the machine where the server is, try running the server from the command line with only the -s option:
$GLOBUS_LOCATION/sbin/globus-gridftp-server -s
The server will print the port it is listening on:
Server listening at gridftp.mcs.anl.gov:57764
Now try telnet to that port. If you still do not get the banner listed above, something is preventing the socket connection. Check firewalls, tcp-wrapper, etc.
If you now get a correct banner, add -p 2811 (you will have to disable (x)inetd on port 2811 if you are using it, or you will get a "port already in use" error):
$GLOBUS_LOCATION/sbin/globus-gridftp-server -s -p 2811
Now telnet to port 2811. If this does not work, something is blocking port 2811. Check firewalls, tcp-wrapper, etc.
If this works correctly, re-enable your normal server, but remove all options except -i, -s, or -S.
Now telnet to port 2811. If this does not work, something is wrong with your service configuration. Check /etc/services and (x)inetd config, have (x)inetd restarted, etc.
If this works, begin adding options back one at a time, verifying that you can telnet to the server after each option is added. Continue this till you find the problem or get all the options you want.
At this point, you can establish a control connection. Now try running globus-url-copy.
3. Try running globus-url-copy
Once you've verified that you can establish a control connection, try to make a transfer using globus-url-copy.
If you are doing a client/server transfer (one of your URLs has file: in it) then try:
globus-url-copy -vb -dbg gsiftp://host.server.running.on/dev/zero file:///dev/null
This will run until you control-c the transfer. If that works, reverse the direction:
globus-url-copy -vb -dbg file:///dev/zero gsiftp://host.server.running.on/dev/null
Again, this will run until you control-c the transfer.
If you are doing a third party transfer, run this command:
globus-url-copy -vb -dbg gsiftp://host.server1.on/dev/zero gsiftp://host.server2.on/dev/null
Again, this will run until you control-c the transfer.
If the above transfers work, try your transfer again. If it fails, you likely have some sort of file permissions problem, typo in a file name, etc.
4. **If your server starts...**
If the server has started correctly, and your problem is with a security failure or gridmap lookup failure, verify that you have security configured properly [here](http://dev.globus.org/wiki/Mailing_Lists).
If the server is running and your client successfully authenticates but has a problem at some other time during the session, please ask for help on [gt-user@globus.org](mailto:gt-user@globus.org). When you send mail or submit bugs, please always include as much of the following information as possible:
- Specs on all hosts involved (OS, processor, RAM, etc).
- `globus-url-copy -version`
- `globus-url-copy -versions`
- Output from the telnet test above.
- The actual command line you ran with `-dbg` added. Don't worry if the output gets long.
- Check that you are getting a fully qualified domain name (FQDN) and that `/etc/hosts` is sane.
- The server configuration and setup (`/etc/services` entries, `(x)inetd` configs, etc.).
- Any relevant lines from the server logs (not the entire log please).
---
1 [http://dev.globus.org/wiki/Mailing_Lists](http://dev.globus.org/wiki/Mailing_Lists)
Chapter 6. Usage statistics collection by the Globus Alliance
1. GridFTP-specific usage statistics
The following GridFTP-specific usage statistics are sent in a UDP packet at the end of each transfer, in addition to the standard header information described in the Usage Stats\(^1\) section.
- Start time of the transfer
- End time of the transfer
- Version string of the server
- TCP buffer size used for the transfer
- Block size used for the transfer
- Total number of bytes transferred
- Number of parallel streams used for the transfer
- Number of stripes used for the transfer
- Type of transfer (STOR, RETR, LIST)
- FTP response code -- Success or failure of the transfer
Note
The client (globus-url-copy) does NOT send any data. It is the servers that send the usage statistics.
We have made a concerted effort to collect only data that is not too intrusive or private and yet still provides us with information that will help improve and gauge the usage of the GridFTP server. Nevertheless, if you wish to disable this feature for GridFTP only, use the -disable-usage-stats option of globus-gridftp-server. Note that you can disable transmission of usage statistics globally for all C components by setting GLOBUS_USAGE_OPTOUT=1 in your environment.
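For example, to opt out globally in the shell before starting the server or client:

```
export GLOBUS_USAGE_OPTOUT=1
```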
Also, please see our policy statement\(^2\) on the collection of usage statistics.
\(^1\) ./././Usage_Stats.html
\(^2\) ./././Usage_Stats.html
Glossary
C
client A process that sends commands and receives responses. Note that in GridFTP, the client may or may not take part in the actual movement of data.
E
extended block mode (MODE E) MODE E is a critical GridFTP component because it allows for out of order reception of data. This in turn, means we can send the data down multiple paths and do not need to worry if one of the paths is slower than the others and the data arrives out of order. This enables parallelism and striping within GridFTP. In MODE E, a series of “blocks” are sent over the data channel. Each block consists of:
- an 8 bit flag field,
- a 64 bit field indicating the offset in the transfer,
- and a 64 bit field indicating the length of the payload,
- followed by length bytes of payload.
Note that since the offset and length are included in the block, out of order reception is possible, as long as the receiving side can handle it, either via something like a seek on a file, or via some application level buffering and ordering logic that will wait for the out of order blocks.
S
server A process that receives commands and sends responses to those commands. Since it is a server or service, and it receives commands, it must be listening on a port somewhere to receive the commands. Both FTP and GridFTP have IANA registered ports. For FTP it is port 21, for GridFTP it is port 2811. This is normally handled via inetd or xinetd on Unix variants. However, it is also possible to implement a daemon that listens on the specified port. This is described more fully in the Architecture section of the GridFTP Developer's Guide.
stream mode (MODE S) The only mode normally implemented for FTP is MODE S. This is simply sending each byte, one after another over the socket in order, with no application level framing of any kind. This is the default and is what a standard FTP server will use. This is also the default for GridFTP.
T
third party transfers In the simplest terms, a third party transfer moves a file between two GridFTP servers.
The following is a more detailed, programmatic description.
In a third party transfer, there are three entities involved: the client, which only orchestrates the transfer but does not actually take part in it, and two servers, one of which will be sending data to the other. This scenario is common in Grid applications where you may wish to stage data from a data store somewhere to a supercomputer you have reserved. The commands are quite similar to the client/server transfer. However, now the client must establish two control channels, one to each server. It will then choose one to listen, and send it the PASV command. When it responds with the IP/port it is listening on, the client will send that IP/port as part of the PORT command to the other server. This will cause the second server to connect to the first server, rather than the client. To initiate the actual movement of the data, the client then sends the RETR “filename” command to the server that will read from disk and write to the network (the “sending” server) and sends the STOR “filename” command to the other server, which will read from the network and write to the disk (the “receiving” server).
See Also client/server transfer.
Index
A
accessing data
HPSS, 3
non-POSIX data source, 2
non-POSIX file data source that has a POSIX interface, 2
SRB, 3
C
commandline tool
globus-url-copy, 8
E
errors, 22
G
globus-url-copy, 8
GUI information for GridFTP, 18
I
interactive clients
UberFTP, 16
M
moving files
basic procedure, 1
between two GridFTP servers (a third party transfer), 2
existing FTP, 4
from a server to your file system, 2
from your file system to the server, 1
many outstanding transfers at once (pipelining), 4
single file to many destinations, 4
advanced options, 5
user-defined network routes, 5
S
security considerations for GridFTP, 19
T
troubleshooting for GridFTP, 21
U
usage statistics for GridFTP, 25
The HighPerMeshes framework for numerical algorithms on unstructured grids
Samer Alhaddad1 | Jens Förstner1 | Stefan Groth2 | Daniel Grünewald3 | Yevgen Grynko1 | Frank Hannig2 | Tobias Kenter1 | Franz-Josef Pfreundt3 | Christian Plessl1 | Merlind Schotte4 | Thomas Steinke4 | Jürgen Teich2 | Martin Weiser4 | Florian Wende4
1Paderborn Center for Parallel Computing and Department of Computer Science and Department of Electrical Engineering, Paderborn University, Paderborn, Germany
2Hardware/Software Co-Design, Department of Computer Science, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
3Fraunhofer Institut für Techno- und Wirtschaftsmathematik, Kaiserslautern, Germany
4Zuse Institute, Berlin, Germany
Correspondence
Stefan Groth, Chair of Computer Science 12, Cauerstr. 11, 91058 Erlangen, Germany.
Email: stefan.groth@fau.de
Summary
Solving partial differential equations (PDEs) on unstructured grids is a cornerstone of engineering and scientific computing. Heterogeneous parallel platforms, including CPUs, GPUs, and FPGAs, enable energy-efficient and computationally demanding simulations. In this article, we introduce the HighPerMeshes C++-embedded domain-specific language (DSL) that bridges the abstraction gap between the mathematical formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different programming models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented. A code generator and a matching back end allow the acceleration of HighPerMeshes code with GPUs. Finally, the achievable performance and scalability are demonstrated for different example problems.
KEYWORDS
code generation, distributed computing, domain-specific languages, numerical algorithms
1 INTRODUCTION
Simulations of physical systems described by Partial Differential Equations (PDEs) are the cornerstone of computational science and engineering. The ever-increasing number and scale of simulations have led to the rise of different and heterogeneous parallel computing platforms, ranging from multicore CPUs to parallel distributed systems to GPUs and FPGAs. Adapting and implementing complex simulation algorithms on these different architectures is a demanding task requiring in-depth computer science knowledge. Consequently, many large-scale simulation codes address only a narrow and often traditional range of computing environments, missing the performance opportunities offered by new architectures.
In this article, we present the HighPerMeshes embedded Domain-Specific Language (DSL) providing the right abstraction layer to C++ application developers to implement efficient mesh-based algorithms for PDE problems on unstructured grids. The focus of the DSL is on finite element
(FE) and discontinuous Galerkin (DG) or finite volume (FV) discretizations to address iterative and matrix-free solvers as well as time stepping schemes. Large parts of PDE simulation problems thus can be covered. HighPerMeshes draws heavily on the C++17 standard and template metaprogramming for genericity and extensibility. Additionally, compile-time information through template parameters can benefit the code generation for specific target architectures. Furthermore, we address the acceleration of HighPerMeshes with GPUs. To stay as general as possible, we use OpenCL as a back end, which allows targeting various GPUs and other heterogeneous architectures such as FPGAs. For this purpose, we provide a code generator that produces the necessary OpenCL code from HighPerMeshes code and a back end that allows executing the generated code.
2 | THE HIGHPERMESHES DOMAIN-SPECIFIC LANGUAGE
Picking the right abstraction level is central for every DSL or library interface targeting mesh-based algorithms for PDEs. On the one hand, it needs to provide idioms for specifying the algorithmic building blocks on an abstraction level that allows an efficient mapping to different computing platforms. On the other hand, it should be detailed enough to allow implementing a wide range of established or yet to be developed discretization schemes and numerical algorithms. The HighPerMeshes DSL aims at providing abstractions on a level that is just high enough to allow for an efficient mapping to sequential and multithreaded CPU execution, distributed memory systems, and accelerators. On this level, the core components of mesh-based PDE algorithms include mesh data structures, the association of Degrees of Freedom (DoFs) to mesh entities such as cells and vertices, and the definition of kernel functions that encapsulate local computations with shape functions defined on single mesh cells or faces.
2.1 | Mesh interface
Computational meshes decompose the computational domain $\Omega \subset \mathbb{R}^d$ into simple shapes such as triangles or tetrahedra by which PDE solutions can be represented. Unstructured meshes do so in an irregular pattern that can be adapted to complex geometries or local solution features in a flexible way. Unlike structured meshes, neighborhood relations between these cells are not implied by the storage arrangement of their constituting vertices, but are usually defined through connectivity lists that specify these neighborhoods. Therefore, the storage efficiency of unstructured meshes can be very low if the specifics of the hardware architecture are not taken into account. Similarly, when accessing or iterating over mesh entities (cells, faces, edges, and vertices for $d = 3$), the memory structuring and arrangement of, for example, geometrically neighboring entities can be critical to performance and present optimization targets on the mesh implementation for different architectures.
The construction of a mesh in the HighPerMeshes DSL starts from a set of vertices $V = \{\mathbf{v}_m \in \mathbb{R}^d\}$ and a set $C = \{i\}_{n = 0, \ldots, \#\text{cells} - 1}$ of connectivity lists $i_n \subset \{0, \ldots, |V| - 1\}$ representing the cells in the mesh. Users can create meshes by providing $V$ and $C$ directly or by using one of the available import parsers for common mesh data files. Each $i \in C$ references into the vertex set $V$ to encode an entity of the cell dimensionality $d_{\text{cell}} \leq d$. Subentities or constituting entities like edges and faces correspond to index sets $j \subset i \in C$ that are deduced according to a particular scheme that is specific to the entity type. All entities are stored in a $(d_{\text{cell}} + 1)$-dimensional set data-structure using their index sets. In this way, the duplication of subentities is avoided, and each entity can be assigned a unique identifier (ID) so that finding a specific one through its vertices can happen in logarithmic time complexity. In addition, our mesh implementation manages a lookup table which for each entity holds the IDs of all its constituting entities with one dimension lower, and another one with the IDs of all incident super-entities, if present.
2.2 | Buffer types for storing coefficient vectors
PDE solutions are generally discretized using finite-dimensional ansatz spaces and are represented by coefficient vectors with respect to a certain basis. In FE, FV, and DG methods, the basis functions are associated with mesh entities and have a support contained in the union of the cells incident to their entity. The mapping of coefficients, or DoFs, to storage locations and access to them depends on the target architecture and may involve nontrivial communication. Therefore, the DSL provides buffer types for coefficient vector storage to relieve the user from these considerations.
Depending on the ansatz space, a particular number of basis functions is associated with mesh entities of different dimensions. Therefore, the number of coefficients $\eta$ associated with entities of dimension $d \in \{0, \ldots, d_{\text{cell}}\}$ has to be specified when constructing a buffer. Additionally, global values as coefficients of the constant basis function can be stored, for example,
for $d_{\text{cell}} = d = 3$. The buffer holds one value of type `float` for each node, edge, face, and the cell itself. Two additional entries are provided for global values.
DoFs are accessed through a "local-view object" (`local_view` in Listing 1, line 8) inside kernel functions. These local views are a tuple of implementation-specific objects that are accessible with the `GetDof` function, which requests DoFs of a certain dimension. This is necessary because access patterns may provide DoFs associated with mesh entities of different dimensions. Given a data access pattern (Section 2.3) and a specific entity (typical program executions loop over all or a subset of the entities in the mesh, one after the other), the corresponding local view makes for a linearly indexable type inside the kernel function, thereby hiding data layout and storage internals.
### 2.3 Iterating over the mesh with local kernels
In the PDE solver algorithms that we target, a significant part of PDE computation on meshes involves the evaluation of values, derivatives, or integrals on cells or faces, and is therefore local. This allows for various kinds of parallelization, depending on the target architecture. Typically, these local calculations in space are embedded into time stepping loops or iterative algorithms, which imply dependencies based on the data access patterns of the kernels. With a scheduler that suitably resolves these dependencies, additional parallelism can be exploited by partially overlapping subsequent time steps.
In HighPerMeshes, the application developer specifies the calculations as local kernels at entity granularity and invokes a dispatcher to take care of their parallel execution and scheduling. Line 1 of Listing 1 shows the definition of a distributed dispatcher that uses the command line arguments to set up its environment. The advantage of using this dispatcher model is a complete separation of parallelization techniques and kernel definitions. The interface is technology-agnostic, and the user does not need to know the intricate details of parallel and distributed programming.
The dispatcher's `Execute` method takes a number of kernels to be executed as its arguments. If required, those arguments might be supplemented by a range of time steps as shown in line 4 of Listing 1 in order to iterate the defined sequence of kernels more than once in the specified range. Each kernel must define a range of entities to iterate over. To enable flexible parallelization strategies, the DSL does not guarantee a processing order for these entities. For example, the function call `mesh.GetEntityRange<CellDimension>()` in line 6 specifies that the dispatcher iterates over all cells. `ForEachEntity` in line 5 defines an iteration over all entities in that range. Here, HighPerMeshes provides another option: `ForEachIncidence<D>` iterates over all subentities of a certain dimension $D$ for the entities in the given range.
The kernel requires a tuple of access definitions, as seen in line 7. Access definitions specify the mode (any of `Read`, `Write`, and `ReadWrite`) and the access pattern for the DoF access. This allows the scheduler to calculate dependencies between kernels, thereby avoiding conflicting DoF accesses in scatter operations despite parallelization. Access patterns determine the DoFs relevant for the calculation by specifying a set of mesh entities incident or adjacent to the local entity. `Cell` in line 7 means that the kernel requires access to the DoFs from the given `buffer` that are associated with the local cell, as frequently used in DG methods. Other common access patterns involve a local cell and all of its incident subentities, usually encountered in FE methods, or the two cells incident to a face for flux computations in DG or FV methods. While HighPerMeshes aims at providing all access patterns necessary for common kernel descriptions in FE or DG methods, they can be easily extended by providing the required neighborhood relationship in the mesh interface.
The last argument is a user-defined lambda, that is, an anonymous function (line 8). This lambda defines what is actually computed for each entity in the given range and must be callable with the specified entities, time steps, and a `local_view object` `local_view` as its arguments. The latter allows access to the requested DoFs.
```cpp
DistributedDispatcher dispatcher{argc, argv};

dispatcher.Execute(
    Range(100),
    ForEachEntity(
        mesh.GetEntityRange<CellDimension>(),
        std::tuple(Write(Cell(buffer))),
        [](const auto& cell, auto step, auto& local_view) { /* kernel body */ }));
```
Listing 1: Example of a dispatcher definition and kernel execution
### 3 USING THE DSL
In this section, examples and code segments are presented to illustrate the methods described in Section 2 and to explain their use. The examples are elliptic and parabolic differential equations that were numerically solved using the DSL. The grids are mainly irregular 3D simplex meshes, but
TABLE 1 Overview of usage examples
<table>
<thead>
<tr>
<th>Section</th>
<th>Problem/Method</th>
<th>Local kernels</th>
<th>Solvers</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.1</td>
<td>Poisson, FE method</td>
<td>Matrix-free and rhs assembly</td>
<td>CG method, iterative</td>
</tr>
<tr>
<td>3.2</td>
<td>Maxwell’s eqs., DG method</td>
<td>Rhs assembly, multiple kernels</td>
<td>RK time stepping</td>
</tr>
<tr>
<td>3.3</td>
<td>Monodomain, FE method</td>
<td>Solver assembly</td>
<td>Euler time stepping</td>
</tr>
</tbody>
</table>
3.1 Matrix-free solver for the Poisson equation
For illustrating the usage of the DSL, the elliptic Poisson problem
\[-\Delta u = f \text{ in } \Omega \subset \mathbb{R}^3, \quad u = 0 \text{ on } \Gamma \subset \mathbb{R}^3\]
with homogeneous Dirichlet boundary conditions is solved by a matrix-free conjugate gradient (CG) method.\(^1\)\(^2\) By discretizing (1) with linear finite elements on a tetrahedralization of \(\Omega\), that is, with one DoF per vertex, a system \(Ax = b\) of linear equations is obtained.\(^3\) Since \(A\) is symmetric and positive definite, its solution is the minimizer of the convex minimization problem
\[F(x) = \frac{1}{2}x^TAx - b^Tx \rightarrow \min.\]
In order to solve this linear system of equations, the right-hand side (rhs) \(b\) must be assembled. This is done using the buffer datatype and the loop `ForEachEntity`, which iterates over the vertices of each cell (in this case tetrahedra) and stores the corresponding value in the buffer (Figure 1, code line 8).
The homogeneous Dirichlet boundary conditions can be built into the rhs here as well.
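The rhs-assembly loop itself is not reproduced here, so the following is a minimal sketch of what it could look like, following the conventions of Listing 1; the names `rhs`, `numVertices`, and the per-vertex contribution `f(cell, i)` are illustrative assumptions, not the paper's actual code.
```cpp
// Hedged sketch of a rhs assembly in the style of Listing 1; rhs, numVertices,
// and f(cell, i) are placeholders for the actual buffer and source term.
auto assembleRhs = ForEachEntity(
    mesh.GetEntityRange<CellDimension>(),   // iterate over all cells
    tuple(ReadWrite(Vertex(rhs))),          // DoFs at the vertices of each cell
    [&](const auto& cell, auto /*step*/, auto& local_view) {
        auto& rhs = local_view;
        ForEach(numVertices, [&](const int i) {
            rhs[i] += f(cell, i);           // accumulate the local contribution
        });
    });
```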
The matrix–vector product \( s = Ax \) is computed cell-wise from local stiffness matrices \( A_{\text{loc}} \), with the DoFs located at the vertices of each cell and with \( \phi \) as shape functions (see line 5 of Listing 2). To show that the DSL provides a compact syntax, we provide a sketch of an equivalent Matlab implementation in Listing 3 for comparison. Note that the functions `vertexIndicesByCell` and `localStiffnessMatrix` have to be implemented by the user, adding additional programming effort to the Matlab code. We would like to stress that the code, except for some syntax overhead, is very close to the underlying mathematical concepts and completely independent of the target architecture, allowing users without any knowledge of parallel or distributed computing to concentrate on its mathematical structure. Finally, the result can be saved into a file and visualized using, for example, ParaView.
Listing 2: Example of a matrix-free rhs assembly
```cpp
auto AssembleMatrixVecProduct = ForEachEntity(
    cells,
    tuple(ReadWrite(Vertex(s)), Read(Vertex(x))),   // reconstructed access modes
    [&](const auto& cell, auto /*step*/, auto& local_view) {
        auto& [s, x] = local_view;                  // local DoFs at the cell's vertices
        // Alloc: local stiffness matrix of this cell (computation elided in the source)
        s += Alloc * x;
    });
```
Listing 3: Equivalent Matlab sketch of a matrix-free rhs assembly
```matlab
function s = AssembleMatrixVecProduct(x)
  s = zeros(size(x));                         % accumulator for s = A*x
  for cell = 1:nCells                         % nCells: number of mesh cells
    indices = vertexIndicesByCell(:, cell);   % user-provided index mapping
    Alloc   = localStiffnessMatrix(cell);     % user-provided local matrix
    s(indices) = s(indices) + Alloc * x(indices);
  end
end
```
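For orientation, the following is a minimal, library-agnostic sketch of the matrix-free CG iteration that drives such a product; `applyA` stands for any routine computing \(Ax\) cell-wise (such as Listing 2) and is an assumption of this sketch, not a HighPerMeshes API.
```cpp
#include <cmath>
#include <functional>
#include <vector>

// Minimal matrix-free conjugate gradient sketch: A is only accessed through
// applyA and never assembled explicitly.
std::vector<double> ConjugateGradient(
    const std::function<std::vector<double>(const std::vector<double>&)>& applyA,
    const std::vector<double>& b, int maxIter, double tol)
{
    std::vector<double> x(b.size(), 0.0), r = b, p = b;
    auto dot = [](const std::vector<double>& a, const std::vector<double>& c) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * c[i];
        return s;
    };
    double rr = dot(r, r);
    for (int k = 0; k < maxIter && std::sqrt(rr) > tol; ++k) {
        const std::vector<double> Ap = applyA(p);
        const double alpha = rr / dot(p, Ap);
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];   // update the iterate
            r[i] -= alpha * Ap[i];  // update the residual
        }
        const double rrNew = dot(r, r);
        const double beta = rrNew / rr;
        for (std::size_t i = 0; i < p.size(); ++i) p[i] = r[i] + beta * p[i];
        rr = rrNew;
    }
    return x;
}
```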
### 3.2 Discontinuous Galerkin time domain (DGTD) Maxwell solver
In the following, we sketch an implementation of a Maxwell solver based on the DGTD numerical scheme. An initial value problem is solved in the time domain in a free space mesh with perfect electric conductor (PEC) boundary conditions. The user can modify the code accordingly if field sources, materials, or absorbing boundaries are needed. The simulation domain is discretized in a triangular or tetrahedral mesh, which is used as an input. Then, DoFs or calculation points are created within the cells, depending on the ansatz order specified by the user. For example, a three-dimensional simulation with third-order accuracy requires 20 DoFs in each cell to represent the unknown fields. The right-hand sides of Maxwell’s equations are evaluated during Runge–Kutta time integration at each time step according to the DGTD method formulation
\[
\dot{\mathbf{E}} = \mathbf{D} \times \mathbf{H} + \mathbf{M}^{-1} \mathbf{F} \left( \Delta \mathbf{E} - \hat{n} \,(\hat{n} \cdot \Delta \mathbf{E}) + \hat{n} \times \Delta \mathbf{H} \right),
\]
\[
\dot{\mathbf{H}} = -\mathbf{D} \times \mathbf{E} + \mathbf{M}^{-1} \mathbf{F} \left( \Delta \mathbf{H} - \hat{n} \,(\hat{n} \cdot \Delta \mathbf{H}) + \hat{n} \times \Delta \mathbf{E} \right).
\]
Here \( \mathbf{D} \times \mathbf{H} \) and \( \mathbf{D} \times \mathbf{E} \) are the curls of the magnetic and electric fields, respectively, \( \mathbf{M} \) is the mass matrix, \( \mathbf{F} \) the face matrix, \( \Delta \mathbf{E}, \Delta \mathbf{H} \) are the field differences between neighboring cells at the interfaces, and \( \hat{n} \) is the face normal. The first term (the curls) involves only cell-local DoFs and is therefore called the "volume kernel" (see Listing 4).
```cpp
// Reconstructed sketch: the garbled access tuple and lambda have been
// repaired following the conventions of Listing 1.
auto volumeKernelLoop = ForEachEntity(
    cells,
    tuple(Read(Cell(E)), Read(Cell(H)), Write(Cell(rhsE)), Write(Cell(rhsH))),
    [&](const auto& cell, auto /*step*/, auto& local_view) {
        auto& [E, H, rhsE, rhsH] = local_view;
        const auto D = cell.GetGeometry().GetInverseJacobian() * 2.0;
        ForEach(numVolumeNodes, [&](const int m) {
            rhsE[m] += Curl(D, H);  // curl of the magnetic field, cf. Eq. (3)
            // code for rhsH: analogous to rhsE, using the electric field
        });
    });
```
Listing 4: Code segment for the Maxwell volume kernel
FIGURE 2 The electric field component $E_y$ in the simulation domain. Three parts of the partitioned results can be seen in (A), (B), and (C), while (D) shows the merged result of all four processes.
The second term in (3) and (4), the "surface kernel" (see Listing 5), stems from a surface integral over the cell's faces. It involves those DoFs from within the two incident cells that are located on these faces. Calculating the surface kernel requires some operations provided directly by the DSL, such as `GetNormal()` and `GetAbsJacobianDeterminant()`. The implementation complexity of DG on unstructured meshes comes from the access or mapping to the neighboring cells' DoFs in order to calculate fluxes across faces as described in (3) and (4). This access is performed with the data structure `NeighboringNodeMap` (line 15 in Listing 5), which provides the corresponding index for the DoFs in the local view.
In HighPerMeshes, the calculated field components in the DoFs can be written completely or selectively to an output file for each time step. The `writeLoop` method conveniently provides this functionality with its user-defined iteration ranges. Figure 2 shows a visualization of the electric field component $E_y$ in the partitioned simulation domain. For this, the calculated field values in the unstructured DoFs are transformed to arrange them in a structured grid with definable resolution. The simulation domain is a cavity box represented by an unstructured grid discretized spatially into 1585 tetrahedra with PEC boundary and initial value conditions. The simulation runs on four processes, showing that HighPerMeshes can output results in the distributed case that can be easily merged to be viewed with ParaView.
Listing 5: Code segment for the Maxwell surface kernel
```cpp
// Reconstructed sketch of the surface kernel: garbled identifiers have been
// repaired conservatively, and the flux expression follows Eq. (3); the
// access tuple and lambda signature follow the conventions of Listing 1.
auto surfaceKernelLoop = ForEachIncidence<2>(
    cells,
    tuple(
        Read(ContainingMeshElement(H)),
        Read(ContainingMeshElement(E)),
        Read(NeighboringMeshElementOrSelf(H)),
        Read(NeighboringMeshElementOrSelf(E)),
        Write(ContainingMeshElement(rhsE))
    ),
    [&](const auto& cell, const auto& face, auto /*step*/, auto& local_view) {
        auto& [H, E, nH, nE, rhsE] = local_view;
        const auto& NeighboringNodeMap = DGNodeMap.Get(cell, face);
        const int faceIndex = face.GetTopology().GetLocalIndex();
        const auto faceUnitNormal = face.GetGeometry().GetNormal().GetUnitNormal();
        // Face-to-cell scaling factor (reconstructed from the garbled source).
        const auto edg = face.GetGeometry().GetNormal().Norm() * 2.0 /
                         cell.GetGeometry().GetAbsJacobianDeterminant() * 0.5;
        ForEach(numSurfaceNodes, [&](const int m) {
            const auto dH = edg * Delta(H, nH, m, NeighboringNodeMap);
            const auto dE = edg * DirectionalDelta(E, nE, face, m, NeighboringNodeMap);
            // Upwind flux for E as in Eq. (3): dE - n (n . dE) + n x dH
            const auto fluxE = dE - (dE * faceUnitNormal) * faceUnitNormal +
                               CrossProduct(faceUnitNormal, dH);
            ForEach(numVolumeNodes, [&](const int j) {
                rhsE[j] += LIFT(faceIndex)[m][j] * fluxE;
            });
        });
    });
```
### 3.3 Finite elements for cardiac electrophysiology
The excitation of cardiac muscle tissue is described by electrophysiology models such as the monodomain model
$$\dot{u} = \nabla \cdot (\sigma \nabla u) + I_{\text{ion}}(u, w), \qquad \dot{w} = f(u, w),$$
where $\sigma$ is the conductivity and $I_{\text{ion}}$ is the ion current that, together with the gating dynamics $f(u,w)$, forms the membrane model. The simplest FitzHugh–Nagumo membrane model defines $I_{\text{ion}}(u,w) = u(1-u)(u-a) - w$ and $f(u,w) = \epsilon(u - bw)$ with $0 < a, b, \epsilon < 1$.\(^7\)
The method of lines\(^10\) discretizes the monodomain model (5) first in space and then in time. For the discretization of space, we use linear finite elements again, leading to the system
$$
M\dot{u} = \sigma A u + M \cdot I_{\text{ion}}(u,w),
$$
$$
\dot{w} = f(u,w),
$$
with mass matrix $M$ and stiffness matrix $A$. For time discretization, the forward Euler method
$$
\begin{align*}
u_{t+1} &= u_t + \tau(M^{-1}\sigma A u_t + I_{\text{ion}}(u_t,w_t)), \\
w_{t+1} &= w_t + \tau f(u_t,w_t)
\end{align*}
$$
is widely used in cardiac electrophysiology due to its simplicity and its stability for reasonable step sizes $\tau$.\(^11\)
In order to avoid inverting the globally coupled mass matrix, the row-sum mass lumping technique is applied to $M$.\(^12\) This yields a diagonal approximation $M_l$ of $M$ and allows for efficient, explicit formation of $M_l^{-1}$ to be used in (6) instead of $M^{-1}$, as well as matrix-free storage in vector form. The right-hand side (stored in `u_d` in Listing 6), including the matrix–vector product $A u_t$, is assembled directly as in (2) without forming $A$:
```cpp
// Reconstructed sketch; the access tuple follows the conventions of Listing 1.
auto fEuler = ForEachEntity(
    mesh.GetEntityRange<0>(),                        // all vertices
    tuple(ReadWrite(Vertex(u)), Read(Vertex(u_d))),  // u: potential, u_d: rhs
    [&](const auto& vertex, auto /*step*/, auto& local_view) {
        auto& [u, u_d] = local_view;
        u[0] += tau * u_d[0];                        // forward Euler update
    });
```
Listing 6: Code example of an implementation of a first-order solver (forward Euler).
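The row-sum lumping used above admits a compact illustration. The following sketch (plain C++, independent of HighPerMeshes) lumps a dense local mass matrix into a diagonal stored as a vector; its inverse can then be formed entrywise.
```cpp
#include <vector>

// Row-sum mass lumping: each diagonal entry of M_l is the sum of the
// corresponding row of M, so M_l and its inverse can be stored as vectors.
std::vector<double> LumpMassMatrix(const std::vector<std::vector<double>>& M) {
    std::vector<double> lumped(M.size(), 0.0);
    for (std::size_t i = 0; i < M.size(); ++i) {
        for (double entry : M[i]) {
            lumped[i] += entry;
        }
    }
    return lumped;
}
```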
## 4 | MAPPING TO PARALLEL SHARED- AND DISTRIBUTED MEMORY SYSTEMS
In the previous two sections, we introduced the HighPerMeshes DSL from a usage perspective, highlighting in Section 2.3 how a user can implement device- and parallelization-agnostic local kernels. When targeting multi-CPU architectures, the kernels get compiled along with parts of the HighPerMeshes infrastructure, which under the hood allows for parallel execution with a library and runtime system.
HighPerMeshes provides a distributed dispatcher that builds upon the Global Address Space Programming Interface (GASPI)\(^13\) for scaling over multiple nodes. This dispatcher further uses ACE\(^14\) to accelerate the algorithms by either feeding tasks to ACE’s thread pool or by parallelizing the work defined by a task with OpenMP\(^\dagger\). For an in-depth explanation of this dispatcher, we refer to previous work.\(^{15,16}\)
For the distribution of data and computation to multiple compute nodes and processors, HighPerMeshes manages a hierarchy of global and local mesh partitions. For mesh partitioning, we use the Metis library.\(^17\) Global partitions assign mesh entities to distinct compute nodes, while local partitions can add an additional layer for further work segmentation on each compute node. Each local partition belongs to a unique global partition that determines the actual compute node to which the associated mesh entities belong. Global partitions also store the DoFs corresponding to their owned mesh entities.
To achieve parallelization, we define tasks such that each task applies a kernel given by one loop to each entity in a local partition. The execution of these tasks can then be parallelized. However, the task still requires and produces specific data that other tasks might also access. This data may also be on another physical compute node, thus requiring communication.
The distributed dispatcher creates a dependency graph for all kernels to enforce correct behavior and constructs a dependency between two kernels if they access the same DoFs. For each of these dependencies, the dispatcher calculates the exact intersection of required buffer indices with the help of the specified access pattern. Suppose these index intersections lie on a different global partition. In that case, HighPerMeshes defines a precondition for the receiving task to wait for the data and defines a postcondition for the producing task to send the data to the correct process.
Figure 3 shows an example for two kernels accessing the same buffer, one writing and one reading. In HighPerMeshes, this can be expressed as dispatcher.Execute(Writer, Reader). Here, the ranges to be iterated over are already abstracted as the tasks $w_i$ and $r_i$ with $0 \leq i \leq 7$.
\(^\dagger\)https://www.openmp.org/.
In this example, the dispatcher must schedule $r_2$ after $w_1$, $w_2$, and $w_5$, because the writers produce data that the reader requires. Furthermore, because $r_2$ and $w_5$ are on different global partitions, that is, their data lies on physically distinct compute nodes, communication is required. The dispatcher can use the calculated intersection of DoFs to define a process that sends data from $w_5$ to $r_2$ when $w_5$ is finished.
Here, HighPerMeshes’ access patterns, as described in Section 2.3, show their advantage. A dispatcher implementation can use this abstract data access specification during compile time in order to construct the required procedures to communicate data, transparent to the end user. This would not be possible with random data access.
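To make this concrete, the following schematic sketch (not the actual HighPerMeshes scheduler) shows how a dependency and the indices to communicate could be derived from the DoF index sets written by one task and read by another.
```cpp
#include <algorithm>
#include <iterator>
#include <set>
#include <vector>

// Schematic only: a non-empty intersection of written and read DoF indices
// means the reader depends on the writer; if the tasks additionally live on
// different global partitions, exactly these indices must be communicated.
struct Dependency {
    bool exists = false;
    std::vector<int> indicesToCommunicate;
};

Dependency Analyze(const std::set<int>& written, const std::set<int>& read,
                   int writerPartition, int readerPartition) {
    Dependency dep;
    std::vector<int> overlap;
    std::set_intersection(written.begin(), written.end(),
                          read.begin(), read.end(),
                          std::back_inserter(overlap));
    dep.exists = !overlap.empty();
    if (dep.exists && writerPartition != readerPartition) {
        dep.indicesToCommunicate = overlap;  // send these DoFs to the reader
    }
    return dep;
}
```
In HighPerMeshes these index sets are not arbitrary but follow from the declared access patterns, which is what allows the dispatcher to construct the communication procedures transparently.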
## 5 | SOURCE-TO-SOURCE OPENCL CODE GENERATION
When additionally aiming to execute HighPerMeshes kernels on accelerators like GPUs, the embedding of HighPerMeshes into C++ is currently a limitation, as no suitable compilers exist so far. Thus, in order to support also these architectures by HighPerMeshes, we developed a source-to-source code transformation infrastructure that complements the library and runtime system introduced in Section 4. The transformation infrastructure extracts the lambda code given in the kernels described in Section 2.3 into OpenCL.
In addition to the OpenCL kernel code that we generate by source-to-source transformations, OpenCL requires boilerplate code on the host side that either compiles and runs its kernels during runtime, or uses a third-party compiler to compile the kernel code. We chose an approach that reduces the amount of code that actually needs to be generated on the host and instead employ a library-based solution for managing OpenCL kernels on the host side. In this section, we introduce the combination of kernel transformation flow and library-based dispatcher that handles the host side of the OpenCL execution model.
### 5.1 | Intermediate representation and kernel generation
Clang can expose the abstract syntax tree (AST) of C++ programs and allows writing source-to-source translation tools with LibTooling. To change the resulting source code, LibTooling extracts the location of specific AST nodes and replaces or expands the corresponding range of characters with new text; it is not possible to modify the AST directly. This is not a maintainable approach for context-sensitive transformations that depend on each other. Furthermore, each transformation must produce a valid C++ program; otherwise, the compiler cannot parse it, which complicates certain transformations.
Because of this, we translate parts of the AST provided by Clang into a new intermediate representation (IR), where an AST can transform into another AST directly. The AST’s node types represent a subset of C++ that allows all the common operations in HighPerMeshes. In order to not have to implement a complete IR of C++, there are certain restrictions to what is allowed in kernels to be transformed with the code generator. Most notably, data structures not provided by HighPerMeshes are not allowed.
5.1.1 | Transformations
The transformation framework is based on the visitor pattern. Each node must provide a transform function that applies the visitor’s visit function to each of its members and creates a new node of its own type. Listing 7 shows the structure for a variable node and its corresponding transform function. This way, if visit does not return the identity for some member, the latter is transformed into a new node.
---
https://clang.llvm.org/docs/LibTooling.html.
Visitors in the code generator can have multiple behaviors. They either traverse the AST depth-first or breadth-first and can also implement different stopping conditions. For example, we employ a visitor that only traverses until it encounters a member of a certain type.
Listing 8 shows an example for the initialization of one such visitor. When calling the transformer’s visit function on a node in the AST, it checks if the given node can be passed to the lambda it received (lines 2–4) and applies it if possible or returns the unmodified node. This way, each transformation creates a new AST from the old one, keeping the nodes that are unchanged and transforming nodes that match the passed lambda. For example, suppose the transformer’s visit function is applied to the root node of an AST. In that case, it returns a new AST where the function do_something_with changes each variable.
```cpp
struct Visitor {
    Example visit(const Example& example) {
        if (can_visit(example)) {
            return do_something_with(example);  // return the transformed node
        }
        return example;                         // otherwise return it unchanged
    }
};
```
Listing 7: Example of the transform function
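A minimal sketch of a node type in the spirit described above: its transform function visits each member and rebuilds a node of its own type. The node type here is hypothetical and does not reproduce the actual HighPerMeshes IR.
```cpp
#include <string>

// Hypothetical IR node: a used variable with a name. The transform function
// lets a visitor replace each member and returns a new node of the same type.
struct Variable {
    std::string name;

    template <typename Visitor>
    Variable transform(Visitor&& visitor) const {
        // If the visitor returns the identity for `name`, the new node equals
        // the old one; otherwise a transformed node is created.
        return Variable{visitor.visit(name)};
    }
};
```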
### 5.1.2 Printing
To generate a new source file after all transformations are applied, each type used in the IR must also implement a print function that prints its representation in the actual source code. The function recursively prints the complete subtree that makes up the expression or statement. For example, Listing 9 shows this print function for the variable type, where only its name is printed. The resulting kernels are printed to a source file that can be used on the host side.
```cpp
// Sketch: the stream type and the overload for printing a name are assumed.
void print(std::ostream& stream, const Variable& variable) {
    print(stream, variable.name);  // a variable is printed as its name
}
```
Listing 9: Example of the print function
### 5.2 Host side integration
The buffer data types described in Section 2.2 can specify custom allocators similar to the containers provided by the standard template library\(^9\). With the C++ bindings of OpenCL 2.0\(^9\), it is possible to use the shared virtual memory capabilities introduced in OpenCL 2.0 using such an allocator. This heavily reduces the code that needs to be generated and the complexity involved in transferring data between host and device. The code generator employs coarse-grained shared virtual memory to allow compatibility with more devices.
\(^9\)https://github.khronos.org/OpenCL-CLHPP.
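To illustrate the allocator-based approach, the following is a hedged sketch using the SVM allocator from the OpenCL 2.0 C++ bindings; the buffer size `n` is a placeholder, and the kernel-argument setup as well as the map/unmap calls required around host access of coarse-grained SVM are elided. The actual HighPerMeshes allocator may differ in detail.
```cpp
#define CL_HPP_TARGET_OPENCL_VERSION 200
#include <CL/cl2.hpp>
#include <cstddef>
#include <vector>

// A std::vector backed by coarse-grained shared virtual memory: the same
// allocation is visible to host and device, so no explicit buffer copies
// need to be generated by the code generator.
using SvmAllocator = cl::SVMAllocator<double, cl::SVMTraitCoarse<>>;
using SvmVector    = std::vector<double, SvmAllocator>;

SvmVector MakeSvmBuffer(std::size_t n) {
    SvmAllocator allocator(cl::Context::getDefault());
    return SvmVector(n, 0.0, allocator);
}
```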
We provide a library implementation for initialization and kernel enqueuing that handles all the boilerplate code associated with compiling the kernels and enqueuing them. A class called OpenCLHandler allows reading kernels from either a string or a binary and then provides all the necessary functions to enqueue a kernel. Another class called OpenCLKernelEnqueuer is constructed with such an OpenCLHandler; it allows directly specifying the arguments of a kernel and executing it. We provide as much code as possible as a library solution because it is less error-prone and easier to test than generating all the necessary code.
In summary, in order to handle the host side integration of OpenCL kernels, the code generator replaces the original dispatcher calls with the OpenCLKernelEnqueuer mentioned above, adds the initialization of an OpenCLHandler, and applies the correct allocator to all the buffers. For these host-side transformations, the translation into the code generator’s IR is not necessary because the code that needs to be generated is far simpler due to the library approach.
### 5.3 Code generator workflow
Figure 4 summarizes the entire code generator’s workflow with kernel extraction and generation and host side integration.
First, the user provides a HighPerMeshes source-file that can be tested by employing the sequential dispatcher provided by HighPerMeshes. The source-to-source generator finds all \texttt{dispatcher.Execute} calls and their corresponding kernels (\texttt{ForEachEntity}) based on the IR provided by Clang. We translate these kernels into a new IR that allows transforming the resulting AST into other AST. In this new IR, we can apply all transformations necessary to generate a valid OpenCL kernel from the provided lambdas.
Overall, we have implemented 33 transformers so far. Their most important purpose is to translate the different HighPerMeshes-specific language features such as vectors and matrices into structures that are usable with OpenCL, translate C++ syntax to C syntax, and to create explicit address generation code based on buffer base addresses, view-specific offsets, and offsets based on work-items. After all transformations are applied, the device code is generated by printing all nodes of the final AST.
Furthermore, the infrastructure adapts the given HighPerMeshes source-file to be usable with the new OpenCL kernels by employing LibTooling’s usual capabilities of directly modifying the source code. In this step, information from the kernel generation phase is used, such as generated kernel names.
## 6 | EXPERIMENTS
### 6.1 Distributed scalability experiments
In this section, we analyze the distributed scalability of the matrix-vector product (Listing 2), the volume kernel (Listing 4), and the surface kernel (Listing 5). The experiments were performed on a cluster, where each compute node consists of two sockets. Each socket contains an Intel Xeon Gold 6148 “Skylake” CPU, which has 20 cores and a base frequency of 2.4 GHz. Hyperthreading is deactivated. The nodes are connected by a 100 Gb/s Intel Omni-Path network. All experiments were executed with 20 threads per socket, as the scalability of our threading approach on a single compute node has already been shown. To show that HighPerMeshes is not implemented for a specific parallelization technology, we analyze two back ends provided by HighPerMeshes: the first one schedules tasks using ACE’s thread pool, while the other accelerates tasks with OpenMP, as described in Section 4.
FIGURE 5 Strong scaling speedup for iterating over the specified kernels on a mesh with 400,000 tetrahedra and 1000 time steps on an increasing number of sockets, compared with executing the same kernels on one socket. The evaluated back ends parallelize the programs with ACE’s thread pool (A) or by accelerating the scheduled tasks with OpenMP (B).
We conducted strong scaling experiments for 1000 time steps on a synthetic mesh of 400,000 tetrahedra. Such a setup represents a typical problem size targeted by the distributed dispatcher. Figure 5 shows the speedup over a single socket for the distributed dispatcher for both back ends and an increasing number of sockets. As a baseline for each experiment, we measured the execution time with both back ends on a single socket, that is, 20 cores, and use the faster one. Thus, the theoretical optimum for each application is a speedup equal to the number of sockets. For 640 cores, the back end feeding tasks to ACE’s thread pool achieves the better speedup for the matrix-vector product, with a speedup of 21.19. The volume and surface kernels achieve a better speedup in the case of OpenMP acceleration, with speedups of 27.94 and 28.98, respectively. Furthermore, the volume and surface kernels scale better than the matrix–vector multiplication because they are more compute-intensive: they iterate over 20 DoFs instead of just one. To achieve this kind of scalability, the dispatcher requires a sufficient workload. The experiment shows that we reach approximately 90% of the optimal speedup for the surface kernel and above 80% for the volume kernel.
For the strong scaling experiment, we also calculated the standard deviation from the mean for the workload on each socket to determine the effectiveness of the load balancing scheme described in Section 4. Here we calculated a maximum standard deviation of 2.27%, showing that the work is evenly distributed between nodes and that no significant bottlenecks are introduced due to task dependencies.
Furthermore, we conducted weak scaling experiments with around 12,500 tetrahedra per socket, with minor deviations in the workload per socket depending on the mesh partitioning. Figure 6 shows the weak scaling results for the volume and surface kernels for both back ends. We show the parallel efficiency relative to the respective two-socket variant as a reference. All results are above 90%, except for the surface kernel using ACE’s thread pool on 32 or 64 sockets. On 64 sockets, or 1280 cores, even this experiment still reaches an efficiency of 81% compared with the two-socket variant. The same weak scaling experiments were also performed for the matrix-vector product. As this example does not require communication, the measured parallel efficiency was always close to 100%; therefore we omit these results from the figure to avoid cluttering the graph.
Overall, the strong and weak scaling experiments show that HighPerMeshes allows an efficient and easy distribution of matrix-free algorithms to at least dozens of HPC nodes. They also show that HighPerMeshes provides suitable abstractions for different back ends: both reference implementations achieve similar speedups for most of the experiments, showing that the language is portable to different technologies.
### 6.2 GPU experiments
In this section, we test the performance of HighPerMeshes' back end using OpenCL as described in Section 5. For this purpose, we calculate the speedup of three kernels compared with our back end using OpenMP. We test the forward Euler method shown in Listing 6, the Runge–Kutta integration loop explained in Section 3.2, and the Maxwell volume kernel shown in Listing 4. All experiments are run for a single DoF per entity and use double-precision floating-point types.
Similar to the strong scaling scenario in Section 6.1, we use a mesh of 400,000 tetrahedra and simulate for 1000 time steps. We measured the experiments described in this section on a system with an AMD Ryzen 5 3600X CPU and an AMD Radeon RX 5600 XT GPU, thus comparing in this setup two mid-range consumer devices that a computational scientist might use before moving to an HPC system. Table 2 presents the measured GPU speedups for the three kernels, showing how HighPerMeshes can also exploit the acceleration potential of GPUs for suitable kernels.
In order to critically assess these results further, we investigated whether the CPU implementation of our DSL leads to performance deficits compared with a hand-written implementation. Table 3 shows the results for two additional experiments, comparing the same generated GPU kernels with hand-optimized CPU variants of the same kernels. First, we implemented the Runge–Kutta integration loop without the statements provided by HighPerMeshes, instead using standard for-loops and vectors. The loop iterating over all entities in the range is parallelized with OpenMP, similar to the back end provided in HighPerMeshes. Here we see only a slight difference in performance, a speedup of 9.7 compared with 10.9, which means that HighPerMeshes introduces some overhead but does not significantly alter the resulting performance given its high level of abstraction. We also investigated whether the vector data structures provided by HighPerMeshes impact performance. For this purpose, we measured the speedup for the forward Euler kernel with a three-dimensional vector. Here we see only a speedup of 2.5 compared with 4.9. An explanation is that we now have to consider three vector entries instead of just one, which leads to uncoalesced memory accesses. Here, performance could be improved by using "structures of arrays" instead of "arrays of structures."
**TABLE 2** Speedups of generated GPU kernels over baseline HighPerMeshes CPU execution
<table>
<thead>
<tr>
<th>Kernel</th>
<th>Speedup GPU over CPU</th>
</tr>
</thead>
<tbody>
<tr>
<td>Forward Euler</td>
<td>4.9</td>
</tr>
<tr>
<td>Runge Kutta integration</td>
<td>10.9</td>
</tr>
<tr>
<td>Maxwell volume</td>
<td>12</td>
</tr>
</tbody>
</table>
**TABLE 3** Speedups of generated GPU kernels over hand-written CPU execution
<table>
<thead>
<tr>
<th>Kernel</th>
<th>Speedup GPU over CPU</th>
</tr>
</thead>
<tbody>
<tr>
<td>Forward Euler with three-dimensional vectors</td>
<td>2.5</td>
</tr>
<tr>
<td>Runge Kutta integration with hand-written host-side code</td>
<td>9.7</td>
</tr>
</tbody>
</table>
## 7 | RELATED WORK
There are several other software projects addressing PDE computations on unstructured grids. Traditional library approaches such as deal.II, DUNE, or Kaskade focus on application building blocks and usually provide explicit parallelization based on threads or MPI, providing one or a few selected back ends such as PETSc. HighPerMeshes provides explicit features that allow implementing new back ends while these toolbox approaches do not grant such extensibility.
High-level DSLs such as FEniCS or FreeFEM allow specifying PDE problems in a very abstract notation and use code generation techniques to create efficient simulation programs. The scope of HighPerMeshes is more on the side of solver implementation than abstract mathematical formulations. The projects closest in scope and intention to our work are OP2 and Liszt. OP2 is an “active library” framework that allows distributing mesh-based compute kernels and accessing data associated with different mesh entities on multicore and many-core architectures. Here, the major difference to HighPerMeshes is that OP2 provides a direct parallelization statement instead of using a dispatcher. Liszt is a DSL for PDE solvers embedded in Scala that provides a cross-compiler that analyzes programs written in its syntax, which is close to Scala. In contrast, we rely on template metaprogramming methods for distributing code, while we use code transformation techniques for the OpenCL back end.
There are several DSLs that consider stencil codes on structured grids. In comparison, HighPerMeshes targets the domain of solver codes on unstructured grids. For example, STELLA is embedded in C++ and allows parallelization with OpenMP, while Mint is embedded in C and uses source-to-source transformation to emit CUDA code. Another example in this domain is ExaStencils, a multilevel DSL that uses code generation to transform abstract algorithm definitions into concrete solver implementations, which are then translated to C++ code.
Regarding code generation, Hipacc is a DSL in the domain of image processing that also uses Clang’s LibTooling to generate code. Compared with their approach, HighPerMeshes’s code generator introduces an additional IR to allow AST-to-AST transformations, while Hipacc only employs term rewriting.
Alternatives to OpenCL include the CUDA programming model for NVIDIA GPUs and Intel’s recently released oneAPI specification around the more expressive DPC++ language based on SyCL and C++ to target its CPUs, GPUs, and FPGAs. Yet, OpenCL is currently still the most widely supported language for data-parallel architectures, also including AMD GPUs besides the previously mentioned targets. Of the mentioned technologies, HighPerMeshes currently only supports OpenCL, but is designed in such a way that a new back end can be introduced with a new dispatcher as explained in Section 2.3.
To summarize, HighPerMeshes is a DSL embedded in C++ focused on solver implementations for unstructured grids that completely separates parallelization technology from solver formulation. While HighPerMeshes already provides some back ends for different parallelization technologies, it is designed to easily implement the operation of other back ends with its dispatcher concept.
## 8 | CONCLUSION AND FUTURE WORK
HighPerMeshes is an embedded DSL providing high-level abstractions for the design of iterative, matrix-free algorithms on unstructured grids. It is a powerful framework enabling users to run simulations as well as implement their own modifications for complex multiscale problems from a broad range of application domains like optics, photonics, hydrodynamics, gas dynamics, and acoustics.
HighPerMeshes provides data structures and procedures that allow for efficient autoparallelization and distribution with the help of GASPI, ACE, OpenMP, and OpenCL. Here, the dispatcher concept allows a clean separation of the parallelization back ends from the rest of the DSL, allowing other technologies, such as SyCL, in the future. This tackles the problem of not being able to use the DSL if new technologies emerge and makes HighPerMeshes more future-proof compared with other approaches that only provide a fixed number of back end solutions. HighPerMeshes already includes some scalable high-performance back ends. This saves implementation time and effort on one side, and offers flexibility for different computing platforms without the need for code modification on the other side. In our preliminary practical experience, we found that the DSL can indeed be used by numerical analysts ignorant of modern parallel architectures to exploit these to a large extent. Thus, HighPerMeshes enables the user to take advantage of complex parallelization, task scheduling, and data distribution techniques, completely without requiring knowledge about parallelization. Moreover, relying on the HighPerMeshes abstraction relieves the user from adapting the application code to several different target architectures.
The back ends for parallelization and distribution, as described in Section 4, leave room for future work. Alternative back ends can be implemented and used without modification of user code due to the clear decoupling of algorithm and parallelization technology provided by the dispatcher concept. Alternative dispatchers could be based on a hybrid of MPI and OpenMP. Another significant next step is to combine the OpenCL back end with the GASPI back end, allowing the usage of GPUs on multiple nodes. Another opportunity lies in generating code for an accelerator and distributing the resulting kernels with the distributed dispatcher. While the code generator is functional, more sophisticated optimizations are also in development for future work.
https://sycl.tech.
ACKNOWLEDGMENTS
This work was partially funded by the German Federal Ministry of Education and Research (BMBF) within the collaborative research project “HighPerMeshes” (01H16005). The authors gratefully acknowledge the support of this project through computing time provided by the Paderborn Center for Parallel Computing (PC²).
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
ORCID
Stefan Groth https://orcid.org/0000-0002-9043-0746
Frank Hannig https://orcid.org/0000-0003-3663-6484
REFERENCES
How to cite this article: Alhaddad S, Förstner J, Groth S, et al. The HighPerMeshes framework for numerical algorithms on unstructured grids. *Concurrency Computat Pract Exper.* 2021;e6616, [https://doi.org/10.1002/cpe.6616](https://doi.org/10.1002/cpe.6616)
Supporting User Adaptation in Adaptive Hypermedia Applications
Hongjing Wu, Geert-Jan Houben, Paul De Bra
Department of Computing Science
Eindhoven University of Technology
PO Box 513, 5600 MB Eindhoven
the Netherlands
phone: +31 40 2472733
fax: +31 40 2463992
email: {hongjing, houben, debra}@win.tue.nl
Abstract
A hypermedia application offers its users a lot of freedom to navigate through a large hyperspace. The rich link structure of the hypermedia application can not only cause users to get lost in the hyperspace, but can also lead to comprehension problems because different users may be interested in different pieces of information or a different level of detail or difficulty. Adaptive hypermedia systems (or AHS for short) aim at overcoming these problems by providing adaptive navigation support and adaptive content. The adaptation is based on a user model that represents relevant aspects about the user.
At the Eindhoven University of Technology we developed an AHS, named AHA [DC98]. To describe its functionality and that of future adaptive systems we also developed a reference model for the architecture of adaptive hypermedia applications, named AHAM (for Adaptive Hypermedia Application Model) [DHW99]. In AHAM knowledge is represented through hierarchies of large composite abstract concepts as well as small atomic ones. AHAM also divides the different aspects of an AHS into a domain model (DM), a user model (UM) and an adaptation model (AM). This division provides a clear separation of concerns when developing an adaptive hypermedia application.
In this paper, we concentrate on the user modeling aspects of AHAM, but also describe how they relate to the domain model and the adaptation model. Also, we provide a separation between the adaptation rules an author or system designer writes (as part of the adaptation model) and the system’s task of executing these rules in the right order. This distinction leads to a simplification of the author’s or system designer’s task of writing adaptation rules. We illustrate authoring and adaptation by some examples in the AHS AHA.
Keywords: adaptive hypermedia, user modeling, adaptive presentation, adaptive navigation, hypermedia reference model
1. Introduction
Hypermedia systems, and Web-based systems in particular, are becoming increasingly popular as tools for user-driven access to information. Hypermedia applications typically offer users a lot of freedom to navigate through a large hyperspace. Unfortunately, this rich link structure of the hypermedia application causes some serious usability problems:
- A typical hypermedia system presents the same links on a page, regardless of the path a user followed to reach this page. When providing navigational help, e.g. through a map (or some fish-eye view), the system does not know which part of the link structure is most important for the user. The map cannot be simplified by filtering (or graying) out links that are less relevant for the user. Not having personalized maps is a typical navigation problem of hypermedia applications.
- Navigation in ways the author did not anticipate also causes comprehension problems: for every page the author makes an assumption about the foreknowledge the user has when accessing that page. However, there are too many ways to reach a page to make it possible for an author to anticipate all possible variations in foreknowledge when a user visits that page. A page is always presented in the same way. This often results in users visiting pages containing a lot of redundant information and pages that they cannot fully understand because they lack some expected foreknowledge.
Adaptive hypermedia systems (or AHS for short) aim at overcoming these problems by providing adaptive navigation support and adaptive content. Adaptive hypermedia is a recent area of research on the crossroad of hypermedia and the area of user-adaptive systems. The goal of this research is to improve the usability of hypermedia systems by making them personalized. The personalization or adaptation is based on a user model that represents relevant aspects about the user. The system gathers information about the user by observing the use of the application, and in particular by observing the browsing behavior of the user.
Many adaptive hypermedia systems exist to date. The majority of them are used in educational applications, but some are used for on-line information systems, on-line help systems, information retrieval systems, etc. An overview of systems, methods and techniques for adaptive hypermedia can be found in [B96]. At the Eindhoven University of Technology we developed an AHS [DC98] out of Web-based courseware for an introductory course on hypermedia. In this system, called AHA, knowledge is represented with the same granularity as content: at the page level. In earlier versions of AHA, the user's knowledge about a given concept was a binary value: known or not known. The current version supports a more sophisticated representation in the sense that the knowledge level is represented by a percentage: reading a page can lead to an increase (or decrease) of the percentage. As part of the redesign process for AHA we have developed a reference model for the architecture of adaptive hypermedia applications: AHAM (for Adaptive Hypermedia Application Model) [DHW99], which is an extension of the Dexter hypermedia reference model [HS90, HS94]. AHAM acknowledges that doing “useful” and “usable” adaptation in a given application depends on three factors:
- The application must be based on a domain model, describing how the information content of the application (or “hyperdocument”) is structured. This model must indicate what the relationship is between the high (and low) level concepts the application deals with, and it must indicate how concepts are tied to information fragments and pages.
- The system must construct and maintain a fine-grained user model that represents a user’s preferences, knowledge, goals, navigation history and possibly other relevant aspects. The system can learn more about the user by observing the user’s behavior. The user’s knowledge is represented using the concepts from the domain model.
- The system must be able to adapt the presentation (of both content and link structure) to the reading and navigation style the user prefers and to the user’s knowledge level. In order to do so the author must provide an adaptation model consisting of adaptation rules, for instance indicating how relations between concepts influence whether it will be desirable to guide the user towards or away from pages about certain concepts. Most AHS will offer a default adaptation model, relieving the author from explicitly writing these rules. In the original definition of AHAM [DHW99] we used the terms teaching model (TM) and pedagogical rules. These terms stem from the primary application of AHS’s which is in education.
The key elements in AHAM are thus the domain model (DM), user model (UM) and adaptation model (AM). This division of adaptive hypermedia applications provides a clear separation of concerns when developing an adaptive hypermedia application.
The main shortcoming in many current AHS is that these three factors or components are not clearly separated:
- The relationship between pages and concepts is sometimes too vague (e.g. in [PDS98]). When an author decides that two pages each represent 30% of the same concept, there is no way of inferring whether together they represent 30%, 60%, or any value in between. On the other hand, in systems like AHA [DC98] the relation between pages and concepts is strictly one-to-one, which leads to a very fragmented user model without high-level concepts.
- The adaptation rules can often not be defined at the conceptual level but only at the page level. In AHA [DC98], ELM-ART [BSW96a] and Interbook [BSW96b] for instance the destination of a link is (in almost all cases) a fixed page, described through a plain HTML anchor tag. (The “teach me” button in Interbook is an exception.)
- There may be a mismatch between the high level of detail in the user model and the low reliability of the information on which an AHS must update that user model. The basic information available to most AHS is the time at which a user requests a page (through a WWW-browser). Many educational AHS compensate for the unreliable event information by offering (multiple-choice) tests. A few systems, including AHA [DC98], capture reading time by logging both requests for pages and the time at which the user leaves a page (even when jumping to a different Web-site).
In this paper we focus on the user modeling aspects of AHAM and the use of adaptation rules to generate adaptive presentations and to update the user model. We extend the results of [WHD99b] by separating adaptation rules from the specification of the execution of these rules.
This paper is organized as follows. In Section 2 we describe the AHAM reference model for adaptive hypermedia applications. In Section 3 we elaborate on user modeling and on the use of adaptation rules in AHAM, that is how to construct the user model, update the user model by observing the user's behavior, and how to make content adaptation and link adaptation depending on the user model. In Section 4 we use AHAM to describe the user modeling and adaptation features of the AHA system, before we conclude in Section 5.
2. AHAM, a Dexter-based Reference Model
In hypermedia applications the emphasis is always on the information nodes and on the link structure connecting these nodes. The Dexter model captures this in what it calls the Storage Layer. It represents a domain model DM, i.e. the author's view on the application domain expressed in terms of concepts.
In adaptive hypermedia applications the central role of DM is shared with a user model UM. UM represents the relationship between the user and the domain model by keeping track of how much the user knows about each of the concepts in the application domain.
In order to perform adaptation based on DM and UM an author needs to specify how the user's knowledge influences the presentation of the information from DM. In AHAM this is done by means of a teaching model TM consisting of pedagogical rules. In this paper we use the terms adaptation model (AM) and adaptation rules to avoid the association with educational applications. An adaptive engine uses these rules to manipulate link anchors (from the Dexter model's anchoring) and to generate what the Dexter model calls the presentation specifications. Like the Dexter model, AHAM focuses on the Storage Layer, the anchoring and the presentation specifications. Figure 1 shows the structure of adaptive hypermedia applications in the AHAM model.

In this section we present the elements of AHAM that we will use in Section 3 to illustrate the user modeling and adaptation.
### 2.1 The domain model
A component is an abstract notion in an AHS. It is a pair (uid, cinfo) where uid is a globally unique (object) identifier for the component and cinfo represents the component's information. A component's information consists of:
- A set of attribute-value pairs;
- A sequence of anchors (for attaching links);
- A presentation specification.
We distinguish two "kinds" of components: concepts and concept relationships. A concept is a component representing an abstract information item from the application domain. It can be either an atomic concept or a composite concept. An atomic concept corresponds to a fragment of information. It is primitive in the model (and can thus not be adapted). Its attribute and anchor values belong to the "Within-component layer" and are thus implementation dependent and not described in the model. A composite concept component has two "special" attributes:
- A sequence of children (concepts);
- A constructor function (to denote how the children belong together).
The children of a composite concept are all atomic concepts (then we call it a page or in typical hypertext terms a node) or all composite concepts. The composite concept component hierarchy must be a DAG (directed acyclic graph). Also, every atomic concept must be included in some composite concept. Figure 2 illustrates a part of a concept hierarchy.

An anchor is a pair (aid, avalue), where aid is a unique (object) identifier for the anchor within the scope of its component and avalue is an arbitrary value that specifies some location, region, item or substructure within a concept component.
Anchor values of atomic concepts belong to the (implementation dependent) Within-Component layer. Anchor values of composite concepts are identifiers of concepts that belong to that composite.
A specifier is a tuple (uid, aid, dir, pres), where uid is the identifier of a concept, aid is the identifier of an anchor, dir is a direction (FROM, TO, BIDIRECT, or NONE), and pres is a presentation specification.
A concept relationship is a component, with two additional attributes:
- A sequence of specifiers
- A concept relationship type.
The most common type of concept relationship is the type link. This corresponds to the link components in the Dexter model, or links in most hypermedia systems. (Links typically have at least one FROM element and one TO or BIDIRECT element.) In AHAM we consider other types of relationships as well, which play a role in the adaptation. A common type of concept relationship is prerequisite. When a concept $C_1$ is a prerequisite for $C_2$ it means that the user should read $C_1$ before $C_2$. It does not mean that there must be a link from $C_1$ to $C_2$. It only means that the system somehow takes into account that reading about $C_2$ is not desired before some (enough) knowledge about $C_1$ has been acquired. Every prerequisite must have at least one FROM element and one TO element. Figure 3 shows a small set of (only binary) relationships, both prerequisites and links.

The atomic concepts, composite concepts and concept relationships together form the domain model $DM$ of an adaptive hypermedia application.
### 2.2 The user model
An AHS associates a number of user model attributes with each concept component of DM. For each user the AHS maintains a table-like structure, in which for each concept the attribute values for that concept are stored. Section 3 describes the user model in detail. For now it suffices to know that because of the relationships between abstract concepts and concrete content elements like fragments and pages a user model may contain other attributes than simply a knowledge level. For instance, the user model may also store information about what a user has actually read about a concept or whether a concept is considered relevant for the user.
Since the user model consists of "named entities" for which we store a number of attribute/value pairs, there is no reason to limit these "entities" to concepts about which the knowledge level is stored and updated. Concepts can be used (some might say abused) to represent other user features, such as preferences, goals, background and hyperspace experience. For the AHS (or the AHAM model) the actual meaning of concepts is irrelevant.
2.3 The adaptation (teaching) model
The adaptation of the information content of a hyperdocument and of the link structure is based on a set of rules. These rules form the connection between DM, UM and the presentation (specification) to be generated [WHD99a].
We partition the rules into four groups according to the adaptation "steps" to which they belong. These steps are IU, UU-Pre, GA, and UU-Post. An algorithm applies rules in each group. IU is to initialize the user model, under control of Initialize-UM; UU-Pre is to update UM before generating the page, under control of Update-UM-pre; GA is to generate adaptation, under control of Adaptation; UU-Post is to update UM after generating the page, under control of Update-UM-post. The four algorithms control how the rules in each group work together. By this we mean that an algorithm will trigger applicable rules (in some order) until no more rules can be applied or until the application of rules would no longer incur any change to UM.
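To make the interplay of rule groups and their controlling algorithms concrete, the following sketch shows one way such a fixpoint execution could be implemented. It is our own illustration, not part of AHAM or AHA; the `Rule` structure and the user-model layout are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical rule representation: a condition and an action over the user model UM.
# The action returns True when it actually changed UM, so the executor can detect a fixpoint.
@dataclass
class Rule:
    condition: Callable[[Dict[str, Dict[str, Any]], Any], bool]
    action: Callable[[Dict[str, Dict[str, Any]], Any], bool]

def run_rule_group(rules, um, event):
    """Trigger applicable rules of one group (e.g. UU-post) until UM no longer changes."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.condition(um, event) and rule.action(um, event):
                changed = True
    return um

# Example: a UU-post-style rule that marks the accessed page as read.
mark_read = Rule(
    condition=lambda um, page: not um[page]["read"],
    action=lambda um, page: um[page].update(read=True) or True,
)
um = {"WWW-page1": {"read": False}}
run_rule_group([mark_read], um, "WWW-page1")   # um["WWW-page1"]["read"] is now True
```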
A generic adaptation rule is a rule in which (bound) variables are used that represent concepts and concept relationships. A specific adaptation rule uses concrete concepts from DM instead of variables. Other than that both types of rules look the same. The syntax of the permissible rules depends on the AHS. In Section 3 we give examples of adaptation rules, using an arbitrarily chosen syntax. In Section 4 we give examples of adaptation rules as they are implemented in the AHA system [DC98]. Generic adaptation rules are often system-defined, meaning that an author need not specify them. Such a rule may for instance define how the knowledge level of an arbitrary concept C_i influences the relevance of other concepts for which C_i is a prerequisite. Author-defined rules always take precedence over (conflicting) system-defined rules. (Some AHS do not provide the possibility for authors to define their own generic adaptation rules.) Specific rules always take precedence over generic rules.
While specific rules are typically used to create exceptions to generic rules they can also be used to perform some ad-hoc adaptation based on concepts for which DM does not provide a relationship. Specific adaptation rules must always be defined by the author.
The adaptation model AM of an AHS is the set of (generic and specific) adaptation rules.
An AHS not only has a domain model, user model and adaptation model, but also an adaptive engine, which is a software environment that performs the following functions:
- It offers generic page selectors and constructors. For each composite concept the constructor is used to determine which page to display when the user follows a link to that composite concept. For each page the constructor is used for building the adaptive presentation of that page.
- It optionally offers a (very simple programming) language for describing new page selectors and constructors. Generic and specific adaptation rules (from UU-pre and GA) are used during page selection and construction.
- It performs adaptation by executing the page selectors and constructors. This means selecting a page, selecting fragments, sorting them, maybe presenting them in a specific way, etc. It also means performing adaptation to links by manipulating link anchors depending on the state of the link (like enabled, disabled, hidden, etc.).
- It updates the user model (instance) each time the user visits a page. It does so by triggering the necessary adaptation rules in UU-post. The engine will thus set the knowledge value for each atomic concept of displayed fragments of the page to a value that depends on a configurable amount (this could be 1 by default but possibly overridden by the author). It determines the influence on the knowledge value for page- and composite concepts. It also maintains other attribute values for each concept.
The adaptive engine thus provides the implementation-dependent aspects while DM, UM and AM describe the information and adaptation at the conceptual, implementation-independent level. An adaptive hypermedia application is a 4-tuple (DM, UM, AM, AE), where DM is a domain model, UM is a user model, AM is an adaptation model, and AE is an adaptive engine.
3. User Modeling and Adaptation in AHAM
According to AHAM the AHS maintains a fine-grained user model that represents the state of the user’s features not only at the page level but also at the abstract conceptual level. It offers the ability to consider navigation history and other relevant user aspects as part of the user model UM. The maintenance of the relevant user aspects in UM is achieved by the application of the adaptation rules that are part of the adaptation model AM.
3.1 Representation of user features using (attribute/value) pairs
By definition adaptive hypermedia applications reflect some features of the user in the user model. This model is used to express various aspects of the system that depend on the user and that are visible to that user. Brusilovsky [B96] states which aspects of the user can be taken into account when providing adaptation. Generally, there are five user features that are used by existing AHS:
- knowledge
- user goals
- background
- hyperspace experience
- preferences
Almost every adaptive presentation technique relies on the user’s knowledge as a source of adaptation. The system has to recognize the changes in the user’s knowledge state and update its user model accordingly. Often the user’s knowledge is represented by an overlay model. This overlay model is based on a conceptual structure of the subject domain. Sometimes a simpler stereotype user model is used to represent the user’s knowledge: this means that the user is classified according to some stereotype. As many adaptation techniques require a rather fine-grained approach, stereotype models are often too simple to provide adequate personalization and adaptation. Overlay models on the other hand are generally hard to initialize. Acceptable results are often achieved by combining stereotype and overlay modeling; stereotype modeling is used in the beginning to classify a new user and to set initial values for the overlay model; later a more fine-grained overlay model is used. Using the AHAM definition for user model, it is fairly straightforward how a user’s knowledge state can be represented by associating a knowledge value attribute to each concept.
Apart from the concept’s identifier (which may be just a name) a typical AHS will store not only a knowledge value for each concept, but also a read value which indicates whether (and how much) information about the concept has been read by the user, and possibly some other attribute values as well. While the model uses a table representation, implementations of AHS may use different data structures. For instance, a logfile can be used for the read attribute.
Table 1 illustrates the (conceptual) structure of a user model for a course on hypermedia: the concepts Xanadu and KMS were at least partially learnt. The concept WWW, consisting of two sub-parts, is partially learnt because WWW-page1 has been read but WWW-page2 has not been read. One can see that WWW must be a composite concept that is not a page, because it is already partially learnt while it has not been read at all.
<table>
<thead>
<tr>
<th>concept name (uid)</th>
<th>Knowledge value</th>
<th>read</th>
<th>...</th>
</tr>
</thead>
<tbody>
<tr>
<td>Xanadu</td>
<td>well learned</td>
<td>true</td>
<td>...</td>
</tr>
<tr>
<td>KMS</td>
<td>learned</td>
<td>true</td>
<td>...</td>
</tr>
<tr>
<td>WWW-page1</td>
<td>well learned</td>
<td>true</td>
<td>...</td>
</tr>
<tr>
<td>WWW-page2</td>
<td>not known</td>
<td>false</td>
<td>...</td>
</tr>
<tr>
<td>WWW</td>
<td>learned</td>
<td>false</td>
<td>...</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
Table 1: Example user model (instance).
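As an illustration only (not AHA's or AHAM's actual data structures), the table-like user model of Table 1 could be held in memory as a dictionary of attribute/value pairs per concept:

```python
# Hypothetical in-memory representation of the user model instance of Table 1.
# Each concept uid maps to its attribute/value pairs; further attributes can be
# added freely, which is what makes this overlay-style model extensible.
user_model = {
    "Xanadu":    {"knowledge": "well learned", "read": True},
    "KMS":       {"knowledge": "learned",      "read": True},
    "WWW-page1": {"knowledge": "well learned", "read": True},
    "WWW-page2": {"knowledge": "not known",    "read": False},
    "WWW":       {"knowledge": "learned",      "read": False},  # composite concept, not itself a page
}

def knowledge_of(concept: str) -> str:
    """Look up the knowledge value of a concept, defaulting to 'not known'."""
    return user_model.get(concept, {}).get("knowledge", "not known")
```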
The second kind of user feature is the user's goal. The user's goal or task is a feature that is related to the context of the user's working activities rather than to the user as an individual. The user's goal is the most volatile of all user features. It can be considered a very important user feature for AHS. One representation of possible user goals uses a hierarchy (a tree) of tasks. Another representation of the user's current goal uses a set of pairs (Goal, Value), where Value is the probability that Goal is the current goal of the user. The latter representation perfectly matches the way in which AHAM models the user's state.
Two features of the user that are similar to the user's knowledge of the subject but that functionally differ from it are the user's background and the user's experience in the given hyperspace. By background we mean all the information related to the user's previous experience outside the subject of the hypermedia system. By user's experience in the given hyperspace we mean how familiar the user is with the structure of the hyperspace and how easily the user can navigate in it. Again, these features can be modeled in AHAM using concepts' attribute/value pairs.
For different possible reasons the user can prefer some nodes and links over others or some parts of a page over others. This is used most heavily in information retrieval hypermedia applications. In fact in most adaptive information retrieval hypermedia applications preferences are the only information that is stored about the user. User preferences differ from other user model components, since in most cases they cannot be deduced by the system. The user has to inform the system directly or indirectly about the preferences. AHAM's attribute/value pairs can again be used to model the user's preferences.
From the above descriptions we can conclude that although a user model needs to represent (five) very different aspects of a user, all of these kinds of aspects can be implemented as sets of concepts with associated attribute/value pairs. For presentation purposes it is not necessary to treat these different kinds of aspects in different ways, but for implementation purposes adaptive hypermedia applications often need to do so.
The knowledge value of a concept can be a Boolean, discrete or continuous value depending on the choice of the author (or the properties of the AHS). By using a Boolean value, the knowledge about the concept can be either known or unknown.
By using a discrete value the knowledge about the concept can be one of a small set of values, like unknown, learnt, well learnt or well known. By using continuous values from the range of [0..1], the value can more precisely describe the user’s knowledge, and even describe the loss or decay of knowledge over time. In conclusion, AHAM’s user model UM has enough expressive power to model all user features that current AHS take into account.
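A small sketch of these representation options (the thresholds below are arbitrary assumptions, not prescribed by AHAM): a continuous knowledge value in [0..1] can be discretized into labels, and the Boolean view is the special case of a single threshold.

```python
# Assumed cut-off points for turning a continuous knowledge value into a discrete label.
def discrete_label(knowledge: float) -> str:
    if knowledge >= 0.9:
        return "well learnt"
    if knowledge >= 0.5:
        return "learnt"
    if knowledge > 0.0:
        return "partially known"
    return "unknown"

def is_known(knowledge: float, threshold: float = 0.5) -> bool:
    # Boolean view: the concept counts as known once the value reaches the threshold.
    return knowledge >= threshold
```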
3.2 Changes in user features
In the previous subsection we discussed features that describe the user's state in the browsing process. Usually in adaptive hypermedia applications (as opposed to adaptable hypermedia applications, see [DHW99]), only the browsing behavior is observed in order to influence the adaptation. Basically, there are five ways in which the user features can change in an adaptive hypermedia application:
1. the user clicks on an anchor (and follows a link);
2. the user performs a test (explicitly);
3. information (about the user) is imported from an external testing system;
4. a user preference is (explicitly) set or declared by the user (initially);
5. a user preference is (automatically) inferred from the user’s behavior.
Besides observing the browser behavior, the application can change the user features based on information that is explicitly imported from its environment or that is explicitly declared or implicitly inferred about the user’s preferences.
These five different kinds of changes lead to five kinds of “rules” for maintaining the user features. The system can be made more author-centered by including rules of types 2 and 3 (besides rules of type 1), while the application can become more user-centered by including rules of types 4 and 5. It is also possible to choose a combination that suits the application.
3.3 Adaptation based on the user model
By maintaining the user model the system can infer how relevant aspects of the user change while the user is using the application and thus is using the adaptation. The adaptive engine realizes adaptive presentation and adaptive navigation (or link adaptation) according to the (adaptation) rules that are system-defined or written by the author and that depend on the user model.
Below we give a number of examples to show how adaptation rules are used to do adaptation. The syntax used for the rules is arbitrary and only exemplary. AHAM does not prescribe any specific syntax. Normally every AHS will provide its own syntax for defining adaptation rules.
Example 1 For atomic concepts (fragments) let us assume that the presentation specification is a two-valued (almost Boolean) field, which is either “show” or “hide”. When a page is being accessed, the following rule sets the visibility for fragments that belong to a “page” concept, depending on their “relevance” attribute-value.
\[ \text{access}(C) \text{ and } F \text{ IN } C.\text{children} \text{ and } F.\text{relevance} = \text{true} \implies F.\text{vis} := \text{show} \]
Here we simplified things, by assuming that we can treat C.children as if it were a set, whereas it really is a sequence. It is common to execute rules for generating presentation specifications before generating the page, so it is in GA.
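A hedged sketch of how an adaptive engine might apply the rule of Example 1 on a page access. The relevance attribute and the “show”/“hide” presentation values come from the example; the data layout, and the fallback to “hide” for non-relevant fragments, are our assumptions.

```python
def set_fragment_visibility(page, dm_children, um, pres):
    """On access(page), set vis := show for every child fragment whose relevance is true."""
    for fragment in dm_children.get(page, []):          # C.children, treated as a set
        if um.get(fragment, {}).get("relevance", False):
            pres[fragment] = "show"
        else:
            pres.setdefault(fragment, "hide")           # assumed default, not part of the rule

# Example usage
dm_children = {"WWW-page1": ["frag-intro", "frag-advanced"]}
um = {"frag-intro": {"relevance": True}, "frag-advanced": {"relevance": False}}
pres = {}
set_fragment_visibility("WWW-page1", dm_children, um, pres)
# pres == {"frag-intro": "show", "frag-advanced": "hide"}
```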
Example 2 The following rules set the presentation specification for a specifier that denotes a link (source) anchor depending on whether the destination of the link is considered relevant and whether the destination has been read before. For simplicity we consider a link with just one source and one destination.
\[ \text{CR.type} = \text{link} \text{ and } \text{CR.cinfo.dir[1]} = \text{FROM} \text{ and } \text{CR.cinfo.dir[2]} = \text{TO} \text{ and } \text{CR.ss[2].uid.relevant} = \text{true} \text{ and } \text{CR.ss[2].uid.read} = \text{false} \implies \text{CR.ss[1].pres} := \text{GOOD} \]
\[ \text{CR.type} = \text{link} \text{ and } \text{CR.cinfo.dir[1]} = \text{FROM} \text{ and } \text{CR.cinfo.dir[2]} = \text{TO} \text{ and } \text{CR.ss[2].uid.relevant} = \text{true} \text{ and } \text{CR.ss[2].uid.read} = \text{true} \implies \text{CR.ss[1].pres} := \text{NEUTRAL} \]
\[ \text{CR.type} = \text{link} \text{ and } \text{CR.cinfo.dir[1]} = \text{FROM} \text{ and } \text{CR.cinfo.dir[2]} = \text{TO} \text{ and } \text{CR.ss[2].uid.relevant} = \text{false} \implies \text{CR.ss[1].pres} := \text{BAD} \]
These rules say that links to previously unread but “relevant” pages are “GOOD”. Links to previously read and “relevant” pages are “NEUTRAL” and links to pages that are not “relevant” are “BAD”. In the AHA system [DC98] this results in the link anchors being colored blue, purple or black respectively. In ELM-ART [BSW96a] and Interbook [BSW96b] the links would be annotated with a green, yellow or red ball. We can consider the actual presentation (the coloring of the anchors) as belonging to the Run-time Layer and thus outside the scope of AHAM. However, should we opt to include the color preferences for GOOD, NEUTRAL and BAD links in the user model, then the translation of the presentation specification to the color could still be described using an adaptation rule. These rules also belong to GA.
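The three rules of Example 2 boil down to a simple classification of a link anchor based on two attributes of the link destination. A sketch, with the AHA-style colors mentioned above shown as a separate Run-time Layer mapping (names and layout are our own):

```python
def classify_link(dest_um):
    """GOOD / NEUTRAL / BAD presentation class from the destination's relevant and read attributes."""
    if not dest_um.get("relevant", False):
        return "BAD"
    return "GOOD" if not dest_um.get("read", False) else "NEUTRAL"

# AHA-style rendering of the classes as anchor colors (a Run-time Layer concern, outside AHAM).
LINK_COLORS = {"GOOD": "blue", "NEUTRAL": "purple", "BAD": "black"}

print(classify_link({"relevant": True,  "read": False}))  # GOOD
print(classify_link({"relevant": True,  "read": True}))   # NEUTRAL
print(classify_link({"relevant": False, "read": True}))   # BAD
```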
3.4 Maintenance of user model
To record the reading history of the user and the evolution of the user’s knowledge, the system updates the user model based on the observation of the user’s browsing process. The rules that the author has defined in AM describe how to keep track of the evolution of the user’s knowledge. For the application of adaptation rules we assume that the FollowLink operation from the Dexter (and thus AHAM) model’s Run-time Layer results in a call to a resolver function for a given specifier. In AHAM the resolver translates the given specifier to the uid of a composite concept component that corresponds to a page, or to a set of such uids. Which page exactly is selected depends on DM and UM. For the selected page an accessor function is called, according to the Dexter model, which returns the (page) concept component that corresponds to the resolved uid. Then the rules for presentation are executed, as shown in Subsection 3.3.
Example 3 The following rule expresses that when a page is accessed the “read” user-model attribute for the corresponding concept is set to true:
\[ \text{access}(C) \implies C.\text{read} := \text{true} \]
This rule is in UU-post. It is the Update-UM-post that will trigger other rules that have read on their left-hand side in the same group.
Example 4 The following rule expresses that when a page is “relevant” and it is accessed, the knowledge value of the corresponding concept becomes “well-learnt”. This is somewhat like the behavior of Interbook [BSW96b].
\[ \text{access}(C) \text{ and } C.\text{relevant} = \text{true} \implies C.\text{knowledge} := \text{well-learnt} \]
In Interbook, as well as in AHA [DC98], knowledge is actually updated before the page is generated. These rules thus are in UU-pre. At the end of Section 4 we shall describe why this option is chosen, and which problems it creates. In general one wishes to have the option to base some adaptation on the knowledge state before accessing a page and some adaptation on the knowledge state after reading the page.
Example 5 The following rule expresses that after a user has taken a test about a concept C, his knowledge about concept C is changed (a rule of “type 2” from Subsection 3.2). Here, an action “test” is used that represents that a test has been taken. It is in UU-pre.
\[ \text{test}(C) \text{ and } C.\text{test} > 60 \implies C.\text{knowledge} := \text{known} \]
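Taken together, Examples 3–5 describe a handful of user-model updates that an engine could perform on a page access or a test result. The sketch below is only illustrative; which updates run in UU-pre and which in UU-post follows the remarks above, and the threshold of 60 is taken from Example 5.

```python
def on_access(concept, um):
    """User-model updates triggered by access(C)."""
    entry = um.setdefault(concept, {})
    # Example 4 (in UU-pre for Interbook/AHA): a relevant page becomes well-learnt on access.
    if entry.get("relevant", False):
        entry["knowledge"] = "well-learnt"
    # Example 3 (in UU-post): the accessed page is marked as read.
    entry["read"] = True

def on_test(concept, score, um):
    """Example 5 (in UU-pre): a test score above 60 makes the concept known."""
    entry = um.setdefault(concept, {})
    entry["test"] = score
    if score > 60:
        entry["knowledge"] = "known"

um = {"WWW-page1": {"relevant": True}}
on_access("WWW-page1", um)
on_test("WWW", 75, um)
# um == {"WWW-page1": {"relevant": True, "knowledge": "well-learnt", "read": True},
#        "WWW":       {"test": 75, "knowledge": "known"}}
```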
4. User Modeling and Adaptation in the AHA system
AHA [DC98] is a simple adaptive hypermedia system. We describe the properties of the version that is currently being used for two on-line courses and one on-line information kiosk, plus some features of the next version that is currently being developed.
- In AHA the domain model consists of three types of concepts: abstract concepts, fragments and pages. Concepts are loosely associated with (HTML) pages, not with fragments.
- The user model consists of:
- Color preferences for link anchors which the user can customize. (These preferences result in “non-relevant” link anchors being hidden if their color is set to black, or visibly “annotated” if it is set to a non-black color different from that of “relevant” link anchors.)
- For each abstract concept, a knowledge attribute with percentage values. (100 means the concept is fully known.) For pages and fragments there is no knowledge attribute value.
- For each page, a Boolean read attribute. (True means the page was read, false means it was not read.) AHA actually logs access and reading times, but they cannot be used in a more sophisticated way in the current version. For abstract concepts and fragments there is no read attribute value.
- AHA comes with an adaptation model containing system-defined generic adaptation rules. It offers a simple language for creating author-defined specific adaptation rules (but no author-defined generic rules).
The domain model can only contain concept relationships of the types that are shown below. An author cannot define new types. The influence of these relationships on the adaptation and the user model updates is defined by system-defined generic adaptation rules. In AHA all rules are executed before generating the page and are triggered directly by a page access, thus eliminating the need for propagation.
- When a page is accessed, its read attribute in the user model is updated as follows (it is in UU-pre):
\[ < \text{access}(P) \Rightarrow P.\text{read} := \text{true} > \]
- The relationship type generates links a page to an abstract concept. A generates relationship between P and C means that reading page P generates knowledge about C (it is in UU-pre):
\[ < \text{access}(P) \Rightarrow C.\text{knowledge} := 100 > \]
This “generation” of knowledge in AHA is controlled by a structured comment in an HTML page:
\[ <!-- \text{generates readme} --> \]
This example of a generates comment denotes that the concept readme becomes known when the page is accessed.
- The relationship type requires links a concept to a virtual composite concept that is defined by a (constructor which is a) Boolean expression of concepts. Although in principle this composite concept is unnamed, we shall use a “predicate" or “pseudo attribute of the page" to refer to it. P.requires is used as a Boolean attribute of which the value is always that of the corresponding Boolean expression. It is not a user model attribute as its value is always computed on the fly and not stored in the user model. A requires relationship is implemented using a structured comment at the top of an HTML page, e.g.:
\[ <!-- \text{requires ( readme and intro )} --> \]
This example expresses that this page is only considered relevant when the concepts readme and intro are both known (100%). In AHA, links to a page for which requires is false are considered BAD, and reading such a page generates less knowledge than reading a GOOD page. Below we give the rules in GA that determine how the link anchors will be presented. They are very similar to the rules in Example 2 (Subsection 3.3):
\[ < \text{CR.type} = \text{link} \land \text{CR.cinfo.dir}[1] = \text{FROM} \land \text{CR.cinfo.dir}[2] = \text{TO} \land \text{CR.ss}[2].\text{uid}.\text{requires} = \text{true} \land \text{CR.ss}[2].\text{uid}.\text{read} = \text{false} \Rightarrow \text{CR.ss}[1].\text{pres} = \text{GOOD} > \]
\[ < \text{CR.type} = \text{link} \land \text{CR.cinfo.dir}[1] = \text{FROM} \land \text{CR.cinfo.dir}[2] = \text{TO} \land \text{CR.ss}[2].\text{uid}.\text{requires} = \text{true} \land \text{CR.ss}[2].\text{uid}.\text{read} = \text{true} \Rightarrow \text{CR.ss}[1].\text{pres} = \text{NEUTRAL} > \]
• The relationship type link only applies to pairs of pages in AHA. "Page selectors" that exist in AHAM in general are thus not needed (or possible) in AHA.
AHA allows author-defined specific adaptation rules only for the conditional inclusion of fragments in HTML pages. Structured HTML comments are used for specifying these rules. With a fragment F we can associate a "pseudo attribute" requires to indicate the condition, just like for whole pages. The syntax is illustrated by the following example:
```html
<!-- if ( readme and not intro ) -->
... here comes the content of the fragment ...
<!-- else -->
... here is an alternative fragment ...
<!-- endif -->
```
AHA only includes fragments when their requires "attribute" is true.
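A rough sketch of how such a requires expression could be evaluated against the user model to decide which fragment to include. This is our own illustration, not AHA's actual parser; the 100% threshold for a concept being known follows the description above.

```python
import re

def concept_known(name, um):
    # In AHA a concept counts as known when its knowledge value reaches 100 (%).
    return um.get(name, {}).get("knowledge", 0) >= 100

def requires_holds(expr, um):
    """Evaluate a Boolean requires expression such as '( readme and not intro )'."""
    tokens = re.findall(r"\(|\)|\w+", expr)
    python_expr = " ".join(
        tok if tok in ("(", ")", "and", "or", "not") else str(concept_known(tok, um))
        for tok in tokens
    )
    return eval(python_expr)  # acceptable for this illustrative sketch, not for untrusted input

um = {"readme": {"knowledge": 100}, "intro": {"knowledge": 40}}
print(requires_holds("( readme and not intro )", um))  # True: the first fragment would be included
```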
The above examples illustrate that representing the actual functionality of an existing AHS in the AHAM model is fairly straightforward. The main reasons for using such a representation are to be able to compare different AHS, to possibly translate an adaptive hypermedia application from one AHS to another, and to identify potential problems or shortcomings in existing AHS.
We conclude this Section with an illustration of one specific shortcoming that we have found in both AHA [DC98] and Interbook [BSW96b]: the "new" knowledge values are calculated before generating the page (and in fact these systems do not support calculating knowledge values after generating a page at all). When a user requests a page, the knowledge generated by reading this page is already taken into account during the generation of the page. This has desirable as well as undesirable side-effects:
• When links to other pages become relevant after reading the current page it makes sense to already annotate the link anchors as relevant when presenting the page. Once a page is generated its presentation remains static while the user is reading it (and rightfully so). The new knowledge thus needs to be taken into account before the page is actually read.
• Pages contain information that becomes relevant or non-relevant depending on the user's knowledge. In some cases the relevance of a fragment may depend on the user having read the page that contains this fragment. This means that a fragment may be relevant the first time a page is visited and non-relevant thereafter, or just the other way round.
By already taking the knowledge into account before the page is generated for the first time, a different "first time" version becomes impossible to create. (Some readers may argue that having content that changes in this way may not be desirable in any case, but not having this possibility limits the general applicability of the AHS.)
5. Conclusions and Future Work
Over the past few years we have developed an AHS, mainly for use in courseware. We have come across a number of other AHS, with different interesting properties. As part of the redesign of AHA [DC98] we developed a reference model for AHS, named AHAM. The description of adaptive hypermedia applications in terms of this model has provided us with valuable redesign issues. The three most important ones are:
• The division of an adaptive hypermedia application into a domain model, user model, and adaptation model provides a clear separation of concerns and will lead to a better separation of orthogonal parts of the AHS functionality in the implementation of the next version of AHA. We believe that a system which supports this separation of concerns will not only result in a cleaner implementation, but also in a more usable authoring environment [WHD99].
• In this paper we have described the adaptation rules in such a way that the rule definition is independent of the rule execution. This makes authoring easier.
• By representing AHA in the AHAM model we have identified another shortcoming: the lack of a two-phase application of rules. We found that this shortcoming is present in other AHS as well.
We deliberately based the AHAM model on the Dexter hypermedia reference model [HS90, HS94], to show that AHS are "true" hypermedia systems. In this paper we have concentrated on user modeling and adaptation. The description of these aspects at an abstract level sets AHAM apart from other descriptions of AHS that are too closely related to the actual implementation of these AHS.
In the near future we will develop a new version of the AHA system, in which the separation of domain model, user model and adaptation model will be more complete. We also plan an extended paper with a complete formal definition of AHAM, including a formal specification of a language for specifying adaptation rules.
References
Designing a Benchmark for the Assessment of XML Schema Matching Tools
Fabien Duchateau, Zohra Bellahsene
HAL Id: lirmm-00138527
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00138527
Submitted on 26 Jun 2007
ABSTRACT
Over the years, many XML schema matching systems have been developed. A benchmark that assesses the capabilities of schema matching systems and provides uniform conditions and the same testbed for all schema matching prototypes has become indispensable as the matching systems grow in complexity. However, developing a benchmark for the schema matching problem is very challenging, given the wide range of techniques that can be applied to assist in schema matching. In this paper, we present the foundations and desiderata of a benchmark for XML schema matching. Moreover, we have extended the notion of quality of an integrated schema by proposing new scoring functions. Finally, we have designed and implemented XBenchMatch, an application which takes as input an ideal schema and the result of a matching produced by a schema matching prototype (i.e. a set of mappings and/or an integrated schema) and generates as output statistics on the quality of this input. Our proposal is aimed at providing two kinds of evaluation: (i) matching quality evaluation, based on the quality measures, and (ii) matching performance evaluation. The first criterion is very important in automatic schema matching and the second is crucial at large scale, when the schemas to be matched are very large. In this paper, we thus present XBenchMatch, a benchmark for testing and assessing schema matching tools, and report the experimental results of some matching tools over a large corpus of schemas using our benchmark.
1. INTRODUCTION
Over the years, several approaches to schema matching [6, 9, 14, 18, 22, 25, 28] have been proposed, demonstrating their benefit in different scenarios, and many matching systems have been designed. Most of the papers describing a schema matching tool provide an experiment section. However, these experiments reflect a particular scenario, using real-world schemas. For example, a matching tool can provide an acceptable matching quality with good performance in a specific scenario, but it can be unreliable and slow in another case. Thus, it is difficult to compare two schema matching tools and to determine which one performs best, and end-users might not know which one is the most appropriate for their task.
To the best of our knowledge, there is no complete benchmark for schema matching tools. In [8], the authors present an evaluation of schema matching tools. This evaluation suffers from two drawbacks. First, by evaluating the matching tools with the scenarios provided in their respective papers, one cannot objectively judge the capabilities of each matching tool. Secondly, some matching tools generate an integrated schema instead of a set of mappings, and the measures provided to evaluate a set of mappings appear insufficient for evaluating the quality of an integrated schema. Another proposal for evaluating schema matching tools was made in [28]. It extends [8] by adding time measures and relies on real-world schemas to evaluate the matching tools. However, the evaluation system has not been implemented. Our work extends the criteria provided in [8] by adding scoring functions to evaluate the quality of integrated schemas. It also goes further on the evaluation aspect: all the matching tools are evaluated against the same scenarios.
In this paper, we present the foundation of a benchmark for XML schema matching tools. Our evaluation system involves a set of criteria for testing and evaluating schema matching tools. It is aimed at providing uniform conditions and the same testbed for all schema matching prototypes. Our approach focuses on the evaluation of the matching tools in terms of matching quality and performance. Next, we also aim at giving an overview of a matching tool by analysing its features and deducing some tasks it might fulfill. This should help an end-user to choose among the available matching tools depending on the criteria required to perform his task. Finally, we provide a testbed involving a large schema corpus, described in Section 7, that can be used by everyone to quickly benchmark matching algorithms.
Here we outline the main contributions of our work:
- We describe the notion of benchmark for the schema matching application. More precisely we list the different features involved in this process, and we give a methodology on how to evaluate them and to choose the most appropriate for a defined task.
- We have extended the notion of quality for a schema, by proposing new measures like structural overlap.
Supported by ANR Research Grant ANR-05-MMSA-0007
• We have designed XBenchMatch, an application which takes as input an ideal schema and the result of a matching from a schema matching system (i.e., a set of mappings and/or an integrated schema). It generates statistics on the quality of this input, based on the criteria defined above.
The rest of the paper is organised as follows: first we give some definitions and preliminaries in Section 2. In Section 3, the list of criteria is explained. In Section 4, we present the main features of schema matching tools. In Section 5 the scoring functions of quality are described. Section 6 briefly presents our XBenchMatch application and the results of our experiments. Section 9 contains the related work; and in Section 10, we conclude and outline some future work.
2. PRELIMINARIES
In this section, we define the main notions used in this paper.
Definition 1 (Schema): A schema is a labeled unordered tree \( S = (V_S, E_S, r_S, \text{label}) \) where \( V_S \) is a set of nodes; \( r_S \) is the root node; \( E_S \subseteq V_S \times V_S \) is a set of edges; and \( \text{label} \) is a countable set of labels.
Definition 2 (Semantic Similarity Measure): Let \( E_1 \) be a set of elements of schema 1, and \( E_2 \) be a set of elements of schema 2. A semantic similarity measure between two elements \( e_1 \in E_1 \) and \( e_2 \in E_2 \), noted as \( S_m(e_1, e_2) \), is a metric value based on the likeness of their meaning/semantic content, given as:
\[
S_m : E_1 \times E_2 \rightarrow [0, 1], \qquad (e_1, e_2) \mapsto S_m(e_1, e_2)
\]
where a zero means a total dis-similarity and 1 value stands for total similarity.
Definition 3 (Automatic Schema Matching):
Given two schema elements sets \( E_1 \) and \( E_2 \) and a similarity measure threshold \( t \). We define Automatic Schema Matching, between two elements \( e_1 \) and \( e_2 \), noted as \( \text{match}(e_1, e_2) \), as follows:
For all \( (e_1, e_2) \in E_1 \times E_2 \),
If \( S_m(e_1, e_2) < t \) then \( \text{match}(e_1, e_2) = \text{false} \)
Else If \( S_m(e_1, e_2) \geq t \) then \( \text{match}(e_1, e_2) = \text{true} \);
\[
d = S_m(e_1, e_2)
\]
where \( d \) is the similarity degree
Threshold \( t \) may be adjusted by an expert, depending upon the strategy, domain or algorithms used by the match tools.
Example 2.1: If \( \text{match} \) (address, address) is calculated using the edit distance algorithm\(^1\), the value of \( d \) is 0.857, and if the 3-gram\(^2\) algorithm is used the result for \( d \) is 0.333. For another example, match (dept., department): the edit distance value of \( d \) is 0 and the 3-gram result is 0.111. These examples show that the threshold has to be adjusted by an expert depending upon the properties of the strings being compared and the match algorithms being applied.
Definition 4 (Best Match selection): There can be the possibility of more than one match for an element \( e_1 \in E_1 \) in \( E_2 \). In such situation the match with maximum similarity degree has to be selected. This case can be formally defined as:
Given \( E_2' \subseteq E_2 \) of size \( n \), such that \( \forall e_{ij} \in E_2' \) corresponding to \( e_i \), \( \text{match}(e_i, e_{ij}) \) is true, where \( 1 \leq j \leq n \). The best match for element \( e_i \) of \( E_1 \), noted \( \text{match}_b \), is given as follows:
\[
\text{match}_b = \max_{j=1}^{n} S_m(e_i, e_{ij})
\]
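A minimal sketch of Definitions 3 and 4: given some similarity measure and a threshold t, keep only the candidates that match and pick the one with the highest degree. The trigram-based similarity below is just a toy stand-in for the measures mentioned in Example 2.1.

```python
from typing import Callable, Iterable, Optional, Tuple

def best_match(e1: str, candidates: Iterable[str],
               sim: Callable[[str, str], float], t: float) -> Optional[Tuple[str, float]]:
    """Return the candidate with the highest similarity to e1, if any reaches threshold t."""
    matches = [(e2, sim(e1, e2)) for e2 in candidates]
    matches = [(e2, d) for e2, d in matches if d >= t]            # Definition 3: match(e1, e2) is true
    return max(matches, key=lambda m: m[1]) if matches else None  # Definition 4: best match

def trigram_sim(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of character 3-grams."""
    ga = {a[i:i + 3] for i in range(max(len(a) - 2, 1))}
    gb = {b[i:i + 3] for i in range(max(len(b) - 2, 1))}
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

print(best_match("department", ["dept", "address", "departments"], trigram_sim, t=0.2))
# ('departments', 0.888...)
```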
Definition 5 (Schema Mapping): Given \( E_1 \) a set of elements of schema 1, \( E_2 \) a set of elements of schema 2 and \( I \) a set of mappings identifiers. We define a mapping between two elements \( e_1 \in E_1 \) and \( e_2 \in E_2 \) by the following function noted as Map:
\[
\text{Map}: E_1 \times E_2 \times F_S \rightarrow E_1 \times E_2 \times [0, 1] \times K
\]
where \( F_S \) is a set of functions performing similarity measure, \( d \) is the similarity degree returned by \( \text{match}(e_1, e_2) \) and \( K \) is the set of mapping expressions e.g. equivalence, synonym, inclusion etc., depending upon the data model being represented by schemas 1 and 2.
Schema mapping can be uni-directional i.e., from schema 1 toward schema 2, or bidirectional i.e., the correspondence holds in both directions e.g. if an element \( e_1 \) from schema 1 is mapped to an element \( e_2 \) of schema 2 then there exists another correspondence for element \( e_2 \) of schema 2 toward element \( e_1 \) of schema 1 \( [1] \).
3. DESIDERATA
The schema matching benchmark needs to have the following properties in order to be complete and efficient. It needs to be:
- Extensible, the benchmark is able to evolve according to progress. Thus, future schema matching tools could be benchmarked, as well as new measures can be added to evaluate the matching quality. The benchmark deals with well-formed XML schemas, and a set of mappings can easily be converted into the default set of mappings formats using a wrapper. Thus, the outputs of future matching tools should be handled. For the new measures, we intend to release the benchmark in open-source, allowing everyone to add new measures or functionalities.
- Portable. The benchmark should be OS-independent, since the matching tools might run on different OS. This requirement is fulfilled by using Java.
- Simple since both end-users and schema matching experts are targeted by this benchmark.
- Scalable in two respects: creating new benchmark scenarios should be an easy task, and a benchmark composed of many scenarios should be easy to construct and evaluate.
- Generic, it should work with most of the matchers available. Thus, the criteria have been restricted to
the average capabilities of the matchers. For example, some schema matching tools are able to match a large number of schemas at a time, while others are not. This entails limiting the number of schemas to 2. Another example: some schema matching tools may provide as output both an integrated schema and a set of mappings, while others only provide a single output.
All these requirements should be met to provide an acceptable matching benchmark. Next we focus on the criteria dedicated to the schema matching process itself.
4. MATCHING TOOLS FEATURES
Some schema matching tools have enhanced the match task, namely in automatic schema matching, with pre-match and post-match phases. This section covers the general features which define the characteristics and the capabilities of the matching tools. It is organized in four parts describing these features as follows: (i) the pre-match phase, (ii) the matching method, (iii) the output of the schema matching tool and (iv) the post-match phase.
4.1 Pre-Match Phase
This phase normally includes configuration of various parameters e.g. setting weights, thresholds of the matching algorithms etc. It can have three possibilities:
- **External resources.** They make use of some external resources, like ontologies (domain specific), thesauri or dictionaries (for example Wordnet) [13].
- **Tuning.** A matching tool might be flexible by allowing some parameters or thresholds (example 2.1) to be tuned by the user [12]. This step may be optional or compulsory, but these parameters generally affect other criteria. For instance, they can be varied to enable better performance by degrading the quality.
- **Training.** Some approaches provide a new set of machine learning based matchers for specific types of complex matchings. For example, LSD [10] uses machine learning algorithms for matching as well as for summing up the match results for each pair of attribute comparison.
This pre-matching step involves more work at the beginning. However, this effort is often rewarded since it positively affects the matching quality. In our benchmark, the pre-match appears as a list of pre-processing tasks of the matching tool, performed at this phase. For example, use of dictionaries, use of ontologies, use of synonyms table, etc.
4.2 Matching Method
Schema matching is a complex problem, which starts by discovering similarities between schema elements' names, mainly by using basic string matching approaches adapted from the information retrieval domain. These algorithms have been dependent on some basic techniques of element-level string matching, linguistic similarities, or constraint likelihood at the element level or at the higher schema structure level. Similarly, graph algorithms utilized in schema matching are a special form of constraint matching [25]. The kernel of a schema matching tool is the matcher. It corresponds to the match operator defined in [5]. Some tools use a composite approach to combine different matchers, for example LSD [10] and COMA++ [1]. Our benchmark, by means of the scoring functions described in Section 5, allows testing the quality of a matching algorithm or a combination of matching algorithms for a given scenario.
4.3 The Output
There are three main issues regarding with the output:
- **Type of output.** Most matching tools generate either an integrated schema or a set of mappings. The interesting aspect is to study how they produce the integrated schema. Our benchmark, by means of dedicated scoring functions (e.g. structural overlap), allows testing whether the method is appropriate. For example, is the method for building an integrated schema from scratch, or from a particular input schema, a good method with respect to the ideal schema?
- **Format of the output.** This is an important feature which gathers the possibilities to use this output. Since our benchmark deals with XML schemas, the output can be queried with XQuery.
- **Complexity of the mappings.** Several types of mappings need to be handled. All matching tools support the 1:1 mapping, i.e. one element from one schema is mapped to one element of another schema. Complex mappings, involving several elements and denoted 1:n, n:1, and n:m [22], are not supported by all matching tools. The possible relationship between the mapped elements can be specified: for example, some matching tools specify that an element price is mapped to the element amount with the relationship \( price = amount \times VAT \).
Our benchmark is able to deal with all kinds of mappings.
4.4 Post-Match Phase
The post-match phase uses different measures to select the best correspondence for an element from a set of possible matches which show the semantic equivalence aspect for that element. These techniques are termed as match quality measures in the literature [8]. In our benchmark, the post-match is handled by the overall and the schema proximity measures.
5. QUALITY MEASURES
The aim of the automatic schema matching process is to avoid a manual, laborious and error-prone task in large scale scenarios. For this purpose we have designed a set of score functions for evaluating the quality of the integrated schema. They are complemented by the performance aspect, although it just consists of the matching execution time. Our benchmark also provides some statistics like resource consumption (maximum memory needed, disk space storage) and statistics on the collection of schemata used (dimensions of the integrated schema: min/max depth and width, number of nodes, etc.)
5.1 Mapping Quality Measures
**Precision** is an evaluation criterion very appropriate to the schema matching framework. Precision calculates the proportion of relevant mappings among the extracted mappings. A 100% precision means that all the mappings extracted by the system are relevant.
Another typical measurement coming from the machine learning approach is **recall** which computes the proportion of relevant mappings extracted among all the relevant mappings. A 100% recall means that all relevant mappings have been found.
The main objective of schema matching is to avoid a manual process, or at least save time since an expert is still required: the output of the matcher needs to be checked and eventually completed. Hence the **overall** measure [19] has been specifically designed to evaluate the post match effort. That is, the amount of work needed to add the relevant mappings that have not been discovered and to remove those which are not relevant but have been extracted by the matcher. The Overall measure can have negative values. It is often important to determine a compromise between recall and precision. We can use a measurement taking into account these two evaluation criteria by calculating the F-measure [27]. As explained in [8], the F-measure is more optimistic than overall.
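A hedged sketch of these four mapping-quality measures, with mappings modelled as pairs of schema elements. The overall formula used here, recall × (2 − 1/precision), is the commonly cited definition, which we assume is the one intended by [19].

```python
def mapping_quality(extracted: set, relevant: set) -> dict:
    """Precision, recall, F-measure and overall for a set of extracted mappings."""
    correct = len(extracted & relevant)
    precision = correct / len(extracted) if extracted else 0.0
    recall = correct / len(relevant) if relevant else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    overall = recall * (2 - 1 / precision) if precision else 0.0   # can become negative
    return {"precision": precision, "recall": recall,
            "f-measure": f_measure, "overall": overall}

# Example: 3 of the 4 extracted mappings are relevant; 5 relevant mappings exist in total.
extracted = {("name", "fullName"), ("addr", "address"), ("dob", "birthDate"), ("id", "zip")}
relevant = {("name", "fullName"), ("addr", "address"), ("dob", "birthDate"),
            ("phone", "telephone"), ("mail", "email")}
print(mapping_quality(extracted, relevant))
# {'precision': 0.75, 'recall': 0.6, 'f-measure': 0.666..., 'overall': 0.4}
```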
5.2 Integrated Schema Quality Measures
A matching tool may provide three types of output: a set of mappings, an integrated schema, or both. When an integrated schema is provided, our benchmark is able to evaluate its semantic integrity. The previous score functions are not appropriate for this purpose since they do not deal with the structure of the schema. We have designed the following measures to reach this goal.
The first measure takes into account the **backbone** of the tree. More formally, it shows whether both trees share a large common subtree, seen as a backbone. This measure returns a value between 0 (no common subtree) and 1 (both trees are the same) and is given by the following formula:
Given an input schema tree $S_i$ and another integrated tree noted $S_p$, then
\[
\text{Backbone} = \frac{|LSub(S_i \cap S_p)|}{|S_i|} \quad (1)
\]
Where $LSub(S_i \cap S_p)$ represents the largest common subtree between trees $S_i$ and $S_p$, and $|S_i|$ is the number of elements of the tree $S_i$. This measure reflects the structural similarity of the largest shared component of two trees. Note that this backbone measure is mainly efficient with similar trees.
In the following, a subtree is defined as ‘an extract’ of a tree which is composed of at least two nodes and has its own root. All the nodes in this subtree must be descendants of this and only this subtree root.
Considering an ideal (model or expert) schema tree $S_i$ and another tree noted $S_p$ which is evaluated against the ideal tree, we define $Sub$ as the set of all disjoint subtrees which are common to $S_i$ and $S_p$. $|S_i|$ stands for the number of elements in tree $S_i$, and $k$ for the total number of elements of all subtrees in $Sub$.
Based on these assumptions, the **structural overlap** is a measure representing the number of elements which are shared by both trees and are included in a common subtree.
\[
\text{StructuralOverlap} = \frac{k}{|S_i|} \quad (2)
\]
Another interesting measure we have designed is the **structural proximity**. This measure extends the structural overlap by adding several metrics seen as differences. Indeed, the structural overlap only measures the percentage of elements in the common subtrees, and this needs to be enhanced to evaluate a structural proximity between the two trees. Thus, we have added the number of common subtrees. If $S_i$ and $S_p$ are similar, they have only one common subtree, which is the whole tree; the more common subtrees there are, the less similar the trees are. Another difference is the number of missing elements, i.e. the elements in $S_i$ that are not in one of the common subtrees. As $S_i$ is the ideal schema, all its nodes which are missing from the common subtrees affect the structural proximity between the two trees. First we define $o$ as the number of elements in $S_i$ that are not included in any common subtree; thus, $o = |S_i| - k$. The structural proximity is then obtained by the following formula:
\[
\text{StructuralProximity} = \frac{k}{|S_i|} \times \sqrt{|Sub| - o} \quad (3)
\]
This formula generates a value between 0 and 1, 0 meaning the trees are totally different and 1 ensuring the trees are identical.
Finally, the last measure, denoted **schema proximity**, computes the similarity between two trees. It takes into account both the structural aspect and the dissimilarity between the tree elements. This dissimilarity gathers the extra elements, namely those that appear in $S_p$ but not in $S_i$, and the missing elements, which are in $S_i$ but not in $S_p$. We define this dissimilarity $d = (|S_i| - |Com|) + (|S_p| - |Com|)$ where $Com$ stands for the set of common elements between $S_i$ and $S_p$ trees. The schema proximity formula is then given by:
\[
\text{SchemaProximity} = \frac{1}{|Sub|} \times \frac{k - d}{|S_i|} \quad (4)
\]
The value computed by the schema proximity measure stands between 1 for a complete similarity and $-\infty$ for a total dissimilarity between the two trees.
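A sketch of how formulas (1)–(4) could be computed once the common subtrees between $S_i$ and $S_p$ have been identified. Finding the (largest) common subtrees is the hard part and is assumed to be done elsewhere; the function below only consumes their sizes, and the guards against division by zero and a negative radicand are our additions.

```python
import math

def integrated_schema_quality(si_size, sp_size, largest_common,
                              common_subtree_sizes, common_elements):
    """Backbone (1), structural overlap (2), structural proximity (3), schema proximity (4).

    si_size / sp_size    : |S_i| and |S_p|, number of elements of each tree
    largest_common       : size of the largest common subtree LSub(S_i, S_p)
    common_subtree_sizes : sizes of the disjoint common subtrees in Sub
    common_elements      : |Com|, elements shared by both trees
    """
    k = sum(common_subtree_sizes)                    # elements covered by common subtrees
    o = si_size - k                                  # elements of S_i outside every common subtree
    d = (si_size - common_elements) + (sp_size - common_elements)   # dissimilarity

    backbone = largest_common / si_size
    structural_overlap = k / si_size
    structural_proximity = (
        (k / si_size) * math.sqrt(len(common_subtree_sizes) - o)
        if len(common_subtree_sizes) >= o else 0.0   # guard: formula (3) taken literally
    )
    schema_proximity = (
        (1 / len(common_subtree_sizes)) * (k - d) / si_size
        if common_subtree_sizes else 0.0
    )
    return {"backbone": backbone, "structural overlap": structural_overlap,
            "structural proximity": structural_proximity, "schema proximity": schema_proximity}
```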
6. XBENCHMATCH: XML SCHEMA MATCHING BENCHMARK
To evaluate and compare XML schema matching tools, we have implemented XBenchMatch. The main goal of this application is to provide two kinds of evaluation: (i) matching quality evaluation, which is based on the measures described in Section 5, and (ii) matching performance. The first criterion is very important in automatic schema matching and the second is crucial at large scale, when the schemas to be matched are very large. Finally, our tool should also help an end-user to choose the most appropriate schema matching tool according to his requirements. This section gives an overview of our benchmark.
Figure 1 describes the architecture of our prototype. The input files may be of two types, either a well-formed integrated schema or a set of mappings. Two modules are in charge of converting them into an internal structure, the XML parser and the wrapper respectively. However, the file generated by the matching tool must be of the same type as the expert one. Creating new wrappers ensures extensibility by supporting new mapping formats. Next, the benchmark engines compute the different measures between the ideal file and the matcher's file. XBenchMatch finally outputs various statistics (performance, size and depth of input schemas, ...) and the quality measures explained in Section 5. Several matching tools can also be compared on one or more scenarios, especially by comparing their f-measure and structural proximity values. Note that the user may also choose the schema corpus that has been matched by the matcher; this only generates statistics on the corpus, for example the average number of nodes, the maximum depth, etc. The static information, i.e. the features of the matching tool, is displayed to help the user understand the results of the matching. The analysis of these results is given by the dynamic criteria, namely the measures. Our tool also generates plots for precision, recall, f-measure and overall. If the input files are integrated schemas, then additional measures such as structural overlap and structural proximity are also computed.
7. EXPERIMENTAL PROTOCOL
All experiments were run on a 3.0 GHz laptop with 2 GB of RAM under Windows XP. A demo version of the prototype is available at www.lirmm.fr/~duchatea/XBenchMatch. To obtain comparable results, our benchmark provides uniform conditions and uses the same test schemas for all matching prototypes. In this section, we present the capabilities of XBenchMatch using four real-world scenarios: the first one describes a person, the second is related to a business order, the third one deals with university courses and the last one comes from the biology domain. All the ideal integrated schemas have been created manually by an expert and are provided with our benchmark application. Before using XBenchMatch, the user has to generate an integrated schema for each scenario with the matching tools he would like to evaluate.
Each of these scenarios is described in more detail below:
- **Scenario 1. General schemas** are small schemas describing a person. The ideal set of mappings and the ideal integrated schema have been produced manually by an expert.
- **Scenario 2. Business schemas** deal with an order. The first schema is drawn from the XCBL collection and has about 160 elements. The second schema also describes an order but is much smaller, with only 12 elements. This scenario reflects the possibility of matching a large schema with a smaller one. A human expert has manually generated the set of mappings between these schemas.
- **Scenario 3. University schemas** have been taken from the Thalia collection presented in [15]. Each schema has about 20 nodes and the set of mappings contains 15 mappings. An expert has manually mapped the two schemas and produced both output files, the set of mappings and the integrated schema.
- **Scenario 4. Biology schemas.** The two schemas come from different protein-domain-oriented collections, namely UniProt and GeneCards. Both are quite large, GeneCards with around 400 XML paths and UniProt with 57 paths. A domain expert has manually mapped both schemas and produced 57 mappings.
<table>
<thead>
<tr>
<th>Scenario 1: General schemas</th>
<th>Scenario 2: Business schemas</th>
<th>Scenario 3: University schemas</th>
<th>Scenario 4: Biology schemas</th>
</tr>
</thead>
<tbody>
<tr>
<td>NB nodes (S1 / S2)</td>
<td>11 / 10</td>
<td>18 / 18</td>
<td>20 / 844</td>
</tr>
<tr>
<td>Avg NB of nodes</td>
<td>18</td>
<td>18</td>
<td>432</td>
</tr>
<tr>
<td>Max depth (S1 / S2)</td>
<td>4 / 4</td>
<td>5 / 3</td>
<td>3 / 3</td>
</tr>
<tr>
<td>NB of Mappings</td>
<td>5</td>
<td>15</td>
<td>10</td>
</tr>
</tbody>
</table>
Table 1: Details about the evaluation scenarios.
Table 1 summarizes the characteristics of the scenarios which are used in the benchmark.
The user can run the default benchmark, which evaluates the matcher's integrated schemas against the four scenarios described above (person, order, university and biology).
3. http://www.xcbl.org
4. http://www.ebi.uniprot.org/support/docs/uniprot.xsd
5. http://www.geneontology.org/GO.downloads.ontology.shtml
XBenchMatch then computes the matching quality of these matchers' integrated schemas against the ideal integrated schemas. It outputs the following measures: precision, recall, f-measure, overall, structural overlap and structural proximity. A plot is automatically drawn to show the quality according to the number of elements common to the two trees. Another plot focuses on the schema structure by comparing the structural overlap and proximity to the number of elements in the common subtrees.
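For illustration, the following C sketch derives the four mapping-quality measures from raw counts, assuming the usual definitions (the f-measure as the harmonic mean of precision and recall, and overall = recall × (2 − 1/precision)); the function and field names are ours.

```c
/* Mapping-quality measures computed from the number of correctly discovered
 * mappings (tp), the number of discovered mappings (found) and the number of
 * mappings in the expert set (expected). */
typedef struct { double precision, recall, fmeasure, overall; } quality;

static quality mapping_quality(int tp, int found, int expected) {
    quality q;
    q.precision = found    ? (double) tp / found    : 0.0;
    q.recall    = expected ? (double) tp / expected : 0.0;
    q.fmeasure  = (q.precision + q.recall) > 0.0
                ? 2.0 * q.precision * q.recall / (q.precision + q.recall)
                : 0.0;
    /* Overall (post-match effort) becomes negative when precision < 0.5. */
    q.overall   = q.precision > 0.0 ? q.recall * (2.0 - 1.0 / q.precision) : 0.0;
    return q;
}
```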
As XBenchMatch is meant to be generic and extensible, it is also possible to run the benchmark on other scenarios, and a GUI is provided for this option. The process is identical to the default benchmark, except that the user needs to choose, for a specific scenario, both the ideal integrated schema and the matcher's generated integrated schema. The measures showing the quality of the matcher's integrated schema are then displayed in the main window.
Finally, XBenchMatch enables one to compare the quality of different matching tools on one or several scenarios. For example, figure 2 shows the comparison of three Matchers: COMA++, PORSCHE [24] and Similarity Flooding [19].
8. EXPERIMENT RESULTS
In this section, we present the evaluation results of the following matching tools: COMA++, PORSCHE, Similarity Flooding and BTreeMatch. However, our benchmark application is easily extended to other matchers. We note that it is hard to find matchers available for testing. The COMA++ and Similarity Flooding matchers are considered by the schema matching community to provide good matching quality.
8.1 Quality of COMA++
COMA++ generates an integrated schema in an ASCII tree format, so we developed a wrapper to convert it into an XML schema, which is the native format of our benchmark. The quality of the integrated schema is given in figure 3 and figure 4. The first remark is that COMA++ is able to keep most of the relevant elements, since the recall is equal to 1 on each scenario. However, the precision shows that COMA++ becomes less accurate when the size of the schema increases, namely most of the discovered elements should not be in the integrated schema. Except on the first scenario dealing with the person description, COMA++ needs much post-match effort to add the non-discovered elements and to remove the non-relevant ones. This is illustrated by a negative overall value in three scenarios. However, this matching tool normally uses a list of synonyms, and none has been provided in these experiments. The domain-specific biology scenario is particularly difficult for such a matching tool, which mainly uses a combination of terminological measures. As for the quality of structure, the results follow the same trend: the two small scenarios provide an acceptable quality in terms of schema structure, but this quality decreases with bigger schemas.
To improve the readability of the graph, the overall value has been limited to -1 instead of -∞. A negative overall value should be considered as not significant, as explained in [19].
COMA++ also produces a set of mappings. The quality of the set of mappings generated by COMA++ is shown in figure 5. The COMA++ results are difficult to interpret: it discovers most of the relevant mappings in two scenarios (the f-measure is above 0.6) but does not perform as well in the two other scenarios (the f-measure is below 0.1). Although the set of mappings does not convey as much information as the integrated schema, the quality obtained with the set of mappings is better than with the integrated schema; the post-match effort is therefore reduced.
8.2 Quality of PORSCHE
PORSCHE produces an integrated schema. The set of mappings it produces relates the input schemas to this integrated schema, whereas the other tested matching tools produce mappings between the input schemas themselves.
Therefore, we decided to measure only the quality of the integrated schema. The results of the experiments with PORSCHE are depicted in figure 6 and figure 7 for the four scenarios. Both the structural and quality measures on the first, small scenario are acceptable, with an f-measure around 0.8 and a structural proximity above 0.4; note that the post-match effort is minimized in these cases. However, when the number of elements increases, the quality tends to decrease: PORSCHE either discovers many elements among which only a few are relevant, or it discovers a few common elements among which most are relevant. The structural quality values are quite low. Thus, with large schemas, the integrated schemas are not similar to the ones provided by the experts. Like COMA++, PORSCHE normally uses a list of synonyms, and the lack of one can explain the average results on the order scenario. Besides, one can notice the importance of precision for the overall measure: a good precision avoids a negative overall value, even with a low recall, as shown in figure 6.
8.3 Quality of Similarity Flooding
The next experiments concern Similarity Flooding (SF), as implemented in the Rondo matching tool. The quality of the integrated schema is given in the two graphs of figure 3 and figure 9. In contrast to the previous matching tools, SF achieves better quality with large schemas. Although the precision stands around 0.5, the structural proximity and the recall are equal to 1 when the number of elements is higher than 75. As this matching tool propagates the benefit of a discovered match to the neighbouring nodes, it seems natural that it provides better results with large schemas. The quality on smaller schemas is also acceptable, with values above 0.4, although the structural quality on the small schemas is low. We can also notice that even in a specific domain like biology, where other matchers may require auxiliary information (e.g. a list of synonyms), the quality of the SF integrated schema does not decrease.
8.4 Quality of BTreeMatch
Figure 10 depicts the quality of the mappings produced by BTreeMatch. We remark that with small schemas the quality is very low, since the f-measure is less than 0.2; however, this measure reaches 0.6 on larger schemas. This behaviour can be explained by the matching algorithms used by BTreeMatch: like Similarity Flooding, it is based on both terminological and structural techniques. Thus, it seems that the structural algorithms are able to match large schemas while ensuring an acceptable quality.
8.5 Performance evaluation
|            | Person (S1: 11, S2: 10) | University (S1: 18, S2: 18) | Order | Biology |
|------------|-------------------------|-----------------------------|-------|---------|
| COMA++     | ≤ 1 s                   | ≤ 1 s                       | 3 s   | 4 s     |
| PORSCHE    | ≤ 1 s                   | ≤ 1 s                       | ≤ 1 s | ≤ 1 s   |
| SF         | ≤ 1 s                   | ≤ 1 s                       | 2 s   | 4 s     |
| BTreeMatch | ≤ 1 s                   | ≤ 1 s                       | ≤ 1 s | 2 s     |

Table 2: Matching performance on the different scenarios.
Table 2 shows the matching performance of each matching tool on the evaluation scenarios. All matchers are able to match the small schemas in less than one second. However, when one schema of the scenario is large, COMA++ and Similarity Flooding are less efficient. Similarity Flooding propagates similarities until it reaches a fixpoint, which makes the process take more time. On the other hand, PORSCHE, which has been designed to match many large schemas, does not show decreasing performance with schemas of up to 800 nodes.
8.6 Discussion.
These experiments show that some matchers are better suited to some scenarios. For example, COMA++ and PORSCHE
9. RELATED WORK
9.1 Tentative for Benchmarking Schema Matching Tools
To the best of our knowledge, there is no complete benchmark for schema matching tools. In [8], the authors present an evaluation of schema matching tools. The main criteria required to reach this goal are discussed, and a summary of the capabilities of each matching tool is provided. However, as the authors explain, it is quite difficult to evaluate matching tools for several reasons: they are not always available, even as a demo, so it is not always possible to test them against specific sets of schemas; some require specific resources to be efficient, like an ontology or a thesaurus, which are not always available; and some matching tools, for example Rondo, take specific files as input. This evaluation suffers from two drawbacks, not to mention the fact that it was published five years ago. First, by evaluating the matching tools with the scenarios provided in their respective papers, one cannot efficiently judge the capabilities of each matching tool. Secondly, some matching tools generate an integrated schema instead of a set of mappings, and the measures provided to evaluate a set of mappings are not sufficient to evaluate the quality of an integrated schema.
A proposal for evaluating schema matching was made in [28]. It extends [8] by adding time measures and relies on real-world schemas to evaluate the matching tools. However, the input is limited to a set of mappings, while some matchers provide a more interesting output by building an integrated schema. Moreover, the evaluation system has not been implemented: in contrast to our work, it is neither available nor extensible.
Our work extends the criteria list provided in [8] by adding measures to evaluate the quality of integrated schemas. It also goes further on the evaluation aspect: all the matching tools are evaluated against the same scenarios, thus enabling a better and more thorough comparison.
9.2 Schema Matching Tools
In this section we review works classified under schema matching. The surveys [22, 25, 28] incorporate solutions from schema-level (metadata) as well as instance-level (data) research, covering both the database and artificial intelligence domains. Most of the methods discussed in these surveys compare two schemas (with or without their data instances) and compute matches between the elements of the first schema and those of the second. Some of the tools also support merging the schemas based on the matches found in the first step. Here we present the main schema matching tools, including the ones we have tested with our benchmark.
The objective of TRANSCM [20] is to transform instances of a source schema into the target schema. It accepts DTD or OODB schemas as input. Internally, the schemas are converted into labeled trees and the match process is performed node by node in a top-down manner. TRANSCM presumes a high degree of similarity between the two schemas. It supports a number of matchers (rules) to find correspondences between schema nodes. Each rule may in turn combine multiple match criteria, e.g. name similarity and the number of descendants. The rules are assigned distinct priorities and applied in a fixed order. If more than one target element is found as a possible match, user interaction is required to select the match; if no match is found, the user is allowed to apply a new rule to find one.
The DIKE [21] prototype implements a hybrid approach to automatically find synonymy, hypernymy and homonymy correspondences between elements of Entity-Relationship (ER) schemas. User-specific sets of synonyms, hypernyms and homonyms are utilized, constructed by an expert or using thesauri. Besides the linguistic and syntactic comparison, the main algorithm is a structural matcher, which performs a pair-wise comparison of elements from the input schemas. The similarity weight of two elements is increased if the algorithm finds some similarity between their related elements.
CUPID [18] is a generic, hybrid schema matching prototype, consisting of a name matcher and a structural one. It has been used for XML and relational schemas. Internally, schemas are converted into trees, in which additional nodes are added to resolve the multiple/recursive relationships between a shared node and its parent nodes. First, the linguistic similarity of each pair of nodes is calculated using external oracles of synonyms and abbreviations. Then the structural matcher is applied on the tree structures in a post-order manner. This technique derives similarity values for non-leaf nodes from the similarity of their leaves. For each pair of nodes, the linguistic and structural similarities are aggregated into a weighted similarity using a weighted sum. If the weighted similarity exceeds a threshold, the structural similarity of the leaf pairs is increased; otherwise, it is decreased. For each source element, CUPID selects as match candidate the target element with the highest weighted similarity exceeding a given threshold.
Similarity Flooding [19] has been used with relational, RDF and XML schemas. These schemas are initially converted into labeled graphs, and the SF approach uses fixpoint computation to determine correspondences of 1:1 local and m:n global cardinality between corresponding nodes of the graphs. The algorithm has been implemented as a hybrid matcher, in combination with a name matcher based on string comparisons. First, the prototype performs an initial element-level name matching, and then feeds these matches to the structural SF matcher. The similarity weight of two elements is increased if the algorithm finds some similarity between their related elements. In a modular architecture, the components of Rondo, such as schema converters, the name and structural matchers, and filters, are available as high-level operators and can be flexibly combined within a script for a tailored match operation.
The target of PROTOPLASM [6] is to provide a flexible and customizable framework for combining different match algorithms. At present, CUPID and Similarity Flooding are used as its base matchers. SQL and XML schemas, converted into graphs internally, have been successfully matched. PROTOPLASM supports various operators for computing, aggregating, and filtering similarity matrices. Using a script language, it allows the workflow of the match operators to be flexibly defined and customized.
COMA/COMA++ [1, 9] is a generic, composite matcher with very effective match results. It uses the same architecture as PROTOPLASM but its range of match algorithms is more complete. It can process relational, XML and RDF schemas as well as ontologies. Internally, it converts the input schemas into trees for structural matching. For linguistic matching it utilizes user-defined synonym and abbreviation tables, like CUPID, along with n-gram name matchers. The similarity of each pair of elements is stored in a similarity matrix. At present it uses 17 element-level matchers. For each source element, the elements with a similarity higher than a threshold are displayed to the user for final selection. COMA++ supports a number of other features such as merging, saving and aggregating the match results of two schemas.
S-MATCH/S-MATCH++ [2, 14] takes two directed acyclic graph-like structures, e.g. XML schemas or ontologies, and returns equivalence and subsumption correspondences between pairs of elements. It uses the external oracle WordNet for linguistic matching, along with its structural matcher, to return subsumption-type matches. It also relies heavily on SAT solvers, which decreases its time efficiency. At present it uses 13 element-level matchers and 3 structural-level matchers.
The work of Smiljanic et al. [26] shows how a personal schema used for querying can be efficiently matched and mapped to a large repository of related XML schemas. The method identifies, within each schema of the repository, the fragments that best match the input personal schema, thus minimizing the target search space. The prototype implementation, called Bellflower, uses the k-means data mining algorithm for clustering. The authors also demonstrate that this work can be implemented as an intermediate phase within the framework of existing matching systems. The technique produces an efficient system, but with some reduction in effectiveness.
PORSCHE [23] utilizes a tree mining technique to cluster and holistically match and merge a large number of schemas (represented as trees). It gives approximate matchings and generates an integrated schema together with mappings from the source schemas to this integrated schema. It has been devised to address both quality and performance in large-scale scenarios using domain-specific linguistic matching (domain-specific synonym and abbreviation oracles). It works in three steps. First, in the pre-mapping part, schema trees are input to the system as a stream of XML, and the scope and node number of each node in the input schema trees are calculated; other statistics such as each schema's size, maximum depth and node parents are also computed, and a listing of nodes and a list of distinct labels are constructed for each tree. Next, a linguistic matcher identifies semantically distinct node labels in the label list. The user can set the level of label similarity to (a) label string equivalence, (b) label token set equivalence (abbreviation table) or (c) label synonym token set equivalence (synonym table). PORSCHE then derives the meaning of each individual token and combines these meanings to form a label concept. Finally, similar labels are clustered together; since each input node remains attached to its label object, this intuitively forms clusters of similar label nodes within a given schema.
The BTreeMatch [11] approach uses a B-tree as the main structure to locate matches and create mappings between XML tree structures. The advantage of searching for mappings with the B-tree approach is that B-trees have indexes that significantly accelerate this process. For example, let us consider two schemas S1 and S2 with respectively 8 and 9 elements: matching these schemas entails 72 matching possibilities with an algorithm that tries all combinations. By indexing in a B-tree, we are able to reduce this number of matching possibilities, thus improving performance. BTreeMatch does not use a matrix to compute the similarity of each pair of elements. Instead, a B-tree, whose indexes represent tokens, is built and enriched as new schemas are parsed, and the discovered mappings are also stored in this structure. Each token references all the labels that contain it. For each input XML schema, the same algorithm is applied: the schema is parsed element by element by preorder traversal, which enables the computation of the context vector of each element. Each label is split into tokens, and each of those tokens is looked up in the B-tree, resulting in two possibilities (a small sketch of such a token index is given after the list below):
- no token is found, so we just add it in the B-tree with a reference to the label.
- or the token already exists in the B-tree, in which case
we try to find semantic similarities between the current label and the ones referenced by the existing token. We assume that in most cases similar labels have a common token (and if not, they may still be discovered through the context similarity).
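To give an idea of this token-based indexing, here is a deliberately simplified C sketch; it uses a flat array with linear search where BTreeMatch uses a B-tree, and the structure names and size limits are ours.

```c
#include <stdio.h>
#include <string.h>

#define MAX_TOKENS 256
#define MAX_LABELS 16

/* One index entry: a token and the labels that contain it. */
typedef struct {
    char token[32];
    const char *labels[MAX_LABELS];
    int nlabels;
} entry;

static entry index_[MAX_TOKENS];   /* zero-initialized, bounds not checked */
static int nentries = 0;

/* Register one token of a label: either the token is new and is added to the
 * index, or it already exists and the label is attached to it, which is the
 * point where label similarities would be investigated. */
static void add_token(const char *token, const char *label) {
    for (int i = 0; i < nentries; i++) {
        if (strcmp(index_[i].token, token) == 0) {
            index_[i].labels[index_[i].nlabels++] = label;
            return;                           /* shared token: candidate match */
        }
    }
    strncpy(index_[nentries].token, token, sizeof index_[nentries].token - 1);
    index_[nentries].labels[0] = label;
    index_[nentries].nlabels = 1;
    nentries++;
}

int main(void) {
    /* The labels "delivery address" and "address" share the token "address". */
    add_token("delivery", "delivery address");
    add_token("address",  "delivery address");
    add_token("address",  "address");
    for (int i = 0; i < nentries; i++)
        printf("%s -> %d label(s)\n", index_[i].token, index_[i].nlabels);
    return 0;
}
```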
9.3 Data Instance Based Schema Matching
In this section we consider some recent prototypes which use schema instance data and machine learning techniques to find possible matches between two schemas. These matchers examine all possible match or mismatch possibilities among the attributes of the two source schemas to come up with the best results.
AUTOMATCH [4] is the predecessor of AUTOPLEX [3]. It uses a single-strategy, machine learning match technique: a Naïve Bayes algorithm analyses the input instances of relational schema fields against a previously built global schema. The match result consists of 1:1 correspondences with global cardinality.
CLIO [16] has been developed at IBM. It has a comprehensive GUI and provides matching for XML and SQL schemas. It uses a hybrid approach, combining an approximate string matcher for element names with a Naïve Bayes learning algorithm for exploiting instance data. It also facilitates the production of transformation queries (SQL, XQuery, or XSLT) from the source to the target schema, based on the computed mappings.
LSD [10] is a composite matcher. It requires an already developed global schema, against which newer schemas and their data instances are matched. LSD uses machine learning algorithms both for matching and for combining the match results of each pair of attribute comparisons. LSD has been further utilized in corpus-based matching [17], which creates a corpus of existing schemas and their matches; in this work, input schemas are first compared to the schemas in the corpus before being compared to each other. Another extension based on LSD is IMAP [7]; there, the authors utilize LSD to find 1:1 and n:m mappings among the attributes of the two source schemas.
10. CONCLUSION
In this paper, we have presented a benchmark for XML schema matching tools. Our approach focuses on evaluating matching tools in terms of matching quality and performance. Our work extends the criteria provided in [8] by adding new scoring functions which evaluate the quality of integrated schemas, and it extends the evaluation methodology: in XBenchMatch, all the matching tools are evaluated against the same scenarios, producing a more objective comparison. Next, we also aim at giving an overview of a matching tool by analysing its features and deducing some criteria it might fulfill; this should help an end-user to choose among the available matching tools depending on his requirements. Furthermore, we provide a testbed involving a large schema corpus that can be used by everyone to quickly benchmark new matching algorithms.
We are planning to extend our experiments to the CUPID prototype and to other matching tools as they become available. We also plan to include an evaluation of scalability; this does not require any extension of our benchmark, only the manual generation of an expert schema for a large number of input schemas to be matched.
11. ACKNOWLEDGMENTS
The authors would like to thank all the researchers who made available their schema matching tools.
12. REFERENCES
Automatic Source-to-Source Error Compensation of Floating-Point Programs
Laurent Thévenoux, Philippe Langlois, Matthieu Martel
To cite this version:
Laurent Thévenoux, Philippe Langlois, Matthieu Martel. Automatic Source-to-Source Error Compensation of Floating-Point Programs. Computational Science and Engineering (CSE), Oct 2015, Porto, Portugal. pp.9–16, 10.1109/CSE.2015.11 . hal-01158399
HAL Id: hal-01158399
https://hal.archives-ouvertes.fr/hal-01158399
Submitted on 27 Oct 2016
Distributed under a Creative Commons Attribution 4.0 International License
Automatic Source-to-Source Error Compensation of Floating-Point Programs
Laurent Thévenoux
Inria – Laboratoire LIP
(CNRS, ENS de Lyon, Inria, UCBL)
Univ. de Lyon, France
Email: laurent.thevenoux@inria.fr
Philippe Langlois and Matthieu Martel
Univ. Perpignan Via Domitia, DALI, F-66860, Perpignan, France
Univ. Montpellier II, LIRMM, UMR 5506, F-34095, Montpellier, France
CNRS, LIRMM, UMR 5506, F-34095, Montpellier, France
Email: {langlois, matthieu.martel}@univ-perp.fr
Abstract—Numerical programs with IEEE 754 floating-point computations may suffer from inaccuracies, since finite-precision arithmetic is only an approximation of real arithmetic. Solutions that reduce the loss of accuracy are available, for instance compensated algorithms or more precise computation with double-double and similar libraries. Our objective is to automatically improve the numerical quality of a numerical program with the smallest impact on its performance. We define and implement a source code transformation to automatically derive compensated programs. We present several experimental results comparing the transformed programs to existing solutions. The transformed programs are as accurate and efficient as the implementations of compensated algorithms when the latter exist.
I. INTRODUCTION
In this paper, we focus on numerical programs using IEEE 754 floating-point arithmetic. Several techniques have been introduced to improve the accuracy of numerical algorithms, as for instance expansions [4], [23], compensations [7], [10], differential methods [14] or extended precision arithmetic using multiple-precision libraries [5], [8]. Nevertheless, bugs from numerical failures are numerous and well known [2], [18]. This illustrates that these improvement techniques are not known enough outside the floating-point arithmetic community, or not sufficiently automated to be applied more systematically. For example, the programmer has to modify the source code by overloading floating-point types with double-double arithmetic [8] or, less easily, by compensating the floating-point operations with error-free transformations (EFT) [7]. The latter transformations are difficult to implement without a preliminary manual step to define the modified algorithm.
We present a method that allows a non-expert in floating-point arithmetic to improve the numerical accuracy of his program without impacting its execution time too much. Our approach facilitates numerical accuracy improvement by automating the compensation process. Even if we provide error bounds on the processed algorithms, our approach takes advantage of a fast program transformation, available to a large community of developers. So, we propose to automatically introduce, at compile time, a compensation step by using error-free transformations. We have developed a tool that parses C programs and generates new C code with a compensated treatment: floating-point operations $\pm$ and $\times$ are replaced by their respective error-free TwoSum and TwoProduct algorithms [21, Chap. 4]. The main advantage of this method compared to operator overloading is to benefit from code optimizations and efficient code generation. Program transformation is strongly motivated by our perspectives on the multi-criteria optimization of programs [25]. These optimizations will allow trade-offs between accuracy and execution time: programs will be partially compensated, using transformation strategies, to meet time and accuracy constraints which could be difficult to reach with operator overloading.
To demonstrate the efficiency of this approach, we compare our automatically transformed algorithms to existing compensated ones such as floating-point summation [22] and polynomial evaluation [7], [10]. The goal of this demonstration is to recover automatically the same results in terms of accuracy and execution time. Compensation is known to be a good choice to benefit from the good instruction level parallelism (ILP) of compensated algorithms compared to the ones derived using fixed-length expansions such as double-double or quad-double [8], [15]. Results for the automatically transformed algorithms, both in terms of accuracy and execution time, are shown to be very close to the results for the implementation of the studied compensated algorithms.
This article is organized as follows. Section II introduces background material on floating-point arithmetic, error-free transformations, and accuracy improvement techniques like double-double arithmetic and compensation. The core of this article is Section III, where we present our automatic code transformation to optimize the accuracy of floating-point computations with the smallest execution time overhead. In Section IV, we present some experimental results to illustrate the interesting behavior of our approach compared to existing ones. Conclusion and perspectives are proposed in Section V.
II. PRELIMINARIES
In this section we recall classical notations to deal with IEEE floating-point arithmetic, basic methods to analyze the accuracy of floating-point computations, and EFTs of the basic operations $\pm$ and $\times$. We also present how to exploit these EFTs with expansions and compensations.
A. IEEE Floating-Point Arithmetic
In base $\beta$ and precision $p$, IEEE floating-point numbers have the form:
$$f = (-1)^s \cdot m \cdot \beta^e,$$
where $s \in \{0, 1\}$ is the sign, $m = \sum_{i=0}^{p-1} d_i \beta^{-i} = (d_0, d_1 d_2 \cdots d_{p-1})_\beta$ is the mantissa (with $d_i \in \{0, 1, \ldots, \beta - 1\}$ and $d_0 \neq 0$) and $e \in \mathbb{Z}$ is the exponent.
The IEEE 754-2008 standard [24] defines such numbers for several formats, that is, for various pairs $(\beta, p)$. It also defines rounding modes and the semantics of the basic operations $+, -, \times, \div, \sqrt{\cdot}$.
**Notation and assumptions.** Throughout the paper, all computations are performed in binary64 format, with the round-to-nearest mode. We assume that neither overflow nor underflow occurs during the computations. We use the following notations:
- $F$ is the set of all normalized floating-point numbers. For example, in the binary64 format, floating-point numbers are expressed with $\beta = 2$ over 64 bits: a mantissa of precision $p = 53$ bits (52 bits explicitly stored), 11 bits for the exponent $e$, and 1 bit for the sign $s$.
- $fl(\cdot)$ denotes the result of a floating-point computation where every operation inside the parenthesis is performed in the working precision and the round-to-nearest mode.
- $ulp(x)$ is the floating-point value of the unit in the last place of $x$ defined by $ulp(x) = 2^e \times 2^{1-p}$. Let $\tilde{x} = fl(x)$ for a real number $x$. We have $|x - \tilde{x}| \leq ulp(\tilde{x})/2$.
**Accuracy analysis.** One way of estimating the accuracy of $\tilde{x} = fl(x)$ is through the number of significant bits $\#_{\text{sig}}$ shared by $x$ and $\tilde{x}$:
$$\#_{\text{sig}}(\tilde{x}) = -\log_2(E_{\text{rel}}(\tilde{x})),$$
where $E_{\text{rel}}(\tilde{x})$ is the relative error defined by:
$$E_{\text{rel}}(\tilde{x}) = \frac{|x - \tilde{x}|}{|x|}, \quad x \neq 0.$$
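A direct C transcription of this accuracy measure might look as follows; the clamping to [0, 53] for the binary64 format is our addition.

```c
#include <math.h>

/* Number of significant bits shared by the computed value xt and the exact
 * value x, i.e. -log2 of the relative error, clamped to [0, 53] for binary64.
 * x must be nonzero, as in the definition above. */
static double sig_bits(double x, double xt) {
    if (x == xt) return 53.0;
    double rel  = fabs(x - xt) / fabs(x);
    double bits = -log2(rel);
    if (bits < 0.0)  bits = 0.0;
    if (bits > 53.0) bits = 53.0;
    return bits;
}
```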
**B. Error-Free Transformations**
Error-free transformations (EFT) provide lossless transformations of basic floating-point operations $\circ \in \{+,-,\times\}$. Let $a, b \in F$ and $\tilde{x} = fl(a \circ b)$. There exists a floating-point value $y = a \circ b - \tilde{x}$ such that $a \circ b = \tilde{x} + y$. We have $|y| \leq ulp(\tilde{x})/2$. Hence $\tilde{x}$ (resp. $y$) is the upper (resp. lower) part of $a \circ b$ and no digit of $\tilde{x}$ overlaps with $y$. The practical interest of EFTs comes from Algorithms 1, 2, 4, and 5 which exactly compute in floating-point arithmetic the error term $y$ for the sum and the product.
```
x ← fl(a + b) \quad \triangleright |a| \geq |b|
y ← fl((a - x) + b)
return [x, y]
```
**Algorithm 1:** FastTwoSum($a$, $b$) [Dekker, 1971].
```
x ← fl(a + b)
z ← fl(x - a)
y ← fl((a - (x - z)) + (b - z))
return [x, y]
```
**Algorithm 2:** TwoSum($a$, $b$) [Møller, 1965 and Knuth, 1969].
Algorithms 1 and 2, respectively introduced by Dekker [4], and by Knuth [12, Chap. 4] and Møller [19], provide the error of a floating-point addition. The TwoSum algorithm requires 6 floating-point operations (flops) instead of 3 for FastTwoSum, but does not require a preliminary comparison of $a$ and $b$.
```
c ← fl(f \times a)
a_H ← fl(c - (c - a))
a_L ← fl(a - a_H)
return [a_H, a_L]
```
**Algorithm 3:** Split($a$), Veltkamp's splitting [4].
Algorithm 3, due to Veltkamp [4], splits a binary floating-point number into two floating-point numbers containing the upper and lower parts. It is used in Algorithm 4, introduced by Dekker [4], to compute the EFT of a product for the cost of 17 flops.
```
x ← fl(a \times b)
y ← FMA(a, b, -x)
return [x, y]
```
**Algorithm 5:** TwoProductFMA($a$, $b$).
Some processors have a fused multiply-add (FMA) instruction which evaluates expressions such as $a \times b \pm c$ with a single rounding error. Algorithm 5 takes advantage of this instruction to compute the exact product of two floating-point numbers much faster, namely with 2 flops instead of 17 flops with TwoProduct.
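In C, these two EFTs can be written directly from the algorithms above; the function names are ours, and the fma call requires C99's math library.

```c
#include <math.h>

/* TwoSum (Algorithm 2): s + e == a + b exactly, 6 flops, no branch. */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* TwoProductFMA (Algorithm 5): p + e == a * b exactly, 2 flops. */
static void two_product_fma(double a, double b, double *p, double *e) {
    *p = a * b;
    *e = fma(a, b, -*p);
}
```

Note that such code must be compiled without value-changing optimizations (e.g. aggressive fast-math or implicit operation contraction), otherwise the error terms may be simplified away.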
Table I presents the number of operations and the depth of the dependency graph (that is, the critical path in the EFT data flow graph) for each of the output values $x$ and $y$ of these algorithms. It is shown in [13] that TwoSum is optimal, both in terms of the number of operations and the depth of the dependency graph.
| EFT algorithm | depth of $x$ | flop |
|---------------|--------------|------|
| TwoSum        | 1            | 6    |
| TwoProduct    | 1            | 17   |
| TwoProductFMA | 1            | 2    |
TABLE I: Number of floating-point operations (flop) and depth of the dependency graph for the EFT algorithms.
The result $x = fl(a \circ b)$ is computed and available after only one floating-point operation. Moreover, the computation of $y$ exposes some parallelism which can be exploited and, therefore, explains the efficiency of the algorithms [15].
Figure 1 defines diagrams for floating-point operations $\pm$ and $\times$, and for their EFTs. It allows us to graphically represent transformation algorithms as basic computational blocks.

Fig. 1: Diagrams for basic floating-point operations (a), (b) and EFT algorithms (c), (d), and (e).
C. Double-Double and Compensated Algorithms
We focus now on two methods using these EFTs to double the accuracy: double-double expansions and compensations. Then we recall why compensated algorithms are more efficient than double-double algorithms.
Double-double expansions. We present here the algorithms by Briggs, Kahan, and Bailey used in the QD library [8]. Let $a, a_H$ and $a_L$ be floating-point numbers of precision $p$. The corresponding double-double number of $a$ is the unevaluated sum $a_H + a_L$ where $a_H$ and $a_L$ do not overlap: $|a_L| \leq ulp(a_H)/2$. Double-double arithmetic simulates computations with precision $2p$. Proofs are detailed in [17].
Algorithms 6 and 7 compute the sum and the product of two double-double numbers more accurately than Dekker's algorithms [4]. Double-double algorithms need a renormalization step to guarantee $|x_L| \leq ulp(x_H)/2$. This step is ensured by a FastTwoSum EFT and is represented by the dotted boxes in Figure 2.
### Algorithm 6: DD_TwoSum
```plaintext
[r_H, r_L] = TwoSum(a_H, b_H)
[s_H, s_L] = TwoSum(a_L, b_L)
c ← fl(r_L + s_H)
[u_H, u_L] = FastTwoSum(r_H, c)
w ← fl(s_L + u_L)
return [x_H, x_L] = FastTwoSum(u_H, w)
```
Algorithm 6: DD_TwoSum($a_H, a_L, b_H, b_L$), double-double sum of two DD numbers [QD library, 2000].
### Algorithm 7: DD_TwoProduct
```plaintext
[r_H, r_L] = TwoProduct(a_H, b_H)
r_L ← fl(r_L + (a_H \times b_L))
r_L ← fl(r_L + (a_L \times b_H))
return [x_H, x_L] = FastTwoSum(r_H, r_L)
```
Algorithm 7: DD_TwoProduct($a_H, a_L, b_H, b_L$), double-double product of two DD numbers [QD library, 2000].
In practice, double-double algorithms can be simply used by overloading the basic operations as for example in Algorithm 8, which is the double-double version of SUM, the classical recursive algorithm to evaluate $a_1 + a_2 + \cdots + a_n$.
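As an illustration, a C sketch of such an overloaded summation is given below; for summing plain doubles we use a double-double + double addition (10 flops per iteration, consistent with the $10n - 9$ count quoted below) rather than the full Algorithm 6, and all names are ours.

```c
typedef struct { double hi, lo; } dd;   /* unevaluated sum hi + lo */

/* FastTwoSum (Algorithm 1), assuming |a| >= |b|. */
static void fast_two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    *e = (a - *s) + b;
}

/* TwoSum (Algorithm 2). */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* Double-double accumulator + double, with renormalization (10 flops). */
static dd dd_add_d(dd s, double b) {
    dd r;
    double t, e;
    two_sum(s.hi, b, &t, &e);
    e += s.lo;
    fast_two_sum(t, e, &r.hi, &r.lo);
    return r;
}

/* SUMDD (Algorithm 8): recursive summation with an overloaded dd addition. */
static double sumdd(const double *a, int n) {
    dd s = { a[0], 0.0 };
    for (int i = 1; i < n; i++)
        s = dd_add_d(s, a[i]);
    return s.hi + s.lo;
}
```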
Compensated algorithms. Like double-double algorithms, compensated algorithms can double the accuracy. We focus here on this class of algorithms. We already mentioned that double-double algorithms are easy to derive. On the contrary, compensated algorithms have been, up to now, defined case by case by experts of rounding error analysis [7], [9], [10], [11], [22]. For example, the compensated Algorithm 9, SUM2 [22], returns a twice more accurate sum.
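A C transcription of SUM2 is short: the error of each TwoSum is accumulated and added back once at the end (the function name is ours).

```c
/* SUM2 (Algorithm 9): compensated recursive summation of a[0..n-1]. */
static double sum2(const double *a, int n) {
    double s = a[0], sigma = 0.0;
    for (int i = 1; i < n; i++) {
        double t = s + a[i];
        double z = t - s;
        /* error term of TwoSum(s, a[i]) */
        double e = (s - (t - z)) + (a[i] - z);
        sigma += e;
        s = t;
    }
    return s + sigma;   /* compensation step */
}
```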
Double-double versus compensation. The previous double-double and compensated sums provide roughly the same accuracy. How do they compare in terms of computing time? Algorithm 9 needs $7n - 6$ flops, compared to $n - 1$ for the original SUM algorithm. The double-double summation implemented by Algorithm 8 needs $10n - 9$ flops, that is, about 1.43 times as many floating-point operations as the compensated algorithm.
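The flop ratio quoted above can be checked directly:

$$\frac{10n - 9}{7n - 6} \;\xrightarrow[n \to \infty]{}\; \frac{10}{7} \approx 1.43.$$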
Now, let us consider the instruction level parallelism by inspecting the number of instructions which could be simultaneously executed per one cycle (IPC). In the case of the classical SUM algorithm, each iteration performs one floating-point operation. Each iteration can be followed immediately by the next iteration, so IPC(SUM) = (n - 1)/n ≈ 1. With the SUMDD algorithm, each iteration of the loop contains 10 operations versus 7 for the SUM2 algorithm. Nevertheless, the main difference between both algorithms is in the parallelization of the loop iterations. The SUMDD algorithm suffers from renormalization, and one iteration may only be followed by the next one with the latency of 7 floating-point operations, so IPC(SUMDD) = (10n - 9)/(7n - 5) ≈ 1.42. The SUM2 algorithm does not suffer from such drawbacks and iterations can be executed with a latency of only one flop: IPC(SUM2) = (7n - 6)/(n + 5) ≈ 7. So SUM2 benefits from a seven times higher ILP. A detailed analysis has been presented in [16].
This fact is measurable in practice, and compensated algorithms exploit this low-level parallelism much better than double-double ones. The example of HORNER's polynomial evaluation algorithm is detailed in [15], which shows that the compensated HORNER algorithm runs at least twice as fast as its double-double counterpart with the same output accuracy. This efficiency motivates us to automatically generate existing compensated algorithms.
III. AUTOMATIC CODE TRANSFORMATION
We present how to improve accuracy thanks to code transformation. Experimental results are presented in Section IV.
A. Improving Accuracy: Methodology
Our code transformation automatically compensates programs and follows three steps.
1) First, detect the floating-point computation sequences. A sequence is the set $S'$ of dataflow-dependent operations required to obtain one or several results.
2) Then for each sequence $S'$ compute the error terms and accumulate them beside the original computation sequence by (a) replacing floating-point operations by the corresponding EFTs, and (b) accumulating error terms following Algorithms 10 and 11 given hereafter. At this stage, every floating-point number $x \in S'$ becomes a compensated number, denoted $\langle x, \delta_x \rangle$ where $\delta_x \in \mathbb{F}$ is the accumulated error term attached to the computed result $x$.
3) Finally close the sequences. Closing is the compensation step itself, so that close($S'$) means computing $x \leftarrow fl(x + \delta_x)$ for $x$ being a result of $S'$.
B. Compensated operators
Algorithms 10 and 11 allow us to automatically compensate for the error of basic floating-point operations. Inputs are now compensated numbers.
Algorithm 8: SUMDD($a_1, a_2, \ldots, a_n$), double-double classical recursive summation.
Algorithm 9: SUM2($a_1, a_2, \ldots, a_n$), compensated classical recursive summation [Rump, Ogita, and Oishi, 2005].
Algorithm 10: AC_TwoSum($a, b$), automatically compensated sum of two compensated numbers.
Algorithm 11: AC_TwoProduct($a, b$), automatically compensated product of two compensated numbers.
The error terms inherited from the operands, $\delta_a$ and $\delta_b$, are accumulated with the newly generated error; this accumulation corresponds to the second line of Algorithms 10 and 11. These inherited errors come from previous floating-point calculations. Operands with no inherited error are processed so as to minimize the added compensations; Figure 3 shows such variants, which can be obtained by removing the dashed or dotted lines.
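Although the bodies of Algorithms 10 and 11 are not reproduced here, one plausible realization of such compensated operators in C is sketched below: each operator performs the EFT of the operation and then accumulates the freshly generated error together with the inherited error terms. The first-order accumulation used for the product is our choice, not necessarily the paper's.

```c
#include <math.h>

/* A compensated number: computed value and accumulated error term. */
typedef struct { double val, err; } comp;

/* Compensated sum: TwoSum error plus inherited errors. */
static comp ac_two_sum(comp a, comp b) {
    comp r;
    r.val = a.val + b.val;
    double z = r.val - a.val;
    double e = (a.val - (r.val - z)) + (b.val - z);
    r.err = e + (a.err + b.err);
    return r;
}

/* Compensated product: TwoProduct error (via FMA) plus a first-order
 * propagation of the inherited errors. */
static comp ac_two_product(comp a, comp b) {
    comp r;
    r.val = a.val * b.val;
    double e = fma(a.val, b.val, -r.val);
    r.err = e + (a.val * b.err + b.val * a.err);
    return r;
}

/* Closing a sequence (step 3): compensate the result once at the end. */
static double close_seq(comp x) { return x.val + x.err; }
```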
Listing 2 illustrates the result of the code transformation applied to the sequence $a = b + c \times d$ of Listing 1.
Listing 1: Original code computing the sequence \(a = b + c \times d\).
```c
double foo() {
    double a, b, c, d;
    [...]                 /* variables introduced by step 1 */
    a = b + c * d;        /* computation sequence */
    [...]                 /* variables introduced by step 2 */
    return a;
}
```
Listing 2: Transformed code computing the sequence \(a = b + c \times d\) with error compensation.
```c
double foo() {
    double a, b, c, d;
    [...]                 /* variables introduced by step 1 */
    double t, c_H, c_L, d_L, d_H, tmp_L, a_L;
    [...]                 /* variables introduced by step 2 */
    double delta_tmp, delta_a;
    [...]
    /* first part of the sequence detected at step 1 */
    tmp = c * d;
    /* step 2a: adding 16 flops with TwoProduct(c, d) */
    t = 134217729.0 * c;  /* splitting factor 2^ceil(53/2) + 1 */
    c_H = t - (t - c);
    c_L = c - c_H;
    t = 134217729.0 * d;
    d_H = t - (t - d);
    d_L = d - d_H;
    tmp_L = c_L * d_L - ((tmp - c_H * d_H) - c_L * d_H - c_H * d_L);
    /* step 2b: accumulation of the TwoProduct error */
    delta_tmp = tmp_L;
    /* second part of the sequence detected at step 1 */
    a = b + tmp;
    /* step 2a: adding 5 flops with TwoSum(b, tmp) */
    t = a - b;
    a_L = (b - (a - t)) + (tmp - t);
    /* step 2b: accumulation of the TwoSum error */
    delta_a = a_L + delta_tmp;
    /* step 3: close the sequence */
    return a + delta_a;
}
```
IV. EXPERIMENTAL RESULTS
We now describe our CoHD tool that implements this code transformation. We apply it to several case studies chosen such that there exist compensated versions to compare with. We also add comparisons with the corresponding double-double versions.
A. The CoHD Tool
CoHD is a source-to-source transformer written in OCaml and built as a compiler. The front-end, which reads input C files, comes from a previous development by Casse [3]. The middle-end implements several passes: classical compiler passes such as operand renaming and three-address code conversion [1, Chap. 19], as well as one pass of floating-point error compensation. This pass uses our methodology and the algorithms defined in Section III. Then, the back-end translates the intermediate representation into C code.
B. Case Studies
We study here the cases described in Table II which are representative of existing compensated algorithms.
**Case studies: compensated algorithms of reference**

1) SUM2 for the recursive summation of $n$ values [22].
2) COMPHORNER [7] and COMPHORNERDER [10] for Horner's evaluation of $p_H(x) = (x - 0.75)^5(x - 1)^{11}$ and its derivative.
3) COMPDECASTELJAU and COMPDECASTELJAUDER [11] for evaluating $p_D(x) = (x - 0.75)^7(x - 1)^{10}$ and its derivative, written in the Bernstein basis, by means of de Casteljau's scheme.
4) COMPCLENSHAW and COMPCLENSHAWII [9] for evaluating $p_C(x) = (x - 0.75)^7(x - 1)^{10}$, written in the Chebyshev basis, by means of Clenshaw's scheme.
**Summation (case 1 above)**

| Data  | # values          |
|-------|-------------------|
| $d_1$ | $32 \times 10^4$  |
| $d_2$ | $32 \times 10^5$  |
| $d_3$ | $32 \times 10^6$  |
| $d_4$ | $32 \times 10^4$  |
| $d_5$ | $32 \times 10^5$  |
| $d_6$ | $32 \times 10^6$  |
| $d_7$ | $10^5$            |
| $d_8$ | $10^6$            |

**Polynomial evaluations (cases 2, 3, 4)**

| Data  | # $x$ |
|-------|-------|
| $x_1$ | 256   |
| $x_2$ | 256   |
| $x$   | 1     |
TABLE II: Case studies and data for SUM and polynomial evaluation with HORNER, CLENSHAW, and DECASTELJAU.
This section presents how we perform accuracy and execution time measurements to compare the programs generated automatically by our method with hand-written programs that implement compensated and double-double algorithms. It also presents a study of Horner's algorithm and summarizes the other test results: summation, and polynomial and derivative evaluation with the Clenshaw or de Casteljau algorithms. All measurements are done in the following experimental environment: Intel® Core™ i5 CPU M540 at 2.53 GHz, Linux 3.2.0.51-generic-pae i686, gcc v4.6.3 with -O2 -mfpmath=sse -msse4, PAPI v5.1.0.2 and PERPI (pilp5 version).
**Accuracy and execution time measurements.** Accuracy is measured as the number of significant bits in the floating-point mantissa. So, 53 is the maximum value we can expect from the binary64 format.
A reliable measure of the execution time is more difficult to obtain. Such measurements are not always reproducible because of many side effects (operating system, executing programs,…). Significant measures are provided here using two software tools. First, PAPI (Performance Application Programming Interface) [20] allows us to read the physical counters of cycles or instructions that correspond to an actual execution. The second software, PERPI [6], measures the numbers of cycles and instructions of one *ideal execution*, that is, one execution by a machine with infinite resources. The latter measure is more related to a performance potential than to the actual one as provided by PAPI. Using both tools provides confident and complementary results.
**Horner’s polynomial evaluation.** We automatically compensate Horner’s scheme and compare it with DDHorner (a double-double Horner evaluation) and COMPHorner (a compensated Horner algorithm). The compensated algorithm and the data come from [7].

**TABLE III: Performance measurements of the algorithms:** COMPHorner, DDHorner, and ACHorner. Real values (PAPI) are the mean of $10^6$ measures. Ideal values (PERPI) are displayed within parentheses.
<table>
<thead>
<tr>
<th></th>
<th>Instructions</th>
<th>Cycles</th>
<th>IPC</th>
</tr>
</thead>
<tbody>
<tr>
<td>COMPHorner</td>
<td>532 (566)</td>
<td>277 (62)</td>
<td>1.99 (9.12)</td>
</tr>
<tr>
<td>DDHorner</td>
<td>658 (676)</td>
<td>920 (325)</td>
<td>0.72 (2.08)</td>
</tr>
<tr>
<td>ACHorner</td>
<td>553 (581)</td>
<td>303 (77)</td>
<td>1.82 (7.54)</td>
</tr>
</tbody>
</table>
**Fig. 4: Number of significant bits #_sig when evaluating** $p_H(x) = (x - 0.75)^5(x - 1)^{11}$, where $x \in [0.68, 1.15]$ for Horner, DDHorner, COMPHorner, and ACHorner.
Let $p_H(x) = (x - 0.75)^5(x - 1)^{11}$ be evaluated with Horner’s scheme for 512 floating-point values $x \in [0.68, 1.15]$. Figure 4 shows the accuracy of this evaluation using Horner (original), DDHorner, COMPHorner, and our automatically generated ACHorner algorithm. In each case, we measure the number of significant bits $\#_{sig}$. The original Horner accuracy is low since the evaluation is processed in the neighborhood of multiple roots: most of the time, there is no significant bit. The other algorithms yield better accuracy. Our automatically generated algorithm exhibits the same accuracy behaviour as DDHorner and COMPHorner, both of which deliver results as accurate as if computed in twice the working precision.
Figure 5 reports the performance ratios between the automatically compensated (AC) algorithms and the existing compensated (COMP) and double-double (DD) ones. For example, the rightmost plot shows the ratio of the number of cycles to the number of instructions. We observe that the AC algorithms have the same features as the original compensated ones. The measurements also confirm the interest of compensated algorithms, which exhibit a better ILP potential than the DD ones.
Finally, we note that the ILP potential (shown as dotted lines in Figure 5) is not fully exploited in our experimental environment. In a more favorable environment, where the hardware could exploit much more ILP, even better results for compensated algorithms are expected.
V. CONCLUSIONS AND PERSPECTIVES
In this article we discussed the automated transformation of programs using floating-point arithmetic. We propose a new method for automatically compensating the floating-point errors of the computations, which improves the accuracy without impacting execution time too much. The automatic transformation produces some compensated algorithms which are as accurate and efficient as the ones derived case by case. The efficiency of our approach has been illustrated on various case studies.
It now remains to validate this approach (and the CoHD tool) on real and more sophisticated programs. To achieve this, we have to add support for floating-point division, square root, and the elementary functions. Moreover, this work is a first step toward the automatic generation of multi-criteria program optimizations (with respect to accuracy and execution time). It will allow us to apply partial error compensation and to optimize the execution time overhead. Strategies for partial transformation driven by code synthesis will be the subject of a forthcoming paper, whose abstract is given in [25].
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Data</th>
<th>AC-COMP</th>
<th>AC-DD</th>
</tr>
</thead>
<tbody>
<tr>
<td>HORNER</td>
<td>$p_H, x_1$</td>
<td>0</td>
<td>-1</td>
</tr>
<tr>
<td>HORNER</td>
<td>$p_H, x_2$</td>
<td>0</td>
<td>-0.5</td>
</tr>
<tr>
<td>HORNERDER</td>
<td>$p_H, x_1$</td>
<td>+0.1</td>
<td>-0.3</td>
</tr>
<tr>
<td>HORNERDER</td>
<td>$p_H, x_2$</td>
<td>+0.3</td>
<td>+0.1</td>
</tr>
<tr>
<td>CLENSHAWI</td>
<td>$p_C, x_1$</td>
<td>0</td>
<td>-1.3</td>
</tr>
<tr>
<td>CLENSHAWI</td>
<td>$p_C, x_2$</td>
<td>-0.3</td>
<td>-1.5</td>
</tr>
<tr>
<td>CLENSHAWII</td>
<td>$p_C, x_1$</td>
<td>-0.3</td>
<td>-1.7</td>
</tr>
<tr>
<td>CLENSHAWII</td>
<td>$p_C, x_2$</td>
<td>0</td>
<td>-1.2</td>
</tr>
<tr>
<td>DECASTELJAU</td>
<td>$p_D, x_1$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>DECASTELJAU</td>
<td>$p_D, x_2$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>DECASTELJAU'</td>
<td>$p_D, x_1$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>DECASTELJAU'</td>
<td>$p_D, x_2$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_1$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_2$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_3$</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_4$</td>
<td>0</td>
<td>-3</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_5$</td>
<td>0</td>
<td>-4.8</td>
</tr>
<tr>
<td>SUM</td>
<td>$d_6$</td>
<td>0</td>
<td>-10</td>
</tr>
</tbody>
</table>
TABLE IV: Differences in the number of significant bits $\#_{sig}$ between the automatically compensated (AC) algorithms and the existing compensated (COMP) and double-double (DD) ones.
Fig. 5: Performance ratios between automatically compensated algorithms (AC) and existing compensated (COMP) or double-double (DD) ones. Line drawings are real measurements done with PAPI (mean of $10^6$ values) while dotted ones are ideal measures done with PERPI.
REFERENCES
|
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01158399/file/LMT15a-ieee.pdf", "len_cl100k_base": 8306, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 38261, "total-output-tokens": 10191, "length": "2e13", "weborganizer": {"__label__adult": 0.0004258155822753906, "__label__art_design": 0.0004489421844482422, "__label__crime_law": 0.0004117488861083984, "__label__education_jobs": 0.0007176399230957031, "__label__entertainment": 0.0001024007797241211, "__label__fashion_beauty": 0.0002357959747314453, "__label__finance_business": 0.000308990478515625, "__label__food_dining": 0.00046181678771972656, "__label__games": 0.0008134841918945312, "__label__hardware": 0.0034618377685546875, "__label__health": 0.0008473396301269531, "__label__history": 0.00043082237243652344, "__label__home_hobbies": 0.00015723705291748047, "__label__industrial": 0.0009531974792480468, "__label__literature": 0.00029277801513671875, "__label__politics": 0.0004513263702392578, "__label__religion": 0.0008969306945800781, "__label__science_tech": 0.1781005859375, "__label__social_life": 0.00010323524475097656, "__label__software": 0.0075836181640625, "__label__software_dev": 0.80126953125, "__label__sports_fitness": 0.0004513263702392578, "__label__transportation": 0.0009064674377441406, "__label__travel": 0.0002510547637939453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35086, 0.03889]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35086, 0.38884]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35086, 0.8267]], "google_gemma-3-12b-it_contains_pii": [[0, 1108, false], [1108, 9003, null], [9003, 13403, null], [13403, 16495, null], [16495, 19841, null], [19841, 24132, null], [24132, 27310, null], [27310, 30774, null], [30774, 35086, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1108, true], [1108, 9003, null], [9003, 13403, null], [13403, 16495, null], [16495, 19841, null], [19841, 24132, null], [24132, 27310, null], [27310, 30774, null], [30774, 35086, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35086, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35086, null]], "pdf_page_numbers": [[0, 1108, 1], [1108, 9003, 2], [9003, 13403, 3], [13403, 16495, 4], [16495, 19841, 5], [19841, 24132, 6], [24132, 27310, 7], [27310, 30774, 8], [30774, 35086, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35086, 0.19343]]}
|
olmocr_science_pdfs
|
2024-11-30
|
2024-11-30
|
b95b9b171f38a936ddb6eaeaeb7063bcabdf87c0
|
Outsourcing service provision through step-wise transformation
CLARK, Tony <http://orcid.org/0000-0003-3167-0739> and BARN, Balbir S
Outsourcing Service Provision Through Step-Wise Transformation
Tony Clark
Middlesex University London, UK
t.n.clark@mdx.ac.uk
Balbir S. Barn
Middlesex University London, UK
b.barn@mdx.ac.uk
ABSTRACT
Component-based development principles promise a flexible approach to system design and implementation. In particular, service-based techniques provide a computational model whereby the physical location of components makes no difference to the overall system behaviour. Economic business models for organisations have led to outsourced services as an attractive way of reducing costs and allowing a business to focus on its key processes. In the context of business and IT alignment, this raises a problem of how to transform an organization and its enterprise systems so that it can take advantage of an external service, given that in most cases the existing processes will be embedded in many places across the organisation. This paper addresses this problem by proposing a simple component-based simulation language together with transformation rules that can be used to incrementally isolate a service as an external component.
1. INTRODUCTION
Modern software systems are often organised in terms of components. Component-based approaches generalise basic object-oriented implementation platforms by allowing large collections of objects to be grouped together and externalised in terms of public interfaces. Such systems execute in terms of messages between components where the distance between message source and target is completely arbitrary. Component-based approaches can be used at different stages of the development life-cycle, at different levels of granularity and can involve different architectural approaches. Elsewhere we have critiqued the relative merits of various architectural styles [10]; here we present a short overview of these approaches.
Service Oriented Architecture (SOA) organizes a system in terms of components that communicate via operations or services. Components publish services that they implement as business processes. Interaction amongst components is achieved through orchestration at a local level or choreography at a global level. Its proponents argue that SOA provides loose coupling, location transparency and protocol independence [3] when compared to more traditional implementation techniques. The organization of systems into coherent interfaces has been argued [25] as having disadvantages in terms of: extensions; accommodating new business functions; associating single business processes with complex multi-component interactions.
As described in [18] and [23], complex events can be the basis for a style of EA design. Event Driven Architecture (EDA) replaces thick interfaces with events that trigger organizational activities. This creates the flexibility necessary to adapt to changing circumstances and makes it possible to generate new processes by a sequence of events [21]. EDA and SOA are closely related since events are one way of viewing the communications between system components. The relationship between event driven SOA and EA is described in [1] where a framework is proposed that allows enterprise architects to formulate and analyse research questions including ‘how to model and plan EA-evolution to SOA-style in a holistic way’ and ‘how to model the enterprise on a formal basis so that further research for automation can be done.’
Complex Event Processing (CEP) [12] can be used to process events that are generated from implementation-level systems by aggregation and transformation in order to discover the business level, actionable information behind all these data. It has evolved into the paradigm of choice for the development of monitoring and reactive applications [6].
Enterprise Architecture (EA) aims to capture the essentials of a business, its IT and its evolution, and to support analysis of this information: ‘[it is] a coherent whole of principles, methods, and models that are used in the design and realization of an enterprise’s organizational structure, business processes, information systems and infrastructure.’ [16]. A key objective of EA is being able to provide a holistic understanding of all aspects of a business, connecting the business drivers and the surrounding business environment, through the business processes, organizational units, roles and responsibilities, to the underlying IT systems that the business relies on. In addition to presenting a coherent explanation of the what, why and how of a business, EA aims to support specific types of business analysis including [13, 22, 20, 5, 14]: alignment between business functions and IT systems; business change describing the current state of a business (as-is) and a desired state of a business (to-be); maintenance the de-installation and disposal, upgrading, procurement and integration of systems including the prioritization of maintenance needs; acquisition and mergers
describing the alignment of businesses and the changes that occur on both when they merge.
EA has its origins in Zachman’s original EA framework [28] while other leading examples include the Open Group Architecture Framework (TOGAF) [24] and the framework promulgated by the Department of Defense (DoDAF) [27]. In addition to frameworks that describe the nature of models required for EA, modelling languages specifically designed for EA have also emerged. One leading architecture modelling language is ArchiMate [17].
Enterprise Architectures are built to support use-cases related to managing and evolving an organization. For example, directive development is concerned with developing directives that express how a business operates; business intelligence describes how a CEO is informed of the state of the organization at any level; resource planning involves the allocation of business resources to processes; impact analysis covers a variety of analyses used to measure the effect a proposed change has on an organization; change management involves describing the context and requirements for changes in any aspect of the business, including the construction of as-is and to-be analysis and the calculation of the return on investment (ROI) for any proposed change; regulatory compliance checking establishes that an organization meets some externally imposed constraints on its operating procedures; risk analysis identifies dangers, both internal and external, that can affect the successful operation of the organization; acquisition and merger involves the comparison of two organizations to identify their similarities and differences with respect to achieving a goal; outsourcing involves the identification of services that can be supplied by an external partner. Supporting these use cases is challenging and requires models that accurately describe relevant aspects of an organization at an appropriate abstraction level.
2. PROBLEM AND CONTRIBUTION
An important EA use-case identified in the previous section is outsourcing. This has become increasingly popular across many sectors where it is difficult for an individual firm to master all the knowledge required to perform all business functions [29]. According to [19] outsourcing decisions are taken during the development of a new system architecture and modularity is a key enabler for these decisions: ‘A system producer basically faces two alternatives to manage the development of its components: in-house development or outsourcing.’
Business Transformation involves changing the current processes and resources used by a business in order to achieve a goal. Outsourcing is an example of a transformation that must identify those elements of a business that can be given to a service provider. The transformation removes the elements of the business that are no longer required.
Achieving an outsourced business function through business transformation involves a precise understanding of how the as-is business operates and producing a to-be business that incorporates the service provider. Component-based techniques can help with this since the service provider can be viewed as a component incorporated within the business ecosystem. However, this approach relies on a precise representation of the business as a collection of components and the ability to decompose and refactor the components in order to isolate the service provider.
Matching internal service needs with those provided by an external service is reminiscent of some of the earlier work on component re-use and repositories where research such as that by Cheng et al [7] and Jeng et al [15] under the direction of Betty Cheng presented approaches of using formal specifications utilising pre and post condition specification of operations on components to attempt to perform matching of required components with those stored in external repositories.
Transformation and refactoring approaches to component-based systems are often based on an analysis of the interfaces or require detailed understanding of programming-language semantics [4, 26]. However, if we are to work at the level of abstraction required by EA and achieve outsourcing through component-based refactoring, then there must be a precise, but implementation independent language that supports approaches such as SOA and EDA without requiring detailed knowledge of message protocols or run-time platforms.
Our hypothesis is that it is possible to take a component-based approach to outsourcing in terms of precisely defined decomposition operators that isolate the service provider through transformation and refinement. Our primary contribution is to define this approach using µLEAP which is a simple, abstract, executable component language and to show that the approach can be used to outsource a simple service. µLEAP represents a new refinement of our existing LEAP technology by the embedding of µLEAP as a domain specific language in the LISP based platform Racket.
3. CASE STUDY
The University of Middle England (UME) is under pressure to reduce costs. It has been in contact with a number of service providers in the UK Higher Education sector and has found a company that offers a service to manage registration and tuition fees for students.
Unfortunately, UME currently distributes student registration information around the institution. An academic department holds information about the fees for the individual courses that it offers, and also holds information about whether a student has paid the tuition fees. The UME finance department also manages information about whether students have paid their fees. Although the finance function knows about courses and departments, they do not currently hold any information about tuition fees which are set at departmental level. Each academic department also manages a list of active staff members, whilst the finance department knows about the staff grades for all members of department that have ever been employed by UME.
UME would like to modify its internal architecture so that it can take advantage of the service provider. This will involve changing how both departments and finance operate so that the provider manages registration and payment of tuition fees, informing a department when each student can officially start to study. Payroll is to remain a collaboration between the academic departments and finance.
4. µLEAP
The language LEAP and its associated toolset [9, 2, 11, 10, 8] have been developed to support the design, analysis and simulation of component-based systems. The LEAP approach aims to reduce such systems to a small number of orthogonal features. The full LEAP language contains many features that enable the language to integrate with a tool-set, including the ability to produce diagrams directly from the LEAP component models.
This paper uses our previous experience with LEAP as a platform for validating our hypothesis that outsourcing can be achieved using a compositional and transformational component-based approach. As such we do not need the complete LEAP language and therefore we use a sub-set called µLEAP as defined in figure 1. The rest of this section provides an overview of µLEAP and the remainder of the paper uses the language to implement and analyse the case study.
µLEAP data items are: constants (numbers, strings, booleans), lists, terms, functions and components. Boolean constants are tt and ff. µLEAP is embedded as a domain specific language in Racket (http://racket-lang.org) and uses Lisp-style lists consisting of the empty list () and cons pairs (h . t). A term (f v ...) consists of a functor, f, which is a name starting with a capital letter, followed by a sequence of values. The following is a term representing a person with an age and a name: (Person 34 "Fred"). A function (fun (x y) (+ x y)) is a value that can be applied to a collection of arguments and returns a value.
The key data value in µLEAP is the component. Components behave as independently executing state transition machines. The state of a component can be any value and the transitions are defined by a collection of rules. Messages sent to a component are added to an internal message queue. Each rule uses pattern matching against the current state of the component and its message queue. Each time the component changes, the rules are examined in turn. The first rule that matches fires, producing a new component-state and message queue and potentially sending messages to other components.
Typically, component messages are terms where the term-functor corresponds to the name of the message and the term-elements are data items passed in the message. Rules examine the head of the message queue. The following is a component that is initialised with a starting integer, a limit and a target component. It is sent Inc messages that increment the integer until the limit is reached, at which point the target component is sent a Go message and the component re-initialises:
```lisp
1   (component (list 0 limit target)
2     (rule
3       (when (list limit limit target) ((Inc) . messages))
4       (become (list 0 limit target) messages)
5       (@ target (Go)))
6     (rule
7       (when (list current limit target) ((Inc) . messages))
8       (become (list (+ current 1) limit target) messages)))
```
The program shown above provides examples of several key µLEAP features. The state of the component is a list of three elements. Line 1 uses the built-in function list to construct a list of elements. A 3-element list is constructed as (list x y z), which is equivalent to the expression (: x (: y (: z ()))), where : is the function that maps two values (typically a list-head and a list-tail) to a pair.
Patterns are used in rules and in case-expressions. A pattern matches a value and may contain variables that are bound to sub-values as a result of a successful match. Rules use patterns in the when clause to match against the current state and message queue of a component. A case-expression matches a value against a sequence of patterns and evaluates the expression associated with the first pattern that matches. Structured patterns match pairs and lists. The pattern (union p q) treats a list l as a set and matches p and q against any pair of exhaustive set partitions of l. The membership pattern (∈ p q) is equivalent to (union p q).
The component defines two rules (lines 2 and 6). The first rule has a guard on line 3 that matches against the current state and message queue. The state-pattern must match a list of three elements where the first two elements are the same, i.e., the limit has been reached. The message-pattern matches a list of elements headed by a term whose functor is Inc and without term-elements. The tail of the list is any value and is matched against the variable messages.
The first rule performs a transition to a new component state consisting of a list of three elements whose first element is 0 (the component has re-initialised), and where the message has been consumed. The action performed by the rule (line 5) sends a message Go to the component target. Messages are sent to components by applying them to a sequence of arguments. Message passing may be synchronous or asynchronous. Asynchronous message passing starts with @, as in (@ target (Go)). In that case, the message is sent and the expression immediately returns the value nothing. A synchronous message omits the @, sends the message to the target component and waits for the return value. The target component will handle the message using its own rules, and the return value is provided by the corresponding rule action.
```
p ::= d ... e ...                           programs

d ::= (def name e)                          definitions
    | (def name (name ...) e)

e ::= const                                 expressions
    | var
    | ()
    | (component e r ... e ...)
    | (name e ...)
    | (fun (name ...) e)
    | (block d ... e ...)
    | (e e ...)
    | (@ e e ...)
    | (case e (p e ...) ...)

r ::= (rule (when p p) (become e e) e ...)  rules
```
Figure 1: µLEAP Syntax
The second rule (line 6) matches when the limit has not been reached. In this case the current value is incremented and no action is taken.
Both components and functions are first-class data values in µLEAP. This feature is important since it is used to implement a component combinator ⊕ that allows an as-is architecture to be decomposed into a collection of atomic components that can be refactored in order to identify and isolate a service provider. As a result, µLEAP has the flavour of a functional programming language and can support many of the patterns that are typical of that idiom. For example:
```lisp
(def internal (fun (self) ...))   ; a pre-component: a function from the whole (self) to a part
(def service  (fun (self) ...))
(def business (new (⊕ internal service)))
```
Given this basic idea, the approach is described as follows: Suppose that a business can be defined using a collection of components \(A, B\) and \(C\) and that there is a service provider \(D\) that roughly corresponds to \(B\). Using infix notation and being clear about associativity, the business is: \(A \oplus (B \oplus C)\). Notice that \(B\) is buried inside the expression, i.e., the business. If \(\oplus\) is associative then the architecture can be redefined as: \((A \oplus B) \oplus C\). If \(\oplus\) is commutative then the architecture is equivalent to: \((B \oplus A) \oplus C\). Now associativity can be used to isolate \(B\): \(B \oplus (A \oplus C)\). If we can define a correspondence \(\phi\) between \(B\) and \(D\), then it is possible to replace \(B\) by the service provider: \(\phi(D) \oplus (A \oplus C)\).
In a realistic situation, it will be unlikely that existing components can be decomposed in such a straightforward manner. Further analysis leads to the following requirements for the composition operator \(\oplus\):
[R1] A single component may send messages to itself. If a component \(X\) is decomposed into \(X_1 \oplus X_2\) then the independent components \(X_1\) and \(X_2\) must be able to refer to the whole.
[R2] It is likely that several sub-components will be removed when moving from an \(as\text{-}is\) architecture to a \(to\text{-}be\) architecture involving a service provider. Therefore, the operator \(\oplus\) should be both associative and commutative, or there should be variants that can be selectively used as appropriate.
[R3] The internal interfaces used by an organisation are likely to be different to those provided as a service. Since components and messages are first-class data in µLEAP it is straightforward to rename messages. For example, given a component C with a message interface M that is to be replaced with a service component whose message interface is N, but which otherwise has the same behaviour, we can wrap a renaming, C(M), which is shorthand for:
```lisp
(component ()
  (rule
    (when () ((N . args) . ms))   ; receive a message with the new functor N
    (become () ms)
    (C (M . args))))              ; forward it to C with the original functor M
```
[R4] An organisation will consist of a collection of components and will execute in terms of message traces \(m \in M\). The state of the organisation is represented by the aggregate state of the components \(\Sigma\). If the organisation is broken down into finer-grained components that are composed using \(\oplus\) then the structure of the aggregate state is \(\Sigma'\) and the execution traces are \(m' \in M'\). However, it should be possible to define a mapping \(\phi: \Sigma' \rightarrow \Sigma\) such that executions can be mapped as shown in the commuting diagram below.
Consider a business that operates a software component that manages two counters:
```lisp
(def business
  (component (list n m)                                               ; two counters
    (rule (when (list n m) ((Inc) . ms)) (become (list (+ n 1) m) ms))   ; Inc/Dec names illustrative
    (rule (when (list n m) ((Dec) . ms)) (become (list n (- m 1)) ms))))
```
Suppose that a service provider offers an interface that increments numbers at a very reasonable price. The business would like to outsource that portion of its business that increments the counter whilst it retains that which decrements the counter. In this example the service provider component is obvious. Suppose that the binary operator \(\oplus\) maps two components to a single combined component.
Given such an operator, business can be redefined as:
```lisp
(def internal (fun (self) ...))   ; the decrementing part, retained in-house
(def service  (fun (self) ...))   ; the incrementing part, to be outsourced
(def business (new (⊕ internal service)))
```
Executions of the finer-grained organisation can then be mapped onto executions of the original one:
\[
\begin{array}{ccc}
\Sigma' & \xrightarrow{\;m'\;} & \Sigma' \\
{\scriptstyle\phi}\big\downarrow & & \big\downarrow{\scriptstyle\phi} \\
\Sigma & \xrightarrow{\;\phi(m')\;} & \Sigma
\end{array}
\]
If the diagram shown above commutes then it means that the execution of the to-be organisation is consistent with the as-is execution after re-namings, changes in state composition, and modifications to messages are taken into account.
The approach to outsourcing is to be supported by a range of operators that behave like \( + \) and \( \sqcup \) with well understood properties. In our case study we use two operators that are defined below. In order to satisfy R1, it must be possible for each component-part to refer to the whole component. This is achieved by defining pre-components as functions that map a whole-component \( (self) \) to a component-part. The pre-components are combined using the operators as described below. Given a pre-component, it is transformed into a whole-component using \( new \):
```lisp
(def new (pre-component)
  (block
    (def whole-component (pre-component whole-component))
    whole-component))
```
The pre-component composition operator is defined as follows:
```lisp
(def combine (v1 v2)
  (case (cons v1 v2)
    (((Fail) . (Fail)) (Fail))
    (((Fail) . v)      v)
    ((v . (Fail))      v)
    ((v . w)           (error))))

(def + (c1 c2)
  (fun (self)
    (block
      (def o1 (c1 self))
      (def o2 (c2 self))
      (component (list o1 o2)
        (rule
          (when s (m . ms))
          (become s ms)
          (combine (o1 m) (o2 m)))))))

(def epsilon                       ; identity pre-component (name assumed)
  (fun (self)
    (component ()
      (rule
        (when () (m . ms))
        (become () ms)
        (Fail)))))
```
The \( + \) operator combines pre-components that handle synchronous messages (i.e., those that contain rules that return results). The function combine ensures that the two components \( o1 \) and \( o2 \) are independent and only one can return a result for a given message (or both fail). This ensures that the operator \( + \) is a commutative monoid:
\[
\begin{align*}
X + Y &= Y + X \\
X + (Y + Z) &= (X + Y) + Z \\
X + \epsilon &= X = \epsilon + X
\end{align*}
\]
Although the case study in this paper uses \( + \), in practice there is likely to be a family of operators that exhibit different transformation properties. For example, we may combine two components that exclusively deal with asynchronous messages. In that case the following associative and commutative operator can be used:
```lisp
(def ⊗ (c1 c2)                    ; name of the asynchronous operator assumed
  (fun (self)
    (block
      (def o1 (c1 self))
      (def o2 (c2 self))
      (component (list o1 o2)
        (rule
          (when s (m . ms))
          (become s ms)
          (@ o1 m)                ; forward each asynchronous message to both parts
          (@ o2 m))))))
```
6. WORKED EXAMPLE
Section 3 describes a case study that involves two interacting components. The approach described in this paper is a method for representing a component architecture and subsequently transforming it in order to isolate a component corresponding to a service provider.
Figure 2 shows the decomposition of the finance and academic departments into separate aspects. The academic department manages staff, students registered on courses (with both paid and unpaid tuition fees) and the cost of the fees for each course. The finance department manages all staff grades, students who have paid and the process of payment.
Once the aspects of each major component have been identified, the approach creates a to-be architecture as shown in figure 9 where the aspects have been reallocated. Some aspects are handled by the service provider and have been removed from the as-is component. Others, such as recording students as having paid for their courses are shared between the service provider and the business.
Section 6.1 uses µLEAP to represent the as-is case study architecture. Section 6.2 describes the service provider as a µLEAP component and section 6.3 uses the approach to apply a step-wise transformation to the as-is architecture in order to produce a to-be architecture containing the service provider component.
6.1 As-Is Architecture
The current system is implemented using two components. The first component is used to manage a department within the university and the second deals with finance. This section gives a simple implementation of both components using µLEAP.
Figure 4 shows a component department that manages a department. The state of the component consists of a list of lecturers and a list of courses. Each course is a term of the form (Course name fees students) where students is a list of student names with departments.
The department component defines rules that process the following messages:
- (GetStaff) returns the list of lecturers in the department.
- (Register student course) adds a student record to the department.
- (HasStudent name) returns true when the department has the named student.
- (RegisteredFor name) returns the name of the course that the named student is studying.
- (Paid student paid?) informs the department that the student has paid something towards their tuition fees. The boolean value paid? determines whether the fees are fully paid or not.
- (GetFee course) returns the fee associated with the named course.
Figure 5 shows the finance component of the university. The state consists of four elements: (1) a list of the departments in the university; (2) a list of the all university staff and their job-grades; (3) a list of student records of the form (Student name paid?); (4) a list of terms that associate course names with departments.
The finance component has an interface that handles the following messages:
- (Payroll) which causes all of the staff of the university to be paid. This involves iterating over all of the departments, getting the staff in each department, looking up their job-grade and making a payment based on the grade.
```lisp
(def provider
  (component
    (list (Courses (list (Course "computer science" department 90000)))
          (Students ()))
    (rule
      (when (list cs (Students ss)) ((Register student c) . ms))
      (become (list cs (Students (: (Student student c ff) ss))) ms))
    (rule
      (when (list (== cs (Courses (∈ (Course c d f) _)))
                  (Students (∈ (Student s c _) ss)))
            ((Pay s a) . ms))
      (become (list cs (Students (: (Student s c (= a f)) ss))) ms)
      (case (= a f)
        (tt (d (Register s c)))))))
```
Figure 6: Service Provider
- (Pay lecturer) looks up the job-grade of the lecturer and makes the payment.
- (Register student) adds a new student record. The student is marked as having tuition fees outstanding.
- (Pay student amount) informs finance that the student has made a payment. The tuition fee is requested from the appropriate department and the student record is updated accordingly.
6.2 Service Provider
Figure 6 shows an implementation of the service provider component. The provider manages the financial aspect of student registration and therefore manages a state that is a list of course and student terms. A course term has the form (Course name department fees) and a student has the form (Student name course paid?). When a student registers with the provider they are marked as owing tuition fees and when they pay the correct amount their status changes, and the appropriate department is informed.
6.3 Transformation
Our proposition is that we can take the as-is architecture described in section 6.1, decompose it using the operator ⊕, transform the resulting tree of pre-components and then show that the service provider defined in figure 6 can be isolated in the to-be architecture.
The first step is to decompose the as-is architecture. Consider the department component. It consists of two different aspects: staff and courses. The staff can be defined as a separate pre-component:
```lisp
(def department-staff (self)
  (component
    (list (Lecturer "Dr Piercemuller"))
    (rule
      (when staff ((GetStaff) . ms))
      (become staff ms)
      staff)
    (rule
      (when s (m . ms))
      (become s ms)
      (Fail))))
```
From now on the rule that produces (Fail) will be omitted from pre-components since it is always the last rule. The courses information is slightly more structured since it contains two aspects, the tuition fees and the students. Therefore the tuition fees are identified as a separate pre-component:
```lisp
(def department-courses-fees (self)
  (component
    (list (Course "computer science" 90000))
    (rule
      (when (== courses (∈ (Course name fee) _)) ((GetFee name) . ms))
      (become courses ms)
      fee)))
```
The student information relates to students that have paid their fees and those that have not, so we can separate these issues out. The students that have paid their fees are maintained by a pre-component that manages a list of courses. Each course contains a list of students who have paid their tuition fees:
```lisp
(def department-paid-students (self)
  (component
    (list (Course "computer science" ()))
    (rule                                    ; record a paid student for a course
      (when (∈ (Course c ss) cs) ((AddPaid s c) . ms))
      (become (: (Course c (: s ss)) cs) ms))
    (rule                                    ; has the named student paid?
      (when (== courses (∈ (Course c (∈ s _)) _)) ((HasPaidStudent s) . ms))
      (become courses ms)
      tt)
    (rule                                    ; which course is the student registered for?
      (when (== courses (∈ (Course c (∈ s _)) _)) ((RegisteredFor s) . ms))
      (become courses ms)
      c)))
```
The pre-component department-unpaid-students is virtually the same as that shown above except it handles additional messages for (HasUnpaidStudent) and (RemoveUnpaid).
Having defined all of the pre-components it is possible to compose them to produce a single component for a department. Firstly, a pre-component department-students that manages students is defined as follows:
```lisp
(def student-extension
  (fun (self)
    (component ()
      (rule
        (when s ((Paid student) . ms))
        (become s ms)
        (self (AddPaid student (self (RegisteredFor student))))
        (self (RemoveUnpaid student)))
      (rule
        (when s ((HasStudent name) . ms))
        (become s ms)
        (or (self (HasPaidStudent name))
            (self (HasUnpaidStudent name)))))))
```
```lisp
(def department-students
  (⊕ (⊕ department-unpaid-students department-paid-students)
     student-extension))
```
Next, the courses and staff can be added:
```lisp
(def pre-department
  (⊕ department-staff
     (⊕ department-courses-fees department-students)))
```
Finally, a department is created by:
```lisp
(def department (new pre-department))
```
The finance component is decomposed into pre-components as shown in figure 7.
Figure 7: Finance Decomposition
6.4 Outsourcing
The previous section has decomposed the as-is architecture as a collection of components combined using \( \oplus \). We can use the properties of these operators to transform the pre-components in order to isolate the service provider. It is unlikely that the transformations will produce a component that is in one-to-one correspondence with the service provision component, however a mapping \( \ominus \) can be defined between the states of the isolated pre-component and the service provider component and used to to establish equivalence.
The first step is to transform the pre-components. To make this more concise, we rename the pre-components as shown in figure 8. Therefore, a department pre-component is:
\[ A \oplus (B \oplus ((C \oplus D) \oplus E)) \]
Figure 8: Pre-Component Labels
Given the properties of \( \oplus \) we can transform this expression into the following pre-component:
\[ (B \oplus (D \oplus E)) \oplus (A \oplus C) \]
The finance department pre-component is defined as follows:
\[ ((F \oplus G) \oplus (H \oplus (I \oplus J))) \]
Again, using the properties of the pre-component combination operator the expression can be transformed into:
\[ (H \oplus (I \oplus J)) \oplus (F \oplus G) \]
Through transformation, we have isolated two elements of pre-components that can be combined to produce a new component: \((B \oplus (D \oplus E)) \oplus (H \oplus (I \oplus J))\), producing the following system defined by three pre-components instead of two:
pre-department = \((A \oplus C)\)
pre-finance = \((F \oplus G)\)
pre-service = \((B \oplus (D \oplus E)) \oplus (H \oplus (I \oplus J))\)
We must now establish that the new system configuration has equivalent behaviour to the as-is architecture and that the pre-service component captures the behaviour of the service provider as defined in figure 6.
Equivalence is established by defining state mappings that preserve the behaviour between systems. Since µLEAP states and messages are explicitly represented in each component, equivalence can be established through inspection of the definitions and reasoned argument. The state of the to-be architecture has three elements corresponding to the separate components. The form of the state is given as type signatures:
1. \((\text{Lecturer String})\ldots (\text{Course String} (\text{Student String})\ldots)\)
2. \((\text{Grade String Integer})\ldots (\text{Department} \ldots)\)
3. \((\text{Course String Integer})\ldots (\text{Course String} (\text{Student String})\ldots)\ldots)\)
A to-be department (1) has a state containing lecturers and courses with paid-up students. A to-be finance component (2) manages the staff and their grades and contains a collection of departments. The to-be model of the service-provider has a state (3) containing the same course occurring in three different aspects: fees, paid students and unpaid students.
Taken as a whole, the to-be state can be mapped to the as-is state. For example, the paid and non-paid students that are now managed by the service provider can be mapped back to the states in the finance and academic departments. Therefore, no information has been lost. Furthermore, if we exercise the to-be system by processing messages, we find that the state changes are consistent with respect to the mapping.
If we now reverse the decomposition based on the definitions of pre-department and pre-finance that have been achieved by transformation and refinement above, the to-be architecture is produced as shown in figures 9 and 6, with components department, finance and provider as required.
7. ANALYSIS AND CONCLUSION
This paper has identified a use-case of component-based systems and Enterprise Architecture in particular: organizational transformation in order to outsource business functionality. Such a use-case is intrinsic to any notion of business and IT alignment as the requirement to move from an as-is architecture to an to-be architecture leads to a problem: how to analyse and transform the architecture in order to achieve confidence that the resulting business, based on the new service, is equivalent to the existing business.
Our contribution is to identify and implement a process for achieving outsourced services that is based on component decomposition and transformation. Our claim is validated by implementing the process using an abstract, higher-order, executable component language μLEAP and using the implementation to address a simple case study. The process requires more detailed elaboration outside the scope of this paper, for example to include discussions on how candidate components for outsourcing may be identified. For example, outsourcing any operation needs to be done in a business context of strategy and goals of an organisation. Our previous work provides an indication of such a direction of travel [11].
Whilst we have shown that the approach can be implemented, the case study is the basis for much further work. μLEAP is executable and therefore has an operational semantics, and all of the examples shown in the paper have been implemented as μLEAP executable models. The models have been run against test data that supports the claims that have been made for the architectural transformations. A precise analysis of component-based architectures is likely to require a more declarative semantics than that presented here; the form of the semantics and the ability to use it as a basis for reasoning about architecture is left as further work.
In addition, μLEAP lacks a type system that would help to validate claims of component equivalence. As identified earlier in this paper, the + pre-component composition operator is likely to be one of many and therefore more elaborate case studies will be required in order to identify alternatives.
This paper presents a technological basis for an approach; as such it lacks a methodological framework within which the technology can be used. Given an as-is enterprise, there must be some guidance regarding its representation as a µLEAP model. Existing component-based approaches might be used here; however, given the aim of using refinement and transformation operators, the subsequent identification and isolation of an outsourced component is likely to be facilitated by a judicious choice of representation for the as-is structures.
Although the approach as described here is textual, it could be supported by graphical languages such as UML where a high-level view of an organization is described as a collection of components via diagrams and μLEAP is used to define the internal details for simulation. Such an approach would require a UML profile to allow components to be expressed as a combination of sub-models using the operators defined in this paper.
Our case study has been used to validate the technology but lacks validation through results that establish its practicality. For example, without guidance, how easy is it to get an organisation of components that makes subsequent transformation difficult or impossible? Given the size of an organisation, is the technology too detailed, leading to problems of maintenance and availability of expertise? These areas are left for further work.
8. REFERENCES
|
{"Source-Url": "http://shura.shu.ac.uk/12061/1/16-2014%20%20%20Outsourcing%20Service%20Provision%20Through%20Step-Wise%20Transformation.pdf", "len_cl100k_base": 9241, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 41385, "total-output-tokens": 10455, "length": "2e13", "weborganizer": {"__label__adult": 0.0002918243408203125, "__label__art_design": 0.0005521774291992188, "__label__crime_law": 0.0002880096435546875, "__label__education_jobs": 0.002277374267578125, "__label__entertainment": 6.651878356933594e-05, "__label__fashion_beauty": 0.000141143798828125, "__label__finance_business": 0.00067138671875, "__label__food_dining": 0.0003113746643066406, "__label__games": 0.00039458274841308594, "__label__hardware": 0.0005645751953125, "__label__health": 0.0003466606140136719, "__label__history": 0.00025200843811035156, "__label__home_hobbies": 7.30752944946289e-05, "__label__industrial": 0.00037980079650878906, "__label__literature": 0.00029969215393066406, "__label__politics": 0.0002753734588623047, "__label__religion": 0.0003497600555419922, "__label__science_tech": 0.016571044921875, "__label__social_life": 9.41753387451172e-05, "__label__software": 0.00653839111328125, "__label__software_dev": 0.96826171875, "__label__sports_fitness": 0.0001952648162841797, "__label__transportation": 0.0005145072937011719, "__label__travel": 0.00019156932830810547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42126, 0.01497]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42126, 0.45415]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42126, 0.91394]], "google_gemma-3-12b-it_contains_pii": [[0, 647, false], [647, 5610, null], [5610, 12346, null], [12346, 18737, null], [18737, 23100, null], [23100, 26986, null], [26986, 28872, null], [28872, 33852, null], [33852, 37262, null], [37262, 42126, null], [42126, 42126, null]], "google_gemma-3-12b-it_is_public_document": [[0, 647, true], [647, 5610, null], [5610, 12346, null], [12346, 18737, null], [18737, 23100, null], [23100, 26986, null], [26986, 28872, null], [28872, 33852, null], [33852, 37262, null], [37262, 42126, null], [42126, 42126, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42126, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42126, null]], "pdf_page_numbers": [[0, 647, 1], [647, 5610, 2], [5610, 12346, 3], [12346, 18737, 4], [18737, 23100, 5], [23100, 26986, 6], [26986, 28872, 7], [28872, 33852, 8], [33852, 37262, 9], [37262, 42126, 10], [42126, 42126, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42126, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
d0ab19978a74b462d32324f874bf77fc6b564136
|
Ontologies in domain specific languages: a systematic literature review
Ontologies in domain specific languages – A systematic literature review
Ana-Maria Şutii, Tom Verhoeff and M.G.J. van den Brand
ISSN 0926-4515
All rights reserved
editors: prof.dr. P.M.E. De Bra
prof.dr.ir. J.J. van Wijk
Reports are available at:
http://library.tue.nl/catalog/TUEPublication.csp?Language=dut&Type=ComputerScienceReports&Sort=Author&level=1 and
http://library.tue.nl/catalog/TUEPublication.csp?Language=dut&Type=ComputerScienceReports&Sort=Year&Level=1
Computer Science Reports 14-09
Eindhoven, November 2014
Ontologies in domain specific languages - A systematic literature review
Ana-Maria Sutii, Tom Verhoeff, M.G.J. van den Brand
October 21, 2014
Abstract
The systematic literature review conducted in this paper explores the current techniques employed to leverage the development of DSLs using ontologies. Similarities and differences between ontologies and DSLs, techniques to combine DSLs with ontologies, the rationale of these techniques and challenges in the DSL approaches addressed by the used techniques have been investigated. Details about these topics have been provided for each relevant research paper that we were able to investigate in the limited amount of time of one month. At the same time, a synthesis describing the main trends in all the topics mentioned above has been done.
1 Introduction
This work is motivated by the fact that ontologies, as knowledge representation systems, can be used in analysing the domain of a DSL [51]. From this follows the question whether ontologies can be reused in other phases of building DSLs. This further led to a search through the literature that we report in this paper.
The purpose of this paper is to conduct a systematic literature review on different techniques that exist on leveraging ontologies in domain-specific languages (DSL). The rationale for using these techniques and the challenges that the usage of the ontologies address in the DSL approach are also considered. At the same time, similarities and differences between ontologies and DSLs are investigated as a way of offering context to the other topics.
A paragraph in Richard Hamming's Turing Award lecture about developing solutions from existing ones is a valid argument for conducting literature reviews as well: “Indeed, one of my major complaints about the computer field is that whereas Newton could say, ‘If I have seen a little farther than others, it is because I have stood on the shoulders of giants,’ I am forced to say, ‘Today we stand on each other’s feet.’ Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way. Science is supposed to be cumulative, not almost endless duplication of the same kind of things.” [25].
Context
There are initiatives, like those of the TWOMDE workshop [39], that investigate how ontologies and the reasoning capabilities they support can be used in Model Driven Engineering (MDE). One of the areas of MDE where ontologies could be valuable is that of domain specific languages. Our focus is on ontologies and domain specific languages.
The development of domain specific languages and ontologies has taken place separately. Domain specific languages have been the focus of designers of applications in engineering fields, while ontologies have been more the focus of Artificial Intelligence and Knowledge Engineering [35]. Our goal is to identify ways to benefit from ontologies in DSLs, in spite of their different origins.
Structure of the paper
Section 2 introduces terms used in the explanation of the investigated papers and Section 3 gives an overview of the research questions, the search process, the inclusion and exclusion criteria for papers and the quality assessment attributes of the papers. Then, Section 4 answers the research questions for each paper individually. Section 5 grades the investigated papers based on the way they answer the research questions. Section 6 describes papers that did not make it into the final batch of accepted papers, but that still present interesting ideas on our subject. Then, Section 7 presents some statistics on the selected papers. Finally, Section 8 makes a synthesis of the information obtained from the research papers, Section 9 presents threats to the validity of the results obtained and Section 10 makes the final remarks.
2 Preliminary notes
In this section we concisely define ontologies and DSLs. Besides the definitions, there are multiple languages, frameworks and architectures related to the subject of ontologies or DSLs. We also give a short explanation of some notions that we encountered in the investigated papers. We have grouped these notions into two different technological spaces.
A technical (technological) space, as defined by Kurtev et al. [32], is a “working context with a set of associated concepts, body of knowledge, tools, required skills, and possibilities”.
A technological space has a user community around it that shares the knowledge, the literature and that even organizes conferences and workshops. The two technical spaces that we are using are the ontological technical space and the model driven architecture (MDA) technical space. Ontologies are part of the ontological technical space, while DSLs are part of the MDA technical space.
2.1 Ontologies
An ontology is, as defined by Gruber in 1993, an explicit specification of a shared conceptualization [21]. The definition was extended in 1998 by Studer et al. [50] into: “An ontology is a formal, explicit specification of a shared conceptualization”. The ‘shared’ part in the definition refers to the fact that the knowledge represented by an ontology is understood and agreed upon by most of the experts in a domain. The ‘conceptualization’ part in the definition refers to the fact that the ontology represents an abstract, simplified view of the world described by the ontology. The ‘formal, explicit’ part of the definition refers to the fact that a language is needed to describe the concepts in the domain.
There are two types of ontologies: domain ontologies and upper ontologies. Domain ontologies deal with real-world descriptors of business entities, while upper ontologies provide “meta-level” concepts for the domain ontologies.
Ontologies are expressed in formal languages based on logic, so the semantic reasoners behind ontologies can not only represent the data but also reason about it. Among the reasoning capabilities of a semantic reasoner, we mention consistency checking, transitive relations, value partitions, automated classification, inheritance and constraint checking [16].
Ontological technical space
OWL [36] stands for Web Ontology Language and is part of the ontological technical space. It can represent the terms in an ontology and the interrelations between them. There are three increasingly expressive sublanguages within OWL: OWL Lite, OWL DL and OWL Full. OWL is used for information that not only needs to be presented to humans, but that also needs to be processed by applications. OWL is based on description logics, a formal language used to represent knowledge and reason about it. A knowledge representation system based on description logic is made of two components: the TBox and the ABox. The TBox introduces the terminology of an application domain and the ABox contains assertions about named individuals using terms of the vocabulary [7].
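To make the TBox/ABox distinction concrete, the following minimal sketch (a hypothetical example, using the rdflib Python library as one possible RDF toolkit) loads a tiny ontology written in Turtle syntax: the class declarations form the TBox, the last statement is an ABox assertion about a named individual.

```python
from rdflib import Graph

TURTLE = """
@prefix :     <http://example.org/robots#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# TBox: terminology of the domain
:Robot       a owl:Class .
:MobileRobot a owl:Class ; rdfs:subClassOf :Robot .

# ABox: an assertion about a named individual
:r2d2 a :MobileRobot .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")
print(len(g))  # number of triples parsed from the Turtle fragment
```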
The Semantic Web Rule Language (SWRL) is a World Wide Web Consortium (W3C) proposed language that combines OWL DL or OWL Lite sublanguages with the Unary/Binary Datalog sublanguage of Rule Markup Language [27]. The combination is provided such that, besides logic, rules can also be expressed in the language.
Finally, SPARQL [43] is a query language for RDF [12] recommended by W3C.
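As an illustration of SPARQL (again a hypothetical sketch, built with rdflib), the query below retrieves all classes of the individual :r2d2 from a small graph, using a property path to follow rdfs:subClassOf transitively; this is a very simple form of the transitive reasoning mentioned above.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix :     <http://example.org/robots#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:MobileRobot rdfs:subClassOf :Robot .
:r2d2 a :MobileRobot .
""", format="turtle")

QUERY = """
PREFIX :     <http://example.org/robots#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls WHERE {
    :r2d2 a ?direct .                 # the asserted type
    ?direct rdfs:subClassOf* ?cls .   # and, transitively, its superclasses
}
"""
for row in g.query(QUERY):
    print(row[0])   # prints the URIs of :MobileRobot and :Robot
```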
2.2 DSLs
Domain specific languages do not have a clear definition in the literature. Martin Fowler [19] defines them as “a computer programming language of limited expressiveness focused on a particular domain”. As examples of DSLs, we mention regular expression languages and SQL. To illustrate the difficulty of deciding whether a certain programming language is a DSL or not, we mention that there are SQL variants nowadays that are Turing complete [34]. That means that the limited-expressiveness characteristic no longer applies to SQL. But SQL remains focused on a particular domain, that of relational databases, and code in SQL is written in terms of database concepts; in our opinion, that still makes it a DSL.
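To illustrate the point, the following sketch (a hypothetical example, run through Python's built-in sqlite3 module) shows SQL code written purely in terms of database concepts, while the recursive common table expression hints at the kind of construct that pushes modern SQL dialects beyond “limited expressiveness”.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
INSERT INTO employee VALUES (1, 'Ada', NULL), (2, 'Grace', 1), (3, 'Alan', 2);
""")

# A recursive common table expression: general-purpose iteration expressed in SQL
rows = conn.execute("""
WITH RECURSIVE chain(id, name, depth) AS (
    SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
    UNION ALL
    SELECT e.id, e.name, c.depth + 1
    FROM employee e JOIN chain c ON e.manager_id = c.id
)
SELECT name, depth FROM chain ORDER BY depth
""").fetchall()

print(rows)  # [('Ada', 0), ('Grace', 1), ('Alan', 2)]
```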
There are multiple factors involved in the design of a good DSL: a syntax definition, a proper semantics, tooling and possibly methodology and documentation [4]. These factors have different levels of importance depending on the executability level of the DSL [37] and the type of the DSL (internal or external) [19].
Mernik et al. [37] give detailed information on when and how to develop domain-specific languages. They identified five stages in the DSL development process: decision, analysis, design, implementation and deployment. They also identified patterns for the first four stages of the DSL development process. For example, patterns that occur in the decision stage are the need for: a new notation, a domain-specific analysis, verification, optimization, parallelization and transformation, an automation task, a product line, a data structure representation, a data structure traversal, etc. The DSL development process is seen as a hard endeavour because of the domain and language-development knowledge that it requires. The decision to develop a new domain specific language is thus not easy. On the other hand, language workbenches are becoming more and more powerful and thus keep decreasing the effort put into the creation of a new DSL. Language workbenches are tools that help developers define and use DSLs more efficiently [53]. They offer productive DSLs and APIs for the definition of languages and their IDEs.
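As a minimal illustration of the internal-DSL style mentioned above (a hypothetical sketch, not tied to any of the surveyed tools), the fluent builder below embeds a tiny pipeline language in Python; an external DSL would instead require its own syntax definition and a parser.

```python
class Pipeline:
    """A tiny internal DSL for describing data-processing pipelines."""
    def __init__(self):
        self.steps = []

    def read(self, path):
        self.steps.append(("read", path))
        return self

    def keep_if(self, condition):
        self.steps.append(("filter", condition))
        return self

    def write(self, path):
        self.steps.append(("write", path))
        return self

# The 'program' reads almost like a domain sentence
p = Pipeline().read("in.csv").keep_if("age > 30").write("out.csv")
print(p.steps)
```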
There are open issues in the DSL community [20, 55], issues that make it difficult for the DSLs to be accepted and used in industry. The challenges that current DSL approaches face are:
- tooling related: debuggers, testing engines;
- interoperability with other languages;
- formal semantics;
- learning curve;
- domain analysis;
- integration of graphical and textual editing;
- scalability;
- DSL evolution.
There is, in general, a strong relation between DSLs and model-based engineering. Kurtev et al. [33] define DSLs as a set of coordinated models. That is why we also took into consideration studies that do not explicitly mention DSLs, but only discuss the relation between metamodelling and ontologies, metamodelling being strongly related to DSLs.
MDA technical space
In the MDA technical space, EMF (Eclipse Modelling Framework) is a modelling framework and code generation facility for building Java applications from model definitions [49]. EMF tries to bridge the gap between Java programmers and modellers. The metamodel used to represent models in EMF is Ecore [49].
OCL (Object Constraint Language) is a formal language used to specify invariant conditions that must hold for the modelled system, or queries over objects in the model [3]. OCL is aligned with UML [47] and MOF [1] (and thus with Ecore).
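The following sketch conveys the flavour of such an invariant. The OCL expression in the comment is only illustrative, and the Project/Task classes are hypothetical; the Python function below checks the same condition over plain objects.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    effort_hours: int

@dataclass
class Project:
    name: str
    tasks: List[Task] = field(default_factory=list)

# OCL-style: context Project inv: self.tasks->forAll(t | t.effort_hours > 0)
def positive_effort(p: Project) -> bool:
    return all(t.effort_hours > 0 for t in p.tasks)

demo = Project("demo", [Task("design", 8), Task("review", 0)])
print(positive_effort(demo))  # False: the 'review' task violates the invariant
```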
The architecture in metamodelling proposed by the Object Management Group (OMG) has four layers. The M0 layer is the data layer, the M1 layer is the model layer, the M2 layer is the metamodel layer and the M3 layer is the metametamodel layer (MOF). An object in M0 is an instance of a class in the M1 layer, a class in M1 is said to be an instance of a meta-class in the M2 layer and so on [26]. The MOF 2.0 document [1] emphasizes that the four-layered architecture is not rigid: one can have as many layers as one wishes, as long as there are at least two. The fundamental concept is to be able to navigate from an instance to its metaobject. Architectures with such levelled meta-layers are also named meta-pyramids.
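Python's own instance-of chain gives a rough analogy (only an analogy, not MOF itself) for navigating from an instance to its metaobject across the layers:

```python
class Sensor:                  # an M1 element: a class
    pass

s = Sensor()                   # an M0 element: an instance of the class

print(type(s) is Sensor)       # True -- navigate from the M0 object to its M1 class
print(type(Sensor) is type)    # True -- the M1 class is an instance of the M2 meta-class
print(type(type) is type)      # True -- the top of the tower is an instance of itself
```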
Another notion, the Ontology Definition Metamodel (ODM) [2], is a specification that enables formal grounding for the representation, management, interoperability and application of business semantics within MDA-based software engineering. ODM permits modelling an ontology and interoperating with other modelling languages (like UML). ODM is also part of the ontological technical space.
Finally, IQPL [8] is a graph query language for EMF models.
3 Method
The systematic literature review has been done according to the method proposed by Kitchenham et al. [31]. The article suggests guidelines on how to evaluate and interpret all available research related to particular research questions or topic areas. There are three phases involved in a systematic literature review: planning the review, conducting the review and reporting it.
3.1 Research questions
The research questions whose answers we look for in the read papers are listed next.
RQ1 What are the similarities and differences between ontologies and DSLs?
RQ2 What technique is used to apply ontology technologies in DSLs?
RQ3 Why did the authors choose to use ontologies in DSLs?
RQ4 What challenges faced by DSLs are solved/addressed by using ontologies?
3.2 Search process
We first searched for papers on ontologies and DSLs that answered all of the above questions in the SpringerLink digital library. The limited period of one month that we had to perform the literature study was not enough to go through all the search results on SpringerLink. We therefore also examined all the references in the selected papers, to increase the chance of finding more relevant papers.
We used the following strings to search the digital library: “ontology domain specific language”, “ontology metamodeling” and “ontology model driven engineering”. We did not use quotes in the search strings because we did not want the search to be too restrictive.
3.3 Inclusion and exclusion criteria
Articles that answer research question two in detail were considered for the study. Research question one is there to offer context to our investigation, while research questions three and four are natural follow-up questions to the second research question.
The articles’ fitness was judged based on the connection of their title, keywords, abstract and conclusions to research question two. The first phase in selecting an article was to look at the title and keywords. The next step, for the articles selected in phase one, was to read their abstract and their conclusions section. Then, finally, the entire article was read. Proceeding from one phase to the next was done by judging the content against research question two. The phases were conducted on one article at a time, and we proceeded to the next article in the SpringerLink search results only after completely finishing with the preceding one.
We considered papers that directly tackled the subject of using ontologies in DSLs approaches, but also those that tackle the subject of metamodeling and ontologies, metamodeling being strongly related to DSLs. SpringerLink includes both conference proceedings and journal papers.
We did not take into account papers that only tackle ontologies or only tackle DSLs. We also did not consider papers that tackle the subject of developing ontologies using model driven approaches.
3.4 Quality assessment
The quality assessment scores reflect the level of detail at which the research questions are addressed in each paper. Because the research questions are correlated, the papers were expected to have high scores in general, and that was indeed the case.
The criteria are based on four quality assessment questions:
QA1 Does the paper address RQ1 with sufficient level of detail?
QA2 Does the paper address RQ2 with sufficient level of detail?
QA3 Does the paper address RQ3 with sufficient level of detail?
QA4 Does the paper address RQ4 with sufficient level of detail?
The questions were scored as follows:
- QA1: Y(Yes), the authors explicitly give both similarities and differences between ontologies and DSLs (at least one of each); P(Partly), the authors explicitly give only similarities or only differences between ontologies and DSLs (at least one); N(No), the authors don’t explicitly mention any of the similarities or differences between ontologies and DSLs.
- QA2: Y(Yes), the technique for applying ontology technologies in DSLs is clearly described; P(Partly), the technique is not described sufficiently; N(No), there is no mention of any technique of applying ontology technologies in DSLs.
- QA3: Y(Yes), the reason for using the ontologies is clearly stated; P(Partly), the reason is implicit; N(No), the reason cannot be deduced clearly.
- QA4: Y(Yes), the authors mention at least one addressed challenge; P(Partly), the challenge addressed is implicit; N(No), there is no challenge addressed.
A “yes” scores one point, a “partly” scores half a point and a “no” scores a zero. This scoring model is taken from an example of Kitchenham et al. [31].
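The scoring is simple enough to express as a small function (an illustrative sketch; the review applies the scoring by hand):

```python
SCORES = {"Y": 1.0, "P": 0.5, "N": 0.0}

def total_score(answers):
    """answers maps a quality question (QA1..QA4) to 'Y', 'P' or 'N'."""
    return sum(SCORES[a] for a in answers.values())

# Example: the answers reported for paper P1 in Table 2
print(total_score({"QA1": "P", "QA2": "Y", "QA3": "Y", "QA4": "Y"}))  # 3.5
```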
4 Results
At times, there were several papers published on the same technique and by the same authors (or partly the same authors). We have chosen a representative for that group and assessed that representative.
4.1 Search results
In this subsection we assign an id to the investigated papers and we also mention the papers that reside in the same group as the investigated papers. Papers in the same group describe the same subject at different levels of detail. The groups of papers are not exhaustive, because we did not do a systematic search for papers in the same group. Papers residing in a group were discovered during the normal search process for papers on ontologies used in DSLs. The results can be seen in Table 1.
4.2 Data collection
This subsection gives a short summary of each studied research paper and answers our research questions.
4.2.1 P1
Walter et al. [55] describe an ontology-based framework for domain-specific languages, a framework that permits the definition of DSLs enriched with formal descriptions of classes. The main idea of the paper is that ontologies and the automated reasoning that they provide help in addressing major challenges faced by current DSL approaches.
RQ1 The authors point out that there is a mismatch on the underlying semantics of modelling between UML-based class modelling and OWL because the former one adopts the closed world assumption and OWL adopts the open world assumption by default.
RQ2 They integrate ontologies with DSLs at the metametalevel. They use KM3 [29] to define the general structure of the language, OWL2 [38] to define the semantics and OCL to define operations for calling the reasoning services. Reasoning services provide means to derive facts that are not explicitly stated in the model. To provide ontology reasoning, the DSL metamodel and domain model are transformed into a Description Logics knowledge base (TBox and ABox).
By using this technique, they provided a new technical space which allows implementing DSL metamodels with formal semantics, constraints and queries.
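The general idea of such a transformation can be sketched as follows (a deliberately simplified, hypothetical illustration, not the actual tooling of Walter et al.): metamodel classes become TBox axioms, while model elements become ABox assertions.

```python
def metamodel_to_tbox(classes, superclasses):
    """classes: class names of the DSL metamodel; superclasses: child -> parent."""
    lines = ["@prefix :    <http://example.org/dsl#> .",
             "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    lines += [f":{c} a owl:Class ." for c in classes]
    lines += [f":{c} rdfs:subClassOf :{p} ." for c, p in superclasses.items()]
    return "\n".join(lines)

def model_to_abox(instances):
    """instances: model element name -> metamodel class it conforms to."""
    return "\n".join(f":{i} a :{c} ." for i, c in instances.items())

print(metamodel_to_tbox(["Device", "Sensor"], {"Sensor": "Device"}))
print(model_to_abox({"thermometer1": "Sensor"}))
```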
RQ3 Walter et al. [55] use ontologies in the development of a framework for domain-specific languages because some of the main challenges of developing DSLs, like interoperability and formal semantics, were also motivations for developing ontologies. This benefits both DSL designers and DSL users. The DSL designers profit from constraint definition, formal representations and expressive languages. The DSL users profit from progressive verification (verification of incomplete models), reasoning explanation, assisted programming (suggesting concepts to the user and explaining inferences) and different ways of describing constructs.
RQ4 The challenges that are partially solved by Walter et al.’s approach are those related to formal semantics (by constraint definition), learning curve (by progressive verification, suggestions of suitable domain concepts to be used, reasoning explanation and syntactic sugar) and tooling (by progressive verification and reasoning explanation).
4.2.2 P2
Lortal et al. [35] use robotic ontologies to develop robotic DSLs. The main idea of the paper is to reuse ready-made information from an ontology to ease the building of DSLs.
RQ1 The authors note that ontologies and DSLs have the same building phases (except that the implementation phase is not emphasized for ontologies as much as for DSLs). Moreover, their development faces the same problems. At the same time, ontologies and DSLs both structure data and information for application use.
On the other hand, models in ontologies are used for different applications than models in DSLs (the former are used mostly in artificial intelligence and web applications, while the latter are used in code generation, systems modelling, verification, simulation, etc.). Likewise, different technologies and tools are used for each.
RQ2 Ontologies were used during the requirements specification phase of the DSL by gathering requirements when inspecting the ontologies and during the design of the domain models of the DSL by using specific mappings between ontologies (OWL) and domain models (UML class diagrams). For example, concepts in ontologies are mapped to classes in domain models.
Thus, by extracting the domain-specific concepts from an ontology, the DSL is made to correspond to the domain concepts defined in the ontology.
RQ3 The rationale for using ontologies in DSLs is knowledge reuse. Lortal et al. [35] represent the domain by inferring information from a knowledge base and capturing the experts’ knowledge.
RQ4 The challenges in the DSL development process solved by this technique are those related to easing domain analysis.
4.2.3 P3
Tairas et al. [51] use ontologies for the phase of domain analysis in a DSL.
RQ1 The authors note that both ontologies and DSLs contain the domain model vocabulary and the interdependencies between the concepts in the domain.
RQ2 Starting from an existing ontology and based on its structure, a conceptual class diagram can be designed manually in an informal way. Thus, the information in the ontology assists in designing the conceptual class diagram. Afterwards, the conceptual class diagram is manually transformed into an initial context-free grammar following a predefined collection of transformation rules.
At the same time, the instances of the ontology can be used to capture the commonalities and variabilities in a DSL.
RQ3 Tairas et al. [51] used ontologies in the development of DSLs because the domain analysis part in a DSL is not researched enough and ontologies can provide a structured mechanism for domain analysis.
RQ4 The challenges in the DSL development process solved by this technique are those related to domain analysis.
4.2.4 P4
Walter et al. [54] combine ontologies with two other DSLs at the metamodel level in order to be able to express semantic constraints.
RQ1 The authors report on the equivalences that exist in the ontological technology space and metamodeling technology space. For example, ‘cardinality’ in the ontological technological space is equivalent to ‘multiplicity’ in the metamodeling technological space.
RQ2 The technique used by the authors in this study to combine ontologies and DSLs is to unify them at the metamodel level. In their case study, they do manual transformations to create an integrated metamodel consisting of two DSLs and OWL. The integration is done without any loss of information from any of the three metamodels. Then, they project the integrated domain metamodel to a complete ontology for reasoning. This step is also performed manually.
RQ3 They have done the integration at the metamodel level in order to be able to define domain models and semantics for model elements simultaneously. Ontologies are also attractive because they provide the means for reasoning, querying and constraint checking.
RQ4 The challenges in the DSL approaches addressed by this technique are the specifications of formal semantics for the DSLs.
4.2.5 P5
Bräuer et al. [11] create an upper ontology (see Section 2.1) for software models that permits integrity and consistency checking across the boundaries of individual models. Integrity refers to conditions that need to hold in order for the software models to be in a valid state.
RQ1 The authors emphasize, as a difference, the closed-world assumption of models in MDA and the open-world assumption of ontologies. At the same time, the closed-world assumption in models is closely related to nonmonotonic reasoning, while ontologies are built on monotonic formalisms.
RQ2 The authors created an upper ontology so that they can integrate different domain-specific modelling languages based on the upper ontology. For this, one needs to establish a binding between the domain-specific modelling language and the concepts and relationships in the upper ontology. The upper ontology behaves like a semantic connector. The presented method permits integrity and consistency checking for domain models.
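A toy version of such a binding (hypothetical concept names throughout) can be pictured as two dictionaries mapping language concepts to upper-ontology concepts; the shared concept then acts as the semantic connector between the languages.

```python
# Bindings from two modelling languages to a shared (hypothetical) upper ontology
binding_lang_a = {"Block": "Component", "Link": "Connector"}
binding_lang_b = {"Module": "Component", "Channel": "Connector"}

def counterparts(concept_a):
    """Concepts of language B bound to the same upper-ontology concept as concept_a."""
    upper = binding_lang_a[concept_a]
    return [b for b, u in binding_lang_b.items() if u == upper]

print(counterparts("Block"))  # ['Module'] -- connected via the shared 'Component' concept
```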
RQ3 Ontologies were used in the development of DSLs because the model-driven engineering approach advocates modelling a system from different viewpoints, which raises the problem of interoperability between the DSLs used to express these views. That also implies consistency checking between individual models and automatic generation of model transformations. All these reasons led to the decision to use ontologies in the development of DSLs.
RQ4 The technique addresses the challenge of semantic relationships and interoperability between DSLs.
4.2.6 P6
Curé et al. [17] focus on a domain specific language based on an ontology. The language is called Ocelet and it is used to model dynamic landscaping.
RQ1 The authors note that the steps needed to describe an ontology are the same ones involved in the development of a DSL: identify the domain problem, collect domain knowledge and establish domain vocabulary and semantics. This leads to the possibility of establishing relations between ontology concepts and DSL concepts. This point has been tackled also in paper P2.
RQ2 The usage of ontologies in the DSL development process takes place in the first step. The process starts with the development of OWL ontologies that are verified for consistency with reasoners. Then, the ontologies are automatically transformed into Ocelet models. The transformation process can occur in the other direction too, as the transformations are bijections. The Ocelet models are then merged manually into a global Ocelet model. This model is transformed into the structures of a reasoner and a consistency check is performed on the global model. The discovered inconsistencies need to be solved by hand, as there is usually more than one possible solution.
RQ3 The ontologies were used in order to do local and global consistency checking on the Ocelet models. All started from the fact that models in the Ocelet framework would be developed by different persons. Reasoning is thus a prerequisite for the framework.
RQ4 The technique used in this study addresses the challenges of formal semantics in DSLs.
4.2.7 P7
Guizzardi et al. [23] present how ontologies can be used to evaluate and design a domain-specific visual modelling language.
RQ1 This question has not been approached in the article.
RQ2 The quality of a domain-specific modelling language with respect to a domain ontology is guaranteed if an isomorphism between the ontology and the domain-specific modelling language can be established. This isomorphism is guaranteed if the mapping between a domain ontology and the domain language’s metamodel has a number of properties: soundness, completeness, laconicity and lucidity [24]. These properties are verified manually in order to establish the quality of the domain language’s metamodel regarding the ontology.
Besides evaluating the quality of the domain-specific modelling language with respect to the domain ontology, one can also use the domain ontology as a starting point for the design of a new modelling language in the given domain that is isomorphic to the ontology.
RQ3 Using the domain ontology and keeping some mapping properties between the ontology and the domain modelling language, the quality of the domain specific modelling language can be guaranteed with respect to the ontology. The quality of the domain specific modeling language represents the degree to which it ensures the proper representation of the subject domain.
RQ4 The technique addresses the challenge of formal semantics in DSLs.
4.2.8 P8
Čeh et al. present a framework (Ontology2DSL) where a DSL grammar is automatically created from an ontology and some transformation patterns.
RQ1 This question has not been approached in the article.
RQ2 The technique described in this paper starts from an ontology that is transformed into an appropriate internal data structure of Ontology2DSL. A series of transformation patterns is applied to the data structure and a grammar for a DSL is obtained. The irregularities found in the resulting grammar are resolved either in the ontology or in the transformation patterns. The transformation patterns applied to the data structure include basic concept transformations (a class hierarchy transformed into production alternatives), generalization abstraction and so on.
In this technique, ontology-based domain analysis replaces classic domain analysis.
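A much-simplified sketch of the kind of pattern described above (hypothetical, not the Ontology2DSL implementation itself) maps a class hierarchy to grammar productions, with one alternative per subclass.

```python
def hierarchy_to_productions(hierarchy):
    """hierarchy maps a class name to the list of its subclasses."""
    rules = []
    for cls, subs in hierarchy.items():
        if subs:
            rules.append(f"{cls} ::= " + " | ".join(subs))
        else:
            rules.append(f"{cls} ::= '{cls.lower()}'")  # leaves become terminals here
    return "\n".join(rules)

print(hierarchy_to_productions({
    "Statement": ["Assignment", "Loop"],
    "Assignment": [],
    "Loop": [],
}))
# Statement ::= Assignment | Loop
# Assignment ::= 'assignment'
# Loop ::= 'loop'
```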
RQ3 The reason for choosing this technique is the fact that ontologies come with reasoning and querying, which allows the validation of the ontology. A valid ontology reduces errors during DSL development. The semantics in an ontology also help in establishing the semantics of the DSL.
RQ4 The challenges in the DSL development process that this technique addresses are domain analysis and formal semantics.
4.2.9 P9
Using an upper ontology, Roser et al. [46] describe a framework for the automatic generation and evolution of model transformations.
RQ1 This question has not been approached in the article.
RQ2 The technique starts from an upper ontology. Bindings to the required metamodels are established with the upper ontology. The binding represents a semantic mapping of the metamodels to their semantic concepts in the ontology. In order to perform model transformations, an initial model transformation needs to be provided (or it can also be automatically generated). The level of automation depends on the differences between the employed metamodels. The automated process of model transformation generation is based on the framework establishing several substitution proposals and choosing the one that scores best (based on some heuristics). The framework supports the evolution and reuse of existing mappings too.
RQ3 The reason for integrating ontologies in model transformations is their reasoning capabilities, which can help automate the model transformation process. This relates to the need to exchange information between organizations, which boils down to interoperability support in modelling applications.
RQ4 The technique presented in this research paper addresses the challenges of formal semantics and language interoperability in DSLs.
4.2.10 P10
Rahmani et al. [44] describe a transformation from OWL to Ecore and OCL that can be adjusted depending on the level on which we want the ontology to be reflected in the Ecore model.
RQ1 The comparison is made between Ecore and OWL, so it is more specific than a comparison between ontologies and DSLs in general, but it is still relevant. The differences between the two are the following:
- The open world assumption of OWL and the closed world assumption of Ecore. This point has been tackled in other papers too.
- The high web compliance of OWL and the low web compliance of Ecore. Web compliance refers to the degree to which a system is suitable to publish and exchange knowledge on the web. This leads to different identification mechanisms in OWL and Ecore.
- The unique name assumption in Ecore, that does not hold in OWL.
- Properties in OWL are first-class citizens, in contrast to their counterparts in Ecore, the references. This means that properties in OWL can be applied between several different pairs of classes, can be put in hierarchies and can be constrained, in contrast with references in Ecore.
- The expressiveness of OWL and Ecore does not overlap. This means that there are modeling constructs that can appear in OWL or Ecore, but not in the other.
- Ecore is built on four layers of modeling, while OWL is built only on two layers of modeling, the TBox and the ABox.
- The intuition of cardinality is different for OWL and Ecore: 0..* can implicitly mean 0..1 for ontologists.
- OWL benefits from an inference engine, while Ecore does not.
RQ2 The authors give transformations for every OWL modeling primitive to Ecore and OCL. Some transformations are straightforward, like OWL classes, which can generally be transformed directly into Ecore classes. Other transformations are more complex, like the transformation of the OWL property hierarchy into Ecore. All implicit knowledge that the reasoner infers about property relations needs to be materialized in Ecore. This is done using OCL constraints on the corresponding classes.
The transformation to Ecore and OCL can be done in a way that preserves the entire ontology or only partly, depending on the types of transformations that the user chooses to perform and their number. For example, a user may choose to skip the transformation of property hierarchy. The transformations are adjustable at both meta and instance level.
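As a rough sketch of what materializing a property hierarchy might look like (hypothetical names; not the authors' actual transformation rules), a sub-property relation can be turned into an OCL-like constraint stating that the values of the sub-reference are included in the values of the super-reference.

```python
# OWL: hasMainAuthor is declared a sub-property of hasAuthor (hypothetical example)
property_hierarchy = {"hasMainAuthor": "hasAuthor"}  # sub-property -> super-property

def subproperty_constraint(sub, sup):
    # An OCL-flavoured invariant for the Ecore class owning both references
    return f"inv: self.{sup}->includesAll(self.{sub})"

for sub, sup in property_hierarchy.items():
    print(subproperty_constraint(sub, sup))
# inv: self.hasAuthor->includesAll(self.hasMainAuthor)
```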
RQ3 The rationale behind the technique was to leverage existing knowledge captured in ontologies to Ecore models. Thus, software engineers do not have to manually remodel models written in OWL.
RQ4 The implied challenges in DSL development that the technique addresses are domain analysis challenges.
4.2.11 P11
Izsó et al. [28] describe the creation of domain-specific modelling environments starting from ontologies. As a result, both metamodel-level and instance-level models are validated.
RQ1 The differences between ontologies and DSLs start from their different purposes. The ontologies are used to capture the knowledge and the requirements in a domain in very early phases of the design and the ontology reasoners are made for meta-level validation. On the other hand, domain-specific language tools are made for increasing the productivity of the developers and they use instance level validators like, for example, OCL.
RQ2 The technique consists in transforming OWL2 enriched with SWRL into EMF enriched with IQPL (see Section 2). The process starts with an ontology in which formal textual requirements are captured and the meta-level consistency is checked. Then, a first transformation is done from the OWL2 ontology to the EMF metamodel, followed by a transformation of more complex OWL2 axioms into graph patterns. Finally, the SWRL rules are mapped into graph patterns. The transformations occur only at the meta-level (from the TBox). EMF instances can be validated using EMF-IncQuery.
RQ3 The reason for the entire process was to be able to combine the benefits from both the ontology world and the domain specific modelling world. Domain requirements captured in the ontologies drive the development of domain specific modeling environments. At the same time, ontologies are used for the consistency checking at the meta-level, while EMF and IncQuery are used to validate instances.
RQ4 The challenges in DSL development that are addressed are those related to domain analysis, and implicitly those related to formal semantics.
4.2.12 P12
Parreiras et al. [40] describe a method to integrate OWL and UML at the metamodel level. The strengths and weaknesses of the two modelling approaches complement each other and they are appropriate for specifying different aspects of the software systems. The approach is implemented in a tool called TwoUse.
RQ1 The similarities and differences are not stated for DSLs and ontologies in general, but they are still relevant. OWL ontologies and UML class-based modelling are similar with respect to classes, associations, properties, packages, types, generalization and instances. On the other hand, there are also differences between the two modelling approaches. UML class-based modelling can capture only static specifications of specialization and generalization of classes and relationships, while OWL can also do this dynamically. At the same time, UML provides mechanisms to define dynamic behaviour, while OWL does not.
RQ2 TwoUse uses UML profiles as concrete syntax and the profiles offer the possibility to design both UML models and OWL ontologies. These UML profiles are then transformed to TwoUse models, conforming to TwoUse metamodels, that represent the abstract syntax. The TwoUse metamodel contains the OWL metamodel and some packages from the UML2 metamodel. The OWL metamodel allows describing semantically expressive classes and the UML2 metamodel allows describing behavioral and structural features of classes. Further transformations take TwoUse models and produce OWL ontologies and Java code.
The advantage of the TwoUse metamodel is the fact that it offers SPARQL-like expressions for reasoning over OWL models.
RQ3 The reason for the integration between OWL and UML class-based modelling was the complementary benefits that the two approaches bring. The result provides more modelling power to the developers. The semantics of the models are also better expressed with ontologies.
RQ4 The challenges addressed are related to formal semantics in DSLs.
4.2.13 P13
Erofeev et al. [18] describe a method to achieve semantic interoperability between different technologies used in Ambient Intelligence.
RQ1 This question has not been approached in the article.
RQ2 They start with a core ontology of a reference technology. When an integration is needed with other technologies conforming to different metamodels, they transform the other metamodels into the core ontology metamodel. Subsequently, the models of the other technologies will be automatically transformed into models corresponding to the core ontology.
RQ3 They have chosen to use ontologies in order to obtain semantic interoperability between different technologies.
RQ4 The challenges addressed are those of interoperability.
4.2.14 P14
Kappel et al. [30] describe a method of lifting metamodels to ontologies as a way of integrating modeling languages. The work is part of a bigger project, ModelCVS, whose purpose is to create a framework for semi-automatic generation of transformation programs.
RQ1 Ontologies and metamodeling are created with different goals in mind, but they share common ground in conceptual modeling in general. At the same time, metamodeling is more implementation oriented, while ontologies are more knowledge representation oriented.
RQ2 The technique employed in this paper has been coined as lifting. The approach is divided into three steps. The first step is called conversion, during which, Ecore metamodels are transformed into ODM metamodels. This step introduces a change of formalism and it takes care of the subtle semantic nuances that occur between Ecore and ODM in the transformation. The result of this step is called a pseudo-ontology. In the next step, this pseudo-ontology is refactored, the result being a semantically richer view of the pseudo-ontology. Refactoring is needed to make explicit the concepts that are hidden in attributes or in association ends, as not all concepts are represented as first-class citizens in metamodels. The third step consists in semantically enriching the ontology with axioms with the purpose of integrating it with other ontologies.
The resulting ontologies are the main artifacts of semantic integration. The matching between ontologies and a code generation step are the ingredients for obtaining model transformations between the original metamodels.
RQ3 The starting point is again the need to use tools in combination. The approach was chosen in order to do a conceptual integration between the metamodels via the creation of ontologies. The ontologies created from the metamodels are able to capture more concisely the mapping between them and the mapping between ontologies can further be used to give rise to a bridging between the initial metamodels.
RQ4 The implicit challenge that the technique addresses is that of language/tool interoperability.
5 Quality evaluation
In this section we present the scores of each of the investigated papers. As expected, there is no “N” answer to question two in Table 2, because an answer of “Y” or “P” was an inclusion criterion for the papers to be investigated. As can be seen from Table 2, all of the papers explain the rationale for using the chosen technique (answer to question three) and almost all explain explicitly what challenge in the development of DSLs they tackle. This is to be expected, as questions three and four come as natural follow-ups to question two. At the same time, there is only one paper discussing both differences and similarities between ontologies and DSLs. Since this question only offers context to the subject, the absence of an answer to it was not an exclusion criterion. Given these explanations, it was to be expected that the total scores of the papers would be similar.
6 Other relevant papers
The papers we briefly discuss next offer small overviews on the usage of ontologies in metamodeling or DSLs, or present an idea (without a sufficient level of detail and experimentation) on how the integration could take place. These papers were discarded only in the last phase, after reading their content. As with the papers selected for the systematic literature review, we do not claim that this list is exhaustive.
Bézivin et al. [9] propose building bridges between the software engineering and ontology engineering technical spaces at the M3 level. To illustrate the concept, the authors suggest bridging the MOF-based ontology language ODM with OWL. Since both are conceptual technical spaces, they are bridged through a concrete technical space (a space whose techniques have more material representations of conceptual elements), the EBNF technical space. The transformations occurring at the M3 level between different technical spaces (MOF to EBNF, and Metametaontology to EBNF, and vice versa in both cases) are called projectors. The transformations occurring at the M3 level can then be pushed down to the M2 level. The method thus saves one from creating 2N different mappings for N metamodels at the M2 level, replacing them with a single mapping at the M3 level. The problem is that the method is only mentioned in the paper and not detailed and exemplified.
Atkinson [6] argues the case for core level unification for MDA and ontology representation languages. This comes from the observation that MDA and ontology description languages are not inherently distinct technologies, UML being able to capture the knowledge captured in ontology representation languages by “programming around” features that are not directly supported. The conclusion he draws is that MDA should not be extended to add ontology features to the infrastructure through ODM, but a unified language should be developed as the core of MDA.
Henderson-Sellers [26] identifies a number of similarities and differences between ontologies and metamodels in order to provide a bridge between the two. Henderson-Sellers also emphasizes two kinds of ontologies: the domain ontologies and the meta-ontologies or foundational ontologies. The author discusses the different meta-levels at which ontology concepts are used in the literature. In some publications cited in the paper, the authors regard an ontology as an M1 model, while others regard it as an M2 model.
Aßmann et al. [5] try to clarify the role of ontologies in MDE. They start from the observation that models in MDA are mostly prescriptive, while ontologies are descriptive models. They then describe an ontology-aware meta-pyramid, where upper-ontologies live at the M2 metamodel level, and domain ontologies live at the M1 model level. This meta-pyramid brings a series of conceptual benefits, like a common vocabulary for the software architect, customer and domain expert or a more concrete model-driven software development with ontologies as analysis models.
Parreiras et al. [41] conduct a domain analysis on the combination of the metamodeling technical space and the ontology technical space. The result of the domain analysis is a feature model of the existing approaches in the literature. The discussed features consist of language, formalism, data model, reasoning, querying, rules, transformation, mediation and modeling level. The mediation process (reconciling different models) is classified into mapping, integration and composition. Another feature we are particularly interested in is the transformation feature, with its three aspects: semantical, syntactical and directionality. Furthermore, the authors classify the approaches that transform the metamodeling technical space into the ontological technical space. These transformations take place for model checking (the ontology resulting from the transformation is checked for consistency, class hierarchy, etc.), model enrichment (transform the model to an ontology, derive new facts and transform back to the model), ontology modeling (from model to ontology via transformation rules) and hybrid approaches (the TwoUse approach with composition between source metamodel and target ontology and bidirectional transformation with querying).
Staab et al. [48] classify methods of model driven engineering with ontology technologies. They distinguish between language bridges and model bridges among software languages and ontologies. The language bridges occur at the M3 level in the form of integrations or transformations. The model bridges occur at the M2 level, also in the form of integrations and transformations. These methods also appear in our literature research.

| ID  | QA1 | QA2 | QA3 | QA4 | Total score |
|-----|-----|-----|-----|-----|-------------|
| P1  | P   | Y   | Y   | Y   | 3.5 |
| P2  | Y   | P   | Y   | Y   | 3.5 |
| P3  | P   | Y   | Y   | Y   | 3.5 |
| P4  | P   | Y   | Y   | Y   | 3.5 |
| P5  | P   | Y   | Y   | Y   | 3.5 |
| P6  | P   | P   | Y   | P   | 2.5 |
| P7  | N   | Y   | Y   | Y   | 3   |
| P8  | N   | Y   | Y   | Y   | 3   |
| P9  | N   | Y   | Y   | Y   | 3   |
| P10 | P   | Y   | Y   | P   | 3   |
| P11 | P   | Y   | Y   | Y   | 3.5 |
| P12 | P   | Y   | Y   | Y   | 3.5 |
| P13 | N   | P   | Y   | Y   | 2.5 |
| P14 | P   | Y   | Y   | Y   | 3.5 |

Table 2: Scores of investigated papers.
7 Observations
In this section we present some statistics on the 14 investigated papers. The oldest paper we found is from 2002, while the newest ones are from 2012. Most of the papers were published in 2006 (5 papers), 2009 (4 papers), 2010 (8 papers) and 2011 (4 papers). This shows quite some interest in the subject in recent years. Regarding the journals in which these papers were published, there is no major trend. The journals include “Software and Systems Modelling”, “Computer Science and Information Systems”, “Data and Knowledge Engineering” and “Journal of Systems and Software”. There was also a workshop organized between 2008 and 2010 on the subject (the workshop on transforming and weaving ontologies in model driven engineering). The conferences where the papers were published include conferences on Semantic Web subjects (“Reasoning Web Semantic Technologies for Software Engineering”, “Workshop on Semantic Web Enabled Software Engineering”, etc.) and Model Driven Engineering subjects (“Ontologies for Software Engineering and Software Technology”, “Models in Software Engineering”, etc.).
Most of the research on ontologies used in the development of DSLs is conducted in Europe, in countries such as Germany, Slovenia, France, Austria, the Netherlands, Hungary and Spain. On the other hand, Springer is more Europe-based, which might partially explain this outcome.
8 Discussion
In this section we emphasize the main trends in the methods of utilizing ontologies in DSLs. As an introduction to our main research question, the techniques of introducing ontologies to DSLs, we first look at the identified similarities and differences between ontologies and DSLs. This comparison gives an impression of what the benefits of a combined scheme could be and how hard it could be to combine the two.
The similarities between ontologies and DSLs consist of the fact that they both have the same building phases (with different focuses) (P2, P6), they both structure data for application use (P2), they both present a domain model vocabulary and the relations between the concepts in the domain (P3, P14) and they exhibit equivalence relationships between their main concepts (P4, P6, P12).
The differences between ontologies and DSLs consist of their different application domains (P2, P10, P11, P14), the closed-world assumption and nonmonotonic reasoning associated with models versus the open-world assumption and monotonic reasoning associated with ontologies (P1, P5, P10), and the different technologies and tools used by each (P2, P10, P12).
The techniques employed to leverage DSLs by the usage of ontologies are:
- integration at the M3 level (P1)
- integration at the M2 level (P4)
- mapping from ontology to M2 model (P2, P3, P6, P8, P10, P11)
- mapping from M2 model to ontology (P5, P7, P9, P12, P13, P14)
- ontology inspection to gather requirements for DSLs (P2)
The reasons for which ontologies were considered for use in DSLs in the first place are the need for interoperability between tools / DSL views / technologies, knowledge reuse, the reasoning and querying capabilities behind ontologies, the complementary benefits of the two approaches and the need for consistency checking at the metamodel level.
The challenges in the DSL approaches that appear to be addressed and partially solved by using ontologies in DSLs are those related to formal semantics, interoperability between tools, domain analysis, learning curve of the DSL and tooling (through progressive verification and reasoning explanations).
As a possible direction to explore, we suggest an in-depth study of the productivity gains of using ontologies in the development of DSLs. None of the papers considered treats this subject. As a result of such studies, engineers could decide whether it is profitable to start from an ontology when building a DSL.
One point that remained unclear is how easily interoperability between tools / DSLs can be achieved using ontologies. There were no examples of considerably sized tools being made interoperable using ontologies. Such an example, together with a report on the amount of work required to make it work, would clarify the feasibility of such an approach.
9 Threats to validity
There are also some threats to the validity of the conclusions drawn. The first threat is the fact that not all search results on SpringerLink were examined. This was due to time limitations. On the other hand, taking into account that all the references of the selected studies found during the one-month period were covered, we can conclude that we covered a good part of the literature. Another threat to validity could be the fact that we only looked at search results on SpringerLink. This should not be a problem, because a quick search and glance at the first results given on the ACM and IEEE websites did not bring anything new.
10 Conclusions
The complementary benefits of DSLs and ontology technologies seem to make them suitable for combination. The development of DSLs could profit mostly from the reasoning capabilities supported by ontologies and from the concepts that are captured and related in an ontology. On the other hand, the question is whether the benefits justify the effort put into combining ontologies and DSLs. That is because transformations or integrations between ontologies and DSL metamodels/models seem not to be trivial in most of the techniques described. Although some techniques involve a certain amount of automation, manual work cannot be removed completely from these processes in most cases. That is also due to the semantic gap between DSL metamodels and ontologies [52].
Although the research on this subject only took one month, we consider that we managed to cover a good part of the literature. That is because, at the end of this month, we did not find any further papers that seemed suitable for our investigation (among the references of the selected studies and the other papers in their groups).
Acknowledgments
This work was supported in part by the European Union’s ARTEMIS Joint Undertaking for CRYSTAL - Critical System Engineering Acceleration - under grant agreement No. 332830.
References
In this series appeared (from 2012):
12/01 S. Cranen Model checking the FlexRay startup phase
12/02 U. Khadim and P.J.L. Cuijpers Appendix C / G of the paper: Repairing Time-Determinism in the Process Algebra for Hybrid Systems ACP
12/03 M.M.H.P. van den Heuvel, P.J.L. Cuijpers, J.J. Lukkien and N.W. Fisher Revised budget allocations for fixed-priority-scheduled periodic resources
12/04 Ammar Osaiweran, Tom Fransen, Jan Friso Groote and Bart van Rijnsoever Experience Report on Designing and Developing Control Components using Formal Methods
12/05 Sjoerd Cranen, Jeroen J.A. Keiren and Tim A.C. Willemse A cure for stuttering parity games
12/06 A.P. van der Meer CIF MSOS type system
12/07 Dirk Fahland and Robert Prüfer Data and Abstraction for Scenario-Based Modeling with Petri Nets
12/08 Luc Engelen and Anton Wijs Checking Property Preservation of Refining Transformations for Model-Driven Development
12/10 Milosh Stoljki, Pieter J. L. Cuijpers and Johan J. Lukkien Efficient reprogramming of sensor networks using incremental updates and data compression
12/11 John Businge, Alexander Serebrenik and Mark van den Brand Survival of Eclipse Third-party Plug-ins
12/12 Jeroen J.A. Keiren and Martijn D. Klabbers Modelling and verifying IEEE Std 11073-20601 session setup using mCRL2
12/13 Ammar Osaiweran, Jan Friso Groote, Mathijs Schuts, Jozef Hooman and Bart van Rijnsoever Evaluating the Effect of Formal Techniques in Industry
12/14 Ammar Osaiweran, Mathijs Schuts, and Jozef Hooman Incorporating Formal Techniques into Industrial Practice
13/01 S. Cranen, M.W. Gazda, J.W. Wesselink and T.A.C. Willemse Abstraction in Parameterised Boolean Equation Systems
13/02 Neda Noroozi, Mohammad Reza Mousavi and Tim A.C. Willemse Decomposability in Formal Conformance Testing
13/03 D. Bera, K.M. van Hee and N. Sidorova Discrete Timed Petri nets
13/04 A. Kota Gopalakrishna, T. Ocezebei, A. Liotta and J.J. Lukkien Relevance as a Metric for Evaluating Machine Learning Algorithms
13/05 T. Ocezebei, A. Weffiers-Albu and J.J. Lukkien Proceedings of the 2012 Workshop on Ambient Intelligence Infrastructures (WAmI)
13/06 Lotfi ben Othmane, Pelin Angin, Harold Weffiers and Bharat Bhargava Extending the Agile Development Process to Develop Acceptably Secure Software
13/08 Mark van den Brand and Jan Friso Groote Software Engineering: Redundancy is Key
13/09 P.J.L. Cuijpers Prefix Orders as a General Model of Dynamics
| 14/01 | Jan Friso Groote, Remco van der Hofstad and Matthias Raffelsieper | On the Random Structure of Behavioural Transition Systems |
| 14/02 | Maurice H. ter Beek and Erik P. de Vink | Using mCRL2 for the analysis of software product lines |
| 14/03 | Frank Peeters, Ion Barosan, Tao Yue and Alexander Serebrenik | A Modeling Environment Supporting the Co-evolution of User Requirements and Design |
| 14/04 | Jan Friso Groote and Hans Zantema | A probabilistic analysis of the Game of the Goose |
| 14/05 | Hrishikesh Salunkhe, Orlando Moreira and Kees van Berkel | Buffer Allocation for Real-Time Streaming on a Multi-Processor without Back-Pressure |
| 14/06 | D. Bera, K.M. van Hee and H. Nijmeijer | Relationship between Simulink and Petri nets |
| 14/07 | Reinder J. Bril and Jinkyu Lee | CRTS 2014 - Proceedings of the 7th International Workshop on Compositional Theory and Technology for Real-Time Embedded Systems |
| 14/08 | Fatih Turkmen, Jerry den Hartog, Silvio Ranise and Nicola Zannone | Analysis of XACML Policies with SMT |
| 14/09 | Ana-Maria Şutii, Tom Verhoeff and M.G.J. van den Brand | Ontologies in domain specific languages – A systematic literature review |
|
{"Source-Url": "https://pure.tue.nl/ws/files/3889700/353021132621361.pdf", "len_cl100k_base": 12932, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 53045, "total-output-tokens": 17347, "length": "2e13", "weborganizer": {"__label__adult": 0.0003421306610107422, "__label__art_design": 0.0005655288696289062, "__label__crime_law": 0.0003426074981689453, "__label__education_jobs": 0.0019044876098632812, "__label__entertainment": 0.00010269880294799803, "__label__fashion_beauty": 0.00018131732940673828, "__label__finance_business": 0.0003743171691894531, "__label__food_dining": 0.00033664703369140625, "__label__games": 0.0007138252258300781, "__label__hardware": 0.0005130767822265625, "__label__health": 0.0004811286926269531, "__label__history": 0.0003578662872314453, "__label__home_hobbies": 0.0001024007797241211, "__label__industrial": 0.00041747093200683594, "__label__literature": 0.0006885528564453125, "__label__politics": 0.0003345012664794922, "__label__religion": 0.0005674362182617188, "__label__science_tech": 0.05279541015625, "__label__social_life": 0.00014472007751464844, "__label__software": 0.0129852294921875, "__label__software_dev": 0.9248046875, "__label__sports_fitness": 0.00025153160095214844, "__label__transportation": 0.0005359649658203125, "__label__travel": 0.0002111196517944336}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 69829, 0.05376]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 69829, 0.51831]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 69829, 0.89542]], "google_gemma-3-12b-it_contains_pii": [[0, 2169, false], [2169, 2714, null], [2714, 2714, null], [2714, 6164, null], [6164, 10319, null], [10319, 14409, null], [14409, 17754, null], [17754, 20552, null], [20552, 23116, null], [23116, 26776, null], [26776, 30460, null], [30460, 34042, null], [34042, 37450, null], [37450, 41135, null], [41135, 45004, null], [45004, 48994, null], [48994, 52779, null], [52779, 56284, null], [56284, 60176, null], [60176, 63529, null], [63529, 65875, null], [65875, 68643, null], [68643, 69829, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2169, true], [2169, 2714, null], [2714, 2714, null], [2714, 6164, null], [6164, 10319, null], [10319, 14409, null], [14409, 17754, null], [17754, 20552, null], [20552, 23116, null], [23116, 26776, null], [26776, 30460, null], [30460, 34042, null], [34042, 37450, null], [37450, 41135, null], [41135, 45004, null], [45004, 48994, null], [48994, 52779, null], [52779, 56284, null], [56284, 60176, null], [60176, 63529, null], [63529, 65875, null], [65875, 68643, null], [68643, 69829, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, 
false], [5000, 69829, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 69829, null]], "pdf_page_numbers": [[0, 2169, 1], [2169, 2714, 2], [2714, 2714, 3], [2714, 6164, 4], [6164, 10319, 5], [10319, 14409, 6], [14409, 17754, 7], [17754, 20552, 8], [20552, 23116, 9], [23116, 26776, 10], [26776, 30460, 11], [30460, 34042, 12], [34042, 37450, 13], [37450, 41135, 14], [41135, 45004, 15], [45004, 48994, 16], [48994, 52779, 17], [52779, 56284, 18], [56284, 60176, 19], [60176, 63529, 20], [63529, 65875, 21], [65875, 68643, 22], [68643, 69829, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 69829, 0.06219]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
ac3fb80704b73ca8d9f00557463dbe3a479d6ea3
|
Auditing Windows Environments
PowerShell XML output, windows security, ossams
Cody Dumont
Auditing Windows Environments with PowerShell XML Output
GIAC (GCWN) Gold Certification
Author: Cody Dumont, cdumont@nwnit.com
Advisor: Aman Hardikar .M,
Accepted: 3 Jan, 2012
Abstract
Auditing with PowerShell is a major component of the future of Windows security. As part of the Open Source Security Assessment Management System (OSSAMS) project, this paper analyzes the initial development of the PowerShell framework used to collect DACLs from AD objects. The objective for OSSAMS is normalizing data for a streamlined analysis. The data will be collected from routers, switches, firewalls, security tools, directory services, and other information systems. This paper outlines the initial framework used within PowerShell to audit MS AD and other MS systems. The framework operates under the restriction that the customer, or organization being assessed, only needs to create a user account for the assessor, and the computer performing the assessment cannot join the domain. The paper discusses the SID, .Net classes, and the coding process in depth.
Keywords: PowerShell, Active Directory, XML, DACL, SID
1 Introduction
A security professional often performs security assessments for customers and will use many tools to collect data. Each tool stores data in a separate format, which requires the assessor to develop a proprietary automated process or use a manual process to correlate all the data. This process includes custom parsing of XML files into spreadsheets, manual reviews of some data, and other time-consuming tasks. This manual process often results in missed vulnerabilities or extensive time spent putting all the pieces of the puzzle together. To solve this problem, a team of security professionals decided to create a Relational Database Management System (RDBMS) to normalize all the data inputs and then store the data in a common database for a more accurate assessment. This project is called **Open Source Security Assessment Management System (OSSAMS)**.
The heart of the project is the database structure, but the brain of the project is the framework developed by the OSSAMS team\(^1\) to normalize the data. The data parsing is a combination of Perl, Python, and PowerShell (PS1) scripts. These scripts read the output from other tools, most commonly in XML, and normalize the data into a common and supportable data structure used by the OSSAMS database. This paper specifically focuses on the use of PS1 to create a framework for collecting data from AD, normalizing it, and storing it in the OSSAMS database. The initial framework is used to collect the Discretionary Access Control Lists (DACLs) for the Active Directory objects.
1.1 What is OSSAMS
The OSSAMS project is a framework used to normalize data from many security tools and then correlate the information in a structured manner to allow the security professional and customer to see a more accurate security posture. The data sources are from device configurations (i.e. firewall, router, etc), questionnaires, security tools such as Nessus® & NMAP®, and manual reviews (i.e. policy reviews). Once all the data is collected, the security professional is able to review all the data based on three (3) primary pivot points, which are:
\(^1\) Original Team Members – Cody Dumont, Darryl Williams, & Adrien de Beaupre
Cody Dumont, cody@melcara.com
• Project / Assessment
• IP Address / Hostname / FQDN
• User / Group / Role
The function of OSSAMS is not to provide answers to the security professional, for example to say what is wrong and how to fix the problem. The function of OSSAMS is to correlate all the data from many sources and provide the security professional with an in-depth view of the information system being reviewed. One of the initial concerns for OSSAMS is to avoid creating another system that does the thinking for the security professional; instead, it provides all the information the security professional needs to make a more accurate assessment of the customer's security posture.
1.2 Project Scope Boundaries
When conducting security assessments or penetration tests, the security professional is often not permitted to join a PC to the customer's AD domain. This is a limitation when using PS1 to perform the assessment. The PS1 environment supports the use of a customized module called a snap-in, and there are many snap-ins for many functions; for example, Quest Software created a snap-in to easily manage an AD environment. However, all of these plugins require the workstation running the PS1 script to be a member of the domain, or to have established trust relationships with the domain, in order to work properly. As mentioned earlier, the ability to join the domain is seldom an option during the assessment. Therefore, for the purposes of this paper, the computer will not be a member of the AD domain.
The plugins, which provide support to AD, require Active Directory Web Service (ADWS) or the Active Directory Management Gateway Service (ADMGS) to be configured. As the security professional can’t rely on this service to be running and can’t ask the customer to enable it, the scripts must run without these services enabled.
1.3 Windows Assessment Methodology
When conducting assessments or audits, the security professional should have a methodology or strategy. The methodology or strategy should dictate the type of information to be collected and analyzed. Additionally the methodology or strategy
Cody Dumont, cody@melcara.com
should be combined with generally accepted best practices and standards for comparison against the customer’s implementation. This paper will not focus on the results of the data collection, meaning what is the best configuration for the customer, but will focus on the data collection process and methodology for collecting the data.
2 Development Process
The development process began with some basic PS1 scripts to loop through collected data and to gain a better understanding of PS1 and the .Net framework. The next steps were decoding the objects collected from AD and collecting the DACLs applied to the AD objects. After that came the XML data structure used in the framework, and finally the import method used to insert the data into the OSSAMS database.
2.1 Development Environment Setup
2.1.1 Data Collection Targets
For the purposes of the initial research, the data collection target will be a Windows 2008 R2 domain controller built during the SEC505 course. The information collected will be from AD. The environment does not have MS Exchange or other MS components that would extend the schema. However, because of the dynamic nature of the framework, the PS1 scripts should not require modification should the AD schema be extended.
2.1.2 Workstation Configuration
The assessment workstation is configured with the following snap-ins and components:
- Install RSAT on Win7 (64bit)
- .Net Framework v4
Other PS1 Development Tools used are:
- PowerGUI
- Notepad++
- PowerShell Community Extensions (PSCX)
- PowerShell Guy PowerTab
Cody Dumont, cody@melcara.com
2.2 Data Collection
The script has many modular tasks, such as connecting to AD, and formatting XML data. After the data is collected, the data is saved in the native XML object format used by PS1 to allow reimporting. The ability to reimport the collected data when needed will reduce the need to re-query AD for common tasks such as AD object collection.
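The round trip is only two cmdlets; a minimal sketch of the save-and-reimport pattern (the file name is illustrative) looks like this:

```powershell
# Minimal sketch of the reimport pattern described above; the file name is an assumption.
$adObj | Export-Clixml -Path ".\ad_data.clixml"            # serialize the collected AD objects
$adObjReloaded = Import-Clixml -Path ".\ad_data.clixml"    # later runs reuse the data without re-querying AD
```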
2.2.1 AD Object Collection
The initial modular function is the collection of all the AD objects. There are many types of AD objects; some examples are Users, Groups, and Containers. The commands to connect to AD are relatively simple and easy to follow. Shown below is a snippet of code; each line will be explained in detail.
```powershell
$ad_dir_entry = new-object DirectoryServices.DirectoryEntry($ad_ldap_svr,$ad_ldap_user,$ad_ldap_password)
$searcher = new-object DirectoryServices.DirectorySearcher($ad_dir_entry)   # searcher built from the entry via ADSI
$searcher.filter = "(objectclass=*)"
$adObj = $searcher.findall()
```
The first line of code creates a new object called “$ad_dir_entry”. The object is located within the .Net namespace “DirectoryServices”. The DirectoryServices namespace provides easy access to AD from managed code. The namespace contains two component classes, DirectoryEntry and DirectorySearcher, which use the Active Directory Services Interfaces (ADSI) technology. ADSI is the set of interfaces that Microsoft provides as a flexible tool for working with a variety of network providers; it gives the administrator the ability to locate and manage resources on a network with relative ease, regardless of the size of the network (MSDN, 2011). The first command is passed the AD server IP address (or FQDN), username, and password.
Next, another object is created; this is the DirectorySearcher object. To create the DirectorySearcher object, the script must invoke ADSI and pass the DirectoryEntry object. A filter is applied to the DirectorySearcher, and the output is stored into the $adObj variable. From this point forward, the script does not need to query AD to get any AD object, but only call this variable. To save more time, this object is exported to an XML file for use by other scripts and functions yet to be developed.
Cody Dumont, cody@melcara.com
2.2.2 Security Identifiers
Active Directory uses a Security Identifier (SID) to identify each object uniquely in the LDAP database. There are well-known SIDs for common groups such as the Administrators and Power Users groups. As the computer is not a member of the AD domain, by default the computer is unable to resolve a SID to an object name; therefore a manual process for this component is required. To address the SID resolution problem, an associative array is created using the predefined common SIDs and by extracting the SID and user name from the $adObj table captured earlier.
To create the associative array for SID resolution, the script first calls a CSV file called “Well-Known-SID.csv”. The SID data from “Well-Known-SID.csv” is stored, and then a foreach loop processes the $adObj. The SID in AD is stored in hex; to address this, MSDN provides example code to convert the SID to decimal. Additionally, some objects within AD do not have a SID and use the Globally Unique Identifier (GUID) instead. This requires a check for the SID to be present; if it is not, the GUID is stored in the SID field.
As all items in PS1 are objects, each field's properties and methods are also objects. To prevent unnecessary data being passed from the actual object to the newly created object, the [string] variable type is used. The [string] variable type ensures only the data in the property is entered into the new variable. The new entry for the SID table is then sent to a CSV file whose location is stored in the $sid_csv_file variable. As the entries may contain a comma in the object name, the script uses the “|” pipe as the value separator. The new SID entry is then stored in an associative array called $sid_array.
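A simplified sketch of this lookup table is shown below; the CSV column names (Sid, Name) and the exact call pattern are assumptions rather than the script's actual code.

```powershell
# Hedged sketch of the SID table described in this section.
$sid_array = @{}
Import-Csv -Path ".\Well-Known-SID.csv" | ForEach-Object { $sid_array[$_.Sid] = $_.Name }

foreach ($obj in $adObj) {
    $name = [string]$obj.Properties["name"]                     # [string] keeps only the property text
    if ($obj.Properties["objectsid"].Count -gt 0) {
        $sid = HexSIDToDec $obj.Properties["objectsid"]          # the script's own conversion function
    } else {
        $sid = New-Object Guid (,[byte[]]$obj.Properties["objectguid"][0])   # GUID fallback
    }
    "$sid|$name" | Out-File -Append -FilePath $sid_csv_file      # pipe-delimited to survive commas in names
    $sid_array["$sid"] = $name
}
```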
2.2.3 DACL Collection
The DACL collection proved to be the hardest obstacle to overcome. The “BSonPoSH” blog gives a great post on how to get started (Shell, 2008). The post shows how to query the DACL of a specific object and provides information about several different functions that can be completed using the “SecurityIdentifier” class. This class is located in the “System.Security.Principal” hierarchy.
The DACL collection process begins with looping through the AD objects and creating a new object using the “DirectoryServices.DirectoryEntry” class. From the newly created object, the ObjectSecurity property is stored into a new variable called “$acl”. The $acl is then used to extract the DACL using the “CommonObjectSecurity.GetAccessRules” method. The GetAccessRules method provides a collection of the access rules associated with the specified security identifier. When calling the GetAccessRules method, there are three required parameters: includeExplicit, includeInherited, and targetType. The “targetType” can be one of two settings, “SecurityIdentifier” or “NTAccount”. To avoid naming errors, the script will collect the “SecurityIdentifier”.
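A minimal sketch of this call, reusing the connection variables from the earlier snippet, could look like the following; the object path variable is illustrative.

```powershell
# Hedged sketch of the DACL read described above; $acl_target holds the ADsPath of the object.
$ad_object_entry = new-object DirectoryServices.DirectoryEntry($acl_target, $ad_ldap_user, $ad_ldap_password)
$acl = $ad_object_entry.ObjectSecurity
# includeExplicit = $true, includeInherited = $true, identities returned as raw SIDs
$rules = $acl.GetAccessRules($true, $true, [System.Security.Principal.SecurityIdentifier])
```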
The script then enters a “Where-Object” loop and stores several properties of the DACL using the [string] variable type. After the fields for the ACL entry are collected, the SID is searched for within the $sid_array previously collected. If there is a name, the SID is replaced with the name value of the SID array entry. Lastly, the data collected to create the DACL is stored into an associative array for reporting and other analysis.
2.2.4 XML to Store Data
The script next stores the data in several XML formats. The first format is the PS1 native format called using the Export-Clixml. The Export-Clixml cmdlet creates a native XML format used by PS1 to recreate the object using the Import-Clixml cmdlet. This XML format is not the optimal format needed to import into the OSSAMS database.
To import data into the OSSAMS database, the script creates a new XML object. Next, the script creates a sample data format string of the XML structure. The associative array is then looped through and the data is injected into the XML object using the formatted strings. Examples of the XML formatting (Figures 1, 2, and 3) can be found on the subsequent pages.
2.3 Functionality vs. Modularity
The script at this point achieves the goals that were established. However, the script neither supported modularity nor resembled a usable framework for expansion. The next goals are the following:
Cody Dumont, cody@melcara.com
• Change the script to pull configuration variables from seed file or command line
• Dynamically create XML format
• Allow for the use of other cmdlets to gather DACLs
• Expand Functionality using plugin style components
3 Script Modularization
To modularize the script and begin the creation of the PS1 framework used in OSSAMS, the work focuses on creating the XML structures dynamically and on creating a scalable method of calling additional cmdlets and importing data from other objects.
3.1 Dynamic XML Data Structure
There are several methods for interacting with XML data using PS1. As mentioned earlier, there is a native XML format used to reimport data using the Import-Clixml/Export-Clixml cmdlets. The next option is to use ConvertTo-XML cmdlet. This option is more descriptive than the Export-Clixml option, but the ConvertTo-XML cmdlet adds too much repetitive data. The third option is to create an XML object in a customized format, which is selected for use in the OSSAMS project.
3.1.1 PS1 Native XML Format
The native XML formats using the Export-Clixml and ConvertTo-XML cmdlets could be viable options in some cases. However for the OSSAMS project, the goal is to have the data normalized before being entered into the database. Therefore the PS1 scripts should remove unneeded data before inserting into the database. As part of the data analysis, this section will provide samples of the data found in the native formats and discuss why these options are not the best for importing into OSSAMS.
The purpose of the Export-Clixml cmdlet format is to reimport objects into a PS1 environment for analysis. As shown in the figure below, the namespace structure used is not easily understandable, and the code required to analyze the data before importing would be rather difficult to create. However, this file is still created so that the data can be reimported if needed by other components within the OSSAMS framework.
Cody Dumont, cody@melcara.com
Figure 1 - XML Format from Export-Clixml cmdlet
```xml
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">
  <Obj RefId="0">
    <TN RefId="0">
      <T>System.DirectoryServices.SearchResult</T>
      <T>System.Object</T>
    </TN>
    <Props>
      <S N="Path">LDAP://10.10.10.10/DC=sans505,DC=int</S>
      <Obj N="Properties" RefId="1">
        <TN RefId="1">
          <T>System.DirectoryServices.ResultPropertyCollection</T>
          <T>System.Collections.DictionaryBase</T>
          <T>System.Object</T>
        </TN>
      </Obj>
    </Props>
  </Obj>
</Objs>
```
The XML structure created from the ConvertTo-XML cmdlet is much easier to read and comprehend. However, there is a lot of extra data also exported. Much of this extra data is important when used with the .Net framework, but for the purposes of OSSAMS it is not needed. During the initial script testing, this format was used as a guide for creating the OSSAMS XML format.
Figure 2 - XML Format from the ConvertTo-XML cmdlet
```xml
<?xml version="1.0"?>
<Objects>
<Object Type="System.DirectoryServices.SearchResult">
<Property Name="Path" Type="System.String">LDAP://10.10.10.10/DC=sans505,DC=int</Property>
<Property Name="Properties" Type="System.DirectoryServices.ResultPropertyCollection">
<Property Name="Value" Type="System.DirectoryServices.ResultPropertyValueCollection">
<Property Type="System.Int32">7</Property>
</Property>
</Property>
<Property Name="Key" Type="System.String">dc</Property>
<Property Name="Value" Type="System.DirectoryServices.ResultPropertyValueCollection">
<Property Type="System.String">sans505</Property>
</Property>
</Object>
</Objects>
```
3.1.2 Customized XML Format
There are two customized XML formats created for OSSAMS. The first is the default format, which dynamically creates XML from .Net objects. The other is a custom format created for gathering DACL information and is generated by the PS1 script. The custom DACL format is based on the fields extracted from the AD object and the DACL object, which are then imported into the $object_acl_array array.
The default format dynamically created from the .Net object is the cornerstone of the script's flexibility (Weltner, 2009). The function named “XML_Reformat” is the process where the data structure is extracted from the .Net object model and dynamic field names are created for the OSSAMS XML structure. The function works by looping through each object in the associative array and extracting the property name for each object. If the property name has not been previously identified, the property name is stored into an array called $PropertyNames.
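A minimal sketch of the discovery loop, covering only the branch for objects that expose Properties.PropertyNames, might look like this; the associative-array branch described later is omitted.

```powershell
# Simplified sketch of the property-name discovery described above.
$PropertyNames = @()
foreach ($obj in $xmlObj) {
    foreach ($p in $obj.Properties.PropertyNames) {
        if ($PropertyNames -notcontains $p) { $PropertyNames += $p }   # record each name only once
    }
}
```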
The next task within the function is to create the new XML structure. The XML structure has a root element or level 1 node, and each object is the separate child element of the root, the level 2 node. The level 2 node is the parent element that contains the child elements that are created from the properties. The figure shown below has the root element (level 1 node) of ACL, the child element (level 2 node) of ACE, and the properties of acl_objectclass, ACL_Target_Name_CN, ActiveDirectoryRights, ACL_Target_Name, User, AccessControlType, and DistinguishedName.
Figure 3 - XML Reformat structure
```xml
<ACL version="0.3.2">
<ACE>
<acl_objectclass>top domain domainDNS</acl_objectclass>
<ACL_Target_Name_CN>LDAP://10.10.10.10/DC=sans505,DC=int</ACL_Target_Name_CN>
<ActiveDirectoryRights>DeleteChild</ActiveDirectoryRights>
<ACL_Target_Name>sans505</ACL_Target_Name>
<User>Everyone</User>
<AccessControlType>Deny</AccessControlType>
<DistinguishedName>N/A</DistinguishedName>
</ACE>
</ACL>
```
Now the new XML template is created and inserted into an XML object. The first step is to export the array $xml_template into a file using the “Out-File” cmdlet. Next, a new XML object called $xml is created, and the file previously created is imported into the new XML object. The next step is to loop through each object stored in the associative array $xmlObj and, for each property defined, store the value in the $xml object.
However, as each object in $xmlObj may not have the same properties, the result is that many of the defined data fields are not used and the first level 2 node is still the template data. The next series of commands extracts all the unused fields from the level 2 nodes and removes the template node. The method “$xml.$lvl1_node.RemoveChild” is used to remove the level 2 node used for the template. The next command uses the “Select-Xml -XPath” cmdlet to select all child nodes with the word “empty” in the field and removes the identified elements. This step alone reduces the data size by about 80% and creates a clean XML file for importing into the OSSAMS database.
### 3.2 Data Structure
*NOTE: At the time of this writing the OSSAMS database architecture is being developed, therefore the data structure proposed is of an initial conceptual structure.*
The data is now formatted in a manner that can be easily supported and imported into the database. Each level 2 node from the XML file will be converted into a server property. The level 1 node will denote the server property type, and the level 2 child nodes will be the property name and property value. For the DACLs, there is a need for more than two fields; therefore an additional field called “prop_permission” is added.
The database table will have the following fields:
- **prop_index**
- The index of the table, also the primary key with auto increment.
- **prop_name**
- The property name for all entry types other than DACL.
- For DACL entries, this field will map to the “ActiveDirectoryRights” field.
- **prop_value**
- The property value stored within the child node, for all entry types other than DACL.
- For DACL entries, this field will map to the “User” field.
- **prop_type**
- This is the type of entry and will be determined by the name of the root element or level 1 node.
- **prop_permission**
- This field is for DACL entries only.
- The value mapped to this field is AccessControlType.
- **HostData_HostKey**
- This is a foreign key used to create a many-to-one relationship.
- One property can be mapped to a single server object, while a server object could have many properties.
Figure 4 - EER Diagram of Data Model
4 Detailed Script Breakdown
The previous sections provided a conceptual review of the script's functionality; this section is a detailed block-by-block breakdown of the script.
4.1 Configuration Variables
The script’s configuration variables are configured from lines 3 to 41. There are four groupings of variable settings.
4.1.1 Command Line Arguments
The first section, line 4 to line 9, collects arguments from the command line and sets the variable $myDir to the directory where the script is located. The interesting issue with this section of code is that the “param” statement must be the first command executed in the script, otherwise the command line arguments are not passed correctly (Goude, 2009). Many of the examples failed to mention this requirement in their code discussion. Additionally, the $myDir variable must be set, as the present working directory (PWD) is not necessarily the directory where the script is located. This issue with PWD is commented on throughout many of the PS1 script blogs and configuration examples.
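A simplified sketch of this ordering requirement is shown below; the parameter names other than -cfg are illustrative.

```powershell
# param() must be the first executable statement in the script file,
# otherwise the command line arguments are not bound correctly.
param(
    [string]$cfg,
    [string]$ad_ldap_svr,
    [string]$ad_ldap_user,
    [string]$ad_ldap_password
)
# Resolve the script's own directory rather than relying on the present working directory.
$myDir = Split-Path -Parent $MyInvocation.MyCommand.Path
```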
4.1.2 The Config File “-cfg”
The second block of code is from line 12 through 23, where the command line argument “-cfg” is tested. If “-cfg” is present, all other command line arguments are ignored, and the file referenced in the parameter is inserted into the array $cfg_data. The array $cfg_data is populated using “Import-Csv -Delimiter "" $cfg”. The interesting part of this command is the “-Delimiter” option: if the delimiter is a “|” or “;”, it must be enclosed in quotes, whereas most other characters used as the delimiter must not be enclosed in quotes.
Next there is a series of “foreach” loops where the name of the object is tested for a defined variable and if found in the array $cfg_data, the data is imported into the appropriate variable. Note the use of the “break” command to stop processing the “foreach” loop after a match is located.
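A hedged sketch of this seed-file handling is shown below; the delimiter and the Name/Value column headings are assumptions, not the script's exact file layout.

```powershell
# Illustrative sketch of reading the "-cfg" seed file and copying matched rows into variables.
if ($cfg) {
    $cfg_data = Import-Csv -Delimiter " " -Path $cfg
    foreach ($row in $cfg_data) {
        foreach ($name in "ad_ldap_svr", "ad_ldap_user", "ad_ldap_password") {
            if ($row.Name -eq $name) {
                Set-Variable -Name $name -Value $row.Value   # copy into the matching variable
                break                                        # stop scanning once a match is found
            }
        }
    }
}
```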
4.1.3 Passing Config Parameters via the Command Line
The third group of commands is from lines 24 to 33, where the other command line arguments are evaluated. This group of commands is a series of “if, then, else” statements. Each argument variable is first checked to see whether it is null. If the parameter is not null, the parameter value is stored into the global variable; otherwise the default value is stored into the global variable.
4.1.4 Global Variables
The fourth part of the configuration is the definition of the global variables. These variables are defined from lines 36 to 40. All variables are initialized as null with the exception of the $version variable. There are also two arrays defined using “@()” as the initial value.
4.2 Defined Functions
The next section starts at line 43 and continues to line 179, where the functions called from other parts of the script are stored. The script has two defined functions, namely HexSIDToDec and XML_Reformat.
4.2.1 HexSIDToDec
The “HexSIDToDec” function, spanning lines 46 to 74, was found on the Technet web site (Mueller, 2011). The syntax in the function appears fairly simple; however, some of the logic is hard to follow if the formatting of the SID is unclear. The SID value is passed to the function in the form of an array of hex numbers. When the SID is passed to the function, the SID becomes the first element in an array of arrays. The array of arrays can be confusing in this case because there is only one element in the array.
The first element of the array is prepended with “S-”, converted into a string, and then stored into the variable $strSID. The $strSID is then converted back into an array using the split method and stored into $arrSID. Next, the count of the array's elements is stored in the $Max variable using the count method. The decimal-formatted SID is then created using array elements 0, 1, and 8. The reason for choosing element 8 is not immediately clear.
On the Wikipedia page on security identifiers (Security Identifier, 2011) there is content that explains the hex to decimal SID conversion process. However, the description is limited to a machine account. At the bottom of the Wikipedia page there is a link to “selfadsi.org”, where there is a very detailed breakdown of the SID formatting. From the diagram shown below, we can see the formatting of the SID graphically (Föckeler, 2011). When the elements in the array are mapped to the blocks below, there are eight blocks that make up the Revision, the SubAuthority count, and the Identifier Authority. Once the array elements are mapped to the block list, the purpose of the ninth element ($arrSID[8]) becomes clear: it is the start of the first SubAuthority.
Figure 5 - SID Format (Föckeler, 2011)
The next task is to test whether the array has eleven elements, as this would indicate a well-known SID. The figure shown above shows the SID for the “Everyone” group.
S-1-1-0 = Everyone = 0x01 0x01 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00
Should the `$Max` be greater than 11, the variable `$Temp1` is created using the following formula.
`[Int64]$arrSID[12] + (256 * ([Int64]$arrSID[13] + (256 * ([Int64]$arrSID[14] + (256 * [Int64]$arrSID[15])))))`
The value from `$Temp1` is then pushed on the end of `$DecSID`, and should the value of `$Max` be equal to 15, the `$DecSID` is returned.
**Figure 6 - SID Second Example (Föckeler, 2011)**
The next few steps perform conversions similar to those previously described, using the `$Temp2` and `$Temp3` variables. If the `$Max` variable is less than 24, the `$DecSID` is returned. If `$Max` is greater than 24, the `$Temp4` variable is returned.
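The same byte layout can be checked against the .Net SecurityIdentifier class; the following is a sanity check under that assumption, not the paper's HexSIDToDec function itself.

```powershell
# Decode the well-known "Everyone" SID directly from its binary form.
[byte[]]$bytes = 1,1,0,0,0,0,0,1,0,0,0,0        # revision, subauthority count, identifier authority, subauthority
$sid = New-Object System.Security.Principal.SecurityIdentifier($bytes, 0)
$sid.Value                                       # prints S-1-1-0
```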
### 4.2.2 XML_Reformat
The XML_Reformat function is the first function created specifically for the script. The section begins on line 80 and ends on line 179. The lines 88 through 94 extract the parameters from the array passed when the function is called from within the script.
#### 4.2.2.1 Discover Properties
The next process is a “ForEach-Object” loop which discovers all the properties of the elements stored in the `$xmlObj` array. Within the “ForEach-Object” loop there are two test conditions, the first will test to see if the current object has the property “Properties.PropertyNames” and the second test will determine if the current object is an associative array. Objects pulled directly from AD or PS1 will have the property of “Properties.PropertyNames”, while the ACL array created by the script will be an associative array.
Cody Dumont, cody@melcara.com
The reason for bringing focus to this series of commands is the potential for customizing this section in the future. Other objects, for example file systems and registry keys, may require this code to be modified to normalize the data. If neither condition is true, the script will report an error and exit.
4.2.2.2 Create XML Document
After the properties are discovered, the XML file is created; lines 120 through 171 are where the data from the input array is stored into the XML file. The first task is to create the XML object with the New-Object cmdlet. The XML template is described in detail in section 3.1.2. The next command opens the XML object where data will be written. The first object created is stored into a variable called “$xml_entry”, which is cloned during the subsequent foreach loop. Note that in this “foreach” loop the same tests are repeated from the earlier loop, determining whether the variable is an associative array or an object with the specified property. Next, because the command used to extract the value is not the same for the two testing methods, the variable “$PropertyNameType” is tested to see whether the entry is “Properties” or “Keys”, and the appropriate command is used to store the value text properly into the new array. The final action taken by this sub-function is to set the variables used back to $null to prevent data from being carried over.
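A simplified sketch of this clone-and-fill loop, using the element names from Figure 3, is shown below; it is not the exact script, and only two of the properties are filled for brevity.

```powershell
# Hedged sketch of loading the template and filling one cloned node per ACL entry.
$xml = New-Object System.Xml.XmlDocument
$xml.Load("$myDir\xml_template.xml")                   # template written earlier with Out-File
$template = $xml.SelectSingleNode("/ACL/ACE")          # the first level 2 node acts as the template

foreach ($entry in $object_acl_array) {
    $node = $template.CloneNode($true)
    $node.SelectSingleNode("User").InnerText              = [string]$entry.User
    $node.SelectSingleNode("AccessControlType").InnerText = [string]$entry.AccessControlType
    $xml.DocumentElement.AppendChild($node) > $null
}
```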
4.2.2.3 XPATH & NULL Tasks
Starting on line 164 and continuing to line 178 is a series of commands used to clean out data that is no longer needed. First, the first XML node is removed. The second command uses Select-Xml with an XPath query to search for nodes that contain the string “empty” and delete them. Finally, the last command resets the variables to $null for reuse.
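Continuing the sketch above, the cleanup pass could look like the following.

```powershell
# Drop the template node, then delete every element still holding the placeholder text "empty".
$xml.DocumentElement.RemoveChild($xml.DocumentElement.FirstChild) > $null
Select-Xml -Xml $xml -XPath "//*[contains(text(),'empty')]" |
    ForEach-Object { $_.Node.ParentNode.RemoveChild($_.Node) > $null }
```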
4.3 File Object Creation
The next series of commands, lines 181 to 199, creates the working directory for the data files and creates the files used to store the collected data. The first command sets the variable $well_known_sid_file with the file name where the well-known SIDs are stored, and then changes directory to the location stored in the $myDir variable.
Cody Dumont, cody@melcara.com
Using the Get-Date cmdlet, the date is stored in the $date variable. Then a directory name is created using the “-f” format operator; the properties of the $date variable are used to create a string with date and time attributes. The $DirectoryName is populated with year, month, day, hour, minute, and second. Then the directory is created; note the “> $null” usage, which sends the output from the “mkdir” command to $null, as that output is irrelevant to the script's operation. Then the ACL CSV file is created, and the PS1 environment changes directory to the newly created directory. The next two command sequences create two CSV files: one is the ACL CSV file, and the other is the SID CSV file.
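One way to produce the dated directory names seen in Appendix B (for example 20111120-105132) is sketched below; the exact format string is an assumption.

```powershell
# Build a YYYYMMDD-HHMMSS working directory from the current date and time.
$date = Get-Date
$DirectoryName = "{0}{1:d2}{2:d2}-{3:d2}{4:d2}{5:d2}" -f `
    $date.Year, $date.Month, $date.Day, $date.Hour, $date.Minute, $date.Second
mkdir $DirectoryName > $null        # discard the object mkdir writes to the pipeline
Set-Location $DirectoryName
```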
4.4 AD Connection
This is first section of the script that is designed to operate with AD, beginning on
line 201 and ending on line 243. This section will connect to AD, then collect all the
objects and decode the GUID and SID. This is last preparatory phase of the script.
4.4.1 Load AD Object
To make the connection to Active Directory, a new object must be created. The new object is a “DirectoryServices.DirectoryEntry” object; to create it, the LDAP URL, username, and password must be submitted. Next, another new object is created using the “DirectoryServices.DirectorySearcher” class. This command uses the ADSI interface and is passed the previously created AD object. The searcher object requires a filter; as the script's intent is to capture all the objects, the filter is set to the object class of “*”, meaning all objects. Finally, an array called $adObj is created and stored with all AD objects.
4.4.2 Export AD Objects
This section begins on line 209 and ends on line 213. This is the first time the XML_Reformat function is called. The arguments passed are “AD” for the level 1 node, “OBJECT” for the level 2 node, “ad_data” for the XML file name, and the $adObj array. The next two commands are temporary at this point, as they create exports of the data in both XML formats using the Export-Clixml and ConvertTo-XML cmdlets. These last two files are only used at this point for validation testing and will most likely be deleted when the script is fully implemented into OSSAMS.
4.4.3 Decode SID for ACL’s
The next section is where the SIDs are evaluated and decoded; it begins on line 214 and ends on line 243. The first task imports the well-known SID file and creates the arrays used to store the SID data. Following these tasks, a “foreach” loop is started in which each object stored in $adObj is processed. The next four lines store AD-related information into variables, then $guid is populated with the data returned from the HexSIDToDec function. As some objects in AD are not assigned a SID, the GUID is also used for ACL processing. Then the next “if, then, else” statement checks the status of the $sid variable and takes action accordingly.
Now that the data has been collected, it is stored into an associative array called $new_obj. Additionally, the data destined for the $sid_csv_file is first placed into a variable called $sid_csv_file_entry. This entry is redirected to the $sid_csv_file and into the $sid_array array. The remaining lines clear variables for reprocessing.
4.5 ACL Processing
The final section of code is the ACL processing section, beginning on line 245. As in the previous section, the first couple of lines step through each object in the $adObj array and collect data for creating the ACL data files. The first test condition determines the length of the “displayname”; if the length is not equal to “0”, the $acl_target_name is set to the AD object's “displayname” property item.
4.5.1 AD Object ACL Collection
The next task is to collect the ACL on the AD object. To start this process, a new object called $ad_object_entry is created using the “DirectoryServices.DirectoryEntry” class. When using the “DirectoryServices.DirectoryEntry” class, a variable called $acl_target is used along with the username and password used previously. The $acl_target is the ADsPath of the AD object. The ADsPath is the hierarchical path to an object in AD (Codeidol.Com, 2009); it is comprised of the LDAP URL and the full common name of the AD object. A sample ADsPath is “LDAP://10.10.10.10/CN=Users,DC=sans505,DC=int” for the “Users” container in AD. A new object is created using ADSI with $ad_object_entry as the parameter. Next, the ObjectSecurity property is stored in the $acl variable.
The GetAccessRules method of the $acl variable is called and is passed variables for the parameters includeExplicit, includeInherited, and targetType. The “includeExplicit” parameter tells AD to include access rules explicitly set for the object. The “includeInherited” parameter tells AD to include inherited access rules. The “targetType” parameter specifies the identity type to return; the two options are:
- System.Security.Principal.NTAccount
- System.Security.Principal.SecurityIdentifier
The returned data is then passed to a “Where-Object” loop where IsInherited is checked. If IsInherited is false, the ACL entry data is stored in the $acl_xml_obj_entry array. Setting the “includeInherited” parameter to $false when calling the GetAccessRules method would allow the script to avoid the “Where-Object” loop; in a future revision this parameter will be set to $false to avoid the additional processing. The next two lines insert the $acl_xml_obj_entry into the $acl_xml_array and then set $acl_xml_obj_entry to $null. Setting $acl_xml_obj_entry to null prevents old data from being used on the next iteration of the loop.
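Continuing the GetAccessRules sketch from section 2.2.3, the two approaches look like this; $acl is assumed to hold the ObjectSecurity of the current object.

```powershell
# Keep only explicit ACEs after the fact, as the current script does.
$rules = $acl.GetAccessRules($true, $true, [System.Security.Principal.SecurityIdentifier])
$explicitRules = $rules | Where-Object { -not $_.IsInherited }

# Cheaper alternative noted above: ask for explicit rules only.
$explicitRules = $acl.GetAccessRules($true, $false, [System.Security.Principal.SecurityIdentifier])
```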
4.5.2 ACL Entry Creation
The next section, beginning on line 261, uses the same GetAccessRules method used previously. The reason for this is an error in the processing of the ACL properties, which will be corrected in a later version of the script.
The first step in this section is to get the ACL and then iterate through all entries. The next two commands collect and store the data in the $ActiveDirectoryRights and $AccessControlType variables.
Next, the script extracts the SID for the current object by creating a new “System.Security.Principal.SecurityIdentifier” object called $ID. The “AccountDomainSid” is then tested, and if it is null the variable $DistinguishedName is set to “N/A”. The next test condition attempts to translate the $ID to an “NTAccount” name format. If that test fails, the well-known SIDs are evaluated. Note the modification of the $ErrorActionPreference setting; the reason for this change is that the evaluation process produces many errors that are false positives and should be ignored. If the “AccountDomainSid” is not null, the $sid_array collected from AD is iterated through to look for a match. If a match is found, the loop is exited and the next entry is processed.
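A hedged sketch of this resolution order is shown below; the variable holding the ACE's SID string is illustrative, and $sid_array is the table built earlier from AD.

```powershell
$ID = New-Object System.Security.Principal.SecurityIdentifier($aceSidString)
$DistinguishedName = "N/A"
$ErrorActionPreference = "SilentlyContinue"      # Translate() errors on unresolvable SIDs are expected here
$account = $ID.Translate([System.Security.Principal.NTAccount])
$ErrorActionPreference = "Continue"
if ($account)                              { $User = $account.Value }
elseif ($sid_array.ContainsKey($ID.Value)) { $User = $sid_array[$ID.Value] }   # fall back to the AD table
else                                       { $User = $ID.Value }               # keep the raw SID string
```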
Now that the data has been collected, an associative array called $new_obj is created to store the ACE. The data is stored in a CSV file using the format method used previously and then stored into an array for parsing. The next nine commands clear variables for reuse. The last remaining commands parse the data using the “XML_Reformat” function and the Export-Clixml and ConvertTo-XML cmdlets.
5 Security Findings
This script allows the security professional to view all the objects in AD and the access controls assigned to each object. Some items to look for are hidden accounts in containers that are not normal: there are many hidden containers used for upgrades and other system functions, and a crafty attacker or a malicious system administrator could create a user account in these containers where it may not be seen by the typical help desk person. Other concerns could be excessive permissions applied to key objects in AD. This script also helps show security professionals and system administrators how many hidden objects are created by default.
6 References
7 Appendix A - Script Execution Flow Chart
Flow charts demonstrating the script processing are shown below.
Figure 7 - Primary Script Function Flow Chart
Figure 8 - Continuation of Primary Script
Figure 9 - Flow Chart for Function HexSIDToDec
Figure 10 - Flow Chart for Function XML_Reformat
8 Appendix B - Script Output
The following text is a sample of the output for the script execution.
PS C:\PowerShell\get-AD-ACL.v03.3.c\20111120-105132> .\get-ad-objects-with-acl.v0.3.3c.ps1 -cfg cfg.0.3.3.txt
Getting Arguments from C:\PowerShell\get-AD-ACL.v03.3.c\cfg.0.3.3.txt
START THE ad_data XML_REFORMAT FUNCTION
FINISHED THE ad_data XML_REFORMAT FUNCTION
PARSEING AD ACLS NOW
PARSED ACL for sans505
PARSED ACL for Users
PARSED ACL for Computers
PARSED ACL for Domain Controllers
PARSED ACL for System
PARSED ACL for LostAndFound
PARSED ACL for Infrastructure
PARSED ACL for ForeignSecurityPrincipals
PARSED ACL for Program Data
PARSED ACL for Microsoft
PARSED ACL for NTDS Quotas
PARSED ACL for Managed Service Accounts
PARSED ACL for WinsockServices
PARSED ACL for RpcServices
PARSED ACL for FileLinks
PARSED ACL for VolumeTable
PARSED ACL for ObjectMoveTable
PARSED ACL for Default Domain Policy
PARSED ACL for AppCategories
PARSED ACL for Meetings
PARSED ACL for Policies
PARSED ACL for Default Domain Policy
PARSED ACL for User
PARSED ACL for Machine
PARSED ACL for Default Domain Controllers Policy
PARSED ACL for User
PARSED ACL for Machine
PARSED ACL for RAS and IAS Servers Access Check
PARSED ACL for File Replication Service
PARSED ACL for Dfs-Configuration
PARSED ACL for IP Security
PARSED ACL for ipsecPolicy{72385230-70FA-11D1-864C-14A300000000}
PARSED ACL for ipsecISAKMPolicy{72385231-70FA-11D1-864C-14A300000000}
PARSED ACL for ipsecNFA{72385232-70FA-11D1-864C-14A300000000}
PARSED ACL for ipsecNFA{59319BE2-5EE3-11D2-AECE-0060B0ECCA17}
PARSED ACL for ipsecNFA{594272E2-071D-11D3-AD22-0060B0ECCA17}
PARSED ACL for ipsecNegotiationPolicy{72385233-70FA-11D1-864C-14A300000000}
PARSED ACL for ipsecNegotiationPolicy{72385233-70FA-11D1-864C-14A300000000}
PARSED ACL for ipsecNegotiationPolicy{59319C01 -5EE3-11D2-ACE8-0060B0ECCA17}
PARSED ACL for AdminSDHolder
PARSED ACL for 2951353c-d102-4ea5-906c-54247e9ec741
PARSED ACL for 71482d49-8870-4cb3-a438-b6f9ec35d70
PARSED ACL for aed72870-b1f6-4788-8ac7-22299c820711
PARSED ACL for f58300d1-b71a-4db6-88a1-a8b8538beac6
PARSED ACL for c2e1f90b-c92a-40c9-9379-bacfc31a3e3
PARSED ACL for 4aaab3c3-e416-4b9c-a6bb-4b453ab1c1f0
PARSED ACL for 973c4097-7795-4dfb-b19d-c126e6466166
PARSED ACL for de10d491-9090-4fb0-9abb-4bb7865c5f680
PARSED ACL for b9ed3444-545a-4172-a90c-68118202f125
PARSED ACL for 4c934d2c-17c4-472b-8600-16b115d2f3aa
PARSED ACL for c82b7bc-fcca-45b8-a8d4-ad5e2852a02
PARSED ACL for 5e1574f6-55df-49e0-a671-aae8dfca61e0
PARSED ACL for d262a4e-41f7-48ed-9f35-56b6775753d
PARSED ACL for 82112ba0-7e4c-4a44-89b9-d46c9612bf91
PARSED ACL for Windows2003Update
PARSED ACL for ActiveDirectoryUpdate
PARSED ACL for Password Settings Container
PARSED ACL for PSPs
PARSED ACL for Administrators
PARSED ACL for Guest
PARSED ACL for cody
PARSED ACL for Builtin
PARSED ACL for Administrators
PARSED ACL for Users
PARSED ACL for S-1-5-4
PARSED ACL for S-1-5-11
PARSED ACL for Guests
PARSED ACL for Print Operators
PARSED ACL for Backup Operators
PARSED ACL for Replicator
PARSED ACL for Remote Desktop Users
PARSED ACL for Network Configuration Operators
PARSED ACL for Performance Monitor Users
PARSED ACL for Performance Log Users
PARSED ACL for Distributed COM Users
PARSED ACL for IIS_IUSRS
PARSED ACL for S-1-5-17
PARSED ACL for Cryptographic Operators
PARSED ACL for Event Log Readers
PARSED ACL for Certificate Service DCOM Access
PARSED ACL for Server
PARSED ACL for DC
PARSED ACL for krbtgt
PARSED ACL for Domain Computers
PARSED ACL for Domain Controllers
PARSED ACL for Schema Admins
PARSED ACL for Enterprise Admins
PARSED ACL for Cert Publishers
PARSED ACL for Domain Admins
PARSED ACL for Domain Users
PARSED ACL for Domain Guests
PARSED ACL for Group Policy Creator Owners
PARSED ACL for RAS and IAS Servers
PARSED ACL for Server Operators
PARSED ACL for Account Operators
PARSED ACL for Pre-Windows 2000 Compatible Access
PARSED ACL for Incoming Forest Trust Builders
PARSED ACL for Windows Authorization Access Group
PARSED ACL for Terminal Server License Servers
PARSED ACL for S-1-5-9
PARSED ACL for 6E157EDF-4E72-4052-A82A-EC3F91021A22
PARSED ACL for Allowed RODC Password Replication Group
PARSED ACL for Denied RODC Password Replication Group
PARSED ACL for Read-only Domain Controllers
PARSED ACL for Enterprise Read-only Domain Controllers
PARSED ACL for RID Manager$
PARSED ACL for RID Set
PARSED ACL for DnsAdmins
PARSED ACL for DnsUpdateProxy
PARSED ACL for DNS Servers
PARSED ACL for RootDNSServers
PARSED ACL for @
PARSED ACL for a.root-servers.net
PARSED ACL for b.root-servers.net
PARSED ACL for c.root-servers.net
PARSED ACL for d.root-servers.net
PARSED ACL for e.root-servers.net
PARSED ACL for f.root-servers.net
PARSED ACL for g.root-servers.net
PARSED ACL for h.root-servers.net
PARSED ACL for i.root-servers.net
PARSED ACL for j.root-servers.net
PARSED ACL for k.root-servers.net
PARSED ACL for l.root-servers.net
PARSED ACL for m.root-servers.net
PARSED ACL for DFSR-GlobalSettings
PARSED ACL for Domain System Volume
PARSED ACL for Content
PARSED ACL for SYSVOL Share
PARSED ACL for Topology
PARSED ACL for WIN-7J8E90L988E
PARSED ACL for DFSR-LocalSettings
PARSED ACL for Domain System Volume
PARSED ACL for SYSVOL Subscription
PARSED ACL for BCKUPKEY_30b7a6ad-e26e-4166-a63a-551b7e22b986 Secret
PARSED ACL for BCKUPKEY_P Secret
PARSED ACL for BCKUPKEY_7a9808ff-f6af-4bdc-bda0-7169d2a837a7 Secret
PARSED ACL for BCKUPKEY_PREFERRED Secret
PARSED ACL for Joe
PARSED ACL for OU.For.Joe
PARSED ACL for Not.For.Joe
START THE acl_data XML_REFORMAT FUNCTION
FINISHED THE acl_data XML_REFORMAT FUNCTION
FINISHED - RUNNING THIS SCRIPT
PS C:\PowerShell\get-AD-ACL.v03.3.c\20111120-105132>
|
{"Source-Url": "https://www.sans.org/reading-room/whitepapers/auditing/auditing-windows-environments-powershell-xml-output-windows-security-ossams-33854", "len_cl100k_base": 11960, "olmocr-version": "0.1.53", "pdf-total-pages": 31, "total-fallback-pages": 0, "total-input-tokens": 71071, "total-output-tokens": 14588, "length": "2e13", "weborganizer": {"__label__adult": 0.0003767013549804687, "__label__art_design": 0.0004343986511230469, "__label__crime_law": 0.0007433891296386719, "__label__education_jobs": 0.0036830902099609375, "__label__entertainment": 9.91225242614746e-05, "__label__fashion_beauty": 0.0001552104949951172, "__label__finance_business": 0.00070953369140625, "__label__food_dining": 0.00022077560424804688, "__label__games": 0.0006589889526367188, "__label__hardware": 0.0012025833129882812, "__label__health": 0.00033783912658691406, "__label__history": 0.0002696514129638672, "__label__home_hobbies": 0.00013625621795654297, "__label__industrial": 0.0006055831909179688, "__label__literature": 0.0002682209014892578, "__label__politics": 0.0002608299255371094, "__label__religion": 0.0003077983856201172, "__label__science_tech": 0.05499267578125, "__label__social_life": 0.0001506805419921875, "__label__software": 0.042999267578125, "__label__software_dev": 0.890625, "__label__sports_fitness": 0.00018107891082763672, "__label__transportation": 0.00028634071350097656, "__label__travel": 0.0001722574234008789}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 50753, 0.05131]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 50753, 0.73678]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 50753, 0.80481]], "google_gemma-3-12b-it_contains_pii": [[0, 91, false], [91, 1192, null], [1192, 3420, null], [3420, 5576, null], [5576, 7153, null], [7153, 9390, null], [9390, 11750, null], [11750, 13948, null], [13948, 15913, null], [15913, 18225, null], [18225, 20632, null], [20632, 22169, null], [22169, 23259, null], [23259, 25077, null], [25077, 27052, null], [27052, 28954, null], [28954, 31146, null], [31146, 33321, null], [33321, 35570, null], [35570, 37848, null], [37848, 39320, null], [39320, 41122, null], [41122, 41278, null], [41278, 42316, null], [42316, 42363, null], [42363, 42470, null], [42470, 45000, null], [45000, 45088, null], [45088, 47734, null], [47734, 49072, null], [49072, 50753, null]], "google_gemma-3-12b-it_is_public_document": [[0, 91, true], [91, 1192, null], [1192, 3420, null], [3420, 5576, null], [5576, 7153, null], [7153, 9390, null], [9390, 11750, null], [11750, 13948, null], [13948, 15913, null], [15913, 18225, null], [18225, 20632, null], [20632, 22169, null], [22169, 23259, null], [23259, 25077, null], [25077, 27052, null], [27052, 28954, null], [28954, 31146, null], [31146, 33321, null], [33321, 35570, null], [35570, 37848, null], [37848, 39320, null], [39320, 41122, null], [41122, 41278, null], [41278, 42316, null], [42316, 42363, null], [42363, 42470, null], [42470, 45000, null], [45000, 45088, null], [45088, 47734, null], [47734, 49072, null], [49072, 50753, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 50753, null]], 
"google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 50753, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 50753, null]], "pdf_page_numbers": [[0, 91, 1], [91, 1192, 2], [1192, 3420, 3], [3420, 5576, 4], [5576, 7153, 5], [7153, 9390, 6], [9390, 11750, 7], [11750, 13948, 8], [13948, 15913, 9], [15913, 18225, 10], [18225, 20632, 11], [20632, 22169, 12], [22169, 23259, 13], [23259, 25077, 14], [25077, 27052, 15], [27052, 28954, 16], [28954, 31146, 17], [31146, 33321, 18], [33321, 35570, 19], [35570, 37848, 20], [37848, 39320, 21], [39320, 41122, 22], [41122, 41278, 23], [41278, 42316, 24], [42316, 42363, 25], [42363, 42470, 26], [42470, 45000, 27], [45000, 45088, 28], [45088, 47734, 29], [47734, 49072, 30], [49072, 50753, 31]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 50753, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
d8e6de2dad91bc67ca83300a0e24821be6381d1f
|
Symbiotic Organisms Search Response to Distributed Database Queries
Atinderpal Singh1), Krishan Kumar2), Rajinder Singh Virk3), Hye-jin Kim4)
ABSTRACT
This paper focuses on the query optimization problem in a distributed database environment. Query processing is a major concept in distributed systems: data that is dispersed geographically across different locations must be gathered at the user's resulting site. There are many different execution plans that lead to the same query result but differ in the cost incurred in query processing, so the execution strategy that yields the optimal result must be chosen. Communication cost is the major cost incurred in query processing in a distributed system. The Symbiotic Organisms Search algorithm (a meta-heuristic) has been applied to query optimization in a distributed environment. The result obtained using this meta-heuristic approach is compared with several optimization approaches. The results reveal the better performance of the algorithm for solving the hard problem of query processing in a distributed database environment.
Keywords: Distributed Database, Query, Symbiotic organisms search, Execution strategy, Business data
I. INTRODUCTION
Distributed database systems have emerged to provide optimal solutions to the information processing problems of organisations that are dispersed geographically. A distributed database is a collection of several logically interrelated databases disseminated over several sites. The query optimizer is considered an important part of a distributed system: it takes the user query, searches over the possible execution approaches for the query, and produces the optimal cost plan. The main component of a database system is the data, defined in the literature as a collection of facts about something. This 'something' may be business data in the case of a business corporation, strategic data in the case of a military database, experimental data in a scientific experiment, and so on [1-20]. The data that constitutes the database
has to be correlated and stored on different sites of a computer network to be a part of a distributed database. Distributed database technology is very complex and costly, so it is mostly employed by large businesses or governmental organizations [21-23].
![Diagram of Distributed Database System]
**Fig 1.1 Distributed Database Systems**
The optimization problem becomes more complex as distributed queries grow in complexity. The main operations on the relations over the various sites include selection, projection, and join. These operations must be handled carefully to obtain optimal results. For small problems involving small queries, many algorithms are available to provide solutions, but for queries that involve multiple sites the optimization problem becomes challenging and the currently available algorithms may not cope with the complexity.
A query [24, 25], often considered logical data, references relations in a relational distributed database system. The processing of a query involves allocating the various sub-query operations to multiple network sites; the result is obtained by joining relations across the network sites or servers in the distributed database system. Query optimization is based on a cost model that captures the various costs incurred in processing and answering a query. A number of algorithms have been applied in practice to the query processing problem described above. In order to validate the research carried out here, we have compared SOS for [23] distributed queries against a genetic approach [26] and an exhaustive algorithm. This algorithm has advantages over other metaheuristic techniques because it does not require algorithm-specific parameters.
II. STRATUMS OF DISTRIBUTED QUERY HANDLING
The problem of distributed query processing has been described in detail [27] by decomposing it into the sub-problems shown in the diagram below.
Fig 2.1 Generic Stratum of Query Handling.
III. LITERATURE SURVEY
Distributed database system design and query optimization have been an active area of research for the database community, owing to the complex and NP-hard nature of the general problem [27].
Apers et al. described how processing schedules are generated for distributed query optimization. Virtual data sites are merged into actual network sites to find an optimal data allocation plan by minimizing intermediate relation-to-relation transmission costs; finally, the query result is sent to the originating site. The drawback of the approach was intractability for large problem sizes [28, 29].
Barker et al. came up with the innovative idea of applying a restricted-growth encoding to the chromosome string (Group Oriented Restricted Growth String GA) to exclude redundant chromosomes from further GA processing, and applied it to n-way partitioning of relations. They designed two new crossover operators and four new mutation operators, and demonstrated the effectiveness of the "Binary Merge" crossover operator and the "Merge and Jump" mutation operators for large-scale partitioning problems [30].
Carlo et al. proposed a technique for estimating the size of relational query results. The method is built on estimates of individual attribute values; in particular, its ability to estimate selectivity factors of relational operations is considered. They also presented experimental results on real databases showing the likely performance of the analytic approach [31].
Chiu et al. studied the use of semi-joins for chain queries in a distributed database. A powerful and efficient dynamic programming algorithm was developed that translates a chain query into a sequence of semi-joins; it has computational complexity of order \(n^3\), where \(n\) is the number of relations referenced by the query. They later extended the work to optimize a larger class of queries called tree queries [32, 33].
Cosar et al. proposed a new genetic algorithm for distributed database query optimization and investigated the effect of increasing the number of nodes and relations on the performance of the GA [34].
Cui et al. gave a multi-objective genetic algorithm for distributed database management. They formulated a multi-objective combinatorial optimization problem based on dominance and optimality, with multiple criteria developed to provide trade-off optimal performance for web services [35].
Deshpande et al. studied the problem of query optimization in federated database systems and highlighted the need to decouple various aspects of query processing. They implemented their approach on the "Cohera" federated database system and demonstrated the superiority of the 2PO algorithm when the physical design of the database is known [36].
Douglas W et al. developed a methodology to assign relations and determine the join sites simultaneously. It decomposes queries into relational algebra operations and then makes site assignments using a linear integer programming technique to minimize inter-system communication. It describes procedures for balancing resource utilization across systems and uses a heuristic technique to minimize average response time [37].
Falza N et al. offered a statistical method for estimating the cardinality of the relation produced by a relational operator, using a sample-based estimation that executes the query to be optimized on a small sample of the real database and then uses the results to determine the cost. All the database states maintained in the system are initialized when the database is loaded; Chen's formulation is used to obtain the number of tuples in an intermediate relation [38].
Gavish et al. extended the work of Chu by proposing properties of a tree query that indicate the usefulness of a join sequence, or the non-optimality of one. They then extended this work by imposing more restrictions on tree queries to convert them into star queries [39].
Ghaemi et al., in their paper on evolutionary query optimization for heterogeneous distributed database systems, discuss a multi-agent based architecture and the use of genetic algorithms. They demonstrate the superiority of the GA over dynamic programming methods for large-scale problems [39].
Hevner et al. made the first successful attempt at using semi-joins in distributed query processing. After the local processing part of simple queries is performed, each relation contains one common joining attribute; the authors generalized the algorithm to equi-join queries. They introduced the concept of selectivity for join operations, defined as the number of domain values currently appearing in the joining columns divided by the total number of domain values, and assumed that the selectivity of one joining domain does not affect the selectivity of another. A heuristic algorithm with improved exhaustive search was proposed for general queries. This approach also suffered from scaling problems: once the number of relation joins and the number of sites involved reach double digits, the computing time grows exponentially and the algorithm quickly becomes intractable [39].
Huang et al. proposed a simple and comprehensive model for fragment allocation in distributed database design that reflects transaction behaviour [40].
Junn W et al. presented a genetic approach based on an appropriate data structure to reduce the cost of distributed query processing. Simulation was performed for a distributed database environment, and the experimental results revealed that the genetic approach was better in computational effort and quality of solution than the various other approaches tested [41].
Kossmann D et al. present an architecture for distributed query processing and techniques to reduce communication costs and to exploit intra-query parallelism [41].
Li et al. presented a tree-based GA with a new coding method for genetic parameters, using a tree structure based on position and value. Improved crossover and mutation operators are devised to support the stochastic coding rules of genetic algorithms [42].
Li et al. proposed the design of a distributed query optimization algorithm based on multi-relation semi-joins, processed in a buffer zone of the distributed database, so as to reduce the communication time of the intermediate results generated [43].
Lin Z et al. first proposed a data allocation algorithm with respect to a simple strategy for processing transactions, and then gave a dynamic data allocation algorithm guaranteed to produce a locally optimal data allocation [44].
Martin L et al. demonstrate the cost effectiveness of four algorithms, Branch & Bound, Greedy, Local Search and Simulated Annealing, for site selection during the optimization of compiled queries in a large replicated distributed database system. They conclude that enumerative algorithms are best suited to simple queries and recommend a local search algorithm for complex queries [45].
Pund et al. describe the basic concepts of query processing and query optimisation in the relational database domain, and further distinguish three types of query processing algorithms: deterministic, genetic and randomized [46].
Rahmani et al. proposed an innovative model by clustering sites based on the cost of communication between sites, to allocate data on nodes of a Distributed Database. Authors claim significant reduction in the data redundancy in fragment allocation and network traffic [47].
Sakti et al. presented a semi join reducer cover set based technique for optimizing join queries in distributed databases. The technique converts semi join programs into a partial order graph which allows concurrent processing of semi join programs [48].
Segev et al. proposed a mathematical model for a special set of queries and for two-way join queries, later implemented by Segev. They prove the problem to be NP-complete and propose a heuristic solution that partitions data horizontally and makes no use of semi-joins [49].
The SOS algorithm has been implemented, validated and tested on a number of unconstrained mathematical problems and engineering design problems. We therefore deployed this algorithm on distributed queries, where it showed better results than other optimization approaches.
IV. SOS ALGORITHM
Symbiotic Organisms Search is inspired by the interaction strategies that organisms adopt in nature to sustain life in the ecosystem. It is a powerful meta-heuristic algorithm that can be easily applied to engineering problems. Although a large number of optimization methods have been developed, some still fail to solve real-world engineering problems [48, 49].
SOS models the interactive behaviour among organisms. In nature it is rare to find organisms living entirely alone; they survive in the ecosystem by depending on each other for their primary needs. This way of living and interacting is called symbiosis, i.e. living together. The relationships most common among organisms of distinct species are mutualism, commensalism and parasitism.
Like other nature-inspired algorithms, SOS simulates the relationship between paired organisms of distinct species in a search for the fittest organism. To reach an optimal solution, SOS starts with an initial population called the ecosystem. In the beginning, organisms are generated randomly, each representing a candidate solution to the problem, and each organism has a fitness value that reflects how close it is to the desired goal. New generations of organisms are obtained by simulating biological interactions among organisms; the type of interaction defines the goal of each phase, and each organism interacts with the others in all of these phases.
SOS applies a greedy strategy at the end of each interaction phase so that only the fittest organisms are kept in the ecosystem. It takes two major parameters:
* Population Size or Ecosystem Size.
* Maximum Generations or Iterations.
It provides solutions to a wide variety of problems, is more robust than many other computing algorithms, and does not require parameter tuning, which avoids the risk of compromised performance from badly chosen parameters.
The general outline of the algorithm is given below; the individual phases are explained later in the paper.
3.1. General Outline of Symbiotic Organisms Search Algorithm:
```
{
    Initialize population (Ecosystem);
    While (the termination criterion is not met)
    {
        Mutualism Phase;
        Commensalism Phase;
        Parasitism Phase;
    }
    Output the best Organism;
}
```
Fig.1 General Outline of SOS Algorithm
3.2. Detailed representation of Various Phases:
A) Ecosystem Initialization
B) Evaluate the Best Organism (Zbest)
C) Mutualism Phase
D) Commensalism Phase
{
    Number of organisms (eco_size);
    Initial ecosystem;
    Termination criteria;
    Num_iter = 0;
    Num_fit_eval = 0;
    Max_iter;
    Max_fit_eval;
}
{
    Select one organism arbitrarily, Z_j, where j != i;
    Determine the mutual relationship vector (M) and the benefit factors (B_Fac1, B_Fac2);
    M = (Z_i + Z_j) / 2;
    B_Fac1 = random integer, 1 or 2;
    B_Fac2 = random integer, 1 or 2;
    Modify organisms Z_i and Z_j based upon their mutual relationship;
    Z_i_new = Z_i + random(0,1) * (Z_best - M * B_Fac1);
    Z_j_new = Z_j + random(0,1) * (Z_best - M * B_Fac2);
    Calculate the fitness values of the modified organisms;
    Num_fit_eval = Num_fit_eval + 1;
    If the modified organisms are fitter than the previous ones
    {
        Accept the modified organisms;
    }
    Else
    {
        Reject the modified organisms and keep the previous ones;
    }
}
{
    Select one organism arbitrarily, Z_j, where j != i;
    Modify organism Z_i with the help of Z_j;
    Z_i_new = Z_i + random(-1,1) * (Z_best - Z_j);
    Calculate the fitness value of the modified organism;
    If the modified organism is fitter than the previous one
    {
        Accept the modified organism;
    }
    Else
    {
        Reject the modified organism and keep the previous one;
    }
}
E) Parasitism Phase
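The paper does not spell out the parasitism phase; in the commonly published form of SOS, a parasite vector is created by copying organism Z_i, perturbing a random subset of its dimensions, and letting it compete with a randomly chosen host Z_j, which it replaces if the parasite is fitter. The Python sketch below puts the three phases together. It is a minimal illustration using the standard SOS update rules and our own variable names, not the authors' SOS_DDQ implementation.

```python
import numpy as np

def sos(fitness, dim, lower, upper, eco_size=50, max_iter=100, seed=0):
    """Minimal Symbiotic Organisms Search for a minimization problem.

    fitness      : callable mapping a 1-D vector to a scalar cost
    dim          : number of decision variables per organism
    lower, upper : search-space bounds (scalars or arrays of length dim)
    """
    rng = np.random.default_rng(seed)
    eco = rng.uniform(lower, upper, size=(eco_size, dim))    # initial ecosystem
    fit = np.array([fitness(z) for z in eco])

    for _ in range(max_iter):
        for i in range(eco_size):
            best = eco[np.argmin(fit)].copy()                # Z_best

            # --- Mutualism: organisms i and j both try to benefit ---
            j = rng.choice([k for k in range(eco_size) if k != i])
            mutual = (eco[i] + eco[j]) / 2.0                 # mutual relationship vector M
            bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # benefit factors (1 or 2)
            zi_new = np.clip(eco[i] + rng.random(dim) * (best - mutual * bf1), lower, upper)
            zj_new = np.clip(eco[j] + rng.random(dim) * (best - mutual * bf2), lower, upper)
            for idx, cand in ((i, zi_new), (j, zj_new)):
                c = fitness(cand)
                if c < fit[idx]:                             # greedy acceptance
                    eco[idx], fit[idx] = cand, c

            # --- Commensalism: i benefits from j, j is unaffected ---
            j = rng.choice([k for k in range(eco_size) if k != i])
            cand = np.clip(eco[i] + rng.uniform(-1, 1, dim) * (best - eco[j]), lower, upper)
            c = fitness(cand)
            if c < fit[i]:
                eco[i], fit[i] = cand, c

            # --- Parasitism: a mutated copy of i competes with a random host j ---
            j = rng.choice([k for k in range(eco_size) if k != i])
            parasite = eco[i].copy()
            dims = rng.random(dim) < 0.5                     # perturb a random subset of dimensions
            parasite[dims] = rng.uniform(np.broadcast_to(lower, (dim,)),
                                         np.broadcast_to(upper, (dim,)))[dims]
            c = fitness(parasite)
            if c < fit[j]:                                   # parasite replaces the host if fitter
                eco[j], fit[j] = parasite, c

    k = np.argmin(fit)
    return eco[k], fit[k]


if __name__ == "__main__":
    # toy check on the sphere function
    best_z, best_cost = sos(lambda z: float(np.sum(z ** 2)), dim=5, lower=-10, upper=10)
    print(best_cost)
```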
IV. DATABASE DESIGN AND OBJECTIVE FORMULATION

4.1. Database Design
The design of the distributed database is simulated considering a set 'S' of network sites, a set 'R' of relations (tables) stored at the various sites, and a set 'Q' of transactions. A transaction query (q) for information retrieval is broken into a set of sub-queries over the set 'R' of relations [28, 41, 29].
4.2. Problem Definition
The variables stated above formulate the problem in the following context. The input data file provides the data allocation matrix for the base relations stored at the various sites. The objective function [41, 29] for distributed database queries is given below; for this objective function we have to find:
Given a set of Relations or fragments \( R = \{r_1, r_2, \ldots, r_n\} \)
A set of sites \( S = \{s_1, s_2, \ldots, s_m\} \)
A set of sub queries \( Q = \{q_1, q_2, \ldots, q_l\} \)
A) Input Variables
**DAVR** : Data allocation scheme in matrix form, together with the matrices of I/O cost and communication cost.

**\( IF_{rK}^q \)** : Matrix that indicates the intermediate fragments 'r' used by sub-query 'K' of main query 'q'.
B) Output to be Generated
**\( SDD_q^k \)** : An Operation Allocation Scheme Matrix which optimizes objective function.
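For concreteness, these inputs and the output plan can be represented as plain 0/1 matrices. The sketch below uses the dimensions of Case 3 from Section 5 and illustrative names of our own.

```python
import numpy as np

n_fragments, n_sites = 26, 10        # fragments (base + intermediate) and sites, as in Case 3
n_operations = 20                    # selections, projections and joins of query q

# DAV[r, s] = 1 if base relation / fragment r is stored at site s
DAV = np.zeros((n_fragments, n_sites), dtype=int)
DAV[0, [0, 1]] = 1                   # e.g. R1 replicated on sites S1 and S2

# IF[r, k] = 1 if fragment r is used by sub-query k of query q
IF = np.zeros((n_fragments, n_operations), dtype=int)

# SDD[k, s] = 1 if sub-query k is executed at site s -- the plan the optimizer searches for
SDD = np.zeros((n_operations, n_sites), dtype=int)
SDD[0, 0] = 1                        # e.g. operation 1 assigned to site S1
```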
4.3 Cost Model Formulation:
A) Data Allocation Variables:
\[ DAV_{rs} = 1 \quad (\text{if relation } r \text{ is available at site } s) \]
\[ DAV_{rs} = 0 \quad (\text{otherwise}) \]
B) Variables for selecting the site where the sub-query allocation will take place.
\[ SDD^q_{K,s} = 1 \quad (\text{if sub-query } K \text{ of main query } q \text{ is executed at site } s) \]
\[ SDD^q_{K,s} = 0 \quad (\text{otherwise}) \]
C) Variables that are used for join operation
\[ SDD^q_{Kv[1],s} = 1 \quad (\text{if the left previous operation of join } K \text{ was performed at site } s) \]
\[ SDD^q_{Kv[2],s} = 1 \quad (\text{if the right previous operation of join } K \text{ was performed at site } s) \]
\[ SDD^q_{Kv[M],s} = 0 \quad (\text{otherwise}) \]
D) \( IF^q_{rK} \): indicates whether sub-query \( K \) of query \( q \) references fragment \( r \) or not.
\[ IF^q_{rK} = 1 \quad (\text{if fragment } r \text{ is used by sub-query } K \text{ of main query } q) \]
\[ IF^q_{rK} = 0 \quad (\text{otherwise}) \]
E) Variables for the use of intermediate fragments for Join Operation
\[ IF^q_{rKv[1]} = 1 \quad (\text{if fragment } r \text{ is used by the left previous operation of join } K, \text{ i.e. } [M] = 1) \]
\[ IF^q_{rKv[2]} = 1 \quad (\text{if fragment } r \text{ is used by the right previous operation of join } K, \text{ i.e. } [M] = 2) \]
\[ IF^q_{rKv[M]} = 0 \quad (\text{otherwise}) \]
The distribution of relations to sites is represented by a cost function involving query processing cost and storage cost [25].
\[ \text{Total Cost} = \sum_{i \in Q} QP_i + \sum_{p \in S} \sum_{i \in R} ST_{ip} \]
Here \( QP_i \) is the cost of processing query \( q_i \) for any application and \( ST_{ip} \) is the cost of storing fragment \( R_i \) at site \( S_p \). The total cost for a query is the sum of the local processing cost and the transmission cost, following the OZSU [29] model; we have modified it to consider only retrieval transactions, in line with our design. The query processing cost is given below.

\[ QP_i = LP\_COST + COMM\_COST \]
4.3.1 Local Processing Cost for various Operations in Distributed Query Processing:
A) Processing Cost for Selection and Projection Operations
It involves the input/output cost from secondary memory to primary and CPU processing cost for performing selection and projection at particular site.
\[
LP\_COST^q_K = \sum_{s \in S} SDD^q_{K,s} \left( IO_s \sum_{r} IF^q_{rK}\, MB^q_{rK} + CPUCOST_s \sum_{r} IF^q_{rK}\, MB^q_{rK} \right)
\]
4.3(a)
Here \( MB^q_{rK} \) represents the number of memory blocks of relation \( r \) accessed by sub-query \( K \) of main query \( q \), \( IO_s \) is the input/output cost coefficient of site \( s \in S \), and \( CPUCOST_s \) is the CPU cost coefficient of site \( s \in S \).
B) Processing Cost for Join Operation:
The general model of OZSU does not incorporate local processing cost for join operation but we have considered it in our design.
Local processing costs for a join may be given as
\[
LP\_COST^q_K = \sum_{s \in S} SDD^q_{K,s}\, IO_s \sum_{M} \sum_{r} IF^q_{rKv[M]}\, MB^q_{rKv[M]}
\]
4.3(b)
\[
+ \sum_{s \in S} SDD^q_{K,s} \left( IO_s \prod_{r} IF^q_{rK}\, MB^q_{rK} + CPUCOST_s \prod_{r} IF^q_{rK}\, MB^q_{rK} \right)
\]
4.3(c)
Here \( m_r \) is the selectivity factor, taken as the ratio of the number of distinct values of a field to the size of the domain of that field (\( 0 \le m_r \le 1 \)); \( MB^q_{rK} \) represents the size of an intermediate relation; and \( v[M] \) denotes the left and right operands of the join for \( M = 1 \) and \( M = 2 \) respectively.
Equation 4.3(b) represents the I/O costs in storing the intermediate results of previous operations to the site where current join operation is performed. Equation 4.3(c) represents the CPU & I/O costs for evaluating join operation at current site.
### 4.4 Communication Cost Involved in Distributed Query Processing
Communication costs arise only for join operations and the final result operation. Since the selections and projections on relations are performed only at sites that hold a copy of the corresponding base relations, a join can be performed at any of the possible sites. The communication cost is represented below:
$$COMM\_COST^q_K = \sum_{M} \sum_{s} \sum_{u} SDD^q_{Kv[M],s}\; SDD^q_{K,u}\; COM_{su} \left( \sum_{r} IF^q_{rKv[M]}\, MB^q_{rKv[M]} \right)$$
4.4(a)
$COM_{su}$ is the communication cost coefficient between sites $s$ and $u$, taken from the input data matrix.

$COM_{su} = 0$ for $s = u$; this is the case in which the previous operation and the current join operation are performed at the same site.
If the final operation in query processing is not performed at the query originating (destination) site, then a separate communication cost term is added for the cost of sending the final query answer to the originating site.
### 4.5 Objective Formulation
The cost of a query execution plan is measured as the 'total cost of the query', expressed in time units, which reflects the use of resources such as CPU cycles, disk I/O and communication channels by a candidate allocation plan. It is the sum of the terms formulated and illustrated above.
Thus our Objective Function is to: Minimize the sum $\{4.3(a) + 4.3(b) + 4.3(c) + 4.4(a)\}$
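As an illustration of how such an objective can be evaluated for a single candidate plan, the sketch below sums a local-processing term and a communication term in the spirit of equations 4.3(a)-4.4(a). It simplifies the bookkeeping (every operation reads whole fragments, and each operation is assigned to exactly one site) and uses our own names, so it should be read as a sketch of the cost model rather than the authors' implementation.

```python
import numpy as np

def total_cost(SDD, IF, MB, io, cpu, comm, joins):
    """Approximate total cost of one operation-allocation plan.

    SDD[k, s]    : 1 if sub-query k runs at site s                 (decision variable)
    IF[r, k]     : 1 if fragment r is consumed by sub-query k      (input)
    MB[r]        : size in memory blocks of fragment r             (input)
    io[s], cpu[s]: I/O and CPU cost coefficients of site s         (input)
    comm[s, u]   : communication cost coefficient between sites    (input)
    joins        : list of (k_join, k_left, k_right) operation indices
    """
    n_ops, n_sites = SDD.shape
    cost = 0.0

    # local processing cost, in the spirit of eq. 4.3(a): every operation pays
    # I/O + CPU on the blocks of the fragments it reads, at its assigned site
    for k in range(n_ops):
        blocks = float(IF[:, k] @ MB)
        for s in range(n_sites):
            if SDD[k, s]:
                cost += io[s] * blocks + cpu[s] * blocks

    # communication cost, in the spirit of eq. 4.4(a): a join pays for shipping
    # the results of its left and right previous operations to the join site
    for k_join, k_left, k_right in joins:
        for k_prev in (k_left, k_right):
            blocks = float(IF[:, k_prev] @ MB)
            s = int(np.argmax(SDD[k_prev]))      # site of the previous operation
            u = int(np.argmax(SDD[k_join]))      # site of the join
            cost += comm[s, u] * blocks          # comm[s, s] == 0, so co-located operations are free
    return cost
```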
4.6. SOS for Implementing Distributed Database Queries
This section gives the various steps involved in implementing SOS for distributed queries [2].
Step 1: Data Distribution Scheme along with fragmentation schemes is given as input to the SOS_DDQ.
Step 2: Ecosystem Initialization.
Step 3: Consider the best (organism) solution, $Z_{best}$.
Step 4: Mutualism Phase.
Step 5: Commensalism Phase.
Step 6: Parasitism Phase.
Step 7: Move to step 3 if the current $Z_i$ is not the last organism in pool of ecosystem, otherwise move to next step.
Step 8: Stop if the maximum number of generations is reached and print the optimal solution, else move to step 3 and repeat the whole process.
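The paper does not say how a continuous SOS organism is decoded into a discrete operation-allocation plan. One plausible encoding, assumed here for illustration, keeps one real value in [0, 1) per operation and rounds it to a site index, as sketched below. A real implementation would additionally have to restrict selections and projections to the sites that hold the corresponding base relation (the DAV matrix).

```python
import numpy as np

def decode(organism, n_sites):
    """Map a continuous organism (one value in [0, 1) per operation) to a site index
    for every sub-query."""
    return np.minimum((np.asarray(organism) * n_sites).astype(int), n_sites - 1)

def make_fitness(n_sites, cost_fn):
    """Wrap a cost model so that SOS can treat the allocation problem as a black box."""
    def fitness(organism):
        plan = decode(organism, n_sites)
        SDD = np.zeros((len(plan), n_sites), dtype=int)
        SDD[np.arange(len(plan)), plan] = 1          # exactly one site per operation
        return cost_fn(SDD)
    return fitness

if __name__ == "__main__":
    dummy_cost = lambda SDD: float(SDD[:, 0].sum())  # placeholder objective, not the real cost model
    f = make_fitness(n_sites=10, cost_fn=dummy_cost)
    print(f(np.random.default_rng(0).random(20)))    # 20 operations, as in Case 3
```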
V. EXPERIMENTAL SETUP AND DATABASE STATISTICS
A customized simulator developed in MATLAB is used to analyse the performance of different techniques for optimizing queries in a distributed environment. The simulator, SOS_DDQ, takes the required statistics as input and produces the desired output. Various parameters of a distributed query (input/output cost, CPU cost, communication cost) are considered in the analysis. The input to the simulator is fed in as a text file. The intermediate fragment sizes and the block sizes of relations play an essential role in the analysis of distributed queries.

Different benchmark queries are considered to analyse the performance of SOS_DDQ, GA_SA and the exhaustive approach. The experiments were performed on an Intel(R) Core(TM) 2 Duo T6600 @ 2.20 GHz machine with 3 GB of random access memory.
5.1 DATABASE STATISTICS
Data Allocation
<table>
<thead>
<tr>
<th>Sites</th>
<th>S1</th>
<th>S2</th>
<th>S3</th>
<th>S4</th>
<th>S5</th>
<th>S6</th>
<th>S7</th>
<th>S8</th>
<th>S9</th>
<th>S10</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>R2</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>R3</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>R4</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>R5</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>R6</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>R7</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Fig 5.1 Data Allocation (DAVRs) over Several Sites
Cost Coefficients
<table>
<thead>
<tr>
<th>Sites</th>
<th>S1</th>
<th>S2</th>
<th>S3</th>
<th>S4</th>
<th>S5</th>
<th>S6</th>
<th>S7</th>
<th>S8</th>
<th>S9</th>
<th>S10</th>
</tr>
</thead>
<tbody>
<tr>
<td>INPUT/OUTPUT</td>
<td>1</td>
<td>1.1</td>
<td>1.2</td>
<td>1</td>
<td>1.1</td>
<td>1</td>
<td>1.2</td>
<td>1</td>
<td>1.1</td>
<td>1</td>
</tr>
<tr>
<td>CPU COST</td>
<td>1.1</td>
<td>1</td>
<td>1</td>
<td>1.1</td>
<td>1</td>
<td>1.2</td>
<td>1</td>
<td>1</td>
<td>1.2</td>
<td>1</td>
</tr>
</tbody>
</table>
Fig 5.2 I/O and CPU Cost Coefficients
Communication Cost Matrix
<table>
<thead>
<tr>
<th>Sites</th>
<th>S1</th>
<th>S2</th>
<th>S3</th>
<th>S4</th>
<th>S5</th>
<th>S6</th>
<th>S7</th>
<th>S8</th>
<th>S9</th>
<th>S10</th>
</tr>
</thead>
<tbody>
<tr>
<td>S1</td>
<td>0</td>
<td>10</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
</tr>
<tr>
<td>S2</td>
<td>10</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
</tr>
<tr>
<td>S3</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
<td>12</td>
<td>13</td>
</tr>
<tr>
<td>S4</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
<td>12</td>
</tr>
<tr>
<td>S5</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
<td>11</td>
</tr>
<tr>
<td>S6</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
</tr>
<tr>
<td>S7</td>
<td>12</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
<td>13</td>
</tr>
<tr>
<td>S8</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
<td>12</td>
</tr>
<tr>
<td>S9</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
<td>11</td>
</tr>
<tr>
<td>S10</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>14</td>
<td>13</td>
<td>12</td>
<td>11</td>
<td>0</td>
</tr>
</tbody>
</table>
Fig 5.3 Communication Cost Matrix
5.2 INPUT DATA FILE REPRESENTATION
5.2.1 JOIN OPERATION FILE
Consider join operation file for query in Case 3 described later.
Line 1: 8 10 12 15 17 19 Represents the Left Previous Operations.
Line 2: 9 11 13 16 14 18 Represents the Right Previous Operations.
Line 3: 15 16 17 19 18 20 Represents the Join Operations.
Line 4: 15 17 19 22 24 25 Represents the Left Previous Fragments.
Line 5: 16 18 20 23 21 26 Represents the Right Previous Fragments.
5.2.2 FILE FOR COST PARAMETERS, DATA ALLOCATION AND INTERMEDIATE FRAGMENT SIZE
File for Case 3. (Example)
Line 1: 4 Represents the resultant site.
26 Represents the Fragments.
20 Represents the Operations.
7 Represents the Base Relations.
10 Represents the number of sites.
6 Represents the number of joins.
7,7 Represents the number of selections and projections respectively.
Line 2: 1 1.1 1.2 1 1.1 1 1.2 1 1.1 1 Represents I/O Cost.
Line 3: 1.1 1 1 1.1 1 1.2 1 1 1.2 1 Represents CPU Cost.
Line 4 – 13 Represents Communication Cost.
0 10 12 13 14 11 12 13 14 11
10 0 11 12 13 14 11 12 13 14
12 11 0 11 12 13 14 11 12 13
13 11 12 0 11 12 13 14 11 12
14 13 12 11 0 11 12 13 14 11
11 14 13 12 11 0 11 12 13 14
12 11 14 13 12 11 0 11 12 13
13 12 11 14 13 12 11 0 11 12
14 13 12 11 14 13 12 11 0 11
11 14 13 12 11 14 13 12 11 10
Line 14 – 20 Represents the Data Allocation over several sites.
1 1 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 1 1 1
1 1 0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0 0 0
0 0 0 0 0 1 1 0 0 0
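Read back, the layout above can be parsed in a few lines. The sketch below reflects our reading of the format (whitespace-separated values, with the selection/projection counts comma-separated on the first line) and may differ from the authors' actual files.

```python
import numpy as np

def load_case(path):
    """Parse a statistics file laid out as in Section 5.2.2 (our reading of the format)."""
    with open(path) as fh:
        lines = [ln.split() for ln in fh if ln.strip()]

    header = lines[0]                       # resultant site, #fragments, #operations,
    resultant_site = int(header[0])         # #relations, #sites, #joins, "#sel,#proj"
    n_fragments, n_operations = int(header[1]), int(header[2])
    n_relations, n_sites, n_joins = int(header[3]), int(header[4]), int(header[5])
    n_selections, n_projections = (int(x) for x in header[6].split(","))

    io_cost  = np.array(lines[1], dtype=float)                      # line 2: I/O coefficients
    cpu_cost = np.array(lines[2], dtype=float)                      # line 3: CPU coefficients
    comm     = np.array(lines[3:3 + n_sites], dtype=float)          # lines 4-13: communication matrix
    alloc    = np.array(lines[3 + n_sites:3 + n_sites + n_relations], dtype=int)  # lines 14-20

    return dict(resultant_site=resultant_site, n_fragments=n_fragments,
                n_operations=n_operations, n_joins=n_joins,
                n_selections=n_selections, n_projections=n_projections,
                io_cost=io_cost, cpu_cost=cpu_cost, comm=comm, alloc=alloc)
```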
5.3 Experimental Queries
Case 1: In this case geographically dispersed environment is considered where five relations are distributed over 5 sites. The distributed query given below performs 5 selections, 5 projections and 4 join operations.
\[(\pi_{p1}(\sigma_{S1})R_1) \Join (\pi_{p2}(\sigma_{S2})R_2) \Join (\pi_{p3}(\sigma_{S3})R_3) \Join (\pi_{p4}(\sigma_{S4})R_4) \Join (\pi_{p5}(\sigma_{S5})R_5).\]
Case 2: In this case geographically dispersed environment is considered where six relations are distributed over 10 sites. The distributed query given below performs 6 selections, 6 projections and 5 join operations.
\[(\pi_{P1}(\sigma_{S1})R_1) \Join (\pi_{P2}(\sigma_{S2})R_2) \Join (\pi_{P3}(\sigma_{S3})R_3) \Join (\pi_{P4}(\sigma_{S4})R_4) \Join (\pi_{P5}(\sigma_{S5})R_5) \Join (\pi_{P6}(\sigma_{S6})R_6).\]

Number of Intermediate Fragments: 22
Number of Operations: 17
Case 3: In this case geographically dispersed environment is considered where seven relations are distributed over 10 sites. The distributed query given below performs 7 selections, 7 projections and 6 join operations.
\[(\pi_{p_1}(\sigma_{S_1})R_1) \Join (\pi_{p_2}(\sigma_{S_2})R_2) \Join (\pi_{p_3}(\sigma_{S_3})R_3) \Join (\pi_{p_4}(\sigma_{S_4})R_4) \Join (\pi_{p_5}(\sigma_{S_5})R_5) \Join (\pi_{p_6}(\sigma_{S_6})R_6) \Join (\pi_{p_7}(\sigma_{S_7})R_7)\]

Number of Intermediate Fragments: 26
Number of Operations: 20
5.4 GRAPHICAL RESULT REPRESENTATION
5.4.1 SOS_DDQ Analysis of Different Queries.
5.4.2 SOS_DDQ performance over different organisms and generations.
5.4.3 SOS_DDQ Time performance against other Optimization
5.4.4 SOS DDQ Communication Cost against other Optimization.
VI. CONCLUSION AND FUTURE SCOPE
This study compared the performance of the Symbiotic Organisms Search (SOS) algorithm against a genetic algorithm and an exhaustive approach for distributed queries. Various parameters of the distributed environment were evaluated, and the results favour SOS over the other optimization techniques. The SOS_DDQ (Symbiotic Organisms Search for Distributed Database Queries) simulation was performed in MATLAB.

SOS had previously been validated on many benchmark functions and engineering design problems, so we deployed it in a distributed environment by adapting its objective function to query optimization. The analysis reveals the positive and negative aspects of the different algorithms. SOS performed consistently better than the other algorithms across the parameters compared. The exhaustive approach provides optimal results for modest queries, but it cannot be applied to large sets of distributed relations, since it may take hours or even days to provide a reasonable solution. The genetic approach provides results in less time, but it does not always find the optimal query execution plan.

The Symbiotic Organisms Search algorithm surpasses the others for distributed optimization. The analysis was carried out over 100 generations (SOS_DDQ) with ecosystem sizes ranging from 10 to 600. Each organism has an associated fitness value representing the total cost involved in answering the distributed query. Communication cost is the major cost in a distributed environment, so the communication cost is plotted against the other approaches in the results section. The results obtained with SOS_DDQ demonstrate that the algorithm achieves better results with fewer fitness evaluations than the algorithms tested in previous research. Concurrency control and security concepts can be incorporated in future work, and heuristics can be applied in the interaction phases of the algorithm to further optimize complex update transactions.
REFERENCES
A survey of load sharing in networks of workstations
To cite this article: G Bernard et al 1993 Distrib. Syst. Engng. 1 75
Guy Bernard†, Dominique Steve‡ and Michel Simatic§
† Institut National des Télécommunications, 9 rue Charles Fourier, 91011 Evry Cedex, France
‡ O2 Technology, 7 rue du Parc de Clagny, 78035 Versailles Cedex, France
§ Alcatel Alsthom Recherche, Route de Nozay, 91460 Marcoussis Cedex, France
Received 8 December 1992
Abstract. This paper is a survey of existing policies and mechanisms for load sharing in loosely-coupled distributed computing systems, where user machines are personal workstations interconnected by a local area network. We are interested only in centralized operating systems providing mechanisms for remote process communication, thus we do not study distributed operating systems in which load balancing and process migration may be provided by a network-wide virtual memory mechanism. We define load sharing, load balancing, non-preemptive migration and pre-emptive migration, and we discuss the goals of load sharing and load balancing strategies related to process scheduling. We argue against the usefulness of load balancing strategies in the context of networks of workstations. A load sharing algorithm is composed of three parts, namely, a location policy, an information policy and a transfer policy. We review the different location policies, information policies and transfer policies that have been proposed in the literature. We discuss their ability to take personal use of workstations into account, and we compare them with respect to the performance that can be really obtained in the context of networks of workstations. We show that only some policies are efficient in such a context. Thereafter, we present the mechanisms proposed for supporting the policies, and discuss them with respect to network interfaces, file system design, machine heterogeneity and program interactivity. The question of pre-emptive versus non-pre-emptive migration is addressed, and we argue that pre-emptive migration does not provide substantial benefits in a network of workstations. Most of the implemented load sharing systems described in the literature are presented, and finally some perspectives in this research area are described.
1. Introduction
A loosely coupled distributed system is made of a set of machines linked by a communication network. There is no physical memory common to the processors. Rather, interprocess communications are done by message exchanges over the network. Such an architecture provides mechanisms for resource sharing (processors, disks, printers) between applications running on different machines. These mechanisms are well understood today. The most important of them, namely the Remote Procedure Call, was formalized and implemented as early as 1981 [56, 17] (a recent survey on research work in the area of Remote Procedure Call, and a detailed comparison of eight existing mechanisms, may be found in [77]). However, the Remote Procedure Call mechanism by itself does not define the way it is used to achieve resource sharing. The main problem that the distributed system designers are faced with is that of performance. Three main classes of policies, built upon the possibility of having access to remote resources from any machine of the network, may be applied in order to enhance the global system performance:
- A file location policy determines the sites on which the files should be placed, and possibly the number of copies that should exist, in order to optimize some criteria. (A comparison of existing file location policies may be found in [33].)
- A task assignment policy aims at solving the problem of assigning tasks to processors when a job to be run consists of a set of communicating tasks. The goal is to transform the logical parallelism of the tasks into a real parallelism over a network of several processors. Most of the time, some a priori knowledge of the program behaviour is needed, and assignment algorithms are complex [26]. Thus, task assignment policies are generally static, i.e. not able to take into account rapid changes in the system state.

† 'Building a 16-node distributed system that has a total computing power about equal to a single-node system is surprisingly easy.' [76]
- A program assignment policy considers application programs as atomic entities, and raises the question of the choice of the machine on which each program should run. The goal here is simply to execute programs started by the users anywhere in the network.
Emphasis is put on dynamicity. Most of these policies make no assumption about the behaviour of the users, nor require a priori knowledge of the state of the system. This approach has three consequences: (i) the lack of information about programs (real execution time, in particular) prevents optimal algorithms from being obtained; (ii) program assignment heuristics are simple enough to be enforced in real time, thus providing a good adaptability to changes in the system state; (iii) migrating a program after its execution was started is possible, because programs are independent and the decision of a new assignment can be made quickly.
Since the three classes of policies listed above have the same broad goal (improving performance in a distributed system by resource sharing), they are not mutually exclusive. Algorithms involving process migration, file migration and file replication [34,35,36], and algorithms considering assignment of independent programs, each of them being composed of several communicating modules [25] have been considered.
Research work in the area of program assignment in loosely-coupled distributed systems started about ten years ago, both with implementations [70,61,65,40] and with theoretical papers [16]. Since then, research activity in the field has always been intensive. The main reason is the generalization of loosely-coupled distributed systems as computer environments, which has been a driving force for designing resource sharing algorithms in general, and program assignment policies in particular.
Compared to networks of mainframes or networks of multiuser minicomputers, networks of workstations present several characteristics (we will come back to them throughout this paper).
(i) The total computing power is most of the time underutilized.
(ii) Users sometimes need peaks of computing power, for which the computing power provided by a single workstation is not sufficient.
(iii) Workstations are frequently diskless, so that system binary files and user data files are stored on a server machine. Workstations have remote access to them.
(iv) Most of local area networks provide a message broadcast capability, which seems an attractive tool for supporting a program assignment policy.
(v) In most computing environments, each workstation is dedicated to an 'owner', i.e. the customary user of the workstation.
With respect to a program assignment policy, these characteristics have the following implications. The second point makes a program assignment policy desirable. The first point makes it possible. The third point makes it cheap, since assigning a program on a remote workstation does not involve an extra overhead for file migration. The fourth point may be used to design simple algorithms, but we will show that broadcast may be expensive. The fifth point raises some problems, since 'owners' do not easily accept suffering large response times on 'their' workstation, under the pretext that another user needs a peak of computing power.
The purpose of this paper is to make a synthesis of research works in the area of program assignment policies in networks of workstations. We restrict the scope of this survey to network operating systems [76], i.e. computer environments where each machine runs its own standard, centralized operating system, augmented with some communication facilities for interprocess communications over a network (a typical example is Unix 4.3bsd). The study of program assignment policies in distributed operating systems, where each processor runs a part of the same network-wide operating system (in which program assignment is only an instance of a more general object migration facility), is out of the scope of this paper.
Other surveys were published a few years ago. The most important are those of Wang and Morris [79], and Zhou and Ferrari [81,82]. However, Wang and Morris considered only theoretical models of algorithms, and the outstanding work of Zhou and Ferrari did not pretend to cover the whole spectrum of program assignment policies. Furthermore, other policies and many implementations have been recently described in the literature, most of them in the context of networks of workstations. Thus, a synthesis as complete as possible is not useless.
The main design choices involved by program assignment policies may be summarized by a set of questions: 'why?', 'how?', 'when?', 'where?' and 'which one?'. The paper is organized in order to give answers to these questions. The goals of program assignment policies are addressed in section 2. Section 3 deals with a classification and a comparison of policies. The underlying mechanisms are described and discussed in section 4. Existing implementations are presented in section 5, and finally section 6 derives the perspectives of evolution in the field.
2. Goals of program assignment policies
In this section we define the terms we use throughout the paper and we list the various objectives of program assignment policies.
2.1. Terminology
There is some anarchy in the literature about the meaning of the words used by the various authors. We define here the terms that we will employ in the following.
In a loosely-coupled distributed system, programs are invoked by users from some terminal, or by the system itself (e.g., periodic electronic mail handling). A program assignment facility may decide to run the program on a machine that is different from the one the program was invoked on. Non-pre-emptive process migration consists in starting the execution of the program on a remote machine, and running the program there until it ends†. Pre-emptive process migration permits the execution of a program to be suspended on the current machine, and resumed on another machine‡. In any case, we use intentionally the term ‘process’ instead of ‘program’, to emphasize the dynamic character of the execution (both in time and space). However, it is a whole program that is run by the operating system of the target machine, even though this program may consist in several communicating tasks, each of them being run possibly as a separate process, a thread, or a lightweight process.
2.2. Objectives
The benefits that may be expected from process assignment strategies are the following [43]:
(i) Load sharing: if there are in the network some machines with small load (or even completely inactive), they can be used to relieve more loaded machines of some processes.
(ii) Network communication savings: by putting on the same machine entities that exchange much data, the network load may be lowered (e.g., a process making a lot of disk I/O operations may run on the machine that manages the disk).
(iii) Availability: spreading programs on several machines gives a better robustness to machine failures.
(iv) Reconfiguration: the possibility of despatching the programs amongst several machines may be used for system reconfiguration when a machine crashes, when recovery occurs, or before a scheduled halt of a machine.
(v) Remote access to a resource not locally available: for instance, a program that requires a floating-point coprocessor may be invoked from a machine without coprocessor.
The last four objectives are easy to understand. However, the first one requires some discussion. Just as several strategies may be designed to allocate the processor to the processes in a single-processor machine, several strategies may be designed to allocate machines to programs in a distributed system. This way, a global scheduling algorithm is superimposed on local scheduling algorithms. A parallel may be drawn between the objectives of program assignment strategies in distributed systems, and that of processor allocation algorithms in a centralized operating system [44].
In a centralized operating system, the minimum property expected from a scheduling algorithm is that the processor should not be idle while some processes are waiting for it. In a distributed system, an analogous minimum property is expected: no machine should be idle while processes are waiting for the processor on another machine in the network. Eager, Lazowska and Zahorjan [21] make the distinction between two broad classes of program assignment strategies. The only goal of load sharing algorithms is to provide this minimum property, while load balancing algorithms aim at equilibrating the process load amongst the machines of the network.
In networks of workstations, load balancing is not only unnecessary but undesirable, for two reasons. First, most workstations are under-utilized, thus balancing the load would result in process migrations between lightly loaded machines, and migration overhead would prevail over the small gain in execution time [78]. Second, since workstations are often dedicated to an owner, the user of a lightly loaded workstation would not be happy to suffer from long response times because another user makes intensive computations. However, as long as an owner is not affected significantly, their workstation may receive some load from outside [3]. For these two reasons, load balancing is not used in networks of workstations. The goal of process assignment in such systems is load sharing.
3. Process assignment strategies
A process assignment strategy is composed of an algorithm and of input parameters for this algorithm (a priori information, or observations). The algorithm is executed in order to take a decision: given the current values of input parameters, should a process be migrated to another machine and, if the answer is yes, to which one? More precisely, a process assignment algorithm is built with three components [82]. The information policy specifies the nature and the amount of information used for decision making, and the way this information is distributed. The transfer policy determines the eligibility of a process for remote assignment. The location policy selects a suitable machine to which an eligible process should be assigned. There are strong interactions between the three components, so it is difficult to study each of them separately. In this paper, we will consider information policies and location policies together.
Most process assignment strategies rely on some load index observed locally on each machine, and used as input parameter for the transfer policy and the location policy. In this section, we first classify and evaluate information and location policies, disregarding the nature of the load index involved (if any), then we discuss the transfer policies and the usual load indices, and finally we compare pre-emptive process migration and non-pre-emptive process migration.
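One way to picture this decomposition is as three pluggable functions around a single scheduling hook. The sketch below is schematic and does not correspond to any particular system's API; all names are illustrative.

```python
import random
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LoadSharingAlgorithm:
    """Schematic decomposition into the three policies discussed in the text."""
    information_policy: Callable[[], dict]          # what state is kept, and how it is gathered
    transfer_policy: Callable[[int, str], bool]     # is this process eligible for remote execution?
    location_policy: Callable[[dict, str], Optional[str]]  # which machine should receive it?

    def on_process_creation(self, local_load: int, process: str) -> str:
        if not self.transfer_policy(local_load, process):
            return "local"                          # run where the process was invoked
        target = self.location_policy(self.information_policy(), process)
        return target if target is not None else "local"

# Example: a threshold transfer policy combined with a blind (RANDOM) location policy.
machines = [f"ws{i:02d}" for i in range(16)]
algo = LoadSharingAlgorithm(
    information_policy=lambda: {},                  # blind location keeps no remote state
    transfer_policy=lambda load, proc: load > 2,    # eligible only if the local run queue exceeds 2
    location_policy=lambda state, proc: random.choice(machines),
)
print(algo.on_process_creation(local_load=3, process="cc -O2 big.c"))
```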
3.1. Information and location policies
As mentioned before, process assignment strategies have a broad goal of load sharing or load balancing. However, in order to reach this goal, it is necessary to set more specific objectives for a process assignment algorithm [78]:
- **quality**: at least, the algorithm should be able to find an idle machine on the network, if there are some;
- **efficiency**: the algorithm should not impose an unacceptable overhead on the system, nor disrupt the machines that do not participate in the process assignment strategy;
- **extensibility**: the algorithm should be able to cope with a large number of machines (workstation-based configurations are currently made of several hundreds of machines);
- **robustness**: the process assignment facility should be interrupted as briefly as possible by the failure of one or more machines.
3.2. A taxonomy of algorithms
According to the taxonomy proposed in [15], the first distinction is between static and dynamic algorithms. In **static algorithms**, information about the total mix of processes in the system is assumed to be known by the time the executable image of a program is linked, and this information is used to assign a processor to the program: each time the program is started, the corresponding process is run on that processor. In **dynamic algorithms**, no (or little) **a priori** information is required about resource demands of processes, and no assumption is made about what the system state will be at program execution time. When local conditions make a process migration desirable, the location policy selects a suitable machine for receiving the process.
The family of dynamic algorithms may be further refined (see figure 1). Algorithms with **blind location** are those where the choice of an execution site is made without any information about the current conditions of the remote machines. Conversely, for **conditional location**, the choice of a receiver machine is based upon a **global knowledge** or a **partial knowledge** of the system state, according to whether the decision is made with information about all the machines in the network, or about a subset. The global knowledge may be maintained on a single machine (**centralized information**) or on all the machines of the network (**distributed information**). Finally, the migration decision may be taken by a single machine (**centralized decision**) or else may be taken by each machine (**distributed decision**).
Like any taxonomy, this one is not perfect, and it is difficult to find the right place for a few algorithms. However, we consider that it is better than the one proposed in [15], because it can take into consideration recent algorithms that were not published by the time [15] was written.
3.2.1. Static algorithms. The first research works in the area of process assignment dealt with static algorithms. In [51], process execution times are supposed to be deterministic. Probabilistic execution times are introduced in [16]. In [57], process inter-arrival times and service times are exponentially distributed. In these papers, the objective is load balancing, and optimal solutions (minimizing the average response time) are given. In [12], the objective is to balance the idle period durations amongst the machines.
Static algorithms have two drawbacks. First, their execution cost is high, hence they cannot be used to react to fast changes in the system. Second, when the variability of execution times is taken into account, this is done with exponential assumptions (in order to be able to obtain exact results), when in fact observations on real systems (see for instance [46] or [82]) invalidate these assumptions.
Static algorithms may be worthwhile for computing systems that execute periodically a set of programs with well known behaviour (e.g., real-time systems). This is clearly not the case for networks of workstations. Thus, the remainder of this section will be devoted to dynamic algorithms.
Before describing dynamic algorithms, we first set the values of the parameters that we will use for their comparison.
3.2.2. Parameters used for algorithm comparison. In [78], Theimer and Lantz compare a few algorithms on a quantitative basis, with the following parameter values. In order to be efficient, an algorithm should select a receiving machine for a process in less than 100 ms, consume less than 1% of CPU cycles on any machine, and consume less than 1% of network bandwidth. Furthermore, Theimer and Lantz assume that program generation leads to running the process assignment algorithm once per second on average on every machine, and that, for algorithms involving periodic information emission, the interval between two emissions is 10 s.
† These values were observed on a system composed of 70 Sun-2 and Sun-3 machines linked by an Ethernet at Stanford University.
In this section we extend the work of Theimer and Lantz, by setting some additional parameter values and by comparing a larger number of algorithms. Here we make the additional assumptions:
- The time required for processing a request (message reception, request processing, response emission) is 5 ms. This is a minimal hypothesis: Theimer and Lantz observed a 4 ms delay for an empty request on Sun-3s, which is confirmed by our own measurements [10], and 5 ms or 20 ms for two algorithms that they implemented.
- Two values are selected for the average percentage of idle machines on the network: 33% and 90%. These values appear as a lower and an upper bound for figures reported by several authors (37% in [82], 80% during the busiest times in [73], 33% at the busiest times and 80% most of the time in [78], 90% in [55]). This way, it will be possible to compare the behaviour of algorithms in rather different global load conditions, and to test the robustness of algorithms with respect to load variation during a typical day.
In [78], Theimer and Lantz stress the cost of broadcast and multicast on a local area network. When available, this feature looks very attractive, since it makes it possible to send information or a request to a set of machines with the same sending cost and the same bandwidth consumption as for point-to-point communication. However, broadcast and multicast have a major drawback: when the received message asks for an answer, all the recipients compute and send their answer at nearly the same time, so that a lot of answer messages arrive simultaneously, and buffer overflow may occur. Theimer and Lantz observed a loss rate above 50% as soon as a few tens of answers are generated.
In [68], Simatic evaluated ten dynamic algorithms according to the parameters described above (these algorithms are described in the following subsections). His results are summarized in table 1, which should be read as follows. ‘Quality’ is the probability of finding an idle machine on the network if such a machine exists, when the average percentage of idle machines is 33% and 90%. The column ‘Efficiency’ indicates whether broadcasts are necessary in the absence of failures, and the average number of messages needed to select a remote machine and transfer a process execution to it. ‘Extensibility’ is the maximum number of machines that an algorithm can take into account while remaining within the limits of the constraints described above. The column ‘Robustness’ gathers the answers to three questions:
- column ‘HM’—can the location policy select a halted machine (i.e., a machine that is not up, or that does not participate in the process assignment facility)? If the answer is ‘yes’, then the transfer phase will fail, and another selection will be necessary.
- column ‘WF’—in the worst case of failure, is the process assignment facility fully available (OK), downgraded (DG), or non-available (NA)?
- column ‘Insertion’—when a machine joins the process assignment facility (e.g., at boot time), is it possible to use this facility immediately, or else is there a learning phase?
Now we review the different classes of dynamic algorithms.
3.2.3. Blind location. No state information about the machines in the network is used.
In RANDOM [81], the selection of a receiving machine is made randomly. This algorithm is efficient, extensible and robust. However, its quality is low, and it may select a halted machine. A variation is suggested in [21]: a loaded receiving machine may forward the incoming process to another machine, and a maximum number of hops prevents instability.
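As an illustration, the following is a minimal sketch of RANDOM with the hop-limited forwarding variant of [21]; the function and parameter names (including the is_overloaded predicate) are ours, not taken from the cited papers.

    import random

    def random_location(machines, local, is_overloaded, max_hops=3):
        """Blind RANDOM selection: pick any machine but the local one; a loaded
        receiver may forward the process again, up to max_hops, which bounds
        instability.  No state information is consulted, so the final target
        may still be loaded or even halted."""
        target = random.choice([m for m in machines if m != local])
        for _ in range(max_hops - 1):
            if not is_overloaded(target):
                break
            target = random.choice([m for m in machines if m != target])
        return target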
While being very simple, RANDOM provides substantial performance improvement with respect to no process assignment policy, at least with low or moderate global system load. This result is pointed out by probabilistic models [79], probabilistic simulation [21], measurements [73], and trace-driven simulation [82].
The outstanding work described in the last paper is a detailed comparison of seven algorithms. We will refer to it several times in the following.
In CYCLIC [79], processes are assigned to remote machines in a cyclic way. The only information to store is thus the identification of the last machine that a process was sent to. Simulation showed a small improvement with respect to RANDOM.
3.2.4. Partial knowledge. In this class, algorithms use some information about a subset of the machines in the network. This knowledge may be obtained either implicitly (by memorizing the result of a process transfer request), or explicitly (by message exchanges).
LEARNING [72] is a variation of RANDOM. According to the result of a transfer request towards some machine, the probability to select that machine for the next transfer is increased or decreased. LEARNING does not provide better performance than RANDOM when machine loads vary frequently, and unfortunately this is the case for networks of workstations.
In PROBABILISTIC [6,7], every machine maintains a load vector holding the loads of a subset of machines. Periodically, the first half of the load vector, including the local machine's load, is sent to a randomly selected machine, which updates its own load vector accordingly. This way, information may be spread in the network without broadcast messages. However, the quality of this algorithm is not perfect, its extensibility is low, and insertion is deferred.
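A minimal sketch of this periodic load-vector exchange follows; the send primitive and the data layout are illustrative assumptions, not the encoding of [6,7].

    import random

    def publish_half_vector(load_vector, local_id, local_load, peers, send):
        """Periodically called on every machine: refresh the local entry, then
        send the first half of the load vector (which includes the local load)
        to one randomly chosen peer."""
        load_vector[local_id] = local_load
        entries = [(local_id, local_load)] + \
                  [kv for kv in load_vector.items() if kv[0] != local_id]
        half = dict(entries[:max(1, len(entries) // 2)])
        send(random.choice(peers), half)

    def on_vector_received(load_vector, received):
        """On reception, merge the received entries into the local load vector;
        no broadcast is ever needed to spread the information."""
        load_vector.update(received)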
A number of papers deal with THRESHOLD and LEAST [21, 38, 82, 54]. Both use partial knowledge obtained by message exchanges. In THRESHOLD, when a process is to be transferred, a randomly selected machine is asked for its load. If the load is less than some threshold \( T \), the process transfer occurs. If not, polling is repeated with another machine. If no suitable receiver has been found after \( Maxpoll \) attempts, the process is executed locally. LEAST is a variation of THRESHOLD in which \( Maxpoll \) machines are systematically probed, and the least loaded machine is selected for receiving the process. THRESHOLD and LEAST show good performance results with respect to their simplicity (see table 1). Furthermore, the load values used by these algorithms are up-to-date, hence a bad location decision (i.e., one based on obsolete load information) is unlikely. The influence of the values of \( T \) and \( Maxpoll \) is discussed in [21], [82] and [54]. An adaptation of LEAST for real-time systems is proposed in [64].
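The following sketch contrasts the two polling disciplines; the defaults for \( T \) and \( Maxpoll \) are arbitrary, and load_of stands in for the load-probe message.

    import random

    def threshold_select(machines, local, load_of, T=2, max_poll=5):
        """THRESHOLD: poll random machines one by one and transfer to the first
        whose load is below T; after max_poll failed probes, run locally."""
        others = [m for m in machines if m != local]
        for m in random.sample(others, min(max_poll, len(others))):
            if load_of(m) < T:
                return m
        return local

    def least_select(machines, local, load_of, max_poll=5):
        """LEAST: always poll max_poll machines and pick the least loaded one."""
        others = [m for m in machines if m != local]
        polled = random.sample(others, min(max_poll, len(others)))
        return min(polled, key=load_of) if polled else local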
There are several variants of THRESHOLD and LEAST:
- RECEPTION [50, 20] is equivalent to THRESHOLD, but driven by available machines rather than overloaded ones. When the load of a machine falls under a threshold, this machine tries to find an overloaded machine by random polling. It is shown in [20] that performance is not as good as that of THRESHOLD if the cost of transferring a process that has started its execution is larger than the cost of starting a new process, which is the case in most systems.
- [27] proposes an algorithm based on a microeconomic approach. The main drawback is that the execution duration of processes has to be known a priori, which is generally not the case for interactive use of workstations.
- RESERVATION [20, 82, 54] is a variant of RECEPTION applied to non-pre-emptive migration rather than pre-emptive migration. In this algorithm, an underloaded machine gets a reservation for the next process to be started from an overloaded machine. The performance of RESERVATION is not good, because reservations are made on the basis of information that will be obsolete by the time it is honoured.
To summarize, THRESHOLD and LEAST provide good results when the system load is homogeneous across machines. This is easily explained. If the system load is homogeneous, a small subset of machines constitutes a representative sample: if no available machine can be found after \( Maxpoll \) trials, the system is globally loaded, so it is not worthwhile to continue searching for an idle machine. However, this is not always the case in networks of workstations, because some workstations may be overloaded while others are completely idle at the same time (e.g., because a workstation's owner is currently away). Under heterogeneous load patterns, partial knowledge about the global system state is not accurate enough, and algorithms based on global knowledge are better adapted.
3.2.5. Centralized information and centralized decision. In the class of algorithms using global knowledge of the system state, information may be concentrated on a single machine or distributed. The same holds for decision making, so four subclasses may theoretically be considered. In fact, the subclass where the selection of a suitable receiver would be replicated on every machine is of no practical interest. The first subclass (centralized information and centralized decision) is studied in this subsection.
In CENTRAL [37, 48, 59, 82, 13, 78], when an overloaded machine wishes to transfer a process, it asks a server for an underloaded machine, if there is one. The server machine is informed of the availability of any machine in the system by means of messages sent to it by every machine in the system. CENTRAL provides very good performance results (see table 1).
Once again, several variations have been suggested:
- Every machine periodically sends its load to the server [37, 13].
- Every machine sends its load to the server only when the load has changed by a significant amount [78].
- In Butler [59, 60] and Sprite [19], load information is not maintained in memory by a server process. Instead, it is read/written in a single shared file managed by the network file system.
† The figures for Efficiency and Extensibility were obtained with 33% of idle machines and \( Maxpoll = 5 \).
In Remote Unix [48], the server asks periodically for the load values (by a broadcast message).
Centralized solutions in a distributed system suffer from two potential drawbacks. First, the server may become a bottleneck. This is not the case for CENTRAL (see extensibility in table 1—remember that the figures are obtained assuming less than 1% CPU overhead on any machine, including the server). Second, a server crash makes the facility unavailable, and the time necessary for recovery may be large. This is the case with CENTRAL. In their implementation, Theimer and Lantz measured a delay of 18 s [78]. The following classes of algorithms introduce some degree of distribution in order to solve this problem.
3.2.6. Centralized information and distributed decision. In GLOBAL [29, 81, 39, 82], information gathering is centralized and information use (decision) is distributed. Periodically, the server broadcasts the load vector. This way, an overloaded machine simply picks the least loaded machine from its load vector without asking the server. This algorithm is more efficient and extensible than CENTRAL, because it involves a smaller number of messages. Furthermore, robustness is better, since during server recovery the process assignment facility is still available, albeit with stale load values.
However, the exact behaviour of algorithms cannot be predicted from the results given in table 1 only. In fact, GLOBAL does not perform better than LEAST [82]. The reason is that GLOBAL uses a larger amount of information, but this information is not up-to-date (a high frequency for gathering/broadcasting load values would result in an unacceptable overhead). On the other hand, LEAST uses information on a subset of machines only, but this information is up-to-date.
3.2.7. Distributed information and distributed decision. In OFFER [25, 52, 73, 82, 67, 23], every machine broadcasts periodically its load value, thus every machine can maintain a global load vector. Extensibility is very bad (see table 1). The poor results of OFFER are confirmed by Zhou [82]: the mean process response time is larger than with GLOBAL. A variation is proposed in [9], but it is not sufficient to overcome the cost of systematic broadcasts.
REQUEST [72, 73, 78, 30, 42] avoids periodic broadcasts. This algorithm is similar to LEAST, except that all the machines in the system are polled, and that polling is done with a single broadcast message. Extensibility is barely passable. Furthermore, buffer overflows may occur when many answers are received simultaneously. Thus, in the variation proposed in [78], only the machines with reasonable load reply to polling messages, and the reply is delayed by a small time increasing with the local load, so that the first replies received are probably the most interesting. With this variation, REQUEST and CENTRAL show comparable performance.
In RADIO [11], information and decision are distributed too, but no broadcasts occur in normal use. The idea is the following. Currently underloaded workstations are linked in a distributed list (the 'available list') in which each machine knows the identity of its successor and predecessor. Furthermore, every workstation in the network knows the identity of the head of the available list (the 'manager'). Process transfers are negotiated directly between an overloaded workstation and an underloaded one, or indirectly via the manager, which knows an available workstation (its successor in the available list). Broadcasts are necessary only when the manager crashes or when a workstation joins the process assignment facility (at boot time, for instance). The performance of RADIO is intermediate between that of CENTRAL and that of REQUEST (see table 1). The measured recovery time after a crash of the manager is 800 ms, to be compared with 18 s for CENTRAL.
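A highly simplified, single-process sketch of the available list follows; message passing, failure detection and manager re-election are all omitted, and the names are illustrative rather than taken from [11].

    class RadioNode:
        """One workstation in RADIO's distributed 'available list'."""

        def __init__(self, ident):
            self.ident = ident
            self.available = False
            self.succ = None       # next underloaded workstation in the list
            self.manager = None    # head of the available list, known to all

        def become_available(self, manager):
            """An underloaded workstation inserts itself just after the manager."""
            self.available = True
            self.manager = manager
            self.succ, manager.succ = manager.succ, self

        def find_receiver(self):
            """An overloaded workstation negotiates via the manager, which hands
            out an available workstation (its successor, or itself)."""
            head = self.manager
            if head is None:
                return None
            if head.succ is not None:
                return head.succ
            return head if head.available else None

    # tiny usage example
    m = RadioNode("manager"); m.available = True; m.manager = m
    w1 = RadioNode("w1"); w1.manager = m; w1.become_available(m)
    w2 = RadioNode("w2"); w2.manager = m
    print(w2.find_receiver().ident)   # w1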
3.2.8. Synthesis. Whereas all the information and location policies described above give significant improvement in process response time with respect to local scheduling alone, some of them are more attractive for networks of workstations. Algorithms unable to find an available workstation in the network (quality less than 1) should be discarded. OFFER is not extensible enough for an average size network. GLOBAL suffers from out-of-date information, so that attempted transfers may be rejected. Theimer and Lantz [78] conclude that for large size networks, CENTRAL is the best choice, whereas REQUEST is adequate for small size networks. This is our conclusion, too. Let us add however that RADIO may be a good alternative for medium size networks.
3.3. Transfer policies
The location policy and the information policy together define the algorithm used to find a suitable workstation for receiving a process, given that an overloaded workstation tries to get rid of a process. The transfer policy determines when a workstation should be declared 'overloaded', and whether a process migration is desirable. Two pitfalls must be avoided by transfer policies. The first one is 'Role Reversal' [38]: A is more loaded than B, and thus migrates a process to B, and the effect is that B becomes more loaded than A. The second one ('Migrate for Nothing') is related to the process execution time, which is a priori unknown: if the process transfer time is larger than the gain in execution time, the net result is negative.
3.3.1. System state. Most transfer policies for load sharing are based on local thresholds. When the local load is above some value \( T \), the workstation is said to be 'overloaded'. In order to avoid the 'Role Reversal' phenomenon, several mechanisms have been proposed. The simplest is to require that the loads of the sending machine and of the receiving machine differ by at least some 'bias' [71, 82]. Other mechanisms are defined in [63, 71, 66, 53]. Zhou and Ferrari [81, 82] studied the influence of the value of \( T \) on the performance of the RANDOM and GLOBAL algorithms. An adaptive method for setting \( T \) is proposed in [62].
\( \dagger \) Residual time, for pre-emptive migration.
Transfer policies using a double threshold [38, 3] are an extension of the "bias" mechanism. At any time, the load may be in one of three intervals:
(i) \( \text{load} < \text{LOW} \): the machine is 'underloaded'. It may receive foreign processes.
(ii) \( \text{LOW} \leq \text{load} < \text{HIGH} \): the machine is 'normally loaded'. It will not accept new foreign processes.
(iii) \( \text{HIGH} \leq \text{load} \): the machine is 'overloaded'. It will try to send one or more processes to an underloaded machine.
Note that in this scheme the transfer policy is concerned with \( \text{HIGH} \) only, \( \text{LOW} \) being used by the location policy. This highlights the interplay between location policies, information policies and transfer policies. The double-threshold scheme is very flexible (the values of \( \text{LOW} \) and \( \text{HIGH} \) need not be the same on all machines). Furthermore, it is well adapted to networks of workstations, because it can reconcile load sharing with personal use of workstations. For instance, if a user is only editing a file, his/her workstation may receive a compilation process without bothering the user: if the load index reflects CPU utilization, the workstation will be 'underloaded' in this case. The value of \( \text{HIGH} \) is generally set statically and rather empirically in existing systems. An exception is the work of Alonso and Cova [3], who measured the average process response time on a network of four Sun-2 workstations under artificially generated load with different threshold values; clearly more work is needed in this area.
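A minimal sketch of the double-threshold test follows; the bias check echoes the 'bias' mechanism of [71, 82], and the numeric defaults are arbitrary.

    def load_class(load, low, high):
        """Classify the local load for the double-threshold scheme."""
        if load < low:
            return "underloaded"   # may receive foreign processes
        if load < high:
            return "normal"        # neither sends nor accepts foreign processes
        return "overloaded"        # tries to send one or more processes away

    def should_transfer(sender_load, receiver_load, high, bias=1):
        """Transfer only if the sender is overloaded and the receiver is lighter
        by at least `bias`, which prevents the receiver from immediately
        becoming more loaded than the sender."""
        return sender_load >= high and sender_load - receiver_load >= bias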
3.3.2. Process eligibility. Given that a workstation is 'overloaded', it will try to get rid of a process. However, this has to be done carefully, in order to avoid the 'Migrate for Nothing' pitfall. Data collected on Unix systems [14, 75] show that most processes consume less than 1 s of CPU time, so this is a real concern. Clearly, starting processes remotely as soon as a workstation is overloaded is not a good solution. Therefore, filtering techniques have been proposed in order to determine the eligibility of processes for migration (the first three concern non-pre-emptive migration):
(i) In manual filtering [37, 1, 59, 73, 4, 78, 30, 60, 11], users invoke some particular command to indicate that a process is a candidate for migration. This method is very simple, but it is not transparent to the user.
(ii) Type filtering [14, 39, 49, 52] is a variation of manual filtering. Eligible processes are put by the user in a batch queue. Migration is thus restricted to non-interactive processes.
(iii) Name filtering [40, 14, 29, 74, 82, 75] is transparent to the user. The system maintains a list of command names that correspond to processes presumed to have a long lifetime (such as compilation and text formatting), which are therefore eligible for migration. However, this scheme cannot take into account user-written programs (such as simulations), nor the large variability in execution times (a compilation may end prematurely because of errors). The influence of the filtering rate is studied in [75].
(iv) Age filtering [14, 45] has been proposed for pre-emptive migration. The starting observation is that a large proportion of long-lived processes are in fact very long-lived (for example, at least 44% of processes that have already lived 1.0 s in fact live at least 2.0 s). This can be used for filtering: only long-lived processes are eligible for (pre-emptive) migration; a sketch is given below.
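As referenced above, a sketch of the age filter; the 1.0 s cut-off simply echoes the example in the text and would be tuned in a real system.

    def eligible_for_preemptive_migration(cpu_age_s, min_age_s=1.0):
        """Age filtering: a process is worth migrating pre-emptively only once
        it has already accumulated min_age_s seconds of CPU time, since such
        processes are likely to keep running for at least as long again."""
        return cpu_age_s >= min_age_s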
3.4. Load indices
When thresholds are involved in location policies or transfer policies, their value is that of some load index. In this section we review the different load indices that have been proposed. The broad objectives of load indices are discussed in [29].
The first requirement for a load index is to reflect processor activity. Most load indices are based on averaged CPU queue length. Better results are obtained by adding IO queue length [28], as is done in the Berkeley Unix load index. Other indices have been proposed ('Normal Response Time' [24, 1], 'idle process' [73]).
However, for networks of personal workstations, ownership should be taken into account too. The first way is to consider that a workstation is unavailable for receiving foreign processes as soon as the owner is logged in [59, 39]. This is very restrictive, since owners often do not log out even when they are not using their machine. The second way is to monitor user activity (keyboard and mouse in [18]; keyboard, mouse and average CPU utilization in [49]). This is restrictive too, because when a user is only editing a file, his/her workstation will be declared busy. The third way, which is the most flexible, is to use a threshold transfer policy associated with a classical load index, such as that of Berkeley Unix [3, 11].
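A sketch of this third, threshold-based way of accounting for ownership; the index here is a simplified stand-in for the Berkeley Unix load average, not its actual computation.

    def load_index(cpu_queue_len, io_queue_len):
        """Simplified Berkeley-style index: runnable plus IO-blocked processes
        (the real index is a time-smoothed average)."""
        return cpu_queue_len + io_queue_len

    def may_receive_foreign_process(cpu_queue_len, io_queue_len, low=1):
        """The workstation offers itself as a receiver only while its load
        index is below LOW, whether or not the owner is logged in; an owner
        who is merely editing keeps the index low, so load sharing proceeds."""
        return load_index(cpu_queue_len, io_queue_len) < low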
3.5. Pre-emptive versus non-pre-emptive migration
In the previous sections, no assumption was made about when a process may be migrated from one workstation to another. With non-pre-emptive migration, a process may be migrated only when it is started. With pre-emptive migration, a process may be migrated at any time.
Pre-emptive migration is far more complex than non-pre-emptive migration. Whether it is worthwhile is questionable. Results reported in [22] (probabilistic model) and in [46] (trace-driven simulation) show that pre-emptive migration may offer limited benefits over non-pre-emptive migration. More precisely, benefits appear when three conditions hold: (i) heterogeneous load; (ii) high global load; (iii) files are mainly local [45].
In networks of workstations, condition (i) is fulfilled, but condition (ii) is not, and condition (iii) does not hold with diskless workstations.
However, a motivation for pre-emptive migration in a network of workstations is personal use. When an owner reclaims his/her machine, what should be done with the foreign processes possibly running there? Simple solutions, such as killing them [60] or lowering their priority [37], are not satisfactory. Clearly, pre-emptive migration could be the right solution. Some systems support it (see section 5).
In fact, experience with pre-emptive migration leads to mixed conclusions. [5] stresses implementation difficulties in Charlotte. Conclusions drawn from Sprite [19] indicate that pre-emptive migration can be used as a last resort to guarantee response time to the owner of a workstation, but is unlikely to be useful for load sharing.
4. Mechanisms
In this section we review and discuss the mechanisms which process assignment facilities rely on, and their relationships with process assignment strategies.
(i) Network interface: most local area networks support broadcasting. However, multicasting is not always available in hardware. If multicasting is not directly available, it can be emulated with broadcast and software filtering, but this entails extra overhead on all machines. Algorithms based on multicast are thus penalized.
(ii) File system structure: in most networks of workstations, the file system is distributed, and efficient mechanisms are available for remote file access. Non-pre-emptive process migration is especially attractive for diskless workstations, since running a process remotely rather than locally does not involve extra file access overhead: in both cases access to the necessary files is available via the network. On the other hand, when some executable files are replicated on some local disks, both information policy and location policy should take this information into account. This is still an open problem.
(iii) Taking heterogeneity into account: hardware heterogeneity may be easily taken into account. Resource requirements of programs and resource availability on machines (e.g., floating point coprocessor) may be integrated in information policy and location policy. The problem of different CPU types may be solved easily too: search paths for executable files may be set conditionally to CPU type. Furthermore, it is possible to weight load values by CPU speed in order to achieve some fairness between workstations.
However, operating system heterogeneity is far more difficult to cope with. In particular, pre-emptive migration across heterogeneous operating systems remains unsolved. For non-pre-emptive migration, a strategy involving a 'service server' is proposed in [80].
(iv) Program interactivity: there are standard solutions for keeping user interactivity with remotely executing programs (the remote shell facility in Berkeley Unix is an example). However, some existing systems restrict process migration to non-interactive programs.
(v) Pre-emptive migration: the mechanisms involved are very complex. It is necessary to detach the process to migrate from its initial environment, to transfer its state and its context, and to attach the process to a new environment on the receiving machine, all this in a reliable and efficient way. The information to gather includes the stack, registers, current directory and open file descriptors. In Unix, this information is scattered, which makes process migration difficult to implement [2].
Table 2. Summary of existing systems.

Name            Reference   Algorithm       Load index          Filtering            OS             PM    Inter
Process Server  [37]        CENTRAL         QCPU                by name              Cedar          yes   yes
NEST            [25]        OFFER           NRT                 manual               modified UNIX  yes   yes
MOS             [8]         PROBABILISTIC   ?                   by name              modified UNIX  no    yes
Butler          [59, 60]    CENTRAL         # users             manual               UNIX           no    yes
—               [39]        GLOBAL          # users             by type              UNIX           no    no
Condor          [49]        CENTRAL         QCPU + user         by type              UNIX           yes   no
REM             [67]        OFFER           BSD                 manual + threshold   modified UNIX  yes   ?
GATOS           [30]        REQUEST         BSD                 manual               UNIX           yes   yes
—               [13]        CENTRAL         QCPU                none                 modified UNIX  no    yes
GAMMON          [9]         OFFER           QCPU                none                 UNIX           no    ?
Siddle          [42]        REQUEST         BSD + free memory   manual + threshold   UNIX           no    yes
There are some implementations above the Unix kernel [49, 52], but performance seems poor. Implementations for Unix with modifications to the kernel provide better results [37, 8, 7, 18, 41]. However, they are far from those obtained in distributed operating systems designed from scratch. Furthermore, in most implementations, system calls related to inter-process communications are not supported. A survey of process migration mechanisms may be found in [69].
5. Implementations
A summary of existing systems is given in table 2. The list is in no way exhaustive; only systems described in enough detail are included.
In the column ‘Load index’, ‘QCPU’ represents the length of the CPU queue, ‘BSD’ is the load index used in Berkeley Unix and ‘NRT’ stands for Normal Response Time. Column ‘PM’ and column ‘Inter’ indicate whether pre-emptive migration and program interactivity are supported.
No performance results are given in table 2, because few references report any. Furthermore, when figures are provided, they cannot be compared (incompatible definitions of the measured times, different hardware configurations). Some other implementations are reported in [32].
6. Perspectives
One can expect that research work in the area of load sharing will go on in the following directions.
(i) Automatic parallelization: automatic program parallelization that can be derived from process assignment facilities is ‘large-grained’. When a program is made of several processes without communications between them, running each process on a different machine may provide a substantial speed up. A typical example is the make command in Unix: each of the several modules that will be linked into an executable program may be compiled in parallel, on as many different machines. A ‘parallel make’ command is available in Sprite [19], Gatos [31], and Isis [41].
(ii) Distributed virtual memory: with distributed virtual memory [47, 58], a process may have access to memory pages on remote machines. This feature could provide an elegant solution to pre-emptive migration. Instead of sending the whole memory image when a process is migrated, the receiver could fetch pages on demand.
(iii) Object-oriented distributed systems: in some object-oriented distributed systems, objects may move between machines, and may contain data as well as executable code. Such systems provide a new approach to load sharing, since it is possible to migrate units of code smaller than a process [43].
7. Conclusion
Load sharing in networks of workstations may provide substantial benefits in process response time, even with the simplest policies. Many algorithms have been proposed. The choice should be made according to environment specificities (network interface, file system structure, size of the network). There is no clear winner.
A filtering policy for selecting which processes are candidates for migration is desirable in environments where many processes are short lived. Manual filtering is the most flexible, although not transparent to the user.
Non-pre-emptive migration may be easily implemented. Pre-emptive migration is very complex, and does not provide decisive improvements. However, it may be appropriate for permitting workstation owners to reclaim their machines.
Current load indices are not able to take into account all the parameters that influence process response time. For instance, main memory sizes and local disks are not considered when selecting a receiving machine.
Finally, distributed virtual memory and object-oriented distributed systems open promising perspectives.
References
[12] Bonomi F and Kumar A 1988 Adaptive optimal load balancing in a heterogeneous multiserver system with a central job scheduler Proc. 8th Int. Conf. on Distributed Computing Systems (San Jose, CA) (Los Alamitos, CA: IEEE Computer Society)
Hac A and Jin X 1987 Dynamic load balancing in a distributed system using a decentralized algorithm Proc. 7th Int. Conf. on Distributed Computing Systems (Berlin) (Los Alamitos, CA: IEEE Computer Society)
Hac A 1989 A distributed algorithm for performance improvement through file replication, file migration, and process migration IEEE Trans. Software Eng. 15 1459–70
Harbas R S 1986 Dynamic process migration: to migrate or not to migrate Technical note CSI-42 University of Toronto
Hunter C 1988 Process cloning: a system for duplicating UNIX processes Proc. USENIX Winter '88 (Dallas, TX)
Ju L, Xu G and Tao J 1993 Parallel computing using idle workstations ACM Operating Systems Rev. 27 87–96
Krueger P and Livny M 1987 The diverse objectives of distributed scheduling policies Proc. 7th Int. Conf. on Distributed Computing Systems (Berlin) (Los Alamitos, CA: IEEE Computer Society)
Krueger P and Livny M 1988 A comparison of preemptive and non-preemptive load distributing Proc. 8th Int. Conf. on Distributed Computing Systems (San Jose, CA) (Los Alamitos, CA: IEEE Computer Society)
Li K 1986 Shared virtual memory in loosely coupled multiprocessors PhD thesis Yale University
Litzkow M 1987 Remote UNIX: turning idle workstations into cycle servers Proc. USENIX Summer '87 (Phoenix, AZ) (Berkeley, CA: USENIX)
Litzkow M J, Livny M and Mutka M W 1988 Condor—a hunter of idle workstations Proc. 8th Int. Conf. on Distributed Computing Systems (San Jose, CA) (Los Alamitos, CA: IEEE Computer Society)
Ma P R, Lee E and Tsuchiya M 1982 A task allocation heuristic for distributed computing systems Proc. 9th Int. Conf. on Distributed Computing Systems (Amsterdam: Elsevier)
[27] Ferguson D, Yemini Y and Nikolaou C 1988 Microeconomic algorithms for load balancing in distributed computer systems Proc. 8th Int. Conf. on Distributed Computing Systems (San Jose, CA) (Los Alamitos, CA: IEEE Computer Society)
[31] Foliot B 1989 Tools for implementation of parallel applications with automatic load balancing Report no. 308 MASI, Université Paris-VI
[32] Foliot B 1993 Méthodes et outils de partage de charge pour la conception et la mise en oeuvre d'applications dans les systèmes répartis hétérogènes Thèse de Doctorat, Université Paris VI
… Adaptive load sharing in heterogeneous systems Proc. 9th Int. Conf. on Distributed Computing Systems (Newport Beach, CA) (Los Alamitos, CA: IEEE Computer Society)
… processing capacity in a workstation–processor bank network Proc. 7th Int. Conf. on Distributed Computing Systems (Berlin) (Los Alamitos, CA: IEEE Computer Society)
… Carnegie-Mellon University (available as CMU Report CMU-CS-81-119)
[57] Ni L M and Hwang K 1985 Optimal load balancing in a multiple processor system with many job classes IEEE Trans. Software Eng. SE-11 491–6
[58] Ni L M and Wu C F 1989 Design tradeoffs for process scheduling in shared memory multiprocessor systems
[59] Nichols D A 1987 Using idle workstations in a shared computing environment Proc. 11th ACM Symp. Operating Systems Principles (Austin, TX) (New York: ACM)
[60] Nichols D A 1990 Multiprocessing in a Network of Workstations PhD thesis Carnegie-Mellon University (available as CMU Report CMU-CS-90-107)
… Rudisin G and Thiel G 1981 LOCUS: a network transparent, high reliability distributed system Proc. 8th ACM Symp. Operating Systems Principles (Pacific Grove, CA) (New York: ACM)
… gradient estimators in load balancing algorithms Proc. 8th Int. Conf. on Distributed Computing Systems (San Jose, CA) (Los Alamitos, CA: IEEE Computer Society)
… allocation policy using time thresholding Proc. Performance '83 (Minneapolis, MN) (Amsterdam: Elsevier)
[64] Ramamritham K, Stankovic J A and Zhao W 1989 Distributed scheduling of tasks with deadlines and resource requirements
… communication oriented network operating system kernel Proc. 8th ACM Symp. Operating Systems Principles (Pacific Grove, CA) (New York: ACM)
[66] Shivaratri N and Krueger P 1990 Two adaptive location policies for global scheduling algorithms Proc. 10th Int. Conf. on Distributed Computing Systems (Paris) (Los Alamitos, CA: IEEE Computer Society)
… A software facility for load sharing and parallel processing in workstation environments 2nd IEEE Conf. on Computer Workstations (Santa Clara, CA) (Los Alamitos, CA: IEEE Computer Society)
[68] Simon M 1990 Placement de tâches interactives sur un réseau de stations de travail Rapport de DEA Université Paris VI
[69] Smith J M 1988 A survey of process migration mechanisms ACM Operating Systems Rev. 22 28–40
[70] Solomon M H and Finkel R A 1979 The Roscoe distributed operating system Proc. 7th ACM Symp. Operating Systems Principles (Pacific Grove, CA) (New York: ACM)
[71] Stankovic J A 1984 Simulations of three adaptive, decentralized controlled, job scheduling algorithms Comput. Networks 8 199–217
[72] Stankovic J A 1985 Stability and distributed scheduling …
[73] Stumm M 1988 The design and implementation of a decentralized scheduling facility for a workstation cluster 2nd IEEE Conf. on Computer Workstations (Santa Clara, CA) (Los Alamitos, CA: IEEE Computer Society)
[74] Summers R C 1987 A resource sharing system for personal computers in a LAN: concepts, design and …
[75] Svensson A 1990 History, an intelligent load sharing filter Proc. 10th Int. Conf. on Distributed Computing Systems (Paris) (Los Alamitos, CA: IEEE Computer Society)
[76] Tanenbaum A S and Van Renesse R 1985 Distributed operating systems ACM Comput. Surveys 17 419–70
[77] Tay B H and Ananda A L 1990 A survey of remote procedure calls ACM Operating Systems Rev. 24 68–79
[78] Theimer M M and Lantz K A 1989 Finding idle machines in a workstation-based distributed system IEEE Trans. Software Eng. SE-15 1444–58
[80] Wills C B 1989 A service execution mechanism for a distributed environment Proc. 9th Int. Conf. on Distributed Computing Systems (Newport Beach, CA) (Los Alamitos, CA: IEEE Computer Society)
[81] Zhou S and Ferrari D 1987 A measurement study of load balancing performance Proc. 7th Int. Conf. on Distributed Computing Systems (Berlin) (Los Alamitos, CA: IEEE Computer Society)
[82] Zhou S 1988 A trace-driven simulation study of dynamic load balancing IEEE Trans. Software Eng. SE-14 1327–41
Cloud Ready Applications Composed via HTN Planning
Ilche Georgievski
Sustainable Buildings
Groningen, The Netherlands
ilche@sustainablebuildings.nl
Faris Nizamic
Sustainable Buildings
Groningen, The Netherlands
faris@sustainablebuildings.nl
Alexander Lazovik
Johann Bernoulli Institute
University of Groningen
Groningen, The Netherlands
a.lazovik@rug.nl
Marco Aiello
Johann Bernoulli Institute
University of Groningen
Groningen, The Netherlands
m.aiello@rug.nl
Abstract—Modern software applications are increasingly deployed and distributed on infrastructures in the Cloud, and then offered as a service. Before deployment, these applications are composed manually – or with some predefined scripts – from various smaller interdependent components. With the increase in demand for, and complexity of, applications, the composition process becomes an arduous task, often associated with errors and a suboptimal use of computing resources. To alleviate this process, we introduce an approach that uses planning to automatically and dynamically compose applications ready for Cloud deployment. Industry may benefit from using automated planning in terms of support for product variability, sophisticated search in large spaces, fault tolerance, near-optimal deployment plans, etc. Our approach is based on Hierarchical Task Network (HTN) planning, as it supports rich domain knowledge, component modularity, hierarchical representation of causality, and speed of computation. We describe the deployment problem using a formal component model for the Cloud, and we propose a way to define and solve an HTN planning problem derived from it. We employ an existing HTN planner to experimentally evaluate the feasibility of our approach.
Index Terms—service composition, automated planning, application configuration, software deployment, cloud computing
I. INTRODUCTION
Cloud computing brings new possibilities of experiencing benefits from software applications. These are no longer installed and running on a single machine, but they are composed of assorted software components that are transparently deployed and distributed on several machines in Cloud infrastructures, and are always available on a reliable network. Consider as an example an application for intelligent energy management of office buildings [1]. The application is supposed to provide office occupants with various representations of energy and environment information, and control a wide range of devices and systems, for instance, a lighting system. Such an application consists of multiple components each of which offers its capabilities as services deployed on the Cloud infrastructure belonging to some office building or building corporation. These services are not necessarily accessible over a network that is open for public use, but they are typically accessible only by the corporations providing or using them (thus greater control and privacy). We refer to such services as Cloud services.
The problem
Cloud applications are usually composed manually or with some predefined scripts, both of which involve strenuous effort and are error prone. Several factors contribute to this. The first is that, although each service is responsible for a specific and separate aspect of an application, there is often high interdependency between services [2]. Second, each service may have multiple versions, each of which includes a different set of requirements for communication, exchange of information, and functionalities of other services [3]. Third, each service may have multiple instances running in the same setting [4]. Say there are 300 rooms in some office building; a single instance of a service with some specific functionality, for example lighting control, may have difficulties scaling to that number of offices. This implies that the number of services for an actual deployment may vary and increase, which is a fourth factor.
Considering these factors, one has to find, choose and properly configure appropriate services so that they compose applications ready for deployment. We refer to this as a deployment problem. The solutions to deployment problems involve deployment actions, which are simple operations performed on services, such as installing a service instance, binding service instances, terminating a service instance, etc. With the proliferation of services and requests for application deployments, solving deployment problems requires a lot of resources in the development, configuration, integration and maintenance of applications in Cloud infrastructures. It is therefore vital to search for and decide on deployment actions automatically and dynamically such that these actions configure a required application by interacting with existing service instances and/or creating new ones on the Cloud.
Proposed solution
As a necessary direction to automate the composition of Cloud applications ready for deployment, it appears natural to resort to automated planning [5]. Planning provides powerful methods for searching large and complex Cloud infrastructures to find “good” compositions of Cloud ready applications. Applications are composed dynamically, so services need not be fixed in advance in scripts and always available (the same holds for the servers of the Cloud). Additionally,
planning can be used to handle the Cloud uncertainty (e.g., failures of hardware resources), find deployments optimal with respect to the use of computer resources, etc.
There is an evident basic correspondence between planning problems and deployment problems: planning goals correspond to requests for application deployments, planning states correspond to current deployments or configurations of Cloud infrastructures, and planning actions correspond to deployment actions. In the Cloud setting, however, deployment actions are simple operations without any semantics, keeping the actions separate from the configuration knowledge. To support this modularity of deployment actions and still consider the configuration knowledge when composing Cloud applications, we turn to Hierarchical Task Network (HTN) planning [6]. HTN planning provides this support through its rich domain knowledge and hierarchical representation of causality. HTN planning is also suitable due to its speed of computation.
The contributions
We summarise our contributions next.
- We propose to solve the problem of composing applications ready for deployment on Cloud infrastructures via HTN planning. To the best of our knowledge, this is the first proposal to compose Cloud applications using a generic planning technique, in contrast to special-purpose planning techniques (see [2], [3]). On the other hand, this sort of problem closely resembles Web service composition, a problem well studied by the planning community. There are, however, a few notable differences. The first is that Web services are distributed on the Internet, thus publicly available, and assumed to be registered in some repository. Cloud services, in contrast, are commonly part of well-controlled environments. The second and important issue with Web services lies in the lack of consistent semantic annotations that would make their composition feasible in practice. Even though various ways to describe Web services exist (e.g., SOAP, WSDL, OWL-S), some already deprecated or never used in practice, the reality of Web services is that they are associated only with syntactic specifications and free-text descriptions, leading to the consideration of Web services as nothing more than data sources [7]. Being part of controlled environments, Cloud services have different characteristics: they tend to be structured and described using consistent (in-house) ontologies [8], [9], or even provided with machine-interpretable annotations [10]. Corporations tend to make use of well-established standards and best practices gained in the domain of service-oriented architectures to support a standardised way of accessing Cloud services [11]. In contrast to Web service composition, these considerations foreground the possibility of making the composition of Cloud applications feasible in practice. Third, the configuration processes in Cloud infrastructures involve the creation of new service instances, which also makes the composition of Cloud services and our approach distinct. Another issue that differentiates the two problems, but which we do not deal with here, is the deployment of Cloud services on multiple servers under various resource constraints.
- We establish a formal correspondence between deployment problems and HTN planning problems. In fact, we propose a strategy to create HTN planning problems from deployment problems described using an existing formal model called Aeolus [12]. The Aeolus model enables configuring applications deployable on the Cloud.
- We encode a domain model and use our own domain-independent HTN planner to examine it.
- We evaluate the planner’s performance under increasing difficulty of deployment problems, and show that the planner is able to compose applications fast. We then compare it to the performance of an existing planner implemented specifically to handle Aeolus-based deployment problems. As expected, the domain-specific planner outperforms our domain-independent HTN planner; however, the results show the feasibility of HTN planning for composing Cloud applications.
The paper is organised as follows. Section II provides brief descriptions of HTN planning, the Aeolus model and a running example. Section III introduces our modelling strategy and the deployment-based HTN planning problem. Section IV provides details on the experimental evaluation. Section V discusses related work, followed by Section VI that concludes the paper.
II. PRELIMINARIES
HTN planning provides the means for solving deployment problems, and the Aeolus model enables specifying them. We also provide a running example that helps in demonstrating our approach.
A. HTN planning
In HTN planning, the domain model consists of tasks that can be accomplished by operators or methods. An operator represents a transition from one state to another, while a method predefines how to decompose a task into finer detail. Given an HTN planning problem, which consists of an initial state, an initial task network and sets of operators and methods, planning is performed by repeatedly decomposing tasks from the initial task network until operators executable in the initial state are reached.
A primitive task is an expression of the form $pt(\tau)$, where $pt$ is a primitive-task symbol and $\tau = \tau_1, \ldots, \tau_n$ are terms. A compound task is defined similarly. The set of primitive and compound tasks is a finite set of task names $TN$. A state $s$ is a set of ground predicates under the closed-world assumption. An operator $o$ is a triple $\langle pt(o), pre(o), eff(o) \rangle$, where $pt(o)$ is a primitive task, and $pre(o)$ and $eff(o)$ are its preconditions and effects, respectively. An operator $o$ is applicable in a state $s$ iff $pre(o) \subseteq s$. Applying $o$ to $s$ results in a new state $s[o] = (s \setminus eff^-(o)) \cup eff^+(o)$. A task $t$ is a pair $\langle ct(t), M_t \rangle$, where $ct(t)$ is a compound task and $M_t$ is a set of methods. A method $m$ is a pair $\langle pre(m), tn(m) \rangle$, where $pre(m)$ are preconditions and $tn(m)$ is a task network. A method $m$ is applicable in a state $s$ iff $pre(m) \subseteq s$. Given a task $t$ such that $m \in M_t$, applying $m$ to $s$ results in the task network $s[m] = tn(m)$. A task network $tn$ is a pair $\langle T_n, \prec \rangle$, where $T_n \subseteq TN$ and $\prec$ defines an ordering of the tasks in $T_n$.
Definition 1 (HTN planning problem): An HTN planning problem $\mathcal{P}$ is a tuple $\langle s_0, tn_0, O, T \rangle$, where $s_0$ is an initial state, $tn_0$ is an initial task network, $O$ and $T$ are sets of operators and tasks, respectively.
Definition 2 (Solution): Given an HTN planning problem $\mathcal{P}$, a sequence of operators $o_1, \ldots, o_n$ is a solution to $\mathcal{P}$ if and only if there exists a task $t \in T_0$, where $tn_0 = \langle T_0, \prec_0 \rangle$ and $(t, t') \in \prec_0$ for all $t' \in T_0 \setminus \{t\}$, such that 1) $t$ is primitive, $o_1$ accomplishes $t$ and is applicable in $s_0$, and $o_2, \ldots, o_n$ is a solution to $\mathcal{P}' = \langle s_0[o_1], tn_0 \setminus \{t\}, O, T \rangle$; or 2) $t$ is compound and there exists a method $m \in M_t$ applicable in $s_0$ such that, for $tn' = (tn_0 \setminus \{t\}) \cup tn(m)$, the sequence $o_1, \ldots, o_n$ is a solution to $\mathcal{P}' = \langle s_0, tn', O, T \rangle$.
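To make the decomposition process concrete, the following is a minimal Python sketch of total-order HTN decomposition over ground tasks; it illustrates Definitions 1 and 2 under strong simplifications (no variables, totally ordered task networks) and is not the planner used in this paper.

    def htn_solve(state, tasks, operators, methods):
        """operators: primitive task -> (preconditions, add effects, delete effects);
        methods: compound task -> list of (preconditions, subtask tuple).
        Returns a sequence of primitive tasks, or None if no decomposition works."""
        if not tasks:
            return []
        head, rest = tasks[0], tasks[1:]
        if head in operators:                          # primitive task
            pre, add, delete = operators[head]
            if pre <= state:                           # applicable in current state
                plan = htn_solve((state - delete) | add, rest, operators, methods)
                if plan is not None:
                    return [head] + plan
            return None
        for pre, subtasks in methods.get(head, []):    # compound task: try methods
            if pre <= state:
                plan = htn_solve(state, list(subtasks) + rest, operators, methods)
                if plan is not None:
                    return plan
        return None

    # toy deployment: install a service, then start it
    ops = {"install": (frozenset(), frozenset({"installed"}), frozenset()),
           "start": (frozenset({"installed"}), frozenset({"running"}), frozenset())}
    meths = {"deploy": [(frozenset(), ("install", "start"))]}
    print(htn_solve(frozenset(), ["deploy"], ops, meths))  # ['install', 'start']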
B. Deployment model
We define the problem of configuring and deploying applications on the Cloud using the Aeolus model [12]. The main element of the model is a component, describing a manageable resource that provides and requires functionalities. Through the use of state machines, the Aeolus model provides a way to encode specific components declaratively by specifying how functionalities are accomplished. Let us consider a component as the Finite State Machine (FSM) shown in Figure 1. The FSM defines the state transition process of a component, i.e., the states and the order in which a component can transition from one state to another. A component is initially in an Uninstalled state. Upon start, it transitions into an Installed state, and then into a Running state. State transitions are accomplished using deployment actions. For example, a component in its initial state is installed by invoking the startComponent action.
In most cases, however, a component can transition in some state only if the functionalities that particular state requires through require ports are communicated by components that can provide them through provide ports. We can observe such transitions in configuration patterns (see Figure 2). A pattern contains a set of components interrelated among each other through the ports on the level of states. The components are abstract, meaning that they will be replaced by concrete components, or instances, at runtime. A single configuration pattern therefore defines a number of actual compositions.
A component $c$ is a 5-tuple $\langle Q, q_0, U, P, R \rangle$, where $Q$ is a finite set of states, $q_0$ is the initial state, $U \subseteq Q \times Q$ is the set of state transitions, $P$ is the set of provide ports, and $R$ is the set of require ports. We denote the set of all available components as $C$, and the set of all ports as $F$. The set $A$ consists of the deployment actions used upon the elements in $C$ and $F$. A configuration $D$ is a tuple $\langle C, I, \phi, B \rangle$, where $C$ is a set of available components, $I$ is a set of currently deployed component instances, $\phi$ is a function that associates $i \in I$ with a pair $\langle c, q \rangle$, where $c \in C$ and $q \in Q$ is the current component state; and $B \subseteq F \times I \times I$ is a set of bindings.
A deployment problem consists of an initial configuration, a set of deployment actions, and a request for a new configuration (i.e., application). The solution to the problem is a deployment run representing a sequence of deployment actions on components that, when deployed, produce the required configuration.
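For illustration, the component and configuration tuples can be written down directly as data structures; the following Python rendering (field names and the Dashboard fragment are our own shorthand) is only meant to make the formalism concrete.

    from dataclasses import dataclass, field
    from typing import Dict, Set, Tuple

    @dataclass
    class Component:
        """A component <Q, q0, U, P, R>; provide/require ports are keyed by the
        state in which they become relevant."""
        states: Set[str]
        initial: str
        transitions: Set[Tuple[str, str]]
        provides: Dict[str, Set[str]] = field(default_factory=dict)
        requires: Dict[str, Set[str]] = field(default_factory=dict)

    @dataclass
    class Configuration:
        """A configuration <C, I, phi, B>."""
        components: Dict[str, Component]
        instances: Set[str] = field(default_factory=set)
        phi: Dict[str, Tuple[str, str]] = field(default_factory=dict)    # instance -> (component, state)
        bindings: Set[Tuple[str, str, str]] = field(default_factory=set)  # (port, provider, requirer)

    # fragment of the running example: Dashboard needs an httpd port when installed
    dashboard = Component(
        states={"uninstalled", "installed", "running"},
        initial="uninstalled",
        transitions={("uninstalled", "installed"), ("installed", "running")},
        requires={"installed": {"httpd"}},
    )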
C. A running example
Let us consider again the application for energy management in office buildings and suppose that its only capability is to present energy and environment information to office occupants on public screens using Web interfaces. We refer to this application as Public Dashboard. Figure 2 graphically represents a simplified Aeolus pattern for composing the Public Dashboard application in a running state. The main and top-level component represents Dashboard, which operates using several software services, among which the essential ones are a Web server and a database. The application requires a database to store all energy and environment information (e.g., energy consumption, light level, weather information, etc.). The Cassandra database is preferred and commonly used, but other databases are compatible too. A recommended server is Apache, but any other server that supports the underlying scripting language and database is suitable too. We use Cassandra and Apache2 as the components that Dashboard depends on.
III. DEPLOYMENT AS AN HTN PLANNING PROBLEM
Next we introduce the strategy to create an HTN planning problem from a deployment problem. We use the Hierarchical Planning Definition Language (HPDL) [13] when describing the planning structures. In the following, we refer to a state transition that does not depend on any functionality provided by other components as simple transition. Otherwise, we use the term complex transition.
A. Hierarchical planning domain model
Components, states and ports of components: We encode components, instances and ports as the domain types component, instance and port, which are all subtypes of the type object. In fact, each component type, such as Dashboard, is represented as an object of type component.
While FSMs associate components with states abstractly, component instances are the ones to be in a specific state at planning time. We encode an instance state using a predicate "(state instance)", where state is a string representing the type of an FSM state, and instance is a variable representing the component instance. An example of a Dashboard instance d1 in an installed state is (installed d1).
A component state may be associated with require and provide ports. To represent the association of a port to a state, we use a predicate "(statePort component port)", where statePort is a string representing the type of port in a specific state, component is a variable representing the type of component that requires or provides a port represented by the variable port. For example, if Dashboard requires the httpd port in the installed state, we encode it as (installed-require dashboard httpd). Such knowledge holds for all instances of the respective component. These predicates are therefore grounded in the initial state and static during planning.
Creating new component instances: One of the features of the composition of Aeolus applications is that one or more component instances must be created from existing (abstract) components. We address the creation of new uninitialised instances using a domain function. This function returns a number that we use to represent instance variables in a special predicate (instance ?iNum - number). The instance-number function practically serves as a counter to keep track of the current value that can be assigned to new instances. The domain function does not take arguments. We use an additional predicate (type ?iNum - number ?c - component) to associate the instance with a particular component. We increase the instance number and assert the association in the effect of the operator that creates new instances, as shown in the following encoding.
(:action createInstance
  :parameters (?c - component)
  :precondition ()
  :effect (and (instance (instance-number))
               (type (instance-number) ?c)
               (increase (instance-number) 1)))
Deployment actions: In addition to createInstance, we consider the actions that accomplish simple transitions. These are the deployment actions, including the binding ones. The binding actions are responsible for the low-level binding of ports – the require ports are bound to the provide ports. We encode all these actions as HPDL operators. The parameters of the operators correspond either to a component instance variable or to variables of a port and two instances (in the case of binding actions). The preconditions and effects of each operator capture the semantics of the respective action. The following is an operator that corresponds to the startComponent deployment action, which makes the state of an instance become installed and activates all the ports associated with the installed state of the component to which the current instance belongs.
(:action start
  :parameters (?i - instance)
  :precondition (and (not (installed ?i)))
  :effect (and (installed ?i)
               (forall (?c - component ?p - port)
                       (when (and (installed-provide ?c ?p) (type ?i ?c))
                             (active ?p ?i)))))
Other deployment actions are encoded similarly. As for the binding ones, the bind operator creates a binding between the provide port of some instance and the require port of another one, and the unbind operator deletes an already established binding between two components’ instances.
Configuration processes: Although each different type of an application has its own installation and running configuration pattern, the process of configuring applications is general and can be abstracted away. Let us detail how we can accomplish that.
The process of configuring an application requires satisfaction of the dependencies to functionalities provided by components. Let us assume that an instance in an uninstalled state cannot have requirements to be satisfied. We may then consider two abstractions for complex transitions of components. The first abstraction refers to acquiring a component functionality in the installed state, while the second one refers to establishing a functionality in the running state. We point out that complex transitions representing other configuration types can be easily incorporated in the current domain model with minor modifications. HTNs naturally enable encoding knowledge at different levels of abstraction. This support for modularity enables us to focus on a particular level at a time [6]. We can formulate tasks and encode high-level strategies in the methods of these tasks before reasoning on low-level tasks (operators).
We encode each abstraction as a task in the domain model, namely the install and run tasks. Each method of these tasks encodes a specific case. One such method involves port activation. If a component state is associated with one or more require ports, the port activation process makes sure that the need of the current instance for specific functionalities is addressed. That is, if the current component instance has require ports that are not active, the method first activates each port and calls recursively its corresponding task until all necessary ports are activated. The actual process of port binding is handled by a separate task. Once we have methods that involve port activation and binding, we can proceed to the method that deals with the case when all require ports are active and bound. To address the satisfaction of all require ports, we use a forall expression in the method for both tasks, install and run. The following expression is used for the install task.
(forall (?p - port)
(and (installed-require ?c ?p) (bound ?p ?i ?i1)))
After this constraint check, we are ready to start or run an instance. In the case of the run task, when running an instance, we have to deactivate the ports that will be no longer provided by the instance in the installed state. The process of port deactivation is accomplished using a separate task with multiple methods. Each method represents a different case to be handled, such as a provide port that is bound but needed for the running state, a provide port free to be unbound, etc. The port deactivation task uses port unbinding. The process of port unbinding is more complex than the binding one, and requires checking for constraint violation. That is, we have to take care of active provide ports bound to active require ports. We use a separate task for this process, that is, unbindPorts. This task does nothing when the port is bound and needed for the next transition. When all necessary constraints are satisfied, it unbinds a specific port and recursively calls itself, shown in the following encoding. Being a recursive task, it includes a base case that performs phantomisation [6].
:tasks (sequence (unbind ?p ?i ?i1) (unbindPorts ?i1))
There are methods in the install and run tasks that deal with the case when there are no required functionalities for an instance. This means that we have a simple transition which can be handled by installing the component instance directly. In the case of running an instance, we invoke the port deactivation task to ensure a valid transition to the running state.
The modelling of the transitions from a running state to an installed state and further to an uninstalled state is analogous to the encoding of the tasks we described so far.
One of the features of these kinds of compositions is that a cycle may occur between states of different component instances. That is, an instance is expected to provide a functionality at a specific point in the composition, but it is not possible because at the same point the instance is required to change its state [3]. We address this feature using the process of instance duplication. Instance duplication deals with such cycles by creating as many instances of the same component as needed, and deploying them in different states at the same time. We encode instance duplication as a separate method. The method makes sure that the current component instance is in a specific state and it has at least one provide port bound. Consequently, a new component instance is created either in an installed state or in a running state, depending on the type of configuration.
Algorithm 1 shows the high-level steps of the strategy we described for the creation of an HTN domain model.
**Algorithm 1** Transformation of an Aeolus model into an HTN planning domain model
Input: a set of components C, a set of deployment actions A
Output: HTN planning domain model (O, T)
1: Encode component, instance, port as types
2: Choose c = (Q, q0, U, F, R) from C
3: for j = 1 to |Q| do
4:     Create state predicate and port predicates for qj, qj ∈ Q
5: end for
6: Encode an operator o for creating instances
7: for j = 1 to |A| do
8:     Encode aj as an operator aj, aj ∈ A
9: end for
10: Ask the user questions regarding the configuration processes in (C, A), and encode the corresponding tasks
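To make the transformation concrete, the following is a rough Python sketch of lines 1–9 of Algorithm 1, emitting HPDL-style text from a small in-memory component model. The Component class, the fact naming scheme, and the emitted syntax are illustrative assumptions rather than the paper's implementation, and line 10 (the interactive encoding of configuration tasks) is left out.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    states: list                                    # e.g. ["uninstalled", "installed", "running"]
    requires: dict = field(default_factory=dict)    # state -> list of required ports
    provides: dict = field(default_factory=dict)    # state -> list of provided ports

def to_hpdl(components, deployment_actions):
    header = ["(define (domain aeolus-deployment)",
              "  (:types component instance port - object)"]
    # Lines 3-5: state and port predicates for every state of every component.
    static_facts = []
    for c in components:
        for state in c.states:
            for p in c.requires.get(state, []):
                static_facts.append(f"({state}-require {c.name} {p})")
            for p in c.provides.get(state, []):
                static_facts.append(f"({state}-provide {c.name} {p})")
    # Line 6: the createInstance operator; lines 7-9: one operator per deployment action.
    operators = ["(:action createInstance ...)"]
    operators += [f"(:action {a} ...)" for a in deployment_actions]
    return "\n".join(header + operators + [")"]), static_facts
```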
**B. Deployment-based HTN planning problem**
A deployment problem $\mathcal{P}^D$ is a tuple $(D_0, A, G)$, where $D_0$ is the initial configuration, $A$ is the set of deployment actions, and $G$ is the requested configuration. $\delta$ is a satisfying deployment run for $\mathcal{P}^D$ if and only if $\delta$ is a sequence of deployment actions that transform $D_0$ into $G$. A requested configuration, $G$, is achievable if and only if there exists at least one satisfying deployment run for it.
Given a deployment problem $\mathcal{P}^D$, we define the corresponding deployment-based HTN planning problem $\mathcal{P}$ according to Definition 1, where 1) $s_0$ is the initial state consisting of a list of the following ingredients derived from $D_0$: components and ports as objects, component states, currently deployed instances, the current state of deployed instances and bindings as the special predicates we defined in the HTN planning domain model. $s_0$ also contains a domain function initialised to 0. 2) $tm_0$ is the initial task network encoding the requested configuration $G$; 3) $O$ is the set of operators that represent actions in $A$, and $T$ is the set of tasks derived from the configuration processes with respect to Algorithm 1. A plan $\pi$ is a solution to $\mathcal{P}$ according to Definition 2.
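As an illustration, a minimal sketch of deriving $s_0$, the domain function, and $tn_0$ from an initial configuration could look as follows; the dictionary layout of the deployment problem and the shape of the requested configuration are assumptions made purely for illustration.

```python
def to_planning_problem(d0, goal):
    """d0: initial configuration; goal: requested configuration G."""
    s0 = set(d0.get("static_facts", []))            # e.g. ("installed-require", "dashboard", "httpd")
    for inst, comp, state in d0.get("instances", []):
        s0.add(("type", inst, comp))                # currently deployed instance and its component
        s0.add((state, inst))                       # current state of the deployed instance
    for port, requirer, provider in d0.get("bindings", []):
        s0.add(("bound", port, requirer, provider))
    fluents = {"instance-number": 0}                # domain function initialised to 0
    tn0 = [("run", comp) for comp in goal.get("running", [])]   # one task per requested component
    return s0, fluents, tn0
```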
**Theorem 1:** Let $\mathcal{P}^D$ be a deployment problem and $\mathcal{P}$ be the corresponding HTN planning problem. If a requested configuration $G$ is achievable, then there exists a plan $\pi$ for $\mathcal{P}$.
Let $\delta$ be a satisfying deployment run for $\mathcal{P}^D$ such that $G$ is achievable. Under the assumption that the user provides reasonable answers, there is a correspondence between $\mathcal{P}^D$ and $\mathcal{P}$ as defined previously, so there must exist a solution for $\mathcal{P}$.
We can now obtain that the solution of the deployment-based HTN planning problem is a deployment run for the corresponding deployment problem.
**Theorem 2:** Let \( \mathcal{P}^D \) be a deployment problem and \( \mathcal{P} \) be the corresponding HTN planning problem such that Theorem 1 holds. We can then construct a sequence of deployment actions based on \( \pi \) that is a satisfying deployment run for \( \mathcal{P}^D \).
Let us present a constructive proof for which we consider the deployment problem \( \mathcal{P}^D \) shown in Figure 2. Let \( \mathcal{P} \) be the corresponding deployment-based HTN planning problem. Furthermore, consider the following plan for \( \mathcal{P} \): \([\text{createInstance}(d0), \text{createInstance}(a1), \text{start}(a1), \text{bind}(\text{httpd}, d0, a1), \text{start}(d0), \text{createInstance}(c2), \text{start}(c2), \text{run}(c2), \text{bind}(\text{cascade}, d0, c2), \text{run}(d0)]\). We can construct a deployment run in which the actions from the plan are deployment actions. The resulting deployment run is a satisfying deployment run for \( \mathcal{P}^D \).
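The correspondence can also be checked mechanically: the small sketch below replays the plan above as a sequence of deployment actions over instance states and bindings. The action semantics are simplified and the state bookkeeping is our own assumption, introduced for illustration only.

```python
def replay(plan):
    state, bindings = {}, set()
    for action, *args in plan:
        if action == "createInstance":
            state[args[0]] = "uninstalled"
        elif action == "start":
            state[args[0]] = "installed"
        elif action == "run":
            state[args[0]] = "running"
        elif action == "bind":
            bindings.add(tuple(args))               # (port, requirer, provider)
    return state, bindings

plan = [("createInstance", "d0"), ("createInstance", "a1"), ("start", "a1"),
        ("bind", "httpd", "d0", "a1"), ("start", "d0"), ("createInstance", "c2"),
        ("start", "c2"), ("run", "c2"), ("bind", "cascade", "d0", "c2"), ("run", "d0")]
print(replay(plan))   # every instance ends installed or running, with both ports bound
```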
**IV. EXPERIMENTAL EVALUATION**
**Motivation:** Consider extending the application for managing office buildings and suppose that its capabilities go beyond those of the Public Dashboard. Typically, such an application consists of a number of primary components responsible for implementing core processes, and several secondary components that complete the operation cycle of the application [1]. The primary and secondary components are all highly interdependent. Say that some building is equipped with numerous heterogeneous devices, such as sensors and actuators. A primary component wraps up and interacts with these devices in such a way that it gathers the information they provide (e.g., light level), and executes low-level commands (e.g., turn on a lamp). Some of these functionalities are used by another component that amasses the device information and provides it as unified raw data to other interested components. Among those, an essential one processes the raw data and exposes it as meaningful context information. The component that provides automated control reasons over the context information and selects device actions that achieve some building objective. These actions are further processed by another component and sent out to the component responsible for executing low-level commands. Other primary components may focus on more specific issues, such as the collection and measurement of only the electricity consumption of devices. As secondary components, different databases are used, for example, one for storing raw and context data and another for saving descriptive information about the building; message brokers are used for asynchronous communication between the components, etc.
The primary and secondary components are implemented as Cloud services, which can be in all three states described earlier. The services depend on each other, thus they have require and provide ports. We consider the **degree of dependence** a computational factor. Furthermore, the final application is intended to be deployed in a private Cloud. Given that such an application may be run in environments of varying size (e.g., small and large office buildings), the number of components involved in the application may become relatively high. We therefore evaluate the efficiency of our approach under an increasing number of components. Finally, components may have multiple instances running, for instance, to cover different building spaces (e.g., floors, offices, common spaces, etc.). The need for instance duplication increases the difficulty of planning problems too.
**Set-up:** We make planning problems more interesting and challenging with respect to component interdependencies by having the requested configuration of applications appear deep in the rightmost part of the search space. We use a set of components \( c_1, \ldots, c_n \), where each \( c_i \) has require and provide ports as follows. Given that we want to have the rightmost component \( c_n \) in its running state, the dependencies between components require first creating instances for components from \( c_1 \) to \( c_n \), then performing the transition from the uninstalled to the installed state in the reverse order of component instances, and finally, transitioning from the installed to the running state in the order from \( c_1 \) to \( c_n \). Then, we increase the difficulty of planning problems with respect to the number of components by varying the number from 3 to 300, resulting in more than 50 problems. These constitute our first test case.
Using the setting of the first test case, we create a second test case to increase the difficulty of planning problems in such a way that configurations require instance duplication. We randomly select several components and, for a selected component \( c_i \), we remove the activation of a provide port \( p_i \) from its running state. The removal requires another instance of \( c_i \) to be created so as to satisfy the requirements of \( c_{i-1} \) and \( c_{i+1} \).
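A possible way to generate both test cases programmatically is sketched below. The exact port layout that forces the install order \( c_n, \ldots, c_1 \) and the run order \( c_1, \ldots, c_n \) is our own assumption, chosen only to mimic the described dependency chain.

```python
import random

def chain_components(n):
    comps = []
    for i in range(1, n + 1):
        comps.append({
            "name": f"c{i}",
            # c_i provides p_i once installed and additionally q_i once running
            "provides": {"installed": [f"p{i}"], "running": [f"p{i}", f"q{i}"]},
            # installing c_i needs p_{i+1} (install order c_n .. c_1),
            # running c_i needs q_{i-1} (run order c_1 .. c_n)
            "requires": {"installed": [f"p{i+1}"] if i < n else [],
                         "running": [f"q{i-1}"] if i > 1 else []},
        })
    return comps

def force_duplication(comps, k=3, seed=42):
    rng = random.Random(seed)
    for comp in rng.sample(comps[1:-1], k=min(k, max(len(comps) - 2, 0))):
        # dropping the p_i port from the running state forces a second instance of c_i
        comp["provides"]["running"] = [p for p in comp["provides"]["running"]
                                       if p.startswith("q")]
    return comps
```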
We use our own HTN planner, called Scalable Hierarchical (SH) planning system [14], to solve the planning problems of the two test cases and to evaluate the feasibility of HTN planning for composing Cloud applications. SH is a domain-independent HTN planner implemented entirely in the Scala programming language. It consists of two main modules, namely HPDL processor and Planner. HPDL problem and domain descriptions are transformed into programming-level constructs through the HPDL processor. The Planner includes the main algorithm which is based on depth-first search. SH shares similarities with two existing HTN planners: the support for HPDL with SIADEX [15] and the search mechanism with SHOP2 [16].
We run SH on an Intel Core i7-3517U @1.90GHz, 8GB RAM machine running Windows 8.1 and Java 1.8.0_31.
To assess the impact of using HTN planning, we compare the performance of SH with that of a planner developed specifically for solving Aeolus-based deployment problems [3]. This domain-specific planner is evaluated in an experimental set-up similar to ours, thus we use their reported results directly.
**Results:** Figure 3 shows the results of both planners, where the number of generated instances equates to the number of components. Even though SH shows worse performance than the domain-specific planner, which is expected, deployment problems with 200 components can be solved in less than 15 seconds.
Our HTN planner outperforms both planners significantly, though their results seem anomalous and unexpected even for purely domain-independent planners.
From a perspective of computational complexity, HTN planning problems are generally hard to solve. On one end of the spectrum, when various restrictions are imposed on HTN planning problems to reduce their complexity, it takes polynomial time to check whether there exists a plan for such problems. On the other end of the spectrum, when no restrictions are imposed on tasks, variables and the domain, checking whether there is a solution to a given HTN planning problem becomes undecidable [6].
V. RELATED WORK
The problem of composing applications ready for deployment via automated planning has been addressed, to the best of our knowledge, in two studies:
* Arshad et al. describe a problem of deploying software systems, and use a temporal planner to search for an optimal plan with respect to plan duration [2]. While we also deal with configuring software applications, we tackle two important issues not addressed in this study, namely new instance creation and the modelling of configuration processes, making it possible to apply planning to Cloud-based applications. In addition, we use a formal model for the Cloud to derive planning problems, we allow for more than one instance of a service to exist at a time, and the goal does not need to include the ports for connection – the planner figures that out automatically.
* Lascu et al. describe a deployment problem based on the Aeolus formal model and present a domain-specific planner to search for a solution [3]. This means that all configuration processes and features are implemented and embodied in the planning process. We, however, encode all domain-specific knowledge in the domain model, making the approach flexible and extensible to new features and capabilities. Also, our approach does not require the initial configuration to be empty.
More generally, the problem of composing Cloud services has a close resemblance to the problem of Web service composition. Various aspects of Web service composition have already been addressed by numerous planning approaches, e.g., [17]–[19]. Existing approaches however overlook an important characteristic of Web service composition: a Web service can represent either an abstract Web service type or one or more instances of a specific Web service [4]. In the existing approaches, Web service composition consists of synthesising a Web service type, which seems to be sufficient for the scenarios considered – too small to involve multiple service instances. In practice, however, there is a choice among many instances of a Web service. One of the distinct features of our approach is the creation of new and multiple instances of Cloud services during runtime.
Looking at HTN planning, it is employed to represent and compose Web services in several studies summarised in [6]. Common among those studies is the assumption that Web
services are represented in OWL-S and can be transformed into HTNs. OWL-S is a language specifically designed to support the discovery, composition and monitoring of Semantic Web services. In reality, however, the language supports essentially only behavioural descriptions of services [7], [20]. Such descriptions seem insufficient to be correctly translated into HTNs, and moreover, inappropriate to reason over. This drawback prevents OWL-S from being used in practical and real-world cases at all. On the other hand, our approach does not depend on a specific modelling language, but on a formal model that captures the semantics of current and future controlled Cloud infrastructures. Additionally, the studies assume the existence of OWL-S compound Web services which can be translated to HTN methods and compound tasks (for details, see [17]). In contrast, we do not use any compound Cloud services, but we use compound tasks to model configuration processes.
Contrary to the approach taken in [21], where assignment expressions encoded in the preconditions of SHOP2's operators are used to create new streams, we create new instances using domain functions and numerical fluents modelled in the effects of HPDL actions. It would be interesting to analyse whether there are performance benefits from these two different encoding approaches. Additionally, we allow for the existence of multiple instances.
In Cloud computing, the problem of managing interconnected machines has been addressed by many tools, such as Wrangler [22], SmartFrog [23], CFEngine [24], Puppet [25], Chef, and Ansible. These tools support specifying the components, together with their configuration files, to be installed on machines, and then, by using various mechanisms, deploy the components accordingly. The task of specifying which component to deploy where, and how to interconnect it to other components is however left to the user. Furthermore, ConfSolve [26] is used to search for an optimal allocation of virtual machines on servers and applications on virtual machines. However, the tool does not handle the problem of composing interdependent services. Juju and Engage [27] are focused on a problem similar to ours, avoiding some issues related to the connection between components. For example, while our approach supports circular dependencies, these cannot be defined in Engage. In Juju, circular dependencies must be resolved manually.
VI. CONCLUSIONS
We examined the connections between the task of composing Cloud applications and automated planning. We proposed the use of HTN planning, described a deployment problem based on a formal model for the Cloud, and presented how to model an HTN planning problem from the deployment one. We showed that HTN planning offers a possibility to express various constraints on the composition, dynamic instance creation, recursion through the use of tasks, and instance duplication provided in the domain model.
The experimental evaluation illustrated that HTN planning can compose Cloud applications of 100 components in less than 4 seconds, and applications of 200 components in about 15 seconds. This gives a concrete advantage of automated planning over the popular tools used in Cloud computing. We also showed that our domain-independent HTN planner is comparable to a planner developed specifically for this type of problems. In contrast to prior findings [3], we showed that even domain-independent planners are able to compose Cloud applications fast.
The advantages of our approach include the modularity and flexibility of the approach to further improvements and developments; the speed of computation; and the amount of effort spent to model HTN planning problems as compared to the effort spent developing (and extending) a domain-specific planner and/or tool. The contributions of our study include the establishment of a stronger relationship between Cloud computing and HTN planning; a model of deployment-based HTN planning problems; the dynamic instance creation; and the support for instance duplication.
As part of the future work, we would like to improve the performance of the SH planner and to compare the planner with other types of AI planners known for performing fast in general planning problems. In addition, we would like to apply our proposed solution to a real-life setting.
ACKNOWLEDGMENT
This research has been partially sponsored by the EU H2020 FIRST project, Grant No. 734599, FIRST: VF Interoperation suppoRting buSiness innovaTion.
REFERENCES
Design-time Compliance of Service Compositions in Dynamic Service Environments
Groefsema, Heerko; van Beest, Nick
Published in: 8th IEEE International Conference on Service Oriented Computing & Applications (SOCA). DOI: 10.1109/SOCA.2015.14
Publication date: 2015
Design-time Compliance of Service Compositions in Dynamic Service Environments
Heerko Groefsema
Johann Bernoulli Institute
Faculty of Mathematics and Natural Sciences
University of Groningen
h.groefsema@rug.nl
Nick van Beest
Software Systems Research Group
NICTA Queensland
nick.vanbeest@nicta.com.au
Abstract—In order to improve the flexibility of information systems, an increasing amount of business processes is being automated by implementing tasks as modular services in service compositions. As organizations are required to adhere to laws and regulations, with this increased flexibility there is a demand for automated compliance checking of business processes. Model checking is a technique which exhaustively and automatically verifies system models against specifications of interest, e.g. a finite state machine against a set of logic formulas. When model checking business processes, existing approaches either cause large amounts of overhead, linearize models to such an extent that activity parallelization is lost, offer only checking of runtime execution traces, or introduce new and unknown logics. In order to fully benefit from existing model checking techniques, we propose a mapping from workflow patterns to a class of labeled transition systems known as Kripke structures. With this mapping, we provide pre-runtime compliance checking using well-known branching time temporal logics. The approach is validated on a complex abstract process which includes a deferred choice, parallel branching, and a loop. The process is modeled using the Business Process Model and Notation (BPMN) standard, converted into a colored Petri net using the workflow patterns, and subsequently translated into a Kripke structure, which is then used for verification.
I. INTRODUCTION
Laws and regulations are a common sight in all fields of business and government. Such regulations directly affect the way organizations conduct business. As such, business processes are increasingly supported by service-oriented information systems, in order to achieve higher flexibility, which allows them to anticipate changing regulations. As the number of processes is increasing rapidly, it becomes significantly more complicated to continuously ensure the compliance of these business processes and the resulting new service compositions. As a result, organizations are becoming more and more interested in automatically checking the compliance of their business processes and service compositions.
Model checking is a technique to automatically verify a given system model against specifications of interest. To allow for algorithmic verification, a system model is commonly presented as a transition system and verified against a set of logic formulas. Model checking business processes models can be done for three purposes: monitoring (checking whether the model is executing correctly), auditing (checking whether the model has been executed correctly), or preventative (ensuring correctness of the model prior to its execution). Existing approaches tend to focus on monitoring and auditing, using the runtime execution trace of the business process [1]. Therefore, whenever a business process appears not to be compliant, its execution has already started, or could even have been completed. Consequently, rollbacks are required in order to undo any work not compliant with business rules. Design-time compliance verification, on the other hand, does not suffer from this disadvantage, as any discrepancy will be detected prior to execution. Existing design-time approaches, however, invent new or extended logics in order to support the different branching constructs implemented by business processes [2][3], generate transition systems with large amounts of overhead (e.g. [4]), or linearize the model to such an extent that parallelization information is lost [5][6][7]. Particularly in more complex systems, where parts of the same process are executed by different service providers, concurrency is an important aspect to be taken into account. However, the analysis of a large number of concurrent branches and activities in a business process quickly results in a state explosion in the underlying transition system.
Therefore, we present a novel approach allowing pre-runtime compliance checking that supports the different branching and merging constructs allowed by business process models, while significantly reducing the complexity of the analysis compared to other approaches. In addition, our approach does not require new or extended logics. As a result, well-known model checking techniques, as well as existing model checkers, can be applied during process verification.
First, a service composition is converted into a colored Petri net [8] (CPN) through the application of workflow patterns [9]. The resulting CPN is translated into a transition system known as a Kripke structure [10]. Although the Kripke structure closely resembles the well-known reachability graph [11] (RG) of the CPN, it maintains parallelization information and allows correct specification of branching time temporal logics over transition occurrences. Prior to verification, the Kripke structure is reduced, resulting in a significant performance gain. Finally, properties over the possible sequential and concurrent service executions of the composition can be verified.
The resulting model 1) allows correct interpretation of branching time temporal logic specifications over complex business process models, 2) provides full insight into possible parallel interleavings, 3) supports arbitrary cycles, 4) causes a limited state explosion compared to other approaches, and 5) allows further model reduction through equivalence with respect to stuttering.
The paper is structured as follows. In Section II, we first introduce BPMN, CPN, and the one-to-one conversion between the two through workflow patterns. Next, in Section III, the conversion from CPN to Kripke structures is presented, the semantics of the branching time temporal logics is defined upon the possible executions of the CPN, and the obtained model is normalized further. Next, in Section IV, we evaluate performance and the effects of model sizes for the conversion algorithm. In Section V, the related work is discussed. Finally, we conclude our work in Section VI.
II. PROCESS MODELING
A. BPMN
In 2004, the Business Process Management Initiative introduced the Business Process Modelling Notation (BPMN). This business process modelling language has been developed with the specific purpose of providing a modelling language that is readily understandable by business users [12]. As such, the process flow is represented in a graph-oriented way, where the explicit control-flow is defined by events, activities, and gateways, which are connected through sequence flows and message flows [13][12]. In 2009, BPMN was updated to version 2.0, including detailed execution semantics for all BPMN elements [14].
The general process model, which is later converted for verification, is based on the control-flow perspective of the BPMN standard. In order to allow for formal verification, the processes defined in BPMN are translated into Colored Petri Nets [8] and subsequently to Kripke structures [15] to obtain the possible states of the process.
In Figure 1, an abstract process is depicted using BPMN, which we use to graphically describe the basic conversion. The abstract process comprises two exclusive branches, of which one contains a loop and the other comprises a parallel split.

B. Colored Petri Nets
A Colored Petri Net (CPN) is a directed graph representing a process. CPNs consist of places, transitions, and arcs between transition and place pairs. Transitions in CPNs represent activities or tasks of the business process, and places can hold tokens, representing the state between transitions: the previous transitions with an arc to that place have finished execution, and a next transition with an arc from that place has been enabled. Prior to illustrating the translation from BPMN to CPN, first a formal definition is provided of CPN and its reachability graph. A CPN is defined as follows [8]:
**Definition 1 (Colored Petri Net):** A Colored Petri Net is a 9-tuple $CPN = (\Sigma, P, T, A, N, C, G, E, M_0)$, where:
- $\Sigma$ is a finite set of non-empty types, called color sets,
- $P$ is a finite set of places,
- $T$ is a finite set of transitions,
- $A$ is a finite set of arcs such that $P \cap T = P \cap A = T \cap A = \emptyset$,
- $N$ is a node function defined from $A$ over $P \times T \cup T \times P$,
- $C$ is a color function defined from $P$ into $\Sigma$,
- $G$ is a guard function defined from $T$ into expressions such that $\forall t \in T : \{Type(G(t)) = Bool \land Type(Var(G(t))) \subseteq \Sigma\}$,
- $E$ is an arc expression function defined from $A$ into expressions such that $\forall a \in A : \{Type(E(a)) = C(p(a))_{arg} \land Type(Var(E(a))) \subseteq \Sigma\}$ where $p(a)$ is the place of $N(a)$,
- $M_0$, the initial marking, is a function defined on $P$, such that $M_0(p) \in [C(p) \rightarrow N]_f$ for all $p \in P$.
The CPN state, often referred to as the marking of CPN, is a function $M$ defined on $P$, such that $M(p) \in [C(p) \rightarrow N]_f$ for all $p \in P$. Let $p$ be a place and $t$ a transition. Elements of $C(p)$ are called colors. $p$ is an input place (output place) for $t$ iff $(p, t) \in N$ ($(t, p) \in N$) [8]. Every CPN is paired with an initial marking $M_0$. Transitions of a CPN may occur in order to change the marking of the CPN per the firing rule [8]. Places containing tokens in a marking enable possible binding elements $(t, b)$, consisting of a transition $t$ and a binding $b$ of variables of $t$. A binding element is enabled if and only if enough tokens of the correct color are present at the input places of transition $t$ and its guard evaluates true. More formally, if $\forall p \in P: E(p, t)(b) \leq M(p)$. An enabled binding element may occur, changing the marking, by removing tokens from the input places of $t$ and adding tokens to the output places of $t$ as dictated by the arc evaluation function. Then, a multiset $Y$ of binding elements $(t, b)$, or a step, is enabled if $\forall p \in P: \sum_{(t, b) \in Y} E(p, t)(b) \leq M(p)$, or if the sum of the binding elements is enabled. The occurrence of a step $Y$ at a marking $M_i$ produces a new marking $M_j$ as denoted by $M_i \xrightarrow{Y} M_j$. All possible states of a CPN can be obtained from the initial marking through the firing rule. Utilizing CPNs as an intermediary form comes with the advantage that the marking (i.e., the distribution of tokens over places) can be seen as the process state, allowing a mapping of the state of the composition to the system model, instead of a mapping using the activities of the business process composition (i.e., the transitions of the CPN).
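For readers less familiar with the firing rule, the following minimal sketch shows enabling and firing in the simplified single-colour case, where arc expressions collapse to integer token weights; this simplification is ours and ignores colours, guards and bindings.

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Remove the consumed tokens and add the produced ones, yielding a new marking."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, w in pre.items():
        m[p] = m.get(p, 0) - w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

m0 = {"start": 1}
m1 = fire(m0, pre={"start": 1}, post={"branch1": 1, "branch2": 1})   # a parallel split
print(m1)   # {'start': 0, 'branch1': 1, 'branch2': 1}
```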
**Definition 2 (Reachability Graph):** The reachability graph of a CPN with markings $M_0, ..., M_n$ is a rooted directed graph $G = (V, E, v_0)$, where:
- $V = \{M_0, ..., M_n\}$ is the set of vertices,
- $v_0 = M_0$ is the root node
- $E = \{\langle M_i, (t, b), M_j \rangle \mid M_i, M_j \in V \land M_i \xrightarrow{(t, b)} M_j\}$ is the set of edges, where each edge represents the firing of a binding element $(t, b)$ at a marking $M_i$ such that a marking $M_j$ is produced.
The general approach for converting CPNs into transition systems is used when generating the reachability graph (RG) (Definition 2) [11]. Starting from the initial marking $M_0$, states are created for each encountered marking while enabled binding elements occur to generate new markings. However, in this case, the occurrence of transitions can not be concluded from the states of the transition system but from its relations. States contain information on the marking of the net, that is, they hold information on which places contain which tokens. When considering the states of the RG, the only certainty that can be concluded is that transitions are enabled at that marking – which does not ensure their occurrence – or, that one of a set of transitions may just have occurred to achieve the current marking. On the other hand, when verifying using the transitions labeled on the relations, the occurrence of the same transition at a marking could result in different markings due to conditional arcs (e.g., the exclusive choice workflow pattern). Verification of branching time temporal logics would then incorrectly consider both occurrences as being on different paths, where in actuality it is the same occurrence.
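The standard construction can be sketched as a breadth-first exploration from $M_0$; transitions are again reduced to pre/post token maps, which is a simplification we make purely for illustration.

```python
from collections import deque

def reachability_graph(m0, transitions):
    """transitions: name -> (pre, post) token maps; returns (vertices, labelled edges)."""
    enabled = lambda m, pre: all(m.get(p, 0) >= w for p, w in pre.items())
    def fire(m, pre, post):
        m = dict(m)
        for p, w in pre.items():
            m[p] = m.get(p, 0) - w
        for p, w in post.items():
            m[p] = m.get(p, 0) + w
        return m
    freeze = lambda m: tuple(sorted((p, c) for p, c in m.items() if c))
    vertices, edges, queue = {freeze(m0)}, set(), deque([m0])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.add((freeze(m), name, freeze(m2)))   # edge labelled with the fired transition
                if freeze(m2) not in vertices:
                    vertices.add(freeze(m2))
                    queue.append(m2)
    return vertices, edges
```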
C. Converting from BPMN to CPN
For the conversion from a BPMN model to a CPN, we provide a translation for each BPMN element to its CPN representation. The translation is based on the workflow patterns as defined in [9]. However, in some cases, additions were required in order to provide a generic translation of the respective BPMN element. In Table 1, an overview is presented of the conversion of BPMN elements to CPN constructs.
Table I (layout condensed; the BPMN symbols and their CPN translations are graphical and not reproduced here) covers the following BPMN elements: Sequence Flow, Sequence Flow with condition p, Task / Activity, Sub-process, Top-level Start Event, Top-level End Event, Intermediate Throwing Event, Intermediate Catching Event, Exclusive Fork, Parallel Fork, Inclusive Fork, Deferred choice, Complex Fork, Exclusive Merge, Parallel Merge, Inclusive Merge, Deferred Merge, Complex Merge, Complex Merge Variant 2, Structured Loop (While), Structured Loop (Repeat), MI Variant 2, and Message between activities or events.
TABLE I: Conversion of BPMN elements into CPN constructs based on the workflow patterns as defined in [9].
In general, BPMN sequence flows are represented by arcs in the CPN and activities are represented by a place connected to a transition. In the table, the elements that are part of the construct are indicated with black lines, whereas the surrounding elements (depicted where necessary for clarity) are represented with grey dotted lines. This is necessary, because in some cases the translation is not represented by a separate construct in CPN. Rather, it affects the preceding
or succeeding elements (e.g. consider the parallel merge). To avoid any unnecessary complexity, the patterns have been adapted to use one color (e.g. complex merge variants). At the same time, intermediate catching events have been changed to occur either once (or a set number of times) or when required, in order to avoid an infinite number of possible markings. Naturally, these changes do not affect future labelings of states.
Using the conversion provided in Table I, the abstract BPMN model depicted in Figure 1 can be translated into a CPN. In Figure 2, the resulting CPN is depicted graphically.

### III. Design Time Business Process Verification
Before service compositions can be verified using model checking techniques, they first need to be translated from a CPN into a verifiable system model. The different models required for verification are introduced. Subsequently, the conversion process is presented, after which the resulting branching time temporal logic interpretation is defined on the reachability graph. Finally, model reduction is discussed.
#### A. Model and Specification
When model checking, a system model—often a transition system—is verified against specifications of interest. As the CPN state can be captured based on its marking, a state-based labeled transition system is used as the system model for model checking. A state-based labeled transition system is a transition system with a labeling function over its states, instead of (or in addition to) a labeling function over its flow relations. A Kripke structure is such a state-based labeled transition system [15].
**Definition 3 (Kripke structure):** Let AP be a set of atomic propositions. A Kripke structure K over AP is a quadruple
\[ K = (S, S_0, R, L) \]
where:
- \( S \) is a finite set of states,
- \( S_0 \subseteq S \) is a set of initial states,
- \( R \subseteq S \times S \) is a transition relation such that it is left-total, meaning that for each \( s \in S \) there exists a state \( s' \in S \) such that \( (s, s') \in R \),
- \( L : S \rightarrow 2^{AP} \) is a labeling function with the set of atomic propositions that are true in that state.
Kripke structures are often used to interpret temporal logics such as Computation Tree Logic (CTL) [10]. CTL is a branching time temporal logic which specifies temporal operators over future states on branching paths, or tree-like structures, where each branch represents a possible execution path. CTL pairs the operators \( E \) and \( A \) (exists/always) with the temporal operators \( X \), \( F \), \( G \), and \( U \) to specify that a property holds either on a path or all paths, and in the next state, eventually in a state, in all future states, or until another property holds, respectively. However, when considering concurrent systems—or in our case, concurrently executing branches—it may be dangerous to evaluate the nexttime operator, \( X \), as it refers to the next global state (i.e. the typical interleaved execution of concurrent programs or branches) and not the next local state (i.e. the execution of one such program or branch) [16].
Instead, when one considers the nexttime operator, one actually means to describe that something occurs before other local occurrences—which, in turn, can be specified easily using the other operators. As such, the use of CTL-X (CTL minus the nexttime operator) is preferred in such a case.
Next, these definitions are used to present a correct model translation from CPN to Kripke structures, such that temporal logic formulas expressed using CTL-X can be verified upon the Kripke structure.
#### B. Verifiable Model
Since transitions in CPN relate directly to activities when describing business processes, a common technique for converting CPN into transition systems entails the inclusion of transitions as states in the transition system upon their occurrence. While traversing the CPN from its initial marking \( M_0 \), transitions are continuously added as states while they occur. A major drawback of this technique occurs when transitions are encountered multiple times during, for example, the interleaving of parallel paths. In such cases the approach causes the inclusion of multiple copies of the same state. Due to this, an enormous number of duplicate states is created. Instead, we define a verifiable model which only includes states for each marking and each set of transitions that are not just enabled, but will occur at that marking.
In order to obtain a verifiable system model from the markings of a CPN, we first specify what the places containing tokens in a marking represent. Let us define \( Y_i(M) \) as the set of binding elements enabled at a marking \( M \):
\[ Y_i(M) = \{ (t, b) \mid \forall p \in P : E(p, t)(b) \leq M(p) \} \]
Then, \( Y_p(M) \) are the enabled steps of the powerset \( P \) of \( Y_i(M) \). Formally,
\[ Y_p(M) = \{ Y \mid Y \in P(Y_i) \land \forall p \in P : \sum_t \{ E(p, t)(b) \leq M(p) \} Y \} \]
Finally, \( Y_s(M) \) are those elements of the enabled powerset \( Y_p(M) \) which are not subset of any other element of the powerset:
\[ Y_s(M) = \{ Y \mid Y \in Y_p(M) \land \forall Y' \in Y_p(M) : Y \not\subset Y' \land Y \neq \emptyset \} \]
This set, \( Y_s(M) \), is used in upcoming definitions to determine the different labelings when multiple sets of binding occurrences could occur concurrently at the same marking.
Using these conventions, we convert a colored Petri net CPN into a Kripke structure \( K \) by creating states at each marking \( M_i \) for each set of binding elements that can occur concurrently at \( M_i \), and then having each binding element occur individually to find possible next states. Although binding elements could occur simultaneously, allowing this would only provide additional relations, creating shorter paths between existing states when interleaving. Even though a CPN could theoretically reach an infinite number of markings, the use of the sound and safe workflow patterns restricts the CPN in such a way that it always produces a finite number of markings. The verifiable system model of a business process model, called the transition graph, is formalized in Definition 4.
**Definition 4 (Transition Graph):** Let AP be a set of atomic propositions. The transition graph of a CPN with markings \( M_1, \ldots, M_n \) is a Kripke structure \( K = (S, S_0, R, L) \) over AP, with:
- \( AP = \{ M_0, \ldots, M_n \} \cup \{ (t, b) \in Y \mid Y \in \{ Y_i(M_0), \ldots, Y_i(M_n) \} \} \)
- \( S = \{ s_i^Y \mid 0 \leq i \leq n \land Y \in Y_s(M_i) \} \)
- \( S_0 = \{ s_0^Y \mid Y \in Y_s(M_0) \} \)
- \( L(s_i^Y) = \{ M_i \} \cup \{ (t, b) \mid (t, b) \in Y \} \)
- \( R = \{ (s_i, s_j) \mid (t, b) \in L(s_i) \land M_i \in L(s_i) \land M_j \in L(s_j) \land M_i \xrightarrow{(t, b)} M_j \} \) \(^1\)
\(^1\text{Although Definition 4 uses elements from the definition itself to define } R \text{ (i.e. the labeling function } L), \text{ this is merely done to produce a more concise and readable definition.}
Definition 4 introduces a novel conversion from the marking of the CPN where a state, which is labeled with a binding element, can be interpreted as that binding element currently occurring. Binding elements, however, can be found as occurring over multiple states. A binding element has only occurred (i.e., finished occurring) when it is occurring at one state and not occurring at a next state. Binding elements occur concurrently during the interleaving of parallel branches. In such cases, states are labeled with multiple binding elements. Figure 3 depicts the transition graph resulting from this conversion process applied to the abstract CPN depicted in Figure 2.
Fig. 3: The transition graph of the abstract process.
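A procedural reading of Definition 4 might look as follows; the representation of states as (marking, step) pairs and the handling of deadlock markings are our own assumptions, and the sketch assumes the y_s helper from the earlier sketch is in scope.

```python
from collections import deque

def transition_graph(m0, transitions):
    """Kripke-style transition graph of Definition 4 in the single-colour view.
    Assumes y_s(marking, transitions) from the earlier sketch is available."""
    def fire(m, pre, post):
        m = dict(m)
        for p, w in pre.items():
            m[p] = m.get(p, 0) - w
        for p, w in post.items():
            m[p] = m.get(p, 0) + w
        return m
    freeze = lambda m: tuple(sorted((p, c) for p, c in m.items() if c))
    states, relations = set(), set()
    seen, queue = {freeze(m0)}, deque([m0])
    while queue:
        m = queue.popleft()
        for Y in y_s(m, transitions):
            src = (freeze(m), Y)                  # labelled with the marking and the step Y
            states.add(src)
            for t in Y:                           # each occurring element fires individually
                pre, post = transitions[t]
                m2 = fire(m, pre, post)
                for Y2 in (y_s(m2, transitions) or [frozenset()]):
                    dst = (freeze(m2), Y2)
                    states.add(dst)
                    relations.add((src, dst))
                if freeze(m2) not in seen:
                    seen.add(freeze(m2))
                    queue.append(m2)
    return states, relations
```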
Even though states are labeled with markings $M_0, \ldots, M_n$, these should not be used as propositions when verifying by means of the transition graph. The markings are only included in the transition graph in order to obtain a correct model (i.e., to detect the difference between a marking where a step $(t, b)$ is enabled without additional tokens at places and a similar marking with additional tokens unrelated to $(t, b)$). When verifying over markings, using the well-known reachability graph is preferred. The reachability graph can equally be obtained from the transition graph.
Definition 5 (Reachability Graph of a Transition Graph): Let $AP$ be a set of atomic propositions. The reachability graph of the transition graph $K = (S, S_0, R, L)$ over $AP$ is a rooted directed graph $G = (V, E, v_0)$, with:
- $V = \{M_i \mid \exists s \in S : M_i \in L(s)\}$ is the set of vertices
- $v_0 = M_0$ is the root node
- $E = \{\langle M_i, (t, b), M_j \rangle \mid (s_i, s_j) \in R \land M_i \in L(s_i) \land M_j \in L(s_j) \land (t, b) \in L(s_i) \setminus L(s_j) \setminus \{M_i\}\}$ is the set of edges.
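A minimal sketch of this collapse, under the same `(i, Y)` state encoding used above and the reading in which an edge is labeled by the binding element that stops occurring between the source and target states, could look as follows (identifiers are assumptions, not the paper's implementation):

```python
def reachability_graph(S, S0, R, L):
    """Sketch of Definition 5 over the transition-graph encoding above."""
    def marking_of(s):
        # each label is assumed to contain exactly one ("M", i) marking tag
        return next(a for a in L[s] if isinstance(a, tuple) and a[0] == "M")

    V = {marking_of(s) for s in S}
    v0 = marking_of(S0[0]) if S0 else None
    E = set()
    for (si, sj) in R:
        Mi, Mj = marking_of(si), marking_of(sj)
        # binding elements occurring at s_i but no longer at s_j have
        # finished occurring along this relation
        finished = (L[si] - L[sj]) - {Mi}
        for tb in finished:
            E.add((Mi, tb, Mj))
    return V, E, v0
```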
Definition 5 completes the cycle of model conversions. Together with earlier definitions, Definition 5 allows a CPN to be transformed into a transition graph, which can then be transformed back into a CPN through an intermediate reachability-graph step. Using these steps, the semantics of CTL-X can be defined upon the possible executions of a CPN. An occurrence path of a CPN is a sequence of sets of enabled transitions that can occur concurrently, $\pi = y_1, y_2, \ldots$ with $y_i \in Y_s(M_i)$ and $M_i \xrightarrow{(t,b) \in y_i} M_{i+1}$ for $i \geq 1$. Here, $y_i \in Y_s(M_i)$ are versions of the marking $M_i$ at step $i$ where different sets of binding elements are enabled (i.e., those that can occur simultaneously).
A binding element $(t, b)$ is occurring at $y_i$ iff $(t, b) \in y_i$. The semantics of CTL-X on the possible executions of a colored Petri net is defined using the minimal set of CTL-X operators $\{\lnot, \lor, EG, EU\}$.
Definition 6 (CTL-X semantics on Reachability Graph): $G, y_i \models \phi$ means that the formula $\phi$ holds at $y_i \in Y_s(M_l)$ of marking $M_l$ of the reachability graph $G$. When the model $G$ is understood, $y_i \models \phi$ is written instead. The steps $(t, b)$ form the propositions of the language of CTL-X. When $b$ is understood, only $t$ is written. The relation $\models$ is defined inductively as follows:
- $y_i \models (t, b)$ if $(t, b) \in y_i$
- $y_i \models \lnot \phi$ if $y_i \not\models \phi$
- $y_i \models \phi \lor \phi'$ if $y_i \models \phi$ or $y_i \models \phi'$
- $y_i \models EG\,\phi$ if $\exists \pi = y_i, y_{i+1}, y_{i+2}, \ldots$ with $\forall n \geq 0 : y_{i+n} \models \phi$
- $y_i \models E[\phi \ U \ \phi']$ if $\exists \pi = y_i, y_{i+1}, y_{i+2}, \ldots$ with $\exists m \geq 0 : y_{i+m} \models \phi' \land \forall n : (0 \leq n < m \Rightarrow y_{i+n} \models \phi)$
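On the resulting finite graph, these clauses can be evaluated with the standard fixpoint characterizations of EG and EU. The sketch below is an assumption-laden illustration rather than the paper's implementation: formulas are nested tuples, states stand for the marking versions $y_i$, `succ` is the successor relation induced by the occurrence paths, and states without successors simply fail EG (since no infinite path starts there).

```python
def sat(formula, succ, label):
    """Return the set of states of a finite graph satisfying a CTL-X formula.

    formula : "true", ("ap", p), ("not", f), ("or", f, g), ("EG", f), ("EU", f, g)
    succ    : dict state -> set of successor states
    label   : dict state -> set of atomic propositions (binding elements)
    """
    states = set(succ)
    if formula == "true":
        return set(states)
    op = formula[0]
    if op == "ap":
        return {s for s in states if formula[1] in label[s]}
    if op == "not":
        return states - sat(formula[1], succ, label)
    if op == "or":
        return sat(formula[1], succ, label) | sat(formula[2], succ, label)
    if op == "EG":
        # greatest fixpoint: keep dropping states with no successor left in Z
        Z = sat(formula[1], succ, label)
        changed = True
        while changed:
            keep = {s for s in Z if succ[s] & Z}
            changed = keep != Z
            Z = keep
        return Z
    if op == "EU":
        # least fixpoint: states satisfying psi, or phi-states with a successor in Z
        phi = sat(formula[1], succ, label)
        Z = sat(formula[2], succ, label)
        changed = True
        while changed:
            grow = Z | {s for s in phi if succ[s] & Z}
            changed = grow != Z
            Z = grow
        return Z
    raise ValueError(f"unknown operator {op!r}")
```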
Lemma 1 (Truth Lemma): For any CTL-X formula $\phi$: $G, y_i \models \phi$ iff $\phi \in y_i$.
Proof: The proof is by structural induction on $\phi$. If $\phi$ is a propositional letter $(t, b)$, then by Definition 6, $y_i \models \phi$ iff $(t, b) \in y_i$. If $\phi$ is of the form $\phi_1 \lor \phi_2$, then $y_i \models \phi$ iff $y_i \models \phi_1$ or $y_i \models \phi_2$, and by the induction hypothesis iff $\phi_1 \in y_i$ or $\phi_2 \in y_i$. If $\phi$ is of the form $\lnot \phi_1$, then $y_i \models \phi$ iff $y_i \not\models \phi_1$, and by the induction hypothesis iff $\phi_1 \not\in y_i$. If $\phi$ is of the form $EG\,\phi_1$, then $y_i \models \phi$ iff $\exists \pi = y_i, y_{i+1}, y_{i+2}, \ldots$ with $\forall n \geq 0 : y_{i+n} \models \phi_1$, and by the induction hypothesis iff $\forall n \geq 0 : \phi_1 \in y_{i+n}$. The case where $\phi$ is of the form $E[\phi_1 \ U \ \phi_2]$ follows similarly.
The other well-known CTL-X operators can be obtained through the following equivalences:
- $EF\phi \equiv E[true \ U \ \phi]$
- $AF\phi \equiv \lnot EG \lnot \phi$
- $AG\phi \equiv \lnot EF \lnot \phi$
- $A[\phi \ U \ \phi'] \equiv \lnot (E[\lnot \phi' \ U \ \lnot(\phi \lor \phi')] \lor EG\, \lnot \phi')$
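The derived operators can then be provided as thin wrappers over the minimal set, reusing the tuple encoding of the previous sketch (again an illustrative encoding rather than the paper's implementation):

```python
def EF(phi):
    return ("EU", "true", phi)                       # EF p == E[true U p]

def AF(phi):
    return ("not", ("EG", ("not", phi)))             # AF p == !EG !p

def AG(phi):
    return ("not", EF(("not", phi)))                 # AG p == !EF !p

def AU(phi, psi):                                    # A[p U q]
    return ("not", ("or",
                    ("EU", ("not", psi), ("not", ("or", phi, psi))),
                    ("EG", ("not", psi))))
```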
Verification of a formula $\phi$ on the possible executions of a CPN proves that $\phi$ does or does not hold at a certain point of its execution. More specifically, a formula $\phi$ may or may not hold at a version $y_i \in Y_s(M_l)$ of marking $M_l$. When a step $(t, b)$ holds at $y_i$, that step is occurring. When a formula $\phi$ holds at all versions $y_i \in Y_s(M_l)$ of marking $M_l$, it can be written that $M_l \models \phi$.
Using the definitions above, CTL-X specifications can be used to verify BPMN service compositions. Compositions defined using BPMN can be translated into CPN, which in turn can be simulated to obtain a transition graph upon which the branching time temporal logic CTL-X can be interpreted. This interpretation can then be understood upon the possible executions of the CPN as expressed by its reachability graph. Next, further model reduction is discussed.
C. Model Reduction
The transition graph can be reduced before the model is verified by model checking. As model checking techniques verify models with given specifications in an exhaustive fashion, any reduction of the model benefits performance.
Two model reduction steps are available, both of which are based upon the removal of unused atomic propositions and model equivalence under the absence of the nexttime operator, otherwise known as equivalence with respect to stuttering [17]. Equivalence with respect to stuttering is a useful notion when considering concurrent systems or, in our case, concurrently executing branches. In such cases it may be dangerous to evaluate the nexttime operator. Instead, the until operator can be used on the transition graph to specify the same property (e.g., $AG(e \Rightarrow A[e \ U \ f])$ specifies that $e$ is followed by $f$ in every execution branch of the process).
A finite Kripke structure \(K\) can be uniquely identified by a single CTL formula \(F_K\) [17]. As a result, \(F_K\) can be used to evaluate the equivalence of other Kripke structures \(K'\) to \(K\). When considering \(F_K\) without nexttime operators, the equivalence of \(K'\) can be evaluated with respect to stuttering [17]. Two Kripke structures \(K\) and \(K'\) are equivalent with respect to stuttering if all paths from the initial states \(s_0 \in S_0\) of \(K\) are stutter equivalent with the paths from the initial states \(s'_0 \in S'_0\) of \(K'\) and vice versa. Two paths are stutter equivalent, \(\pi \sim_{st} \pi'\), if both paths can be partitioned into blocks of states \(\pi = k_0, k_1, \ldots\) and \(\pi' = k'_0, k'_1, \ldots\) such that \(\forall s \in k_i, \exists s' \in k'_i : L(s) = L(s')\) for \(i \geq 0\) [18].
To reduce the model, first those atomic propositions not used by the specifications, with the exception of those relating to events, are removed. Then, the atomic propositions related to markings are removed from the labels of all states and from the set \(AP\), such that \(M_i \not\in AP\) and \(\forall s \in S : M_i \not\in L(s)\) for \(0 \leq i \leq n\). Finally, a stutter equivalent model with respect to the used atomic propositions is obtained. Although the removed labels were needed during the conversion process to ensure that unique states were generated, they can be removed at this point because they are either not used by the specifications or because specifications should only be expressed using activities or events of the business process (i.e., transitions) and not its progression information (i.e., markings).
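The two reduction steps can likewise be sketched explicitly: first strip the marking labels and any atomic propositions not used by the specifications, then repeatedly merge a state into its unique, identically labeled successor, which removes pure stutter steps without affecting nexttime-free formulas. The helper names and the simple merge criterion below are our assumptions; they approximate, rather than reproduce, the reduction used in the paper.

```python
def reduce_labels(L, used_aps):
    """Keep only the atomic propositions actually used by the specifications
    (marking labels are dropped along with every other unused proposition)."""
    return {s: {a for a in props if a in used_aps} for s, props in L.items()}


def collapse_stutter(S, S0, R, L):
    """Merge any state s with a single successor t != s and L[s] == L[t]
    into t; each merge removes one stutter step."""
    S, S0, R = set(S), set(S0), set(R)
    changed = True
    while changed:
        changed = False
        for s in list(S):
            succ = {t for (a, t) in R if a == s}
            if len(succ) == 1:
                t = next(iter(succ))
                if t != s and L[s] == L[t]:
                    # drop s and redirect its predecessors to t
                    R = {(a, t if b == s else b) for (a, b) in R if a != s}
                    if s in S0:
                        S0.discard(s)
                        S0.add(t)
                    S.discard(s)
                    changed = True
                    break
    return S, S0, R, L
```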
Figure 4 depicts the stutter equivalent model of the Kripke structure depicted in Figure 3 after the removal of the unused atomic propositions a, b, c, e, h, and i. Note that several unlabeled states remain. These cannot be removed, as doing so would affect the evaluation of formulas (e.g., \(AG(d \Rightarrow AF\, f)\) would incorrectly evaluate to true).
Fig. 4: The abstract process as an optimized Kripke structure w.r.t. the atomic propositions d, f, and g.
IV. PERFORMANCE EVALUATION
The performance of the approach was evaluated by executing an implementation of Definition 4 on artificial service compositions of several sizes, generated specifically for performance evaluation purposes by specifying a gate type, a number of branches, and a branch length. Performance tests were run on a system with an Intel Core i7-4771 CPU at 3.50 GHz and 32 GB of memory, running Windows 7 x64. The conversion algorithm was implemented using Java 7. The results of the performance tests can be found in Table II.
The columns of Table II provide information on the case number, the process (gate type: sequence, exclusive, or parallel branching; the number of branches \(n\); and the number of activities per branch \(m\)), the Kripke structure (number of states \(S\), relations \(R\), and atomic propositions \(AP\)), the reduced Kripke structure (number of states \(\bar{S}\) and percentage of the original, relations \(\bar{R}\) and percentage of the original, and atomic propositions \(\bar{AP}\)), and the performance of the conversion algorithm during initialization, model conversion, and model reduction. Note that the post-reduction sizes vary due to the random removal of 50% of the atomic propositions.
Test cases 1-8 of Table II demonstrate that sequential processes and processes including exclusive paths are of no concern to performance. These processes are converted within 3 to 22 milliseconds and reduced within 0 to 5 milliseconds.
Compositions including parallel regions introduce an increased complexity of \(\prod_{i=1}^{n}(m_i + 1)\) states, where \(m_i\) is the length of branch \(i\) (for equal branch lengths the complexity is \((m + 1)^n\)). This increased complexity is introduced by the interleaving of concurrent activities on parallel branches. Other approaches, however, completely linearize the possible interleavings and therefore introduce a much larger complexity while providing limited insight into parallel behavior. Test cases 9-11 of Table II demonstrate that parallel interleavings of average sizes are converted within 96 milliseconds and reduced within 78 milliseconds. Increasing the length of the branches in test cases 13-14 and 18 of Table II from 5 to 50 activities increases the conversion time to 102 milliseconds with two branches and 185 seconds with four branches, and increases the reduction time to 90 milliseconds and 37 seconds, respectively.
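As a rough illustration of this formula (our own arithmetic, not a figure taken from the paper), the parallel test cases with branches of \(m = 5\) activities give
\[
(5+1)^2 = 36, \qquad (5+1)^3 = 216, \qquad (5+1)^4 = 1296,
\]
which matches the order of magnitude of the state counts reported for the parallel cases in Table II; the reported totals appear to exceed these products only by the handful of states outside the parallel region (e.g., the initial and final markings).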
The effect of model reduction varies. Because sequential processes generate relatively simple Kripke structures, model reduction shows limited effect, with reductions of 15% to 30%. For sequential processes, the worst-case reduction with less than half of the \(AP\) removed is 0%. Processes with parallel interleaved paths, however, show a much larger effect, with a 46% to 72% reduction. While the complexity of the Kripke structures increases with additional branches, model reduction naturally becomes more effective: with each removed atomic proposition, a significant number of interleaved states is removed.
Although the resulting interleaving is responsible for a state explosion, this is of little concern for processes with average and even large parallel areas. The models of normal-sized compositions are generated and ready to be verified almost instantly. Extremely large parallel areas do introduce increased complexity. However, when only a limited number of atomic propositions from these areas is used, such models can still be reduced significantly and used for verification pre-runtime. Furthermore, when a model does turn out to be too large for model checking, our approach allows formulas to be split into multiple sets, each resulting in a much smaller reduced Kripke structure. Each formula set can then be checked on its respective Kripke reduction, which results in a significant performance gain. In this respect, the size of the reduced model is directly related to the number of atomic propositions used within the set of formulas.
V. RELATED WORK
Processes have been the target of formal verification for a variety of reasons. In our survey [1], we identified the main goals of process verification. The first goal focuses on the verification of processes regarding the reachability and termination properties. When also considering the absence of any running activities at process termination, i.e. proper completion, we refer to process soundness. First presented in [19], process soundness is verified using Workflow nets. The technique is perfected in [20] by introducing support for OR-joins and cancellation regions. The second goal focuses on the verification of process compliance.
In [21], a translation from Petri nets to Kripke structures is proposed. By introducing intermediate states to the Kripke structure for each transition, the approach is able to define fairness conditions concerning the firing of transitions. However, we propose a smaller and simplified mapping from transitions and places to states in the Kripke structure which provides the required domain specific occurrence information.
In [22], a framework for design-time process compliance of event-driven process chains using CTL is presented. The framework allows CTL constraints to be evaluated directly upon the process structure. However, as CTL is specified over Kripke structures, it does not support different forks and joins. In [23], a design-time compliance framework based upon annotated BPMN is presented. It includes the ability to automatically resolve non-compliance by converting BPMN models into semantic process models. It lacks, however, full loop support. A framework for a priori verification of ConDec [24] processes is presented by [25], where the LTL process specification of ConDec is translated into an inductive logic program and used as input for verification. However, the verification algorithm is unable to terminate under certain loops. In [26], two methods are presented for model checking compliance of annotated, yet acyclic, processes. Finally, [27] propose model checking compliance of Web Service Business Process Execution Language (WS-BPEL) processes by translating to pi-calculus and then to an automaton. However, due to the heavy synchronization of the interleaving method, all parallelization is lost. Instead, we presented a process conversion including parallel and loop support.
In order to solve issues with loops and different forks and joins, [3] proposes Temporal Process Logics (TPL), a modal propositional logic that is able to reason about possible process executions. A temporal deontic logic (PENELOPE) is introduced by [28], for specifying obligations and permissions over activities rather than propositions. In [29], a CTL based language ABSL is proposed for specifying life cycle properties of artifacts, while [30] proposes a first-order extension of LTL to verify all possible process executions of artifact-centric systems for compliance. However, by introducing new or extended logics the power of known and accomplished model checkers can not be exploited.
In [4], the authors propose model checking web service flow language (WSFL) collaborations using the SPIN model checker. The WSFL is encoded into Promela, the modeling language of SPIN, by mapping activities and transitions to Promela processes and channels. A method for model checking compliance through Amber processes using SPIN is proposed by [31]. Finally, [32] builds upon the earlier proposed BPMN-Q, a query language for BPMN. Constraints are translated from BPMN-Q to PLTL (Past-time LTL) and checked against all paths in the process in the form of sub-graphs after being reduced and translated to Petri nets and subsequently to the NuSMV2 input language. In [33], a conversion from BPMN to Workflow nets, a class of Petri nets, is proposed to allow for formal analysis. However, high-level CPNs are used to express workflow patterns [9]. As such, we propose a mapping from BPMN to CPN, in order to support a wide range of patterns expressed by BPMN that are not supported by workflow nets (e.g. exclusive choice). In addition, large amounts of overhead can be introduced without careful use of modeling languages. For example, in [4], it is reported that the intermediate Promela mapping causes a simple process of five activities and four transitions to be mapped to 201 states and 586 transitions in SPIN’s internal state machine. Instead, we propose a careful conversion from CPNs to Kripke structures with a minimal amount of overhead.
In [34], compliance is modeled based on DecSerFlow [35], an LTL based declarative runtime specification for service compositions. By translating DecSerFlow rules into an extended form of event calculus, they are able to model compensation actions at runtime when a choreography violates compliance. In [36], the authors propose compliance constraints in the form of graphs. These Compliance Rule Graphs (CRG) specify compliance rules using the occurrence or absence of antecedent and consequence elements of a process. Event patterns described by these CRG are satisfied when the occurrence or absence of the antecedent and consequence process elements is matched by the execution trace of the process. By offering pre-runtime checking instead, we avoid the execution of non-compliant processes, including any resulting roll-backs.
Another approach towards compliance verification is that of refinement checking. When refinement checking, a process is compliant when it is a refinement of rules expressed as another process. A method for model checking UML sequence diagrams is proposed by [37], aiming at process assurance and design. A process is implemented in the Communicating Sequential Processes (CSP) format and checked by the Failures-Divergences Refinement (FDR) checker against atomicity properties encoded as another CSP process. In [38], a BPEL implementation of a process is verified against a UML message state chart. Both BPEL and UML are translated into Finite State Processes (FSP), which are then verified against each other. Reo, a channel-based coordination language, is proposed by [39] as an intermediate layer for verifying compliance. BPMN is translated to Reo and verified using constraint automata. Instead, we allow well-known logics, such as CTL, to be used to describe compliance specifications.
VI. CONCLUSION
We presented a novel approach to pre-runtime compliance checking. The approach supports well-known branching time temporal logics over the different branching and merging constructs allowed in service compositions. As such, it goes beyond existing approaches, which either check compliance during or after execution, cause large amounts of overhead, or require new or extended logics.
In our approach, the compliance of a composition can be checked starting from a graphical business process modelling notation. Although BPMN is used throughout the paper, other notations can be translated into CPN form using similar pattern mappings. As such, the process is first translated into a CPN, and subsequently converted into a Kripke structure in such a way that the different branching and merging constructs allowed in compositions are maintained and verifiable using branching-time temporal logics.
[Table II groups its columns into the process (case number, gate type, number of branches n, and activities per branch m), the resulting Kripke structure (states S, relations R, atomic propositions AP), the reduced Kripke structure (states, relations, and atomic propositions, with percentages of the original), and the performance results (initialization, conversion, and reduction times); the individual cell values are not reproduced here.]
TABLE II: Performance results of the initialization, conversion, and reduction of processes with n branches of m activities, and resulting Kripke structure sizes before and after reduction by 50% of randomly chosen atomic propositions.
Furthermore, the original process can be reverse engineered from the Kripke structure, allowing the results from model checking to be directly applied upon the original process. The approach in itself is notation independent due to the formal intermediate CPN form and pattern mapping.
Extensive performance tests confirm that, even for processes with large parallel regions, the conversion algorithm performs well. Moreover, very large processes can be easily reduced further before verification.
Our approach is particularly valuable in highly changeable environments, where organizations are required to adhere to frequently changing laws and regulations. Although service-oriented environments do provide the required flexibility with respect to business process support, the automated compliance checking approach in this paper ensures that new service compositions are compliant with respective laws and regulations. For future work we plan to evaluate the approach on a large real-life case and compare the results with other approaches.
ACKNOWLEDGEMENTS
We thank Marco Aiello, Doina Bucur, and Artem Polyvyanyy for their valuable feedback. NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council.
REFERENCES